AU2021107205A4 - Deep learning based system and method for automatic identification of covid-19 regions on ct-images - Google Patents


Info

Publication number
AU2021107205A4
Authority
AU
Australia
Prior art keywords
covid
segmentation
classification
slice
proposed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021107205A
Inventor
Subhalakshmi R. T.
Sasikala S.
Appavu alias Balamurugan Subramanian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RT Subhalakshmi
Original Assignee
RT Subhalakshmi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RT Subhalakshmi filed Critical RT Subhalakshmi
Priority to AU2021107205A priority Critical patent/AU2021107205A4/en
Application granted granted Critical
Publication of AU2021107205A4 publication Critical patent/AU2021107205A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention detects COVID-19 using a novel dual-branch combination network. Automatic detection of lung infection from lung CT images offers high potential to improve the conventional healthcare approach for dealing with COVID-19. However, CT-image-based detection of COVID-19 faces substantial challenges, including large variation in the characteristics of the infection, low sensitivity, and low contrast between infected and normal tissue. Furthermore, it is difficult to gather a large amount of data in a short period. Automated methods for COVID-19 detection are therefore needed to observe the presence of disease in radiological images. To address these shortcomings, a novel dual-branch combination network for the detection of COVID-19 is proposed. It attains individual-level classification and lesion segmentation at the same time. A unique lesion attention component is designed to combine the intermediate results of segmentation, and a slice probability mapping technique is introduced to learn the transition from slice-level to individual-level classification. The proposed technique attains high sensitivity, good accuracy, and good interpretability.

Description

Figures, Tables and Flow charts
Figure 1. Flow of the proposed method: Covid CT images -> Preprocessing -> U-Net based lung segmentation -> Dual Branch Combination Network (lesion segmentation and slice-level classification with Lesion Attention) -> Slice probability mapping -> Individual-level diagnosis -> Performance analysis.
EDITORIAL NOTE
2021107205
THERE ARE 17 PAGES OF DESCRIPTION ONLY
DESCRIPTION OF THE INVENTION
DEEP LEARNING BASED SYSTEM AND METHOD FOR AUTOMATIC IDENTIFICATION OF COVID-19 REGIONS ON CT-IMAGES
Field of Invention and use of the Invention
The present disclosure relates to a method for the detection of COVID-19. In particular, automatic detection of lung infection from lung CT images offers high potential to improve the conventional healthcare approach to dealing with COVID-19 by means of deep learning.
Novelty of the Invention
A novel dual-branch combination network for the detection of COVID-19 is disclosed. This enhancement attains individual-level classification and lesion segmentation at the same time. A unique lesion attention component is designed to combine the intermediate results of segmentation. In addition, a slice probability mapping technique is introduced to learn the transition from slice-level to individual-level classification. The proposed technique attains high sensitivity, good accuracy, and good interpretability.
Background of the Invention
Coronavirus disease 2019 (COVID-19), which is caused by severe acute respiratory syndrome coronavirus 2, emerged in December 2019 and quickly developed into a worldwide outbreak. COVID-19 presents as an acute respiratory tract infection and is highly contagious, and patients with the disease have an elevated death rate. By April 20, 2020, more than 84,000 cases had been confirmed in China and above 2.30 million worldwide. The World Health Organization first declared the outbreak a public health emergency of international concern and subsequently a global pandemic. The common clinical characteristics of COVID-19 cases include fever, respiratory symptoms, pneumonia, a decreased white blood cell (WBC) count, or a decreased lymphocyte count.
Testing individuals or a mass population for a viral infection involves bio-sensing of the presence or absence of analytes such as viral nucleic acids (DNA and RNA), viral proteins, intact viral particles, and antibodies produced by the patient's immune response against the virus. A major limitation of radiography-based screening is that expert radiologists are needed to interpret the images. Computer-aided diagnostic systems can therefore assist radiologists in recognizing COVID-19 cases accurately and quickly. Because of the rapid spread and increasing number of coronavirus disease 2019 cases caused by the new coronavirus, SARS-CoV-2, fast and accurate detection of the virus and the infection is increasingly essential to control the sources of contamination and help patients prevent disease progression. The power of AI is being exploited to study the coronavirus outbreak, for example in the identification of the virus in clinical chest X-rays.
RT-PCR is recognized as a high-quality and routinely used tool to investigate and quantify distinct RNAs in laboratories and medical examinations because of its high sensitivity in RNA amplification. Following the outbreak of the coronavirus, various methods based on reverse-transcription polymerase chain reaction for the detection of the virus have been reported in existing work. In comparison, non-ICU cases display bilateral ground-glass opacity and sub-segmental areas of consolidation in their chest computed tomography images. In such cases, follow-up chest CT images show bilateral ground-glass opacity with consolidation.
Artificial intelligence (AI)-assisted tools have shown attractive potential; for instance, chest computed tomography has been shown to play a significant part in the diagnosis and assessment of COVID-19. Nevertheless, building an AI diagnostic framework based on computed tomography for disease detection has faced considerable difficulties, essentially because of the lack of adequate manually delineated examples for training, as well as the requirement of sufficient sensitivity to subtle lesions in the early infection stages. Some imaging protocols are not suitable for this purpose, and the neural networks may learn patterns in the data that are not connected to the presence of the virus. Development of machine-learning-based frameworks to overcome such challenges in classifying COVID-19 images has become an urgent necessity. Transfer learning with fine-tuning was utilized in our investigation to train the structure effectively on relatively small lung X-ray datasets.
ADECO-CNN is an enhanced CNN model used to separate infected from uninfected cases, compared against the pre-trained CNN-based VGG19, GoogleNet, and ResNet structures. Three distinct investigations following three preprocessing schemes were carried out to evaluate and compare the developed structures; the aim was to assess how preprocessing the data influences the results and improves interpretability. Other work crawled social media sites and news reports and, through automated AI strategies, uncovered the hidden narrative frameworks supporting the generation of rumours and conspiracy theories. A multi-objective optimization and deep-learning-based strategy for detecting COVID-19-affected cases from X-rays has also been presented in existing methods. The J48 decision tree technique classifies the deep features of coronavirus-infected X-ray images for the effective identification of affected cases.
A state-of-the-art CNN termed MobileNet was trained from scratch to investigate the importance of the extracted features for the classification part. The DarkNet structure was used in another investigation as a classifier for the YOLO real-time object detection framework; 17 convolution layers were implemented and different filtering was performed on every layer. CoroNet, a deep CNN structure, was proposed to automatically detect COVID-19 infection from chest X-ray images. The suggested system is based on the Xception design, pre-trained on ImageNet data and trained end-to-end on a dataset assembled by collecting COVID-19 and other lung pneumonia X-ray images from diverse publicly accessible datasets.
An AI framework based on deep meta-learning has been introduced to speed up the analysis of lung X-ray images for the automated detection of COVID-19 patients. A dedicated COVID-19 lung infection segmentation deep network automatically identifies infected regions from lung CT slices; in this model, a parallel partial decoder is used to aggregate the high-level features and produce a global map. An automatic approach based on an ensemble of deep transfer learning has been proposed for the detection of the coronavirus, as has an automated COVID-19 screening framework that uses radiomic texture descriptors extracted from lung X-ray images to distinguish normal, suspected, and non-COVID-19-infected cases.
Two deep learning structures have been recommended that automatically identify positive COVID-19 subjects using lung CT and X-ray images. A deep convolutional neural network based architecture, termed CovXNet, uses depthwise convolution with varying dilation rates to efficiently extract discriminative features from lung X-rays. Nowadays, automatic infection detection has become a vital issue in clinical science because of rapid population growth.
Drawbacks of Existing state-of-art and how the invention addresses the drawbacks
The widespread coronavirus disease 2019 (COVID-19) makes it essential to develop effective tools for its diagnosis at an early stage. It is therefore necessary to develop automated methods for COVID-19 detection that can observe the presence of disease in radiological images. Automatic detection of lung infection from lung CT images offers high potential to improve the conventional healthcare approach for dealing with COVID-19. However, CT-image-based detection of COVID-19 faces substantial challenges, such as large variation in the characteristics of the infection, low sensitivity, and low contrast between infected and normal tissue.
Furthermore, it is difficult to gather a large amount of data in a short period. To address these shortcomings, we propose a novel dual-branch combination network for the detection of COVID-19. It attains individual-level classification and lesion segmentation at the same time. A unique lesion attention component is designed to combine the intermediate results of segmentation. In addition, a slice probability mapping technique is introduced to learn the transition from slice-level to individual-level classification. The proposed technique attains high sensitivity, good accuracy, and good interpretability.
We therefore built a dual-branch combination network for the detection of COVID-19 that simultaneously accomplishes individual-level classification and lesion segmentation. To make the classification branch focus more intensely on the lesion regions, a unique lesion attention module was created to incorporate the intermediate results of segmentation.
Objectives of the Invention
An automatic disease recognition system helps specialists in the diagnosis of infection, gives accurate, consistent, and fast results, and reduces the death rate. COVID-19 has become one of the most serious and acute illnesses in recent years and has spread around the world. Hence, an automated identification framework, as the fastest diagnostic option, should be implemented to keep COVID-19 from spreading.
Summary of the Invention
A method to detect the coronavirus using a novel dual-branch combination network is proposed; the method comprises the following:
• A system of combined segmentation and classification is proposed, which attains both the segmentation and the classification of COVID-19 lesions at the same time based on CT images.
• The method as claimed in claim 1: to outline the shape of the lungs, U-Net-based segmentation is initially carried out. The suggested dual-branch combination network is then used to perform segmentation and classification at the slice level. The performance of classification is further enhanced by a lesion attention component which uses the intermediate outputs of both branches.
• The method as claimed in claim 2: a slice probability mapping methodology and a fully connected network are adopted to obtain individual-level outcomes from slice-level outcomes, adapting the strategy to computed tomography scans with varying numbers of slices. The introduced strategy is highly sensitive to the classification of images with very small lesions.
• The method as claimed in claim 1: detection of COVID-19 at an early stage is useful since lesions in the beginning phase are normally subtle and hard to distinguish. The proposed combined segmentation-classification network for the identification of COVID-19 outperforms commonly used classification methods on both internal and external validation datasets.
• The method as claimed in claims 1 and 3: the suggested lesion attention module enables the network to concentrate on infected regions and significantly enhances the detection of small lesions for the identification of COVID-19 at an early stage. In addition, the attention maps help identify lesion positions, thereby enhancing the interpretability of the classification.
Detailed Description of the Invention
To aid comprehension of the invention's principles, a reference to the embodiment depicted in the drawings will now be made, and precise terminology will be utilized to describe the same. It should be understood, however, that no limitation of the scope of the invention is intended, and that such alterations and further modifications in the illustrated system, as well as further applications of the principles of the invention as illustrated therein, are contemplated by one skilled in the art to which the invention relates. Those versed in the art will recognize that the preceding general description and the accompanying comprehensive explanation are meant to be illustrative and explanatory of the invention rather than restrictive.
This part describes the flow of the proposed technique. We build a dual-branch combination model for joint classification and segmentation of COVID-19 using computed tomography images. Motivated by attention mechanisms, we introduce a lesion attention module to enhance the sensitivity to computed tomography images with small lesions and to facilitate screening for COVID-19 at an early stage. Accurate attention maps are provided by the lesion attention (LA) module to enhance the interpretability of the system and to support an additional evaluation of the classification result.
Image acquisition, CT composition and technique: Philips Ingenuity 64-row spiral computed tomography device; KV: 120; MAS: 240; layer thickness 3 mm; spacing between layers 3 mm; pitch 1.5; lung window (W: 1500 HU, L: -500 HU); mediastinum window (W: 350 HU, L: 60 HU); thin-layer reconstruction depending on the lesion depiction, with a layer thickness and inter-layer distance of 1 mm for the lung window image. The patients were placed in a horizontal posture, holding their breath after a deep inhalation, and conventionally scanned from the apex of the lung to the costophrenic angle. For every case, 1-4 slices were selected by radiologists using the slice-level selection technique, since normally 4 slices are enough to enclose the lesion. For COVID-19 pneumonia cases, the slice displaying the largest size and number of lesions was chosen; for normal cases, any level of the image could be chosen. The resolution of every chosen image is 1024 x 1024 x 3. Table 1 lists the demographics of cases, with two divisions: (i) COVID-19 cases and (ii) healthy control cases.
The actual dataset contains 320 COVID-19 images and 320 healthy-control images. The dataset is denoted D1, and each image is denoted d1(i), i = 1, 2, ..., |D1| = 640; that is, D1 = {d1(1), d1(2), ..., d1(i), ..., d1(|D1|)}. For each image, the size is size[d1(i)] = W1 x H1 x C1, where W1 = H1 = 1024 and C1 = 3. The actual images are not suitable for training deep neural networks, for the following reasons: they possess excessive detail in three color channels; they possess uneven contrast; they include background, checkup bed, and text details; and the image size is large. Figure 2 portrays the flow of preprocessing of the coronavirus data. Initially, the color images are converted to grayscale by maintaining only the brightness information, giving the grayscale image set D2:
D2 = GS(D1) = {d2(1), d2(2), ..., d2(i), ..., d2(|D1|)}   (1)
Here GS denotes the grayscale operation, and size[d2(i)] = W2 x H2 x C2 with W2 = H2 = 1024, C2 = 1. Following this, we increase the contrast of every image by histogram stretching. For each image d2(i), i = 1, 2, ..., |D1|, we estimate the minimum and maximum grayscale values amin(i) and amax(i) by the following equations:
amin(i) = min over (a, b) of d2(i | a, b)   (2)
amax(i) = max over (a, b) of d2(i | a, b)   (3)
where (a, b) denotes the coordinates of a picture element of the image d2(i). The new histogram-stretched image d3(i) is given by
d3(i) = (d2(i) - amin(i)) / (amax(i) - amin(i))   (4)
Altogether, we obtain the histogram-stretched image set D3 = HS(D2) = {d3(1), d3(2), ..., d3(i), ..., d3(|D1|)}.
Next, to remove the text present in the margin areas and to eliminate the checkup bed in the lower region, the images must be cropped. The cropped dataset is given by
D4 = C(D3, [et, eb, el, er]) = {d4(1), d4(2), ..., d4(i), ..., d4(|D1|)}   (5)
Here C denotes the crop operation, and the specification (et, eb, el, er) represents the crop values in picture elements from the top, bottom, left, and right, respectively. We fix et = eb = el = er = 150. Hence, the size of every image is size[d4(i)] = W4 x H4 x C4, with W4 = H4 = 724 and C4 = C2 = 1.
Subsequently, each image is down-sampled to the size [W5, H5], giving the resized image set D5:
D5 = DS(D4, [W5, H5]) = {d5(1), d5(2), ..., d5(i), ..., d5(|D1|)}   (6)
Here DS denotes the down-sampling function that maps an image a to its down-sampled version b. In this work, W5 = H5 = 256 and C5 = 1. The advantages of down-sampling are twofold: (i) it saves storage, and (ii) a smaller dataset helps keep the subsequent classification framework from overfitting. The choice W5 = H5 = 256 is based on experimentation: we found that a larger size leads to overfitting, which harms performance, while a smaller size makes the pictures blurry, which also decreases the classifier's performance. Figure 3(a) shows samples of COVID-19 images from the preprocessed dataset D5, and Figure 3(b) portrays the lesions of (a) inside red circles.
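For illustration, this preprocessing chain can be sketched in a few lines of Python. This is a minimal sketch assuming standard NumPy/Pillow operations; the function name, the small epsilon guard in the stretching step, and the resampling filter are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np
from PIL import Image

def preprocess_slice(path, crop_px=150, out_size=256):
    # Step 1 (Eq. 1): convert the 1024 x 1024 x 3 export to grayscale.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)

    # Step 2 (Eqs. 2-4): histogram stretching to the full [0, 1] range.
    a_min, a_max = img.min(), img.max()
    img = (img - a_min) / (a_max - a_min + 1e-8)   # epsilon guard is an assumption

    # Step 3 (Eq. 5): crop 150 pixels from every margin to drop text and the exam bed.
    img = img[crop_px:-crop_px, crop_px:-crop_px]  # 1024 -> 724 per side

    # Step 4 (Eq. 6): down-sample to 256 x 256.
    img = Image.fromarray((img * 255).astype(np.uint8)).resize((out_size, out_size))
    return np.asarray(img, dtype=np.float32) / 255.0   # shape (256, 256), values in [0, 1]
```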
The DCN is proposed to achieve concurrent classification and segmentation of computed tomography images. The structure comprises a classification part and a segmentation part, corresponding to the classification and segmentation tasks, respectively. ResNet-50 acts as the backbone of the classification part and includes five residual blocks. U-Net acts as the backbone of the segmentation part; it comprises an encoder and a decoder. The encoder comprises 64, 128, 256, 512, and 1,024 channels, respectively, in its five blocks. Four 2 x 2 max-pooling layers and four 2 x 2 up-sampling layers are used for down-sampling and up-sampling. Every convolution block comprises a 3 x 3 convolution layer, a batch normalization layer, a ReLU, and a subsequent 3 x 3 convolution layer. The outputs of the encoding blocks are connected to the corresponding decoding blocks using skip connections. The intermediate results of the two parts are combined with the introduced LA modules. Back-propagation between the two parts is cut off to guarantee the trainability of the structure. The dual-branch combination network takes the segmented lung images as input and yields the slice-level classification and segmentation outputs. It can be seen that the segmentation of the U-Net is highly consistent with the ground truth, which provides a strong guarantee for the subsequent analysis.
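A minimal PyTorch sketch of the dual-branch idea follows: a ResNet-50 classification branch and a segmentation branch share the same input slice and return slice-level classification logits and a lesion map. The class name, the single-channel stem, and the reduced stand-in segmentation stack are assumptions made to keep the example short; the full U-Net and the lesion attention fusion are sketched separately below.

```python
import torch
import torch.nn as nn
import torchvision

class DualBranchCombinationNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Classification branch: ResNet-50 backbone with its five residual stages.
        resnet = torchvision.models.resnet50(weights=None)   # assumes torchvision >= 0.13
        resnet.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.cls_backbone = nn.Sequential(*list(resnet.children())[:-1])  # up to global pooling
        self.cls_head = nn.Linear(2048, num_classes)

        # Segmentation branch: a small stand-in for the U-Net described in the text,
        # kept to a single conv stack so the example stays short and runnable.
        self.seg_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        seg_logits = self.seg_branch(x)             # slice-level lesion map
        feat = self.cls_backbone(x).flatten(1)      # (B, 2048) pooled features
        cls_logits = self.cls_head(feat)            # slice-level COVID / normal logits
        return cls_logits, seg_logits

x = torch.randn(2, 1, 256, 256)                     # a batch of preprocessed slices
cls_logits, seg_logits = DualBranchCombinationNet()(x)
```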
Our segmentation network is founded on the U-Net framework, into which we incorporate an attention mechanism, a res-dil block, and deep supervision. The encoder of the U-Net is used to obtain the feature representations. The feature representation at every layer is an input to an attention procedure, where it is re-weighted along the channel and spatial dimensions so that the most useful representations are acquired; finally, the representations are expanded by a decoder to the label space to obtain the segmentation output. The encoder includes a convolution block and a res-dil block followed by a skip connection. To keep the dimensional information, we use a convolution with stride = 2 to replace the pooling operation, since different receptive fields are likely needed when segmenting different regions of an image. Every convolution is 3 x 3 and the number of channels increases from 32 to 512. Every level in the decoder starts with an up-sampling layer followed by a convolution that reduces the number of feature channels by a factor of 2. The up-sampled features are then combined with the features from the corresponding stage of the encoder part using concatenation. Following the concatenation, we use the res-dil block to expand the receptive field. We also use deep supervision for the segmentation decoder by integrating segmentation layers from different levels to form the final network output. We use residual blocks with dilated convolutions on both the encoder and the decoder parts to acquire features at different scales. The res-dil block can capture broader local information, helping to retain details during training. The lung segmentation network has the same architecture as the segmentation part of the dual-branch combination network, which is described above.
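The residual dilated block and the stride-2 down-sampling convolution described above can be sketched as follows. Channel counts follow the text (32 doubling toward 512); the block layout, the dilation rate, and the class names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ResDilBlock(nn.Module):
    """Two 3 x 3 dilated convolutions with a residual connection (a 'res dil' block)."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))           # residual sum widens the receptive field

class EncoderStage(nn.Module):
    """Stride-2 convolution in place of pooling, followed by a res-dil block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)
        self.block = ResDilBlock(out_ch)

    def forward(self, x):
        return self.block(self.down(x))

# Encoder channel progression as stated in the text: 32 -> 64 -> 128 -> 256 -> 512.
encoder = nn.Sequential(*[EncoderStage(i, o) for i, o in
                          [(1, 32), (32, 64), (64, 128), (128, 256), (256, 512)]])
features = encoder(torch.randn(1, 1, 256, 256))     # output shape (1, 512, 8, 8)
```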
To enhance classification performance and to better consolidate the information of the two branches, we introduce the lesion attention module. The input of the lesion attention module has two parts: yc from the classification branch and ys from the segmentation branch. The main purpose of this attention module is to make the classification branch concentrate more on the lesions. The configuration of this mechanism is given below.
input = [Vc yc + Bc, Vs ys + Bs]   (7)
att = f2(Vatt f1(input) + Batt)   (8)
Here [Vc yc + Bc, Vs ys + Bs] denotes concatenation at the channel level; Vc (of size cc x cint), Vs (of size cs x cint), and Vatt (of size 2cint x 1) represent the weights of 1 x 1 convolutional layers; Bc, Bs, and Batt are the associated biases; cc and cs correspond to the input channel sizes of the classification and segmentation parts, respectively; and cint denotes the output channel size of the associated convolutional layers. The operations f1(y) = max(y, 0) and f2(y) = 1/(1 + exp(-y)) correspond to the ReLU and sigmoid activations, respectively, so the attention map is normalized to [0, 1]. The final output of the lesion attention component is
yout = f3([att x yc, ys])   (9)
Here f3 consists of a sequence of components involving two 1 x 1 convolutional layers (of sizes (cc + cs) x cc and cc x cc), batch normalization, and a ReLU.
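Read literally, Eqs. (7)-(9) amount to the small module below: project both branches with 1 x 1 convolutions, build a single-channel attention map with ReLU and sigmoid, re-weight the classification features, and fuse with two further 1 x 1 convolutions. The class name and the assumption that both inputs share the same spatial size are illustrative choices, not statements of the patented implementation.

```python
import torch
import torch.nn as nn

class LesionAttention(nn.Module):
    def __init__(self, c_cls, c_seg, c_int=64):
        super().__init__()
        self.proj_cls = nn.Conv2d(c_cls, c_int, kernel_size=1)   # V_c y_c + B_c
        self.proj_seg = nn.Conv2d(c_seg, c_int, kernel_size=1)   # V_s y_s + B_s
        self.att = nn.Conv2d(2 * c_int, 1, kernel_size=1)        # V_att(.) + B_att
        self.fuse = nn.Sequential(                               # f3: two 1 x 1 convs, BN, ReLU
            nn.Conv2d(c_cls + c_seg, c_cls, kernel_size=1),
            nn.BatchNorm2d(c_cls), nn.ReLU(inplace=True),
            nn.Conv2d(c_cls, c_cls, kernel_size=1),
        )

    def forward(self, y_cls, y_seg):
        # Eq. (7): channel-wise concatenation of the two projected inputs (f1 = ReLU).
        z = torch.relu(torch.cat([self.proj_cls(y_cls), self.proj_seg(y_seg)], dim=1))
        # Eq. (8): attention map normalized to [0, 1] by the sigmoid f2.
        att = torch.sigmoid(self.att(z))
        # Eq. (9): re-weight the classification features and fuse with the segmentation features.
        return self.fuse(torch.cat([att * y_cls, y_seg], dim=1))

out = LesionAttention(256, 64)(torch.randn(1, 256, 32, 32), torch.randn(1, 64, 32, 32))
```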
The dual-branch combination network handles the classification of each slice. The slice outcomes must then be consolidated to accomplish individual-level classification and decide whether the patient is infected by the coronavirus. However, the slice counts differ between patients owing to the varying slice densities, fields of view, or lung volumes. Some investigations used max-pooling or average pooling with fully connected layers to remove the effects of such issues, but this may cause loss of information since the approach keeps only the maximum or average signal over all slices. To exploit the information from every slice, we introduce a slice probability mapping procedure based on resampling. Specifically, we sort the slice results in descending order and fit the curve with a bilinear interpolation strategy. We then sample 100 values from the curve at identical intervals and obtain sequential probabilities in descending order. A simple 3-layer fully connected network is further adopted for the classification of individuals, with the determined 100 values as input. The numbers of nodes in the two hidden layers are 256 and 128, respectively.
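A sketch of the slice probability mapping and the small fully connected head is given below: per-slice probabilities are sorted in descending order, the resulting curve is resampled to 100 values by interpolation, and a 3-layer network with hidden sizes 256 and 128 produces the individual-level prediction. The helper name and the use of NumPy's 1-D interpolation are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def slice_probability_mapping(slice_probs, n_points=100):
    """Map a variable number of per-slice probabilities to a fixed 100-value vector."""
    p = np.sort(np.asarray(slice_probs, dtype=np.float32))[::-1]   # descending order
    x_old = np.linspace(0.0, 1.0, num=len(p))
    x_new = np.linspace(0.0, 1.0, num=n_points)                    # identical intervals
    return np.interp(x_new, x_old, p).astype(np.float32)

patient_classifier = nn.Sequential(                                # 3-layer fully connected head
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 2),
)

probs = np.random.rand(37)                                         # e.g. a scan with 37 slices
features = torch.from_numpy(slice_probability_mapping(probs)).unsqueeze(0)
patient_logits = patient_classifier(features)                      # individual-level prediction
```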
The suggested dual-branch combination network is a slice-level end-to-end system composed of a classification part and a segmentation part. The loss function of the DCN includes two parts: the classification loss and the segmentation loss. Analogous to ResNet, we use the cross-entropy loss for the slice-level classification:
Lcls = -[g log ĝ + (1 - g) log(1 - ĝ)]   (10)
where g represents the original label of the specimen and ĝ denotes the predicted label.
The original U-Net uses binary cross-entropy (BCE) loss, which behaved inadequately on our data. Computed tomography images of subjects with COVID-19 are extremely imbalanced data for segmentation because the lesion area is usually much smaller than the normal area and background, and BCE loss is not suitable for this situation. To handle this challenge, the Dice loss is used; it is an objective function that optimizes the system directly on the validation metric. The Dice loss at the slice level is given by
Ldice = 1 - (2|A ∩ B| + sm) / (|A| + |B| + sm) = 1 - (2 Σj pj gj + sm) / (Σj pj + Σj gj + sm)   (11)
Here A denotes the ground truth, B denotes the predicted result, and pj and gj denote the jth elements of the predicted outcome and the ground truth, respectively. The smoothing specification sm is used to avoid division by zero and is fixed to 1 in our study.
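A direct implementation of Eq. (11) with the smoothing term fixed to 1 might look as follows; the tensor shapes are assumptions.

```python
import torch

def dice_loss(pred, target, smooth=1.0):
    """pred: sigmoid probabilities, target: binary mask, both shaped (B, 1, H, W)."""
    p = pred.reshape(pred.size(0), -1)
    g = target.reshape(target.size(0), -1)
    intersection = (p * g).sum(dim=1)
    dice = (2.0 * intersection + smooth) / (p.sum(dim=1) + g.sum(dim=1) + smooth)
    return 1.0 - dice.mean()
```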
Specimens from normal cases are essential to train the classification part. However, for the segmentation part, images of normal cases are negative specimens; they worsen the imbalance of specimens, which in turn affects the training of the segmentation part. To overcome this issue, we suggest a unique weighted Dice loss for the segmentation part:
Lseg = lw · Ldice   (12)
lw = 1 if label = 1; lw = 0 if label = 0   (13)
Here lw denotes the loss weight determined by the characterization of the specimens. The weights of the slices with annotated lesions are fixed to 1, and the weights of the slices without annotated lesions are fixed to 0, which means that only the slices with annotated lesions take part in the back-propagation of the segmentation part. The entire loss function is
Lo = Lseg + γ Lcls   (14)
Here γ denotes the trade-off specification between the two losses; we fix γ = 1 in this work empirically. In addition, Dice and BCE losses are used for the lung segmentation network and the FCN, respectively.
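Putting Eqs. (12)-(14) together gives the combined objective sketched below: a per-slice weighted Dice term that only back-propagates for slices with annotated lesions, plus a binary cross-entropy term for the slice-level classification, traded off by gamma = 1. Argument names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def dcn_loss(seg_prob, seg_target, cls_logit, cls_label, has_lesion, gamma=1.0, smooth=1.0):
    """seg_prob, seg_target: (B, 1, H, W); cls_logit, cls_label: (B,) floats;
    has_lesion: (B,) weights lw in {0, 1} from Eq. (13)."""
    p = seg_prob.reshape(seg_prob.size(0), -1)
    g = seg_target.reshape(seg_target.size(0), -1)
    dice = (2.0 * (p * g).sum(1) + smooth) / (p.sum(1) + g.sum(1) + smooth)
    # Eq. (12): slices without annotated lesions contribute no segmentation gradient.
    loss_seg = (has_lesion * (1.0 - dice)).sum() / has_lesion.sum().clamp(min=1.0)
    # Eq. (10): slice-level binary cross-entropy for the classification branch.
    loss_cls = F.binary_cross_entropy_with_logits(cls_logit, cls_label)
    # Eq. (14): total loss with the trade-off parameter gamma (= 1 in the text).
    return loss_seg + gamma * loss_cls
```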
A total of 2,371 slices from suspected cases were annotated manually, and every single slice was annotated by a radiologist. We asked three radiologists to annotate the same CT images from part of the suspected cases as a comparison between the segmentation performance of the dual-branch combination network and that of radiologists. For every slice of the suspected cases in the classification part, we regarded it as a positive case if lesions were marked, and the slice label was fixed to 1; otherwise, we regarded the slice as a negative example and fixed the label to 0. The slices from healthy controls were labeled as 0. Given the huge amount of data in the external dataset and the lack of annotation experts, we did not annotate the external data at the slice level.
All training and testing procedures were performed using PyTorch on a server with NVIDIA Tesla P100 GPUs. Every model was optimized using the Adam optimizer with an initial learning rate of 0.001 and a learning-rate decay of 0.95 per epoch. In the internal training phase, we used a five-fold cross-validation method. For the external validation phase, the structure was pre-trained using all examples of the internal data and tested on the external data. The outcomes are verified by testing the proposed model on CT images. Accuracy, specificity, and sensitivity were used to assess the classification performance. Accuracy depicts the performance on the overall dataset, whereas sensitivity and specificity address the classification outcomes for positive cases and normal cases, respectively. Sensitivity evaluates the percentage of positives that are accurately identified; specificity evaluates the percentage of negatives that are accurately identified.
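For reference, the stated optimizer settings (Adam, initial learning rate 0.001, 0.95 decay per epoch) can be reproduced as sketched below; the use of ExponentialLR as the decay mechanism and the placeholder model are assumptions.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR

model = torch.nn.Linear(100, 2)                    # placeholder for the DCN
optimizer = Adam(model.parameters(), lr=0.001)     # initial learning rate 0.001
scheduler = ExponentialLR(optimizer, gamma=0.95)   # multiply the learning rate by 0.95 per epoch

for epoch in range(5):
    # ... per-batch forward pass, loss, backward pass, and optimizer.step() go here ...
    scheduler.step()                               # apply the per-epoch decay
```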
Accuracy = (tp + tn) / (tp + tn + fp + fn)   (15)
Sensitivity = tp / (tp + fn)   (16)
Specificity = tn / (tn + fp)   (17)
where tp, fp, tn, and fn denote the totals of true-positive, false-positive, true-negative, and false-negative specimens, respectively. Further, to avoid the influence of data imbalance, the average accuracy is also presented. It is given by the equation below.
Average Accuracy = (Sensitivity + Specificity) / 2   (18)
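The four metrics in Eqs. (15)-(18) reduce to the small helper below; variable names are illustrative.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)             # Eq. (15)
    sensitivity = tp / (tp + fn)                           # Eq. (16): positives correctly identified
    specificity = tn / (tn + fp)                           # Eq. (17): negatives correctly identified
    average_accuracy = (sensitivity + specificity) / 2.0   # Eq. (18)
    return accuracy, sensitivity, specificity, average_accuracy
```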
The introduced network attained 98.3% accuracy, 99.5% sensitivity, and 99.8% specificity at the individual level, which substantially surpassed the performance of alternative methods. For a better evaluation of the proposed model, we contrasted it with various methods developed for the classification of the coronavirus. The overall accuracy, sensitivity, and specificity of the proposed technique and the other existing methods are listed in Table 2.
We noticed that the lesion attention module enhanced the precision of classification at the slice level, which underlines the effectiveness of the attention mechanism for COVID-19 classification. Further, the slice probability mapping improved the individual-level accuracy, particularly for the external data, which demonstrated that slice probability mapping improved the generalization of the network structure. The comparative analysis of the proposed method in terms of accuracy against the other prevailing methods is illustrated in Figure 5. From the analysis, it is evident that the proposed technique is better than the existing ones.
Brief description of Figures, Tables and Flow charts
These and other features, aspects, and advantages of the present disclosure will become evident after reading the following comprehensive description with reference to the accompanying illustrations, where like characters denote like parts throughout the drawings.
Figure 1 illustrates a flow of Dual branch combination network (DCN) of the proposed method;
Figure 2 portrays the flow of preprocessing of the corona virus data;
Figure 3(a) illustrates samples of COVID-19 from the preprocessed dataset D5, and Figure 3(b) portrays the lesions of (a) inside colored boxes;
Figure 4 illustrates the U-Net segmentation architecture;
Figure 5 illustrates the accuracy (%) of existing vs proposed methods, giving the comparative analysis of the proposed method in terms of accuracy over the other prevailing methods;
Figure 6 illustrates the sensitivity (%) of existing vs proposed methods, giving the comparative analysis of the proposed method in terms of sensitivity over the other prevailing methods; from the analysis, it is evident that the proposed technique is better than the existing ones;
Figure 7 illustrates the specificity (%) of existing vs proposed methods, giving the comparative analysis of the proposed method in terms of specificity over the other prevailing methods.
Table 1 lists the demographics of cases, with two divisions: (i) COVID-19 cases and (ii) healthy control cases.
Table 2 depicts the comparative analysis of the proposed and existing methods.
Benefits, other advantages, and problem-solving methods have all been discussed in relation to certain embodiments. However, any component(s) that may enable any benefit, advantage, or solution to arise or become more evident are not to be read as a critical, required, or essential element or component of any or all of the claims.

Claims (5)

EDITORIAL NOTE
2021107205
THERE IS ONE PAGE OF CLAIMS ONLY
We claim:
1. A method to detect COVID-19 using a novel dual-branch combination network, the method comprising a system of combined segmentation and classification that attains both the segmentation and the classification of COVID-19 lesions at the same time based on CT images.
2. The method as claimed in claim 1, wherein, to outline the shape of the lungs, U-Net-based segmentation is initially carried out; the suggested dual-branch combination network is then used to perform segmentation and classification at the slice level; and the performance of classification is enhanced by a lesion attention component which uses the intermediate outputs of both branches.
3. The method as claimed in claim 2, wherein a slice probability mapping methodology and a fully connected network are adopted to obtain individual-level outcomes from slice-level outcomes, adapting the strategy to computed tomography scans with varying numbers of slices, and wherein the introduced strategy is highly sensitive to the classification of images with very small lesions.
4. The method as claimed in claim 1, wherein detection of COVID-19 at an early stage is enabled, since lesions in the beginning phase are normally subtle and hard to distinguish, and wherein the proposed combined segmentation-classification network for the identification of COVID-19 outperforms commonly used classification methods on both internal and external validation datasets.
5. The method as claimed in claims 1 and 3, wherein the suggested lesion attention module enables the network to concentrate on infected regions and significantly enhances the detection of small lesions for the identification of COVID-19 at an early stage, and wherein the attention maps help identify lesion positions, thereby enhancing the interpretability of the classification.
Figures, Tables and Flow charts
Figure 1. Flow of the proposed method: Covid CT images -> Preprocessing -> U-Net based lung segmentation -> Dual Branch Combination Network (lesion segmentation and slice-level classification with Lesion Attention) -> Slice probability mapping -> Individual-level diagnosis -> Performance analysis.
Figure 2. Preprocessing steps: original CT image set D1 -> grayscaled D2 -> histogram-stretched D3 -> cropped D4 -> down-sampled D5.
Figure 3. (a) Sample of preprocessed COVID-19 images from dataset D5; (b) lesions of (a).
Figure 4. U-Net segmentation architecture.
Figure 5. Accuracy (%) of existing vs proposed methods.
Figure 6. Sensitivity (%) of existing vs proposed methods.
Figure 7. Specificity (%) of existing vs proposed methods.
Table 1: Dataset of CT images

                          No. of cases    No. of images    Range of age
COVID-19                  142             320              22-91
Healthy control cases     142             320              21-76

Table 2: Comparative analysis of proposed and existing methods

Methods                                 Accuracy (%)    Sensitivity (%)    Specificity (%)
VGG-16                                  96              92.64              97.27
ResNet-50                               92              91.6               95.33
DenseNet-121                            88              87.6               93.86
ResNet-101                              88              90.4               93.53
GoogleNet                               96.67           96.67              96.67
MS-Recurrent Neural Network (MS-RNN)    97.5            98.7               99.3
DCN (Proposed)                          98.3            99.5               99.8
AU2021107205A 2021-08-25 2021-08-25 Deep learning based system and method for automatic identification of covid-19 regions on ct-images Ceased AU2021107205A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021107205A AU2021107205A4 (en) 2021-08-25 2021-08-25 Deep learning based system and method for automatic identification of covid-19 regions on ct-images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021107205A AU2021107205A4 (en) 2021-08-25 2021-08-25 Deep learning based system and method for automatic identification of covid-19 regions on ct-images

Publications (1)

Publication Number Publication Date
AU2021107205A4 true AU2021107205A4 (en) 2022-01-06

Family

ID=78958468

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021107205A Ceased AU2021107205A4 (en) 2021-08-25 2021-08-25 Deep learning based system and method for automatic identification of covid-19 regions on ct-images

Country Status (1)

Country Link
AU (1) AU2021107205A4 (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036984A (en) * 2023-10-09 2023-11-10 武汉大学 Cascade U-shaped network cloud detection method and system integrating attention mechanisms
CN117036984B (en) * 2023-10-09 2024-01-09 武汉大学 Cascade U-shaped network cloud detection method and system integrating attention mechanisms

Similar Documents

Publication Publication Date Title
Hasan et al. DenseNet convolutional neural networks application for predicting COVID-19 using CT image
Endres et al. Development of a deep learning algorithm for periapical disease detection in dental radiographs
Thanathornwong et al. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks
US8019134B2 (en) Automatic image analysis and quantification for fluorescence in situ hybridization
Hryniewska et al. Checklist for responsible deep learning modeling of medical images based on COVID-19 detection studies
US8542899B2 (en) Automatic image analysis and quantification for fluorescence in situ hybridization
Nneji et al. Multi-channel based image processing scheme for pneumonia identification
Nijiati et al. Deep learning assistance for tuberculosis diagnosis with chest radiography in low-resource settings
Radha Analysis of COVID-19 and pneumonia detection in chest X-ray images using deep learning
Haq et al. Lung nodules localization and report analysis from computerized tomography (CT) scan using a novel machine learning approach
AU2021107205A4 (en) Deep learning based system and method for automatic identification of covid-19 regions on ct-images
Yamamoto et al. Effect of patient clinical variables in osteoporosis classification using hip x-rays in deep learning analysis
Nijiati et al. Artificial intelligence assisting the early detection of active pulmonary tuberculosis from chest X-rays: A population-based study
Ozsoz et al. Viral and bacterial pneumonia detection using artificial intelligence in the era of COVID-19
Seo et al. Deep focus approach for accurate bone age estimation from lateral cephalogram
Latif et al. Lung opacity pneumonia detection with improved residual networks
Shao et al. End-to-end deep-learning-based diagnosis of benign and malignant orbital tumors on computed tomography images
Ke et al. Biological gender estimation from panoramic dental x-ray images based on multiple feature fusion model
Chetoui et al. Deep efficient neural networks for explainable COVID-19 detection on CXR images
Duman et al. Second mesiobuccal canal segmentation with YOLOv5 architecture using cone beam computed tomography images
Pandiaraja et al. A Scrutiny on COVID-19 Detection using Convolutional Neural Network and Image Processing
US20220287647A1 (en) Disease classification by deep learning models
AU2021103578A4 (en) A Novel Method COVID -19 infection using Deep Learning Based System
Chen et al. Automatic and visualized grading of dental caries using deep learning on panoramic radiographs
Liu et al. Ratio of injured lung volume fraction in prognosis evaluation of acute PQ poisoning

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry