AU2021101377A4 - A system and method for automatic brain tumor classification in mri images - Google Patents
- Publication number
- AU2021101377A4
- Authority
- AU
- Australia
- Prior art keywords
- fcm
- image
- rbnn
- exponential
- clustering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
- A61B5/004—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
- A61B5/0042—Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/043—Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- A61B2576/02—Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
- A61B2576/026—Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part for the brain
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4058—Detecting, measuring or recording for evaluating the nervous system for evaluating the central nervous system
- A61B5/4064—Evaluating the brain
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4842—Monitoring progression or stage of a disease
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/726—Details of waveform analysis characterised by using transforms using Wavelet transforms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/478—Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
Abstract
The present disclosure relates to a clustering scheme, namely Gaussian Hybrid Fuzzy Clustering (GHFC), developed by hybridizing fuzzy c-means (FCM) clustering and sparse FCM together with a Gaussian function for the segmentation purpose. After segmenting the image, suitable features are extracted from the image and given to the Exponential cuckoo based Radial Basis Neural Network (Exponential cuckoo based RBNN) classifier. The features serve as the training information for the Exponential cuckoo based RBNN classifier, which finally detects the tumor class. Simulation of the proposed work is done using the BRATS and SIMBRATS databases, and the results are compared with several state-of-the-art techniques. Simulation results show that the proposed GHFC along with the RBNN classifier achieved improved accuracy and mean squared error (MSE) results with values of 0.8952 and 0.0074, respectively, for the BRATS dataset, and 0.8719 and 0.0036 for the SIMBRATS dataset.
Description
The present disclosure relates to a system and method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images.
Brain tumours are one of the major causes of mortality in both adults and children. They can be considered a deadly disease, as no definitive cure for brain tumours has been identified yet. Brain tumours are formed due to the abnormal growth of tissues or brain cells and can lead to death if left untreated. A brain tumour has less effect in the initial stage and can turn deadly in later stages, and hence needs to be diagnosed and treated at the initial stage. Accordingly, surveys show that patients diagnosed at the initial stage have a better survival rate than those diagnosed later. Owing to developments in diagnostic therapy and computer-automated tools, physicians prefer to diagnose the tumour through automated tools, as the manual processing of MRI is time-consuming and inefficient. A brain tumour can be categorized as high grade or low grade depending on its growth. MRI images are mostly preferred for the treatment as they provide images of different modalities. The MRI image provided for the analysis has four modalities: T1, T2, T1c and Flair.
Recently, many Computer-Aided Diagnosis (CAD) tools have been developed for automating the detection of brain tumours. The major phases in detecting a brain tumour are segmentation and classification. The main goal of the segmentation task is to label the tumour and the non-tumour regions. The segmentation task commonly uses clustering-based techniques, as they group the pixels belonging to a cluster by defining its centroid. The image segmentation scheme aims at splitting the brain image into various regions according to image texture, colour, intensity, etc. Also, the image segmentation scheme varies according to the type of the tumour. For example, while segmenting a Glioblastoma multiforme brain tumour, the dead region, active region, and edema are separated from each other. Some existing works carry out the image segmentation with the help of clustering techniques by selecting the proper pixels as the centroid.
After the segmentation process, the brain regions are classified by automated classifiers. The literature has introduced many classification techniques for automated brain tumour classification. The classifiers are trained with features extracted from the segmented image. The features depict the difference between the tumour and the non-tumour regions. The classifier is trained with these features and identifies the tumour class. The literature has made use of several types of optimization-based techniques for tumour classification. A wrapper-based genetic technique has been adopted for brain tumour classification. Some works used rough set theory, Deep Neural Networks (DNN), Deep Convolutional Neural Networks (DCNN) and random trees for automated brain tumour classification. For improved classification results, tumour segmentation can also be treated as a labelling problem. A projection-based classification technique was also employed for tumour classification.
A. Ortiz et al. presented a brain image segmentation approach based on self-organising maps (SOM) and entropy gradients. The clustering defines a SOM measure for generating the map between the input and the clustered output. The scheme showed improved segmentation performance, but the image retains acquisition noise. N. M. Portela et al. proposed a semi-supervised clustering approach for segmenting the tumour regions in MRI. The model does not require labelled information, and hence manual interpretation is minimal. For the clustering, they used the Gaussian Mixture Model (GMM) technique, and a Bayesian classifier is used along with the clustering for the classification. The technique is sensitive to the initial parameters. N. Nabizadeh and M. Kubat proposed a fully automated brain tumour segmentation scheme and used statistical features for evaluating the classification performance; since the use of Gabor features for training resulted in computational complexity, they relied on statistical features for the classification. G. Vishnuvarthanan et al. presented a hybrid approach combining SOM and fuzzy k-means for formulating the segmentation task. The scheme performs automated segmentation and hence avoids manual intervention; however, it failed to handle complexity, as it required more time for the classification.
Xiaomei Zhao et al. proposed a deep learning model integrating Fully Convolutional Neural Networks (FCNNs) and Conditional Random Field (CRF) techniques for the segmentation of brain images. The developed CRF-based recurrent neural network (CRF-RNN) performs the classification of the brain image; the model can achieve improved classification performance if post-processing of the images is done. Berkan Ural presented a computer-aided tumour detection scheme for brain tumour detection and classification, in which classification is done by a Probabilistic Neural Network (PNN) classifier and segmentation is based on the integration of K-means and FCM techniques. The scheme achieved high classification performance through the segmentation technique. Javeria Amin et al. presented a cancer classification approach using the Support Vector Machine (SVM) classifier; features such as shape, texture, and intensity are extracted, and the classifier is evaluated under different cross-fold validations, so its performance is assessed under different conditions. Taranjit Kaur et al. presented a classification approach based on the Fisher criterion and a parameter-free Bat optimization technique, using the SVM classifier for the classification. The scheme was simulated on a small database and could be implemented on a larger database to assess its actual performance. J. Seetha and S. Selvakumar Raja developed a method for automatic brain tumour classification using Convolutional Neural Networks (CNN), achieving high accuracy and low complexity; however, the accuracy was high only when a limited number of images were used. Srikanth Busa et al. developed a model for automatic brain tumour classification, which recognizes and states multiple tumours in brain MRI images with the help of a simple technique, aiming to overcome timing and outlier problems. The drawback of this method is that the intra- and inter-slice resolutions sometimes affect the accuracy of the segmentation.
Even though the above-discussed techniques provide improved classification performance, some of the challenges posed by automated brain tumour classification techniques are given as follows:
• Even though a large number of researchers have contributed to brain tumour segmentation and classification, many challenges still need to be overcome. The tumour regions in MRI images may differ in shape, appearance, and locality. Using mono-modality scans for the segmentation does not provide sufficient information and is less efficient. Recently, multimodality images have been obtained through the MRI scan. A multimodality image gives more detailed information about the tumour region than a unimodality scan and thus improves the classification process.
• Another issue in the brain image segmentation is the presence of the over-segmented zones, and these zones result in erroneous shapes.
• K-means clustering, one of the popularly known clustering schemes for image segmentation, faces an intricacy issue. The issue was avoided by using a Gibbs Random Fields version of k-means clustering, through which spatially adjacent zones were obtained.
Selecting the appropriate features for the classifier is necessary to achieve accurate classification results. The texture features aid in the fast identification of the brain tumour, and hence, they can be used for the classifier training. The tumour differs from one patient to another, and it is necessary to collect the patient-specific features for the classification.
In order to overcome the aforementioned drawbacks, there exists a need to develop a system and method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images.
The present disclosure seeks to provide a system and method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images in order to differentiate tumor and non-tumor regions.
In an embodiment, a system for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images is disclosed. The system comprises: a database constituting a plurality of magnetic resonance imaging (MRI) images of four different modalities of human brains; a pre-processing unit for pre-processing the MRI images of the human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting the equivalent RGB value of the image into the Lab colour space; a Gaussian Hybrid Fuzzy Clustering (GHFC) model for segmenting the binary form of the MRI images to differentiate the pixels representing the tumor region from the normal ones into three cluster groups, namely normal, edema, and core, by finding the optimal centroid through a series of iterations, wherein the centroids calculated through FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function; an extraction module for extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumor and the non-tumor segments; and a Radial Basis Neural Network (RBNN) classifier for classifying the extracted features to identify the tumor class, such as normal, malignant brain tumors and benign brain tumors, wherein through the classifier training, the segmented images fall under the categories of normal, malignant brain tumors and benign brain tumors.
In an embodiment, the RBNN classifier finds the clustered output by selecting the optimal centroids through the exponential cuckoo technique, wherein the exponential cuckoo technique has a similar behavior as the cuckoo search technique and the update process gets modified based on the EWMA concept.
In an embodiment, the exponential cuckoo search technique allows the optimal selection of cluster centers for the classification process, wherein the solution encoding for the exponential cuckoo search technique requires the randomly initialized features extracted from the segmented image, wherein from the randomly initialized feature points, the exponential cuckoo search technique finds an efficient feature point, which can act as the feature center.
In another embodiment, a method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images is disclosed. The method comprises: pre-processing MRI images of the human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting the equivalent RGB value of the image into the Lab colour space; segmenting the binary form of the MRI images to differentiate the pixels representing the tumor region from the normal ones into three cluster groups, namely normal, edema, and core, using a Gaussian Hybrid Fuzzy Clustering (GHFC) model by finding the optimal centroid through a series of iterations, wherein the centroids calculated through FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function; extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumor and the non-tumor segments; and classifying the extracted features to identify the tumor class, such as normal, malignant brain tumors and benign brain tumors, using an RBNN classifier, wherein through the classifier training, the segmented images fall under the categories of normal, malignant brain tumors and benign brain tumors.
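By way of illustration only, the following is a minimal sketch of how the four stages summarised above could be wired together; the callable names and signatures are assumptions introduced here for readability and do not appear in the disclosure, and each stage is sketched separately later in this description.

```python
from typing import Callable
import numpy as np

def classify_brain_mri(image: np.ndarray,
                       preprocess: Callable[[np.ndarray], np.ndarray],
                       segment: Callable[[np.ndarray], np.ndarray],
                       extract: Callable[[np.ndarray], np.ndarray],
                       classify: Callable[[np.ndarray], int]) -> int:
    """Pre-processing -> GHFC segmentation -> feature extraction -> RBNN classification."""
    prepped = preprocess(image)      # Otsu thresholding and Lab conversion
    regions = segment(prepped)       # GHFC: normal / edema / core cluster groups
    features = extract(regions)      # mean, entropy, PCA, wavelet, LDP features
    return classify(features)        # tumour class: normal / benign / malignant
```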
In an embodiment, to group the clusters into regions, it is necessary to identify cluster centroids.
In an embodiment, the steps for the mathematical formulation of the FCM technique comprise:
subjecting the MRI image to the segmentation process and thereafter feeding it to a fuzzy c-means (FCM) clustering scheme, wherein the segmentation scheme differentiates the tumor region from the non-tumor region; identifying core and edema tumor regions from the brain MRI image, after which the FCM approach performs the clustering by arranging and grouping the pixels of the same class, wherein initially, the pixels of the image are arranged as a fuzzy matrix to keep the clustering process simpler, wherein the pixels belonging to the image correspond to the tumor or non-tumor class, and hence, for clustering, it is necessary to declare the number of centroids; calculating the Euclidean distance measure, which depends on the distance between the pixel and the corresponding centroid, and calculating the cluster center through the fuzzy matrix; and recomputing the fuzzy matrix through the Euclidean distance measure, thereby finding the optimal centroid by executing a series of iterations, wherein the final centroid is selected by the FCM for the clustering process.
In an embodiment, sparse FCM regulates the clustering model of the FCM by introducing the model parameter and makes the model suitable for the hierarchical clustering.
In an embodiment, the exponential cuckoo search technique is developed by combining the Exponential Weighted Moving Average (EWMA) with the Cuckoo Search (CS) technique, wherein the exponential cuckoo search technique finds the optimal cluster centroids for grouping the tumor and non-tumor cells, and wherein, among the features sent to the RBNN classifier, the optimization technique finds one suitable feature to be the centroid for the classification.
In an embodiment, the exponential cuckoo search technique simulates until the maximum iteration and retrieves the best possible centroids for the classification, wherein the best centroid is calculated through the derived minimization fitness function, and at the end of the iterations, the optimal cluster centroids are identified and provided to the RBNN classifier for the classification purpose.
An objective of the present disclosure is to develop a system for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images in order to differentiate tumor and non-tumor regions.
Another object of the present disclosure is to perform fast identification of the brain tumor.
Another object of the present disclosure is to facilitate a Gaussian Hybrid Fuzzy Clustering (GHFC) which is formulated by integrating the FCM and the Sparse FCM to find the effective centroid for the classification.
Yet another object of the present invention is to deliver an expeditious and cost-effective method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumour classification in MRI images.
To further clarify advantages and features of the present disclosure, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a block diagram of a system for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images in accordance with an embodiment of the present disclosure;
Figure 2 illustrates a flow chart of a method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images in accordance with an embodiment of the present disclosure;
Figure 3 illustrates an architecture of the GHFC and RBNN based brain tumor classification approach in accordance with an embodiment of the present disclosure;
Figure 4 illustrates an architecture of the proposed GHFC technique for segmentation in accordance with an embodiment of the present disclosure;
Figure 5 illustrates experimental results of the proposed GHFC scheme in accordance with an embodiment of the present disclosure;
Figures 6A and 6B illustrate performance analysis based on segmentation accuracy on the BRATS database and the SIMBRATS database in accordance with an embodiment of the present disclosure;
Figures 7A, 7B, and 7C illustrate analysis based on segmentation techniques on the BRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure;
Figures 8A, 8B, and 8C illustrate analysis based on segmentation techniques on the SIMBRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure;
Figures 9A, 9B, and 9C illustrate analysis of segmentation techniques on the BRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure;
Figures 10A, 10B, and 10C illustrate analysis of segmentation techniques on the SIMBRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure;
Figures 11A and 11B illustrate comparative analysis on the BRATS database by varying training samples based on accuracy and MSE in accordance with an embodiment of the present disclosure;
Figures 12A and 12B illustrate comparative analysis on the BRATS database by varying k-fold based on accuracy and MSE in accordance with an embodiment of the present disclosure;
Figures 13A and 13B illustrate comparative analysis on the SIMBRATS database by varying training samples based on accuracy and MSE in accordance with an embodiment of the present disclosure;
Figure 14 illustrates comparative analysis on the SIMBRATS database by varying k-fold based on accuracy and MSE in accordance with an embodiment of the present disclosure; and
Figures 15, 16 and 17 illustrate Table 1, which depicts the comparative analysis based on various segmentation techniques; Table 2, which depicts the classification performance of the proposed and the existing methods; and Table 3, which shows the computational time of the proposed GHFC + Exponential cuckoo based RBNN compared with the existing methods, respectively.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components proceeded by "comprises...a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
Referring to Figure 1, a block diagram of a system for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images is illustrated in accordance with an embodiment of the present disclosure. The system 100 includes a database 102 constituting a plurality of magnetic resonance imaging (MRI) images of four different modalities of human brains.
In an embodiment, a pre-processing unit 104 is in connection with the database 102 for pre-processing the MRI images of the human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting the equivalent RGB value of the image into the Lab colour space.
In an embodiment, a Gaussian Hybrid Fuzzy Clustering (GHFC) model 106 is associated with the pre-processing unit 104 for segmenting the binary form of the MRI images to differentiate the pixels representing the tumor region from the normal ones into three cluster groups, namely normal, edema, and core, by finding the optimal centroid through a series of iterations. The centroids calculated through the FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function.
In an embodiment, an extraction module 108 is used for extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumor and the non-tumor segments.
In an embodiment, a Radial Basis Neural Network (RBNN) classifier 110 is connected to the extraction module 108 for classifying the extracted features to identify the tumor class such as normal, malignant brain tumors and benign brain tumors, wherein through the classifier training, the segmented images come under the category of normal, malignant brain tumors and benign brain tumors.
In an embodiment, the RBNN classifier 110 finds the clustered output by selecting the optimal centroids through the exponential cuckoo technique, wherein the exponential cuckoo technique has a similar behavior as the cuckoo search technique and the update process gets modified based on the EWMA concept.
In an embodiment, the exponential cuckoo search technique allows the optimal selection of cluster centers for the classification process, wherein the solution encoding for the exponential cuckoo search technique requires the randomly initialized features extracted from the segmented image, wherein from the randomly initialized feature points, the exponential cuckoo search technique finds an efficient feature point, which can act as the feature center.
Figure 2 illustrates a flow chart of a method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images in accordance with an embodiment of the present disclosure. At step 202, the method 200 includes pre-processing MRI images of the human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting the equivalent RGB value of the image into the Lab colour space.
At step 204, the method 200 includes segmenting the binary form of the MRI images to differentiate the pixels representing the tumor region from the normal ones into three cluster groups, namely normal, edema, and core, using a Gaussian Hybrid Fuzzy Clustering (GHFC) model 106 by finding the optimal centroid through a series of iterations. The centroids calculated through the FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function.
At step 206, the method 200 includes extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumor and the non-tumor segments.
At step 208, the method 200 includes classifying the extracted features to identify the tumor class such as normal, malignant brain tumors and benign brain tumors using a RBNN classifier 110, wherein through the classifier training, the segmented images come under the category of normal, malignant brain tumors and benign brain tumors.
In an embodiment, to group the clusters into regions, it is necessary to identify cluster centroids.
In an embodiment, the steps for the mathematical formulation of the FCM technique include a first step for subjecting the MRI image to the segmentation process and thereafter feeding it to a fuzzy c-means (FCM) clustering scheme. The segmentation scheme differentiates the tumor region from the non-tumor region. A second step for identifying core and edema tumor regions from the brain MRI image, after which the FCM approach performs the clustering by arranging and grouping the pixels of the same class. Initially, the pixels of the image are arranged as a fuzzy matrix to keep the clustering process simpler. The pixels belonging to the image correspond to the tumor or non-tumor class, and hence, for clustering, it is necessary to declare the number of centroids. A third step for calculating the Euclidean distance measure, which depends on the distance between the pixel and the corresponding centroid, and calculating the cluster center through the fuzzy matrix. A fourth step for recomputing the fuzzy matrix through the Euclidean distance measure, thereby finding the optimal centroid by executing a series of iterations, wherein the final centroid is selected by the FCM for the clustering process.
In an embodiment, sparse FCM regulates the clustering model 106 of the FCM by introducing the model parameter and makes the model suitable for the hierarchical clustering.
In an embodiment, the exponential cuckoo search technique is developed by combining the Exponential Weighted Moving Average (EWMA) with the Cuckoo Search (CS) technique, wherein the exponential cuckoo search technique finds the optimal cluster centroids for grouping the tumour and non-tumour cells, and wherein, among the features sent to the RBNN classifier 110, the optimization technique finds one suitable feature to be the centroid for the classification.
In an embodiment, the exponential cuckoo search technique simulates until the maximum iteration and retrieves the best possible centroids for the classification, wherein the best centroid is calculated through the derived minimization fitness function, and at the end of the iterations, the optimal cluster centroids are identified and provided to the RBNN classifier 110 for the classification purpose.
Figure 3 illustrates an architecture of the GHFC and RBNN based brain tumor classification approach in accordance with an embodiment of the present disclosure. The architecture of the brain tumor segmentation and classification approach using the proposed GHFC and RBNN is depicted in Figure 3. As depicted in the figure, the MRI image in the database 102 is subjected to various processes to differentiate between the tumor and the non-tumor regions. Initially, the images are subjected to pre-processing and then provided to the segmentation. In the segmentation process, a new clustering technique, namely GHFC, is developed by the hybridization of the Gaussian function, FCM, and sparse FCM. The clustering technique finds the centroid for the segmentation process. After the segmentation, essential features, such as mean, entropy, PCA, wavelet transform, and LDP, are extracted from the segmented images. The extracted features are provided to the Exponential cuckoo based RBNN classifier 110 for the classification process. The classifier yields the tumor class of each test image. Each process involved in the brain tumor classification is briefed as follows:
In preprocessing, the images present in the database 102 are initially subjected to pre-processing. The pre-processing makes the segmentation process more viable. The brain image database 102 constitutes MRI images of four different modalities. Consider the database $M$ with $P$ images, each of which is represented as $M = \{I_i\}; \; 1 \le i \le P$. The MRI image $I_i$ with the four modalities makes the diagnosis easier. The modalities of the image are represented as $(T1, T2, T1c, Flair)$. Pre-processing of the image $I_i$ depends on the following steps,
1. Thresholding using OTSU binarization
2. RGB to Lab conversion
In the first step, a threshold is fixed, and the image is converted into its binary form based on OTSU binarization. Then, the equivalent RGB value of the image is converted into the Lab colour space.
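As an illustration of the two pre-processing steps, the sketch below uses OpenCV's Otsu thresholding and colour-space conversion; OpenCV is an assumption made here for convenience, since the disclosure (implemented in MATLAB) does not name a library, and the function name is hypothetical.

```python
# Hedged sketch of the pre-processing stage: Otsu thresholding followed by an
# RGB/BGR -> Lab conversion.
import cv2
import numpy as np

def preprocess_mri(image_bgr: np.ndarray):
    """Return the Otsu-binarized image and the Lab-converted image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu picks the threshold automatically; the returned mask is binary (0/255).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Convert the colour rendering of the slice into the Lab colour space.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    return binary, lab
```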
In segmentation: after the pre-processing of the images, the next important step in brain tumour classification is the segmentation task. Segmentation is an important step in the classification, as it differentiates the pixels representing the tumour region from the normal ones. In this work, segmentation of the image $I_i$ is carried out using GHFC, which is newly developed by hybridizing the FCM and sparse FCM techniques. The proposed clustering scheme segments the image into three cluster groups, i.e., normal, edema, and core. In the proposed GHFC model 106, the centroids calculated through the FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function. Figure 4 depicts the architecture of the proposed GHFC scheme.
Consider that the image $I_i$ after the segmentation is formed into $k$ clusters, and each cluster requires a centroid for the grouping process. To group the pixels into $k$ regions, it is necessary to identify $k$ cluster centroids. The centroid update equation of the proposed GHFC clustering scheme is briefly discussed below:
In the mathematical formulation of the FCM technique, the FCM technique is briefly described for segmenting the image. FCM [27] is one of the traditional techniques involving fuzzy theory for clustering the image pixels. Segmentation done using the FCM approach gives improved results, as it defines a membership function based on fuzzy theory for formulating the centroid. Consider the image $I_i$ subjected to the segmentation process and fed to the FCM clustering scheme. The segmentation scheme tries to differentiate the tumour region from the non-tumour region. After the segmentation, the core and edema tumour regions are identified from the brain MRI image.
The FCM approach performs the clustering by arranging and grouping the pixels of the same class. Initially, the pixels of the image are arranged as a fuzzy matrix to keep the clustering process simpler. Consider that the image $I_i$ has $S$ pixels, represented as $I_i = \{U_1, U_2, \ldots, U_x, \ldots, U_S\}$. The pixels belonging to the image correspond to the tumour or non-tumour class, and hence, for clustering, it is necessary to declare the number of centroids. The centroids are represented as $z = \{z_1, z_2, \ldots, z_c, \ldots, z_k\}$, where $z_c$ refers to the centroid corresponding to the $c^{th}$ cluster. The clustering scheme finds the $k$ clusters through the FCM approach.
For clustering using FCM, it is necessary to define the fuzzy matrix $K$ with size $P \times k$, where $P$ and $k$ correspond to the total number of rows and columns of the fuzzy matrix $K$. The FCM objective derived with the fuzzy matrix $K$ is represented as follows,

Y = \sum_{c=1}^{k} \sum_{x=1}^{P} K_{xc}^{q} E_{xc}; \quad 1 \le q < \infty    (1)

where the variable $q$ corresponds to the fuzziness and thus extends up to $\infty$. The terms $K_{xc}$ and $E_{xc}$ indicate the fuzzy membership and the Euclidean distance measure, respectively. The Euclidean distance measure depends on the distance between the pixel and the corresponding centroid, and it is represented as,
E_{xc} = \left\| U_x - z_c \right\|    (2)
After calculating the Euclidean distance, the cluster centre is calculated through the fuzzy matrix, and the expression for computing the cluster centre is given below:

z_c = \frac{\sum_{x=1}^{P} K_{xc}^{q} \, U_x}{\sum_{x=1}^{P} K_{xc}^{q}}    (3)
Finally, the fuzzy matrix is recomputed through the Euclidean distance measure, and it is expressed as follows,

K_{xc} = \frac{1}{\sum_{d=1}^{k} \left( \frac{E_{xc}}{E_{xd}} \right)^{\frac{2}{q-1}}}    (4)
The FCM finds the optimal centroid by executing a series of iterations. The final centroid selected by the FCM for the clustering process is indicated as $C_{FCM}$.
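A minimal sketch of the FCM iteration of equations (1)-(4) is given below; it is a generic fuzzy c-means implementation with variable names chosen to mirror the symbols in the text, not the code of the disclosure.

```python
# Generic FCM: alternate membership updates (eq. 4) and centroid updates (eq. 3)
# until convergence. K is the fuzzy matrix, z the centroids, q the fuzziness.
import numpy as np

def fcm(pixels: np.ndarray, k: int, q: float = 2.0, iters: int = 100, eps: float = 1e-6):
    """pixels: (S, d) array of pixel feature vectors; returns (K, z)."""
    S = pixels.shape[0]
    rng = np.random.default_rng(0)
    K = rng.dirichlet(np.ones(k), size=S)          # membership matrix, rows sum to 1
    for _ in range(iters):
        Kq = K ** q
        # Equation (3): membership-weighted cluster centres.
        z = (Kq.T @ pixels) / Kq.sum(axis=0)[:, None]
        # Equation (2): distance between every pixel and every centroid.
        E = np.linalg.norm(pixels[:, None, :] - z[None, :, :], axis=2) + eps
        # Equation (4): membership update from the distance ratios.
        ratio = (E[:, :, None] / E[:, None, :]) ** (2.0 / (q - 1.0))
        K_new = 1.0 / ratio.sum(axis=2)
        if np.abs(K_new - K).max() < eps:
            K = K_new
            break
        K = K_new
    return K, z
```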
In the mathematical formulation of the sparse FCM technique: the FCM clustering framework cannot regularize the noise present in the image, and besides, it fails in the high-dimensional setting. To compensate for these effects, the image is further subjected to clustering using the sparse FCM framework. The mathematical formulation of the sparse FCM is given as follows.
The sparse FCM regulates the clustering model 106 of the FCM by introducing a model parameter and makes the model suitable for hierarchical clustering. The reformulated model of the sparse FCM is represented as follows,

\max \sum_{x=1}^{P} u_x\big(K_x, \phi(V)\big)    (5)

where $\phi(V)$ indicates the model parameter used for regulation. Further, the sparse FCM defines the following clustering framework,

\max_{r} \sum_{x=1}^{P} r \, u_x\big(K_x, \phi(V)\big)    (6)

where $r$ is interpreted as the pixel value based on the objective function. The expression of the sparse FCM used to regularize the clustering framework is expressed as follows,

\max_{V, k, r} F(K, k, r) = r \, BCSS(V)    (7)

where $BCSS(V)$ expresses the weighted between-cluster sum of squares. Finally, based on the above expression, the sparse FCM performs the clustering. The centroid obtained by the sparse FCM for the clustering process is represented as $C_{S\_FCM}$.
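The following is a heavily hedged sketch of the sparse-clustering idea behind equations (5)-(7): feature weights proportional to the between-cluster sum of squares (BCSS) down-weight noisy or uninformative dimensions. The exact sparse FCM formulation is not fully recoverable from the text, so this follows the generic sparse-clustering recipe under that assumption, with illustrative function names.

```python
# BCSS-based feature weighting, a common building block of sparse clustering.
import numpy as np

def bcss_per_feature(X: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Between-cluster sum of squares for each feature dimension of X (n, d)."""
    overall = X.mean(axis=0)
    bcss = np.zeros(X.shape[1])
    for c in np.unique(labels):
        members = X[labels == c]
        bcss += len(members) * (members.mean(axis=0) - overall) ** 2
    return bcss

def sparse_feature_weights(X: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Normalized non-negative feature weights proportional to the BCSS."""
    w = np.maximum(bcss_per_feature(X, labels), 0.0)
    return w / (np.linalg.norm(w) + 1e-12)
```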
In the proposed GHFC approach, the cluster centroids identified through both schemes are combined using a constant $\alpha$, which is obtained from a Gaussian function. Here, the Gaussian function normalizes the continuous binomial events, and thus, utilizing the Gaussian distribution function for finding the centroid point increases the chances of improved segmentation results. The expression for finding the optimal centroid for the segmentation using the proposed GHFC scheme is presented below:
C = \alpha \, C_{FCM} + \beta \, C_{S\_FCM}    (8)

where $C_{FCM}$ and $C_{S\_FCM}$ indicate the centroids identified by the FCM and sparse FCM schemes. The term $\alpha$ indicates the Gaussian function, and another constant $\beta$ is formulated from $\alpha$, i.e., $\beta = 1 - \alpha$. The expression for the Gaussian function $\alpha$ is given as follows,
\alpha = \frac{1}{P} \sum_{x=1}^{P} \frac{1}{\sqrt{2\pi\eta}} \, e^{-\frac{(U_x - \mu)^{2}}{2\eta}}    (9)
where $\mu$ and $\eta$ refer to the mean and the variance computed from the image segment. The term $P$ indicates the total number of pixels in the segmented image. Applying the Gaussian function for finding the aggregate centroid improves the accuracy of the segmentation. Further, using the Gaussian function to hybridize the outcomes from FCM and sparse FCM makes the result robust towards noise.
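A short sketch of the hybridization in equations (8)-(9) follows; the exact argument of the Gaussian in equation (9) is reconstructed here from the surrounding description (mean and variance of the segment's pixels), so it should be read as an illustration rather than the definitive formulation.

```python
# GHFC centroid blending: a Gaussian-derived constant alpha mixes the FCM and
# sparse-FCM centroids, with beta = 1 - alpha as stated in the text.
import numpy as np

def gaussian_alpha(segment_pixels: np.ndarray) -> float:
    """alpha as read from equation (9): mean Gaussian density of the segment's pixels."""
    mu = segment_pixels.mean()
    eta = segment_pixels.var() + 1e-12           # variance, guarded against zero
    density = np.exp(-((segment_pixels - mu) ** 2) / (2.0 * eta)) / np.sqrt(2.0 * np.pi * eta)
    return float(density.mean())

def ghfc_centroid(c_fcm: np.ndarray, c_sfcm: np.ndarray, segment_pixels: np.ndarray) -> np.ndarray:
    """Equation (8): C = alpha * C_FCM + beta * C_S_FCM."""
    alpha = gaussian_alpha(segment_pixels)
    beta = 1.0 - alpha
    return alpha * c_fcm + beta * c_sfcm
```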
In feature extraction, the feature vectors for the classifier are constructed. After the segmentation process, it is necessary to extract suitable features from the segmented images, since the training of the classifier depends on the features fed as the training samples. Here, classification-related features, such as mean, entropy, PCA, wavelet transform, and LDP, are extracted from the image. Each of the extracted features is described as follows,
The feature extraction concentrates on separating the tumour region from the non-tumour region. Hence, the features are extracted based on the pixels related to the tumour and the non-tumour segments.
Mean: It relates to the mean value of the tumour-related and the non-tumour-related pixels within the segmented image. The mean features related to the tumour and the non-tumour pixels in the segmented image are given as follows:

f_{1}^{tr} = \frac{1}{P} \sum_{x=1}^{P} I_{s_x}^{tr}    (10)

f_{2}^{ntr} = \frac{1}{P} \sum_{x=1}^{P} I_{s_x}^{ntr}    (11)

where $I_{s_x}$ indicates the $x^{th}$ pixel in the segmented image $I_s$.
Entropy: The next feature to be extracted from the segmented image is entropy, and it helps in identifying high information content. The information provided by the edge and corner pixels is captured by calculating the entropy measure, and it is expressed as,

f_{3}^{tr} = - \sum_{x=1}^{P} I_{s_x}^{tr} \log I_{s_x}^{tr}    (12)

f_{4}^{ntr} = - \sum_{x=1}^{P} I_{s_x}^{ntr} \log I_{s_x}^{ntr}    (13)
PCA: The next feature extracted from the segmented image is PCA. Applying PCA to the segmented image yields improved classification results, as it reduces the dimension of the features. PCA of the segmented image produces five features, and they are represented as follows,

PCA = PCA(I_s) = \{ f_5, f_6, f_7, f_8, f_9 \}    (14)
Wavelet transform: The segmented images are subjected to the wavelet transform, which yields four sub-bands and their entropy information. The entropies of the four sub-bands obtained through the wavelet transform are expressed as follows,

f_{10} = - \sum_{x=1}^{P} I_{h_x^{1}} \log I_{h_x^{1}}    (15)

f_{11} = - \sum_{x=1}^{P} I_{h_x^{2}} \log I_{h_x^{2}}    (16)

f_{12} = - \sum_{x=1}^{P} I_{h_x^{3}} \log I_{h_x^{3}}    (17)

f_{13} = - \sum_{x=1}^{P} I_{h_x^{4}} \log I_{h_x^{4}}    (18)

where $h_x^{1}$, $h_x^{2}$, $h_x^{3}$, and $h_x^{4}$ indicate the sub-bands corresponding to the $x^{th}$ pixel of the approximation, horizontal, vertical and diagonal components, respectively. The wavelet features extracted from the segmented image are represented as,

W = \{ f_{10}, f_{11}, f_{12}, f_{13} \}    (19)
LDP: Finally, the LDP feature is extracted from the segmented image, and the LDP feature is expressed as follows,

LDP(u_c, y_c) = \sum_{a=0}^{7} g(l_a - l_c) \cdot 2^{a}    (20)

where $l_a$ corresponds to the response of the Kirsch mask applied to the image for the extraction purpose. Finally, a total of 14 features are extracted from the image, and they are represented as follows,
F = \{ f_1^{tr}, f_2^{ntr}, f_3^{tr}, f_4^{ntr}, PCA, W, LDP \}    (21)

The features are concatenated to form the feature vector $F$, which thus has a size of $1 \times 14$. The extracted features are fed as the training information to the RBNN classifier 110 for the classification purpose.
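The sketch below assembles a 14-dimensional feature vector in the spirit of equation (21). PyWavelets and scikit-learn are assumptions made here for convenience, the PCA reading (five explained-variance components) is one plausible interpretation of equation (14), and the LDP entry is a placeholder rather than the Kirsch-mask code of equation (20).

```python
# Illustrative feature vector: mean and entropy of tumour / non-tumour pixels,
# 5 PCA components, 4 wavelet sub-band entropies, and an LDP placeholder.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def _entropy(values: np.ndarray) -> float:
    p = np.abs(values).ravel() + 1e-12
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def extract_features(segment: np.ndarray, tumour_mask: np.ndarray) -> np.ndarray:
    tum, non = segment[tumour_mask], segment[~tumour_mask]
    f_mean = [tum.mean(), non.mean()]                                   # eqs (10)-(11)
    f_entropy = [_entropy(tum), _entropy(non)]                          # eqs (12)-(13)
    pca5 = PCA(n_components=5).fit(segment).explained_variance_ratio_   # eq (14), one reading
    cA, (cH, cV, cD) = pywt.dwt2(segment.astype(float), 'haar')         # wavelet sub-bands
    f_wave = [_entropy(b) for b in (cA, cH, cV, cD)]                    # eqs (15)-(18)
    f_ldp = [_entropy(segment)]                                         # placeholder for eq (20)
    return np.concatenate([f_mean, f_entropy, pca5, f_wave, f_ldp])     # eq (21): 1 x 14 vector
```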
In classification: Exponential cuckoo based RBNN classifier 110
After extracting the suitable features from the segmented image, the extracted features are fed to the RBNN classifier 110 for the brain tumour classification. The classification task tries to identify the tumour class through the extracted features. Through the classifier training, the segmented images come under the category of normal, malignant brain tumour or benign brain tumour. The RBNN classifier 110 used in this work is a two-layer feed-forward network. The RBNN classifier 110 finds the output by defining $U$ kernels, and its output expression is indicated as,

clu(R(o)) = \sum_{u=1}^{U} \rho_u \, f_u(o)    (22)

where $R(o)$ refers to the input and $f_u(o)$ refers to the output response of the $u^{th}$ kernel. The term $\rho_u$ indicates the weight of the $u^{th}$ kernel. The kernel weights are computed as follows,
\rho = (O^{T} O)^{-1} O^{T} s    (23)

where $O$ indicates the suggestion matrix and $s$ refers to the output vector of the training set. The output of the RBNN classifier 110 depends on the radial basis function, and it is expressed as follows,
f_u(o) = \exp\left[ - \frac{\left\| R(o) - Y_u \right\|^{2}}{2 \sigma_u^{2}} \right]    (24)

where $\sigma_u$ indicates the width associated with the $u^{th}$ kernel. The term $Y_u$ indicates the position of the $u^{th}$ kernel centre, and it is calculated optimally through the exponential cuckoo search technique.
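A compact radial basis network following equations (22)-(24) is sketched below; the kernel centres are passed in directly, since their optimal selection by the exponential cuckoo search is covered separately, and the pseudo-inverse of equation (23) is solved by least squares for numerical stability. The class itself is an illustration, not the disclosure's code.

```python
# Minimal RBF network: Gaussian kernel activations over given centres, with
# output weights solved from the kernel ("suggestion") matrix.
import numpy as np

class RBNN:
    def __init__(self, centres: np.ndarray, width: float = 1.0):
        self.centres = centres            # Y_u: kernel centres, shape (U, d)
        self.width = width                # sigma_u: shared kernel width
        self.weights = None               # rho: output weights

    def _kernels(self, X: np.ndarray) -> np.ndarray:
        # Equation (24): f_u(o) = exp(-||R(o) - Y_u||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X: np.ndarray, y: np.ndarray) -> "RBNN":
        O = self._kernels(X)
        # Equation (23): rho = (O^T O)^-1 O^T s, computed via least squares.
        self.weights, *_ = np.linalg.lstsq(O, y, rcond=None)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Equation (22): weighted sum of kernel responses.
        return self._kernels(X) @ self.weights
```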
In technique description of Exponential cuckoo search technique
The RBNN classifier 110 finds the clustered output by selecting the optimal centroids through the exponential cuckoo technique. The exponential cuckoo technique was developed in [1] for aiding the RBNN classifier 110, and it makes the classification task easier. The exponential cuckoo technique behaves similarly to the cuckoo search technique [30], with the update process modified based on the EWMA concept [29]. The solution space and the fitness derived for the exponential cuckoo search technique are explained as follows:
i) Solution Encoding: The exponential cuckoo search technique allows the optimal selection of cluster centres for the classification process. The solution encoding for the exponential cuckoo search technique requires the randomly initialized features extracted from the segmented image. From the randomly initialized feature points, the exponential cuckoo search technique finds an efficient feature point, which can act as the feature centre. The solution encoding for the exponential cuckoo search technique for selecting the optimal centroid has a size of $m \times b$.
ii) Fitness Evaluation: The fitness function used for evaluating the optimal centroid is derived based on the minimum square distance. Here, the fitness is derived as a minimization function, and hence, it is expressed as follows (a small sketch of this fitness appears after the technique steps below),

Fitness = \min \sum_{j=1}^{b} \left\| f - clu_j(f) \right\|^{2}; \quad j \in \{1, 2, \ldots, b\}    (25)

where $f$ refers to the feature data, and $clu_j(f)$ refers to the $j^{th}$ cluster formed with the feature data. The exponential cuckoo search technique finds $b$ clusters for the brain tumour classification.
iii) Steps involved in Exponential cuckoo search technique
The technique description of the exponential cuckoo search technique is presented in this section. The exponential cuckoo search technique is developed by combining the Exponential Weighted Moving Average (EWMA) with the Cuckoo Search (CS) technique. The exponential cuckoo search technique finds the optimal cluster centroids for grouping the tumour and non-tumour cells. Among the features sent to the RBNN classifier 110, the optimization technique finds one suitable feature to be the centroid for the classification.
The procedure involved in the exponential cuckoo search technique is presented below:
1) In the initial step, the positions of the host nests are randomly initialized. Consider that the solution space constitutes $Q$ host nests, and the location of each host nest is given a random value. The randomly initialized positions of the host nests are represented as follows,

S = \{ S_1, S_2, \ldots, S_Q \}    (26)
2) In the next step, the solution generated by CS with the updated host nest is represented as follows,

S^{t+1} = S^{t} + d \, B_l    (27)

where $B_l$ refers to the standard normal distribution, with a standard deviation of unity and a mean of 0. The term $d$ indicates the step size for the optimization procedure.
3) The solution update presented in the above expression is modified by introducing the EWMA concept. The choice of cluster centroid for the classification is updated based on the modified expression of the exponential cuckoo search technique,

J^{t+1} = \chi \, S^{t+1} + (1 - \chi) \, J^{t}    (28)

where $\chi$ refers to the constant for EWMA, and it ranges from 0 to 1. The term $J^{t+1}$ indicates the updated feature at time $t+1$. The above equation is rewritten as,

S^{t+1} = \frac{J^{t+1} - (1 - \chi) \, J^{t}}{\chi}    (29)
4) After identifying $S$ based on the EWMA technique, the update from the CS technique is modified with the expression given in equation (29). Here, the $S$ computed based on EWMA replaces the $S$ in equation (27). The final updated position of the host nest based on the proposed exponential cuckoo search technique is given as follows,

S^{t+1} = \frac{J^{t+1} - (1 - \chi) \, J^{t}}{\chi} + d \, B_l    (30)
5) The exponential cuckoo search technique runs until the maximum iteration is reached and retrieves the best possible centroids for the classification. The best centroid is selected through the derived minimization fitness function, and at the end of the iterations, the optimal cluster centroids are identified and provided to the RBNN classifier 110 for the classification purpose.
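A hedged sketch of the two computational ingredients above, the minimum square-distance fitness of equation (25) and one position update combining equations (27)-(30), is given below; the exact forms of equations (25), (29) and (30) are reconstructed from the surrounding text, so the sketch is illustrative rather than definitive.

```python
# Fitness of a candidate centroid set and one exponential-cuckoo position update.
import numpy as np

def centroid_fitness(features: np.ndarray, centroids: np.ndarray) -> float:
    """Equation (25), as read here: total squared distance of each feature
    vector to its nearest candidate centroid (smaller is better)."""
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

def exponential_cuckoo_step(J_t: np.ndarray, J_t1: np.ndarray,
                            chi: float = 0.3, d: float = 0.01, rng=None) -> np.ndarray:
    """One update of the host-nest positions S^{t+1}."""
    rng = rng or np.random.default_rng()
    # Equation (29): EWMA-derived position, rearranging J^{t+1} = chi*S + (1-chi)*J^t.
    s_ewma = (J_t1 - (1.0 - chi) * J_t) / chi
    # Equations (27)/(30): add the random step d * B_l drawn from a standard normal.
    return s_ewma + d * rng.standard_normal(s_ewma.shape)
```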
The results are simulated by considering image samples from the BRATS and the SIMBRATS databases, and the results are evaluated based on segmentation accuracy, MSE, and accuracy.
The entire work for automating the brain tumour segmentation process is implemented in the MATLAB tool. The PC used for the implementation has a configuration of Windows 10 OS, 4 GB RAM, and an Intel i3 processor.
The images for simulating the proposed GHFC + Exponential cuckoo based RBNN classifier 110 are taken from the BRATS and the SIMBRATS databases. The description of these databases is given as follows:
BRATS database: From the standard BRATS database, this work has considered 65 images, out of which 51 are high-grade images. The brain images are collected from three different universities. The database contains images from four modalities, namely T1, T1c, T2, and Flair, and all four are considered for the analysis.
SimBRATS database: The SIMBRATS database has 50 MRI images, among which 25 are of high quality. The other 25 images present in the database have low quality. The simulation considers all the images for the analysis.
For the simulation environment, it is necessary to define well-qualified evaluation metrics, as the field involved here is medical diagnosis. The proposed segmentation and classification work is evaluated based on metrics such as Segmentation Accuracy (SA), MSE, and accuracy. The evaluation metrics are briefed as follows,
SA: SA indicates the accuracy to which the segmentation technique separates the tumour region from the nontumor region.
MSE: MSE refers to the deviation of the classifier output from the actual ground-truth response, and it is expressed as,

MSE = \frac{1}{r} \sum_{i=1}^{r} \left( \bar{n}_i - g_i \right)^{2}    (31)

where $\bar{n}_i$ indicates the average output response of the classifier, $g_i$ indicates the ground-truth response of the $i^{th}$ image, and $r$ is the number of test images.
Accuracy: The accuracy measure depicts the exactness of the classifier in identifying the brain tumour in the image, and it is represented as,

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}    (32)

where TP, TN, FP, and FN indicate the true positives, true negatives, false positives and false negatives achieved while classifying the images.
Precision: Precision is defined by the expected outcome of the stock market prediction, and it is represented as,
Precision = \frac{TP}{TP + FP}    (33)
Recall: Recall is defined as the proportion of actual tumour images that are correctly identified by the classifier, and it is represented as,
Recall = \frac{TP}{TP + FN}    (34)
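The following is a small Python sketch of equations (31)-(34), together with one common reading of SA as pixel-wise agreement between the segmented mask and the ground truth (the specification does not spell out an SA formula, so that function is an assumption); the function and argument names are illustrative.

```python
import numpy as np

def segmentation_accuracy(seg_mask, gt_mask):
    """SA as pixel-wise agreement between the segmented mask and the ground
    truth (one common reading; not a formula taken from this specification)."""
    seg = np.asarray(seg_mask).astype(bool)
    gt = np.asarray(gt_mask).astype(bool)
    return float(np.mean(seg == gt))

def classification_metrics(pred_labels, true_labels):
    """Accuracy, precision, recall, and MSE as in equations (31)-(34) for
    binary labels (1 = tumour, 0 = non-tumour)."""
    pred = np.asarray(pred_labels).astype(bool)
    true = np.asarray(true_labels).astype(bool)

    tp = int(np.sum(pred & true))      # true positives
    tn = int(np.sum(~pred & ~true))    # true negatives
    fp = int(np.sum(pred & ~true))     # false positives
    fn = int(np.sum(~pred & true))     # false negatives

    accuracy = (tp + tn) / (tp + tn + fp + fn)            # equation (32)
    precision = tp / (tp + fp) if (tp + fp) else 0.0      # equation (33)
    recall = tp / (tp + fn) if (tp + fn) else 0.0         # equation (34)
    # Equation (31), applied here to the label responses themselves.
    mse = float(np.mean((pred.astype(float) - true.astype(float)) ** 2))

    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "MSE": mse}
```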
The simulation results of the proposed GHFC with the Exponential cuckoo based RBNN classifier 110 are compared with several existing works, such as the Levenberg-Marquardt neural network (LMNN), the Exponential cuckoo based RBNN classifier, RBNN + k-means clustering, RBNN + cuckoo, Genetic Algorithm-Support Vector Machine (GA-SVM), and the Probabilistic Neural Network (PNN).
LMNN: The LMNN model finds the classified output through a second-order approach, and it makes use of the Hessian matrix for the classification.
Exponential cuckoo based RBNN classifier: This technique is defined in previous work for the classification of the brain tumour. The exponential cuckoo search technique is used to identify the optimal cluster centroids for clustering purposes.
RBNN + k-means clustering: Here, the k-means clustering technique is adopted for the segmentation and the classification is done through the RBNN classifier 110.
RBNN + cuckoo: Here, CS technique is adopted for selecting the optimal centroid point for segmentation. After clustering, the classification is done through the RBNN classifier 110.
GA-SVM: Here, the GA is used for the selection of the most informative input features, and the classification is done using the SVM classifier.
PNN: Here, the features are extracted, and the classification is done by using the PNN classifier.
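Several of the compared models, including the proposed one, rely on an RBNN classifier 110 fed with optimised cluster centroids. As a reading aid, the sketch below shows a generic radial basis function network in Python whose hidden units are Gaussians centred on those centroids; the width sigma and the pseudo-inverse training of the output layer are common choices assumed here, not details taken from this specification.

```python
import numpy as np

class SimpleRBNN:
    """Generic radial basis neural network: Gaussian hidden units centred on
    the cluster centroids returned by the search technique, with a
    least-squares output layer. Sketch only; sigma and the pseudo-inverse
    training rule are assumed, not taken from the patent."""

    def __init__(self, centroids, sigma=1.0):
        self.centroids = np.asarray(centroids, dtype=float)
        self.sigma = sigma
        self.weights = None

    def _hidden(self, X):
        # Gaussian activation of every hidden unit for every feature vector.
        d2 = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y_onehot):
        # Output weights by least squares (Moore-Penrose pseudo-inverse).
        H = self._hidden(np.asarray(X, dtype=float))
        self.weights = np.linalg.pinv(H) @ np.asarray(y_onehot, dtype=float)
        return self

    def predict(self, X):
        H = self._hidden(np.asarray(X, dtype=float))
        # Class index, e.g. normal / benign / malignant in this application.
        return np.argmax(H @ self.weights, axis=1)
```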
Figure 4 illustrates an architecture of the proposed GHFC technique for segmentation in accordance with an embodiment of the present disclosure.
Figure 5 illustrates experimental results of the proposed GHFC scheme in accordance with an embodiment of the present disclosure. The experimental results achieved by the proposed GHFC segmentation scheme are presented in figure 5. Here, the results are depicted for two input images taken from each of the experimentation databases. Figure 5.i presents the results of the proposed GHFC scheme experimented using the BRATS database. The original image, ground truth, LDP feature, and the segmented output from GHFC are depicted in figure 5.i.a, figure 5.i.b, figure 5.i.c, and figure 5.i.d, respectively, for the sample input images 1 and 2. Figure 5.ii presents the experimental results of the proposed GHFC scheme for the sample input images 1 and 2 obtained from the SIMBRATS database. Figure 5.ii.a, figure 5.ii.b, figure 5.ii.c, and figure 5.ii.d present the original image, ground truth, LDP feature, and the segmented output from GHFC.
Figure 5 illustrates experimental results of proposed GHFC scheme for i) BRATS database with a) Input image, b) Ground truth, c) LDP features d) Segmented image and ii) SIMBRATS database with a) Input image, b) Ground truth, c) LDP features d) Segmented image.
Figures 6A and 6B illustrate the performance analysis based on segmentation accuracy on the BRATS database and the SIMBRATS database in accordance with an embodiment of the present disclosure. This subsection presents the performance analysis based on the SA metric, and the graph is depicted in Figure 6. The analysis based on the SA is done by varying the number of patients in each of the databases. From the BRATS database, a total of 30 patients is considered, while the SIMBRATS database has 50 patients. The images constitute four different modalities, namely Flair, T1, T1c, and T2. Figure 6.a presents the SA analysis for the BRATS database. For patient 1, the proposed GHFC model 106 achieved SA values of 0.9581, 0.9242, 0.9597, and 0.9804 for the Flair, T1, T1c, and T2 modalities, respectively. Similarly, for patient 30, the GHFC technique achieved SA values of 0.9597, 0.9492, 0.9969, and 0.9965 for the Flair, T1, T1c, and T2 modalities, respectively. Figure 6.b presents the SA analysis for the SIMBRATS database. For patient 1 in the SIMBRATS database, the proposed GHFC achieved SA values of 0.9947, 0.9345, 0.9985, and 0.9803 for the Flair, T1, T1c, and T2 modalities, respectively. For patient 32, the proposed scheme achieved SA values of 0.9918, 0.9398, 0.9807, and 0.9796 for the Flair, T1, T1c, and T2 modalities, respectively.
Figures 7A, 7B, and 7C illustrate the analysis based on segmentation techniques on the BRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure. This is a comparative analysis of segmentation performance. Figure 7 presents the analysis of various segmentation techniques, such as k-means, GLCM, and FCM, against the proposed segmentation technique GHFC, by considering the image resolution and image type from the BRATS and SIMBRATS databases and evaluating them based on accuracy, precision, and recall, respectively.
Figure 7 shows the analysis of the various segmentation techniques on the BRATS dataset. This evaluation is carried out using metrics such as accuracy, precision, and recall. Figure 7 a) shows the accuracy of the proposed segmentation technique GHFC compared with the existing segmentation techniques, such as k-means, GLCM, and FCM. When the image resolution is 120, the accuracy is 0.8987, 0.8777, and 0.894 for the existing segmentation techniques k-means, GLCM, and FCM, respectively. For the same image resolution, the proposed segmentation technique GHFC has a maximum accuracy of 0.8998. The precision of the various segmentation methods is shown in Figure 7 b). The precision of the segmentation techniques k-means, GLCM, FCM, and the proposed GHFC is 0.8889, 0.915, 0.896, and 0.927, respectively, for image resolution 160. Figure 7 c) depicts the recall of the various segmentation techniques. The proposed method has a maximum recall of 0.9251 when the image resolution is 160. For the same resolution, the recall of the existing segmentation techniques k-means, GLCM, and FCM is 0.8833, 0.9225, and 0.9132, respectively. Hence, this analysis clearly shows that the proposed segmentation technique is better than the existing segmentation techniques, such as k-means, GLCM, and FCM.
Figures 8A, 8B, and 8C illustrate the analysis based on segmentation techniques on the SIMBRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure. The analysis based on the various segmentation techniques using the SIMBRATS dataset is given in Figure 8. Figure 8 a) gives the accuracy of the proposed segmentation along with the existing techniques, such as k-means, GLCM, and FCM. For image resolution 192, the accuracy is 0.8757, 0.8728, 0.8747, and 0.882 for k-means, GLCM, FCM, and the proposed GHFC, respectively. Figure 8 b) shows the segmentation analysis based on precision. When the image resolution is 192, the precision of the existing segmentation techniques is 0.8843, 0.8895, and 0.903, respectively. For the same resolution, the proposed method has a precision of 0.9078. The recall of the various segmentation techniques is discussed in Figure 8 c). The recall of k-means, GLCM, FCM, and the proposed GHFC is 0.8764, 0.8751, 0.8794, and 0.8866, respectively, for image resolution 192.
Figures 9A, 9B, and 9C illustrate the analysis of segmentation techniques on the BRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure. Figure 9 represents the analysis of various segmentation techniques based on image types, namely Flair, T1, T1c, and T2, using the BRATS database. The accuracy of the various segmentation techniques on the image types is discussed in Figure 9 a). For the T1 image type, the accuracy of the segmentation methods k-means, GLCM, FCM, and the proposed GHFC is 0.908, 0.9126, 0.9086, and 0.9269, respectively. Figure 9 b) shows the precision of the image types based on the various segmentation techniques. The precision for the T1 image type is 0.8351, 0.8448, 0.8735, and 0.8963 for k-means, GLCM, FCM, and the proposed GHFC, respectively. Figure 9 c) gives the recall based on the various segmentation techniques. For the T2 image type, the recall of the proposed GHFC is 0.87421, while the existing methods k-means, GLCM, and FCM have recall values of 0.8337, 0.8361, and 0.8551, respectively.
Figures 10A, 10B, and 10C illustrate the analysis of segmentation techniques on the SIMBRATS database in terms of accuracy, precision, and recall in accordance with an embodiment of the present disclosure. Figure 10 discusses the accuracy, precision, and recall of the various segmentation techniques based on the image types. Figure 10 a) describes the accuracy of the proposed and the existing segmentation techniques. For the Flair image type, the accuracy of the existing segmentation techniques k-means, GLCM, and FCM is 0.8556, 0.8516, and 0.8574, respectively, while the proposed GHFC has the maximum accuracy of 0.8622 for the same image type. Figure 10 b) shows the precision of the proposed and the existing segmentation techniques. The precision for the T1 image type is 0.9335, 0.9187, 0.8923, and 0.9378 for k-means, GLCM, FCM, and the proposed GHFC, respectively. The recall of the various segmentation techniques based on the image types is given in Figure 10 c). The recall of the existing segmentation techniques k-means, GLCM, and FCM is 0.8871, 0.8641, and 0.8597, respectively, for the T1c image type. For the same image type, the proposed GHFC has a maximum recall of 0.9379.
Figures 11A and 11B illustrate the comparative analysis on the BRATS database by varying the training samples based on accuracy and MSE in accordance with an embodiment of the present disclosure. In this analysis, the performance of the comparative techniques is measured by varying the training samples from the BRATS database, as shown in figure 11. As suggested in figure 11.a, the analysis based on accuracy depicts that the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN, achieved lower accuracy than the proposed technique, with values of 0.834, 0.7349, 0.8284, 0.8851, 0.8727, and 0.8645, respectively, for the training percentage of 90. Further, the details reveal how the proposed GHFC + exponential cuckoo based RBNN outclassed the existing models with a high accuracy value of 0.8878 for the same training percentage. Figure 11.b presents the performance of the comparative models on the BRATS database with varying training data based on the MSE metric. From the analysis, it is evident that the LMNN, RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN techniques have high MSE values of 0.0298, 0.0617, 0.0457, 0.0298, 0.0207, and 0.0314, respectively, for the training data of 90%. For the same training data, the proposed GHFC + exponential cuckoo based RBNN classifier 110 has a low MSE value of 0.0074. The results depict that the proposed GHFC + exponential cuckoo based RBNN classifier 110 has improved performance while evaluating the sample images from the BRATS database.
Figures 12A and 12B illustrate the comparative analysis on the BRATS database by varying the k-fold based on accuracy and MSE in accordance with an embodiment of the present disclosure. In this analysis, the performance of the comparative techniques is measured by varying the k-fold on the BRATS database, as given in figure 12. As suggested in figure 12.a, the analysis based on accuracy depicts that the existing methods, such as RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN, achieved lower accuracy than the proposed technique, with values of 0.5384, 0.5381, 0.6822, 0.6964, and 0.7444, respectively, for k-fold = 10. Further, the details reveal how the proposed GHFC + exponential cuckoo based RBNN outclassed the existing models with a high accuracy value of 0.8246 for k-fold = 10. Figure 12.b presents the performance of the comparative models on the BRATS database with varying k-fold based on the MSE metric. From the analysis, it is evident that RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN have high MSE values of 0.0841, 0.0866, 0.0578, 0.0619, and 0.0386, respectively, for k-fold = 10. For the same k-fold, the proposed GHFC + exponential cuckoo based RBNN classifier 110 has a low MSE value of 0.02. Thus, the performance of the proposed classifier is improved for the various k-folds considered, where the increase in the k-fold has a great impact on the performance of the proposed classifier.
Figures 13A and 13B illustrate the comparative analysis on the SIMBRATS database by varying the training samples based on accuracy and MSE in accordance with an embodiment of the present disclosure. In this analysis, the performance of the comparative techniques is measured by varying the training samples from the SIMBRATS database, as given in figure 13. As suggested in figure 13.a, the analysis based on accuracy depicts that the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN, achieved lower accuracy than the proposed technique, with values of 0.7507, 0.5577, 0.5875, 0.7667, 0.8041, and 0.7696, respectively, for 70% training data. Further, the details reveal that the proposed GHFC + exponential cuckoo based RBNN outclassed the existing models with a high accuracy value of 0.8524 for the same training data. Figure 13.b presents the performance of the comparative models on the SIMBRATS database with varying training data based on the MSE metric. From the analysis, it is evident that the LMNN, RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN techniques have high MSE values of 0.0626, 0.069, 0.0511, 0.051, 0.0439, and 0.0362, respectively, for the training data of 50%. For the same training data, the proposed GHFC + exponential cuckoo based RBNN classifier 110 has a low MSE value of 0.0186.
Figures 14A and 14B illustrate the comparative analysis on the SIMBRATS database by varying the k-fold based on accuracy and MSE in accordance with an embodiment of the present disclosure. In this analysis, the performance of the comparative techniques is measured by varying the k-fold on the SIMBRATS database, as shown in figure 14. As suggested in figure 14.a, the analysis based on accuracy depicts that the existing methods, such as RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN, achieved lower accuracy than the proposed technique, with values of 0.5901, 0.6428, 0.7069, 0.7158, and 0.7617, respectively, for k-fold = 6. The proposed GHFC + exponential cuckoo based RBNN outclassed the existing models with a high accuracy value of 0.81 for the same k-fold. Figure 14.b presents the performance of the comparative models on the SIMBRATS database with varying k-fold based on the MSE metric. From the analysis, it is evident that RBNN + k-means, RBNN + cuckoo, exponential cuckoo based RBNN, GA-SVM, and PNN have MSE values of 0.0334, 0.0342, 0.0307, 0.0216, and 0.026, respectively, for k-fold = 6. For the same k-fold, the proposed GHFC + exponential cuckoo based RBNN classifier 110 has an MSE value of 0.0284.
The comparative discussion based on the segmentation techniques and classification techniques of the proposed GHFC against the various existing methods is presented in this section. Table 1 depicts the comparative analysis based on the various segmentation techniques. From the table, it is seen that the proposed GHFC has higher accuracy, precision, and recall than the existing segmentation techniques, such as k-means, GLCM, and FCM, for both the BRATS and SIMBRATS datasets. The classification performance of the proposed and the existing methods is given in Table 2. As depicted in Table 2, the classification performance of the proposed GHFC is considerably high in comparison to the other state-of-the-art techniques. For the BRATS database, the proposed GHFC with the exponential cuckoo based RBNN technique achieved better classification performance with values of 0.8952 and 0.0074 for the accuracy and the MSE, respectively. The accuracy of the proposed GHFC + Exponential cuckoo based RBNN is 6.836%, 17.906%, 7.46%, 1.128%, 2.51%, and 6.78% better than the accuracy of the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, Exponential cuckoo based RBNN, GA-SVM, and PNN, respectively, for the BRATS database. For MSE, the proposed GHFC + Exponential cuckoo based RBNN is 75.16%, 88.01%, 83.81%, 75.25%, 64.25%, and 76.51% better than the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, Exponential cuckoo based RBNN, GA-SVM, and PNN, respectively, for the BRATS database.
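For clarity, the quoted relative improvements appear to be computed with respect to the proposed model's value; for example, for the LMNN accuracy on the BRATS database,

\[
\frac{A_{\text{proposed}} - A_{\text{LMNN}}}{A_{\text{proposed}}} \times 100
= \frac{0.8952 - 0.834}{0.8952} \times 100 \approx 6.836\%,
\]

which reproduces the 6.836% figure quoted above.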
For the SIMBRATS dataset, the proposed GHFC + Exponential cuckoo based RBNN technique achieved values of 0.8719 and 0.0036 for accuracy and MSE, respectively. For the SIMBRATS database, the accuracy of the proposed GHFC + Exponential cuckoo based RBNN is 11.99%, 31.84%, 11.23%, 5.86%, 4.08%, and 3.32% better than the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, Exponential cuckoo based RBNN, GA-SVM, and PNN, respectively. Similarly, the MSE of the proposed GHFC + Exponential cuckoo based RBNN is 94.25%, 94.78%, 92.95%, 92.95%, 83.02%, and 83.02% better than the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, Exponential cuckoo based RBNN, GA-SVM, and PNN, respectively.
Table 3 shows the computational time of the proposed GHFC + Exponential cuckoo based RBNN against the existing methods, such as LMNN, RBNN + k-means, RBNN + cuckoo, Exponential cuckoo based RBNN, GA-SVM, and PNN, in which the proposed method has the minimum computational time of 6.09 sec.
In an embodiment, it is concluded that this work presented a brain tumour segmentation and classification approach by developing a novel clustering framework. MRI images of four different modalities can be considered for the analysis, and hence, the proposed work is suitable for a practical framework. Initially, the images are subjected to pre-processing and then to segmentation. Here, the segmentation is carried out by developing a hybrid clustering framework, namely GHFC. The proposed GHFC is formulated by integrating the FCM and the Sparse FCM to find the effective centroid for the classification. The GHFC yields the segmented images and provides them for further processing. From the segmented images, some well-known classification-related features are extracted for the classification. After that, based on the extracted features, the classification is performed through the Exponential cuckoo based RBNN classifier 110. The simulation of the proposed approach is done with images from the BRATS and SIMBRATS databases and is evaluated based on MSE, accuracy, and SA, respectively. The results reveal that the proposed GHFC along with the exponential cuckoo based RBNN achieved maximum accuracy and minimum MSE, with values of 0.8952 and 0.0074, respectively, for the BRATS dataset and 0.8719 and 0.0036 for the SIMBRATS dataset. In the future, to improve the classification results, the proposed system will be further improved by using deep learning strategies.
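As a reading aid only, the following is a minimal Python sketch of the Gaussian-weighted hybridisation of FCM and Sparse FCM centroids summarised above, assuming the two candidate centroid sets have already been computed. The argument passed to the Gaussian and the width sigma are assumptions, since the exact formula is not restated in this section.

```python
import numpy as np

def ghfc_hybrid_centroids(c_fcm, c_sparse, sigma=1.0):
    """Hybridise FCM and Sparse FCM centroids with a Gaussian-valued constant.

    Sketch only: the Gaussian is assumed to act on the distance between the
    paired centroids, and sigma is an assumed width parameter.
    """
    c_fcm = np.asarray(c_fcm, dtype=float)        # centroids from FCM
    c_sparse = np.asarray(c_sparse, dtype=float)  # centroids from Sparse FCM

    # Gaussian of the inter-centroid distance: close agreement -> weight near 1.
    dist2 = np.sum((c_fcm - c_sparse) ** 2, axis=1, keepdims=True)
    g = np.exp(-dist2 / (2.0 * sigma ** 2))

    # Convex combination of the two candidate centroids per cluster.
    return g * c_fcm + (1.0 - g) * c_sparse
```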
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Claims (9)
1. A system for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images, the system comprises:
a database constitutes a plurality of magnetic resonance imaging (MRI) images of four different modalities of human brains; a pre-processing unit for pre-processing MRI images of human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting equivalent RGB value of the image into Lab conversion; a Gaussian Hybrid Fuzzy Clustering (GHFC) model for segmenting the binary form of MRI images to differentiate the pixels representing the tumour region from the normal ones into three cluster groups such as normal, edema, and core by finding optimal centroid by executing series of iterations, wherein the centroids calculated through the FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function; an extraction module for extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumour and the non-tumour segments; and a RBNN classifier for classifying the extracted features to identify the tumour class such as normal, malignant brain tumours and benign brain tumours, wherein through the classifier training, the segmented images come under the category of normal, malignant brain tumours and benign brain tumours.
2. The system as claimed in claim 1, wherein the RBNN classifier finds the clustered output by selecting the optimal centroids through the exponential cuckoo technique, wherein the exponential cuckoo technique has a similar behavior as the cuckoo search technique and the update process gets modified based on the EWMA concept.
3. The system as claimed in claim 1, wherein the exponential cuckoo search technique allows the optimal selection of cluster centers for the classification process, wherein the solution encoding for the exponential cuckoo search technique requires the randomly initialized features extracted from the segmented image, wherein from the randomly initialized feature points, the exponential cuckoo search technique finds an efficient feature point, which can act as the feature center.
4. A method for Gaussian Hybrid Fuzzy Clustering and Radial Basis Neural Network for automatic brain tumor classification in MRI images, the method comprises:
pre-processing MRI images of human brain by thresholding the image to convert the MRI images into binary form based on OTSU binarization and converting equivalent RGB value of the image into Lab conversion; segmenting the binary form of MRI images to differentiate the pixels representing the tumour region from the normal ones into three cluster groups such as normal, edema, and core using a Gaussian Hybrid Fuzzy Clustering (GHFC) model by finding optimal centroid by executing series of iterations, wherein the centroids calculated through the FCM and Sparse FCM are hybridized through a constant that is determined based on the Gaussian function; extracting features such as mean, entropy, PCA, wavelet transform, and LDP from the images based on the pixels related to the tumour and the non-tumour segments; and classifying the extracted features to identify the tumour class such as normal, malignant brain tumours and benign brain tumours using a Radial Basis Neural Network (RBNN) classifier, wherein through the classifier training, the segmented images come under the category of normal, malignant brain tumours and benign brain tumours.
5. The method as claimed in claim 4, wherein to group the clusters into regions, it is necessary to identify cluster centroids.
6. The method as claimed in claim 4, wherein the steps for the mathematical formulation of the FCM technique comprise:
subjecting the MRI image to the segmentation process, and thereafter feeding a fuzzy c-means (FCM) clustering scheme, wherein the segmentation scheme differentiates the tumor region from the non-tumor region; identifying core and edema tumor regions from the brain MRI image and thereafter the FCM approach performs the clustering by arranging and grouping the pixels to the same class, wherein initially, the pixels of the image are arranged as the fuzzy matrix, to keep the clustering process simpler, wherein the pixels belonging to the image correspond to the tumor or no tumor class, and hence, for clustering, it is necessary to declare the number of centroids; calculating Euclidean distance measure depends on the distance between the pixel and the corresponding centroid and calculating cluster center through the fuzzy matrix; and recomputing the fuzzy matrix through the Euclidean distance measure and thereby finding optimal centroid by executing series of iterations, wherein the final centroid is selected by the FCM for the clustering process.
7. The method as claimed in claim 6, wherein sparse FCM regulates the clustering model of the FCM by introducing the model parameter and makes the model suitable for the hierarchical clustering.
8. The method as claimed in claim 6, wherein the exponential cuckoo search technique is developed by combining the Exponential Weighted Moving Average (EWMA) with the Cuckoo Search (CS) Technique, wherein the exponential cuckoo search technique finds the optimal cluster centroids for grouping the tumour and non-tumor cells, wherein among the features sent to the RBNN classifier, the optimization technique finds one suitable feature to be the centroid for the classification.
9. The method as claimed in claim 8, wherein the exponential cuckoo search technique simulates until the maximum iteration and retrieves the best possible centroids for the classification, wherein the best centroid is calculated through the minimization fitness function derived, and at the end of the iteration, the optimal cluster centroids are identified and provided to the RBNN classifier for the classification purpose.
Figure 1: block diagram of the system, comprising the database 102, the pre-processing unit 104, the Gaussian Hybrid Fuzzy Clustering model 106, the extraction module 108, and the RBNN classifier 110.
Figure 2: flowchart of the method, comprising pre-processing the MRI images based on OTSU binarization and Lab conversion (202), segmenting the binary images into normal, edema, and core cluster groups using the GHFC model (204), extracting features such as mean, entropy, PCA, wavelet transform, and LDP (206), and classifying the extracted features into normal, malignant, and benign classes using the RBNN classifier (208).