CN116597214A - Alzheimer's disease classification method and system based on multi-modal hypergraph attention network

Alzheimer's disease classification method and system based on multi-modal hypergraph attention network

Info

Publication number
CN116597214A
Authority
CN
China
Prior art keywords
hypergraph
cross
patient
modal
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310565619.6A
Other languages
Chinese (zh)
Inventor
曾安
李艺
潘丹
杨宝瑶
张逸群
杨洋
刘军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202310565619.6A priority Critical patent/CN116597214A/en
Publication of CN116597214A publication Critical patent/CN116597214A/en
Pending legal-status Critical Current

Classifications

    • G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
    • A61B5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/0042: Imaging apparatus adapted for image acquisition of a particular organ or body part, for the brain
    • A61B5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B5/4088: Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • G06N3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V10/763: Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V2201/03: Recognition of patterns in medical or anatomical images


Abstract

The invention provides a method and a system for classifying Alzheimer's disease based on a multi-modal hypergraph attention network. The method comprises the following steps: obtaining sMRI image data of the brains of a plurality of Alzheimer's disease patients and preprocessing the data; performing feature extraction on the preprocessed sMRI image data and constructing a plurality of cross-modal hypergraphs from the image features and morphological features of the patients' brain regions; establishing a hypergraph attention neural network model and training it with the cross-modal hypergraphs; and finally obtaining sMRI image data of the brain of a patient to be diagnosed, constructing the corresponding hypergraphs, and inputting them into the trained hypergraph attention neural network model for classification to obtain the Alzheimer's disease classification result and the attention weight corresponding to each hypergraph. The invention can effectively improve the accuracy of the Alzheimer's disease classification task, can identify which brain-region and morphological hypergraphs contribute most in the model, and thereby assists doctors in making accurate diagnoses.

Description

Alzheimer's disease classification method and system based on multi-modal hypergraph attention network
Technical Field
The invention relates to the technical field of deep learning and neuroimage processing, and in particular to a method and a system for classifying Alzheimer's disease based on a multi-modal hypergraph attention network.
Background
Alzheimer's disease (AD) is a typical neurodegenerative disease whose clinical manifestations include memory loss, loss of language ability, loss of the ability to care for oneself, and so on. Mild cognitive impairment (MCI) is a condition between AD and healthy controls (HC), and can be subdivided into MCI that will convert to AD (MCIc) and MCI that will not convert to AD (MCInc); accurate screening of MCI patients aids the early diagnosis of AD.
In recent years, machine learning techniques have been widely used to analyze neuroimaging data; however, most existing approaches focus on extracting features from a single modality. The diagnosis of AD is inherently multi-modal: a physician must comprehensively analyze the patient's physiological or behavioral symptoms, medical history and related medical images. Different modalities can reveal different pathological changes of AD, so using multi-modal data helps diagnose AD more accurately.
A hypergraph is a generalization of a graph and the most general discrete structure over a finite set, with wide applications in fields such as information science and the life sciences. A single hyperedge can connect any number of vertices, so a hypergraph structure can naturally capture the multi-way, high-order correlations among all the ROIs in image data.
The prior art discloses a multi-modal medical image classification method and system for Alzheimer's disease, in which the method comprises: inputting a structural magnetic resonance imaging image into a first 3D CNN to obtain first features of the structural MRI image; inputting a positron emission tomography image into a second 3D CNN to obtain second features of the PET image; inputting the first and second features into the encoder module of a Transformer model to obtain fused features of the brain to be classified; and inputting the fused features into a multi-layer perceptron network to obtain the Alzheimer's disease classification result for the structural MRI and PET images of the brain to be classified. Although this prior-art method classifies Alzheimer's disease by fusing features of multi-modal data, the multi-modal data it uses are all medical images, and the relationships among multi-modal subjects cannot be fully mined, so the accuracy of early diagnosis of Alzheimer's disease using simple image data alone is low;
the prior art also discloses an abnormal brain connection prediction system, an abnormal brain connection prediction method, an abnormal brain connection prediction device and a readable storage medium, wherein the abnormal brain connection prediction system, the abnormal brain connection prediction method, the abnormal brain connection prediction device and the readable storage medium automatically extract high-order related features in different modes and high-order complementary features among different modes through a deep learning method, and realize analysis of abnormal connection of a multi-mode brain network and prediction of different cognitive diseases through a countermeasure training method; the method utilizes priori knowledge to guide a model to learn the explanatory characterization, constrains the consistency of different modal characterization distribution through a pair of cooperative discriminants, reconstructs brain map data through a reverse generator and a decoder for feature codes, finally extracts high-order related features among modes and in modes through a hypergraph perception fusion module, and sets antagonism loss, reconstruction loss and classification loss functions to guide model learning so as to achieve the aim of mining abnormal brain connection of Alzheimer's disease; although the method in the prior art carries out the classified prediction of the Alzheimer's disease by constructing multi-mode hypergraph data, the method is only a brain connection abnormal region diagnosis method in a broad sense, and because important brain regions of AD have different performances in different periods, particularly for MCic and MCInc classification with higher complexity, the method in the prior art cannot accurately predict.
Disclosure of Invention
The invention provides an Alzheimer's disease classification method and system based on a multi-modal hypergraph attention network, which not only improve the accuracy of the classification task but also identify which hypergraphs contribute more to the classification result, helping doctors pay more attention to the corresponding brain regions when diagnosing different patients.
In order to solve the technical problems, the technical scheme of the invention is as follows:
the invention provides a multi-mode hypergraph attention network-based Alzheimer's disease classification method, which comprises the following steps:
s1: obtaining sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
s2: extracting features of the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
s3: constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient;
s4: establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model;
S5: obtaining sMRI image data of the brain of a patient to be diagnosed, and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting a plurality of cross-mode hypergraphs of the patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed.
Preferably, in step S1, the specific method of acquiring and preprocessing the sMRI image data of the brains of a plurality of Alzheimer's disease patients is as follows:
acquiring sMRI image data of the brains of a plurality of Alzheimer's disease patients and performing image preprocessing and morphological preprocessing respectively;
the specific method of image preprocessing is as follows:
sequentially performing spatial segmentation, skull stripping, registration to standard Montreal Neurological Institute (MNI) space, and image smoothing on the sMRI image data to obtain smoothed sMRI image data;
the specific method of morphological preprocessing is as follows:
sequentially performing skull stripping, intensity normalization, volumetric labeling, white matter segmentation, smoothing and inflation, cortical parcellation, statistics and mapping on the sMRI image data to obtain the morphological indexes of 210 brain regions;
and jointly storing the smoothed sMRI image data and the morphological indexes of all brain regions as the preprocessed sMRI image data.
Preferably, the morphological indexes of a brain region include: the mean thickness, thickness standard deviation, gray matter volume, surface area, folding index, intrinsic curvature index, mean curvature, and Gaussian curvature of the brain region.
Preferably, in step S2, the specific method of performing feature extraction on the preprocessed sMRI image data to obtain the image features and morphological features of the patient's brain regions is as follows:
aligning the smoothed sMRI image data with a preset Brainnetome template and extracting 4 hippocampal regions of interest, specifically: the left rostral hippocampal region, the right rostral hippocampal region, the left caudal hippocampal region, and the right caudal hippocampal region;
extracting deep brain-region features of all hippocampal regions of interest with a trained three-dimensional convolutional neural network, and storing the extracted deep features as the image features of the patient's brain regions;
jointly storing all morphological indexes of the 210 brain regions as candidate morphological features, and sequentially performing normalization and feature selection on the candidate morphological features to obtain the morphological features of the patient's brain regions.
Preferably, the specific method of feature selection is as follows: feature selection is performed with the chi-square test: for each brain region, the score corresponding to each normalized candidate morphological feature is calculated, and the K morphological indexes with the highest scores among all candidate morphological features are selected as the morphological features of the patient's brain regions.
Preferably, in the step S3, the specific method for constructing a plurality of cross-mode hypergraphs according to the image features and morphological features of the brain region of the patient is as follows:
combining the image features and morphological features of the brain region of the patient to obtain cross-modal features of the 4 regions of interest of the hippocampus;
for each region of interest of the hippocampus, a cross-modal hypergraph is constructed by utilizing corresponding cross-modal characteristics, and the specific method comprises the following steps:
for the cross-modal features of each hippocampal region of interest, a cross-modal hypergraph is constructed with the K-nearest-neighbor method, specifically:
one patient is selected as the central vertex and the other patients serve as the remaining vertices; the cross-modal feature differences between the central vertex and the other vertices are computed with the Euclidean distance, and a hyperedge centered on the central vertex is constructed that connects the k other vertices with the smallest cross-modal feature differences;
if there are n patients, n central vertices are constructed and the above procedure is repeated to obtain a cross-modal hypergraph containing n hyperedges;
the above steps are repeated to obtain 4 cross-modal hypergraphs, each containing n hyperedges.
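A minimal sketch of this k-nearest-neighbor hyperedge construction, assuming the cross-modal features of the n patients are stacked in an (n × d) matrix; whether the central vertex itself is included in its own hyperedge is an assumption, and the function and variable names are illustrative rather than taken from the patent.

```python
import numpy as np

def build_knn_hypergraph(features: np.ndarray, k: int = 16) -> np.ndarray:
    """Incidence matrix H (n x n): column j is the hyperedge centered on
    patient j, connecting j to the k patients with the most similar
    cross-modal features (Euclidean distance)."""
    n = features.shape[0]
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)             # (n, n) pairwise distances
    H = np.zeros((n, n), dtype=np.float32)
    for j in range(n):
        order = np.argsort(dist[j])
        neighbors = [v for v in order if v != j][:k]  # k nearest other vertices
        H[j, j] = 1.0                                 # the central vertex itself
        H[neighbors, j] = 1.0                         # vertices joined by the hyperedge
    return H

# one hypergraph per hippocampal ROI, e.g.
# H_list = [build_knn_hypergraph(X_roi, k=16) for X_roi in roi_feature_matrices]
```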
Preferably, before the cross-modal feature differences between the central vertex and the other vertices are calculated with the Euclidean distance, the method further comprises converting the hyperedge weights between the central vertex and the other vertices to values less than 1, specifically:
the hyperedge weight W_{i,j} between the i-th central vertex and the j-th other vertex is calculated according to the following formula:
where D_{i,j} is the cross-modal feature distance between the i-th central vertex and the j-th other vertex, and δ is the average cross-modal feature distance between the central vertex and the other vertices.
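The formula itself is not reproduced in this text; one commonly used weighting that is consistent with the description (a value below 1 that decreases with the distance D_{i,j}, with δ acting as a scale) is the Gaussian kernel below, given as an assumption for illustration rather than the patent's exact formula:

W_{i,j} = \exp\!\left( -\,\frac{D_{i,j}^{2}}{\delta^{2}} \right), \qquad 0 < W_{i,j} \le 1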
Preferably, the hypergraph attention neural network model established in step S4 comprises a plurality of parallel hypergraph convolution layers followed in sequence by a first attention layer, a dynamic hypergraph construction layer, a second attention layer and a decision layer; the output of the first attention layer is also connected to the input of the second attention layer.
Preferably, in the dynamic hypergraph construction layer, the cross-modal hypergraph features fused by the first attention layer are dynamically updated with the k-NN algorithm and the k-means clustering algorithm to generate new cross-modal hypergraph features.
The invention also provides an Alzheimer's disease classification system based on the multi-modal hypergraph attention network, which applies the above Alzheimer's disease classification method based on the multi-modal hypergraph attention network and comprises:
pretreatment unit: the method comprises the steps of obtaining sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
feature extraction unit: the method comprises the steps of performing feature extraction on preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
hypergraph construction unit: the method is used for constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient;
model training unit: the method comprises the steps of establishing a hypergraph attention neural network model, training by using a cross-mode hypergraph, and obtaining a trained hypergraph attention neural network model;
classification prediction unit: the method comprises the steps of obtaining sMRI image data of the brain of a patient to be diagnosed, and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting a plurality of cross-mode hypergraphs of the patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a multi-mode hypergraph attention network-based Alzheimer's disease classification method and system, which comprises the steps of firstly acquiring sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing; extracting features of the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient; constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient; establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model; finally, sMRI image data of the brain of the patient to be diagnosed are obtained, and a plurality of cross-modal hypergraphs of the patient to be diagnosed are obtained; inputting a plurality of cross-mode hypergraphs of a patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed;
according to the invention, the cross-mode hypergraph is constructed through the MRI and morphological characteristics to represent the high-order structural relationship between patients, so that the accuracy of Alzheimer disease classification tasks can be effectively improved; in addition, the hypergraph attention neural network model established by the invention can output different contribution degrees of different hypergraphs to classification results by comparing and learning the new hypergraph characteristics and the old hypergraph characteristics, thereby being beneficial to paying attention to corresponding brain areas when doctors diagnose different patients and improving the accuracy rate of early diagnosis of Alzheimer's disease and the generalization capability of the model.
Drawings
Fig. 1 is a flowchart of the Alzheimer's disease classification method based on a multi-modal hypergraph attention network provided in embodiment 1.
Fig. 2 is the feature selection flowchart provided in embodiment 2.
Fig. 3 is a structural diagram of the hypergraph attention neural network model provided in embodiment 2.
Fig. 4 is a graph of the attention weights of the various cross-modal hypergraphs provided in embodiment 2.
Fig. 5 is a diagram of the Alzheimer's disease classification system based on a multi-modal hypergraph attention network provided in embodiment 3.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in Fig. 1, the invention provides a multi-modal hypergraph attention network-based Alzheimer's disease classification method, which comprises the following steps:
s1: obtaining sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
S2: extracting features of the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
s3: constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient;
s4: establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model;
s5: obtaining sMRI image data of the brain of a patient to be diagnosed, and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting a plurality of cross-mode hypergraphs of the patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed.
In the specific implementation process, sMRI image data of the brains of a plurality of Alzheimer's disease patients are firstly obtained and preprocessed; extracting features of the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient; constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient; establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model; finally, sMRI image data of the brain of the patient to be diagnosed are obtained, and a plurality of cross-modal hypergraphs of the patient to be diagnosed are obtained; inputting a plurality of cross-mode hypergraphs of a patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed;
Taking the brain region corresponding to the cross-mode hypergraph with the highest attention weight as the brain region with the largest contribution degree to the Alzheimer's disease classification result, and completing the classification and early diagnosis of the Alzheimer's disease patient;
according to the method, the cross-mode hypergraph is constructed through MRI and morphological characteristics to represent the high-order structural relationship between patients, so that the accuracy of Alzheimer disease classification tasks can be effectively improved; in addition, the hypergraph attention neural network model established by the method can output different contribution degrees of different hypergraphs to classification results by comparing and learning the new hypergraph characteristics and the old hypergraph characteristics, so that doctors can pay attention to corresponding brain areas during diagnosis of different patients, and the accuracy rate of early diagnosis of Alzheimer's disease and the generalization capability of the model are improved.
Example 2
The embodiment provides a method for classifying Alzheimer's disease based on a multi-modal hypergraph attention network, which comprises the following steps:
s1: obtaining sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
s2: extracting features of the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
S3: constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient;
s4: establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model;
s5: obtaining sMRI image data of the brain of a patient to be diagnosed, and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting a plurality of cross-mode hypergraphs of a patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed;
In step S1, the specific method of acquiring and preprocessing the sMRI image data of the brains of a plurality of Alzheimer's disease patients is as follows:
acquiring sMRI image data of the brains of a plurality of Alzheimer's disease patients and performing image preprocessing and morphological preprocessing respectively;
the specific method of image preprocessing is as follows:
sequentially performing spatial segmentation, skull stripping, registration to standard Montreal Neurological Institute (MNI) space, and image smoothing on the sMRI image data to obtain smoothed sMRI image data;
the specific method of morphological preprocessing is as follows:
sequentially performing skull stripping, intensity normalization, volumetric labeling, white matter segmentation, smoothing and inflation, cortical parcellation, statistics and mapping on the sMRI image data to obtain the morphological indexes of 210 brain regions;
storing the smoothed sMRI image data together with the morphological indexes of all brain regions as the preprocessed sMRI image data;
the morphological indexes of a brain region include: the mean thickness, thickness standard deviation, gray matter volume, surface area, folding index, intrinsic curvature index, mean curvature, and Gaussian curvature of the brain region;
in the step S2, the specific method for extracting the features of the preprocessed sMRI image data and obtaining the image features and morphological features of the brain region of the patient includes:
aligning the smoothed sMRI image data with a preset Brainnetome template and extracting 4 hippocampal regions of interest, specifically: the left rostral hippocampal region, the right rostral hippocampal region, the left caudal hippocampal region, and the right caudal hippocampal region;
extracting deep brain-region features of all hippocampal regions of interest with a trained three-dimensional convolutional neural network, and storing the extracted deep features as the image features of the patient's brain regions;
jointly storing all morphological indexes of the 210 brain regions as candidate morphological features, and sequentially performing normalization and feature selection on the candidate morphological features to obtain the morphological features of the patient's brain regions;
the specific method of feature selection is as follows: feature selection is performed with the chi-square test: for each brain region, the score corresponding to each normalized candidate morphological feature is calculated, and the K morphological indexes with the highest scores among all candidate morphological features are selected as the morphological features of the patient's brain regions;
in the step S3, the specific method for constructing a plurality of cross-mode hypergraphs according to the image features and morphological features of the brain region of the patient is as follows:
combining the image features and morphological features of the brain region of the patient to obtain cross-modal features of the 4 regions of interest of the hippocampus;
for each region of interest of the hippocampus, a cross-modal hypergraph is constructed by utilizing corresponding cross-modal characteristics, and the specific method comprises the following steps:
for the cross-modal characteristics of each region of interest of the hippocampus, constructing a cross-modal hypergraph by using a K-nearest neighbor method, wherein the cross-modal hypergraph comprises the following specific steps of:
selecting one patient as the central vertex and the other patients as the remaining vertices, calculating the cross-modal feature differences between the central vertex and the other vertices with the Euclidean distance, and constructing a hyperedge centered on the central vertex, the hyperedge connecting the k other vertices with the smallest cross-modal feature differences;
If n patients exist, constructing n central vertexes and repeating the method to obtain a cross-mode hypergraph containing n hyperedges;
repeating the steps to obtain 4 cross-mode hypergraphs containing n hyperedges;
before the cross-modal feature differences between the central vertex and the other vertices are calculated with the Euclidean distance, the method further comprises converting the hyperedge weights between the central vertex and the other vertices to values less than 1, specifically:
the hyperedge weight W_{i,j} between the i-th central vertex and the j-th other vertex is calculated according to the following formula:
where D_{i,j} is the cross-modal feature distance between the i-th central vertex and the j-th other vertex, and δ is the average cross-modal feature distance between the central vertex and the other vertices;
the hypergraph attention neural network model established in step S4 comprises a plurality of parallel hypergraph convolution layers followed in sequence by a first attention layer, a dynamic hypergraph construction layer, a second attention layer and a decision layer; the output of the first attention layer is also connected to the input of the second attention layer;
in the dynamic hypergraph construction layer, the cross-modal hypergraph features fused by the first attention layer are dynamically updated with the k-NN algorithm and the k-means clustering algorithm to generate new cross-modal hypergraph features.
In the specific implementation, sMRI image data of the brains of multiple Alzheimer's disease patients are first acquired and preprocessed. The data used in this embodiment come from the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database, whose main purpose is to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biomarkers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD);
the preprocessing has two branches, one for the image and one for morphology. The image preprocessing steps include spatial segmentation, skull stripping, registration to standard Montreal Neurological Institute (MNI) space and image smoothing; after preprocessing, all images have dimensions of 121 × 145 × 121 (x × y × z) with a spatial resolution of 2 × 2 × 2 mm³ per voxel;
the morphological preprocessing includes: performing skull stripping on the Alzheimer's disease magnetic resonance imaging data; performing CA intensity normalization on the skull-stripped data; performing CA labeling on the normalized data; performing white matter segmentation on the labeled data; smoothing and inflating the segmented data using surface tessellation; performing spherical mapping and registration on the inflated data; and performing cortical parcellation, statistics and mapping on the spherically registered data. This finally yields 8 morphological index features per brain region: average thickness (thickness), thickness standard deviation (thicknessstd), surface area (area), gray matter volume (volume), integrated rectified mean curvature (meancurv), integrated rectified Gaussian curvature (gauscurv), folding index (foldind) and intrinsic curvature index (curvind). The indexes of 210 cortical brain regions are extracted, so the magnetic resonance imaging data of each Alzheimer's disease patient yield 210 × 8 = 1680 brain-region morphological features in total;
Then, carrying out feature extraction on the preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
over time, a decrease in hippocampal volume leads to memory-loss symptoms, a central feature of Alzheimer's disease; accordingly, the method extracts regions of interest (ROIs) of the hippocampus from the image. Since the size and coordinate space of the preprocessed image are consistent with those of the Brainnetome template, the hippocampal brain-region images are extracted according to the brain-region parcellation mask of the template and used as input samples for the subsequent model. In the Brainnetome template the hippocampus is divided into four ROIs: the left rostral (215), right rostral (216), left caudal (217) and right caudal (218) hippocampus, where the numbers in brackets are the label IDs in the template. Each patient therefore yields 4 hippocampal regions of interest; when these regions of interest are passed through the trained 3D CNN, the features of the 3D CNN fully connected layer are taken as deep features, so each ROI yields its corresponding deep features. All extracted deep brain-region features are stored as the image features of the patient's brain regions;
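A schematic sketch of this ROI extraction and deep-feature step, assuming a Brainnetome label volume aligned with the preprocessed image; the 3D CNN shown is a stand-in, since the patent's exact architecture is not reproduced here, and all names are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

# label IDs from the Brainnetome template as stated in the text
HIPPOCAMPUS_LABELS = {"rostral_L": 215, "rostral_R": 216, "caudal_L": 217, "caudal_R": 218}

def extract_roi(image: np.ndarray, atlas: np.ndarray, label: int) -> np.ndarray:
    """Crop the bounding box of one hippocampal label from a preprocessed sMRI volume."""
    mask = atlas == label
    coords = np.argwhere(mask)
    (x0, y0, z0), (x1, y1, z1) = coords.min(0), coords.max(0) + 1
    roi = np.where(mask, image, 0.0)[x0:x1, y0:y1, z0:z1]
    return roi.astype(np.float32)

class Simple3DCNN(nn.Module):
    """Stand-in 3D CNN; the fully connected layer output is used as the deep feature."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(4),
        )
        self.fc = nn.Linear(16 * 4 * 4 * 4, feat_dim)  # deep features are read here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

# deep_feat = Simple3DCNN()(torch.from_numpy(roi)[None, None])  # shape (1, feat_dim)
```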
because not all brain-region morphological features carry useful information, the 1680 brain-region features need to be screened by feature selection to remove irrelevant and redundant features. The method jointly stores all morphological indexes of the 210 brain regions as candidate morphological features, normalizes the candidate features so that the brain-region feature values of all samples lie between 0 and 1, and then performs feature selection to obtain the morphological features of the patient's brain regions, which are combined with the image features of the patient's brain regions;
As shown in Fig. 2, feature selection is performed with the chi-square test: for each brain region, the score corresponding to each normalized candidate morphological feature is calculated, and the K morphological indexes with the highest scores among all candidate morphological features are selected as the morphological features of the patient's brain regions, specifically:
two operations are performed on the normalized candidate morphological features: in the first operation, the brain-region features of each column are accumulated per class to obtain observed (2 × 1680); in the second operation, the brain-region features of each column are accumulated to obtain fts (1 × 1680), the frequencies of the two labels are counted, and their dot product with fts gives expected (2 × 1680); the element-wise feature scores score(i, j) = (observed(i, j) − expected(i, j))² / expected(i, j) are then computed and accumulated over the two classes to obtain score (1 × 1680), i.e. the scores of all candidate morphological features; finally, the brain-region features with the highest scores are retained, completing the feature selection;
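A minimal NumPy sketch of this chi-square scoring, assuming a non-negative candidate-feature matrix X of shape (n_samples, 1680) and binary labels y; the function names are illustrative, and the computation mirrors the standard chi-square feature-selection score.

```python
import numpy as np

def chi2_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Chi-square score per candidate feature for a binary classification task."""
    classes = np.unique(y)                                         # e.g. [0, 1]
    observed = np.stack([X[y == c].sum(axis=0) for c in classes])  # (2, n_feat)
    feature_totals = X.sum(axis=0, keepdims=True)                  # fts, (1, n_feat)
    class_freq = np.array([[np.mean(y == c)] for c in classes])    # (2, 1) label frequencies
    expected = class_freq @ feature_totals                         # (2, n_feat)
    return ((observed - expected) ** 2 / expected).sum(axis=0)     # summed over classes

def select_top_k(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k candidate morphological features with the highest scores."""
    return np.argsort(chi2_scores(X, y))[::-1][:k]
```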
then constructing 4 cross-mode hypergraphs according to the image features and morphological features of the brain region of the patient, wherein the specific method comprises the following steps:
combining the image features and morphological features of the brain region of the patient to obtain cross-modal features of the 4 regions of interest of the hippocampus;
For each region of interest of the hippocampus, a cross-modal hypergraph is constructed by utilizing corresponding cross-modal characteristics, and the specific method comprises the following steps:
for the cross-modal characteristics of each region of interest of the hippocampus, constructing a cross-modal hypergraph by using a K-nearest neighbor method, wherein the cross-modal hypergraph comprises the following specific steps of:
selecting one patient as the central vertex and the other patients as the remaining vertices, calculating the cross-modal feature differences between the central vertex and the other vertices with the Euclidean distance, and constructing a hyperedge centered on the central vertex that connects the k other vertices with the smallest cross-modal feature differences, where k = 16 in this embodiment;
if there are n patients, n central vertices are constructed and the above procedure is repeated to obtain a cross-modal hypergraph containing n hyperedges;
the above steps are repeated to obtain 4 cross-modal hypergraphs, each containing n hyperedges; notably, each cross-modal hypergraph contains key ROI and morphology information that characterizes the high-order structural relationships between patients;
establishing a hypergraph attention neural network model, and training by using a cross-mode hypergraph to obtain a trained hypergraph attention neural network model;
as shown in Fig. 3, the hypergraph attention neural network model established in this embodiment comprises a plurality of parallel hypergraph convolution layers followed in sequence by a first attention layer, a dynamic hypergraph construction layer, a second attention layer and a decision layer; the output of the first attention layer is also connected to the input of the second attention layer;
The hypergraph convolution layer carries out hypergraph convolution on each cross-modal hypergraph, then forms a set Gs and sends the set Gs to the first attention layer; the first attention layer may capture interactions between hypergraphs and fuse them; inputting the fused hypergraph characteristics into a dynamic hypergraph construction (dynamic hypergraph construction, DHG) layer to generate new hypergraph characteristics; then balancing the new hypergraph feature and the old hypergraph feature through a second attention layer; finally, the output is transmitted to a decision layer for classification;
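The patent text does not reproduce the convolution formula; for reference, the classical HGNN hypergraph convolution that such a layer typically implements (an assumption here, since the patent's exact layer is not given) is:

X^{(l+1)} = \sigma\!\left( D_v^{-1/2}\, H\, W\, D_e^{-1}\, H^{\top}\, D_v^{-1/2}\, X^{(l)}\, \Theta^{(l)} \right)

where H is the hypergraph incidence matrix, W the diagonal hyperedge-weight matrix, D_v and D_e the vertex- and hyperedge-degree matrices, X^{(l)} the vertex features, Θ^{(l)} the learnable parameters, and σ a nonlinear activation.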
in the dynamic hypergraph construction layer, dynamically updating the cross-modal hypergraph characteristics fused by the first attention layer by using a k-NN algorithm and a k-means clustering algorithm, and generating new cross-modal hypergraph characteristics;
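A rough sketch of such a dynamic update, assuming the fused per-patient features are available as a matrix; how the k-NN and k-means hyperedges are combined here is illustrative (in the style of DHGNN-like layers) and not necessarily the patent's exact construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def dynamic_hyperedges(feats: np.ndarray, k_nn: int = 16, n_clusters: int = 8) -> np.ndarray:
    """Rebuild the incidence matrix from updated features:
    one k-NN hyperedge per vertex plus one hyperedge per k-means cluster."""
    n = feats.shape[0]
    dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    H_knn = np.zeros((n, n), dtype=np.float32)
    for j in range(n):
        H_knn[np.argsort(dist[j])[:k_nn + 1], j] = 1.0   # center + k nearest vertices
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    H_clu = np.zeros((n, n_clusters), dtype=np.float32)
    H_clu[np.arange(n), labels] = 1.0                    # cluster-based hyperedges
    return np.concatenate([H_knn, H_clu], axis=1)        # (n, n + n_clusters)
```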
the first and second attention layers in this embodiment are similar in structure, and for sample u, the features x thereof in hypergraph Gs are sequentially taken u ∈R 1×d Where d is the input dimension of the feature. Multi-layer perception (MLP) as feature x u Generating a weight value w u Then w is u Added to the weight set w. After obtaining the ownership weights, weights between 0 and 1 are mapped using a softmax function. Then sequentially taking the features of the features in the hypergraph set Gs as x u ∈R 1 ×d Will x u And corresponding weight w i The result of the multiplication is added to y u Finally, outputting the fusion characteristic y of the sample u u The specific algorithm is as follows:
the traditional hypergraph method constructs a hypergraph for each mode, and then horizontally combines the hypergraphs into a large hypergraph; it is difficult for the conventional method to explain which hypergraph plays a more important role; however, the method in the embodiment can dynamically fuse the multi-mode hypergraphs in the network, and compared with the traditional hypergraphs, the method not only maintains the structure of the original hypergraph, but also explains which hypergraph is more important, thereby being more beneficial to doctors to prescribe medicines for symptoms;
finally, sMRI image data of the brain of the patient to be diagnosed are obtained, and a plurality of cross-modal hypergraphs of the patient to be diagnosed are obtained; inputting a plurality of cross-mode hypergraphs of a patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed;
the decision layer outputs Alzheimer's disease classification results of patients to be diagnosed, and attention weights corresponding to each cross-mode hypergraph are acquired in a second attention layer;
taking the brain region corresponding to the cross-mode hypergraph with the highest attention weight as the brain region with the largest contribution degree to the Alzheimer's disease classification result, and completing the early diagnosis of the Alzheimer's disease patient;
In this embodiment, MRI images of 502 subjects aged between 55 and 90 were downloaded from the ADNI database: 133 AD patients, 161 HC, 133 MCInc patients and 75 MCIc patients, with one MRI image per subject. To verify the effectiveness of the method proposed in this embodiment, experiments were performed on different tasks, including AD vs. HC, MCIc vs. HC and MCIc vs. MCInc, and a 5-fold cross-validation strategy was used to evaluate classification performance. In addition, four common classification evaluation metrics were used to assess the model: accuracy (ACC), area under the curve (AUC), F1-score and the Matthews correlation coefficient (MCC). The final comparative results are shown in Table 1:
table 1 comparison of the method with other methods
In all the methods in Table 1, the test-set and training-set samples are identical. The graph-based methods include GCN, HGNN, DHGNN and HGNN+: GCN is a classical graph convolution network in which each edge connects two nodes, HGNN is a classical hypergraph convolution network in which each edge can connect multiple nodes, and DHGNN and HGNN+ are both extensions of HGNN; the MRI and morphology modalities are used in the graph-based methods. The CNN-based methods include CNN+EL and MADDi: CNN+EL combines a convolutional neural network with ensemble learning and uses only MRI; MADDi is an attention-based multi-modal deep learning framework for Alzheimer's disease diagnosis and uses the MRI and morphology modalities. As can be seen from Table 1, the results of the present method are superior to the existing methods in almost all cases;
The accuracies for AD vs. HC, MCIc vs. HC, and MCIc vs. MCInc are 88.00%, 87.21% and 71.10%, respectively. Compared with the other methods, the accuracy of this method on AD vs. HC is improved by more than 2.76%; in particular, compared with the previously most advanced MADDi method, the accuracy on AD vs. HC is improved by 9.24%. Compared with the other methods, the accuracy on MCIc vs. HC is improved by more than 1.25%, and by 8.5% over the CNN+EL method in particular. Moreover, on MCIc vs. MCInc, the hardest task to classify, the accuracy of this method is improved by more than 4.37%. The method also shows clear advantages on the other evaluation metrics;
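The evaluation protocol described above (5-fold cross-validation with ACC, AUC, F1-score and MCC) can be sketched with scikit-learn as follows; this is illustrative and not the patent's code.

```python
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold

def evaluate_fold(y_true, y_pred, y_score) -> dict:
    """ACC, AUC, F1 and MCC for one cross-validation fold."""
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),   # y_score: probability of the positive class
        "F1": f1_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }

# 5-fold split over patients, e.g.:
# cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
# for train_idx, test_idx in cv.split(X, y):
#     ...train the hypergraph attention model, then call evaluate_fold on the test fold
```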
the attention weights of the cross-modal hypergraphs are shown in Fig. 4. As can be seen from Fig. 4, the cross-modal hypergraph constructed from the left rostral hippocampus (215) and morphology maintains a stable contribution across all three classification tasks, while the cross-modal hypergraph constructed from the right caudal hippocampus (218) and morphology contributes least in the early Alzheimer's disease classification task MCIc vs. MCInc and most in the late-stage task AD vs. HC. Therefore, the cross-modal hypergraph constructed from morphological features and right caudal hippocampal features becomes increasingly important as the disease progresses, which also means that the important regions of AD behave differently at different stages; this helps doctors pay attention to how different brain regions change over time;
According to the method, the cross-mode hypergraph is constructed through MRI and morphological characteristics to represent the high-order structural relationship between patients, so that the accuracy of Alzheimer disease classification tasks can be effectively improved; in addition, the hypergraph attention neural network model established by the method can output different contribution degrees of different hypergraphs to classification results by comparing and learning the new hypergraph characteristics and the old hypergraph characteristics, so that doctors can pay attention to corresponding brain areas during diagnosis of different patients, and the accuracy rate of early diagnosis of Alzheimer's disease and the generalization capability of the model are improved.
Example 3
As shown in Fig. 5, this embodiment provides a multi-modal hypergraph attention network-based Alzheimer's disease classification system, which applies the multi-modal hypergraph attention network-based Alzheimer's disease classification method described in embodiment 1 or 2 and comprises:
the preprocessing unit 301: the method comprises the steps of obtaining sMRI image data of brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
the feature extraction unit 302: the method comprises the steps of performing feature extraction on preprocessed sMRI image data to obtain image features and morphological features of a brain region of a patient;
hypergraph construction unit 303: the method is used for constructing a plurality of cross-mode hypergraphs according to the image characteristics and morphological characteristics of the brain region of the patient;
Model training unit 304: the method comprises the steps of establishing a hypergraph attention neural network model, training by using a cross-mode hypergraph, and obtaining a trained hypergraph attention neural network model;
classification prediction unit 305: the method comprises the steps of obtaining sMRI image data of the brain of a patient to be diagnosed, and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting a plurality of cross-mode hypergraphs of the patient to be diagnosed into a trained hypergraph attention neural network model for classification, and obtaining Alzheimer disease classification results of the patient to be diagnosed and attention weights corresponding to the cross-mode hypergraphs of the patient to be diagnosed.
In a specific implementation, the preprocessing unit 301 first acquires and preprocesses sMRI image data of the brains of a plurality of Alzheimer's disease patients; the feature extraction unit 302 performs feature extraction on the preprocessed sMRI image data to obtain image features and morphological features of the patients' brain regions; the hypergraph construction unit 303 constructs a plurality of cross-modal hypergraphs from these image and morphological features; the model training unit 304 establishes a hypergraph attention neural network model and trains it with the cross-modal hypergraphs to obtain a trained model; finally, the classification prediction unit 305 acquires sMRI image data of the brain of a patient to be diagnosed, obtains a plurality of cross-modal hypergraphs of that patient, and inputs them into the trained hypergraph attention neural network model for classification, obtaining the Alzheimer's disease classification result and the attention weight corresponding to each cross-modal hypergraph of the patient to be diagnosed.
The brain region corresponding to the cross-modal hypergraph with the highest attention weight is then taken as the brain region contributing most to the Alzheimer's disease classification result, completing the classification and early diagnosis of the Alzheimer's disease patient.
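As a concrete illustration of this read-out step, the short Python sketch below selects the region of interest whose cross-modal hypergraph received the highest attention weight for one patient; the ROI names and their ordering are assumptions for illustration and must match the order in which the hypergraphs were actually constructed.

# Illustrative read-out of the most informative ROI for one patient, assuming
# `patient_weights` is the list of attention weights returned by the model for
# that patient, in the same order as the ROI list below (an assumed ordering).
ROI_NAMES = [
    "left rostral hippocampus",
    "right rostral hippocampus",
    "left caudal hippocampus",
    "right caudal hippocampus",
]

def most_contributing_roi(patient_weights):
    # Index of the cross-modal hypergraph with the highest attention weight.
    idx = max(range(len(patient_weights)), key=lambda i: patient_weights[i])
    return ROI_NAMES[idx], patient_weights[idx]

For example, most_contributing_roi([0.18, 0.22, 0.25, 0.35]) returns the right caudal hippocampus with weight 0.35, pointing the clinician to that region.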
By constructing cross-modal hypergraphs from MRI and morphological features, the system characterizes the high-order structural relationships between patients and effectively improves the accuracy of Alzheimer's disease classification tasks. In addition, the hypergraph attention neural network model established by the system contrasts and learns the new and old hypergraph features, and can therefore output the different contributions of different hypergraphs to the classification result, so that doctors can focus on the corresponding brain regions when diagnosing different patients; this improves both the accuracy of early Alzheimer's disease diagnosis and the generalization ability of the model.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
It is to be understood that the above examples of the present invention are provided by way of illustration only and are not intended to limit the embodiments of the present invention. Other variations or modifications may be made by those of ordinary skill in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within the protection scope of the claims of the invention.

Claims (10)

1. An Alzheimer's disease classification method based on a multi-modal hypergraph attention network, characterized by comprising the following steps:
S1: acquiring sMRI image data of the brains of a plurality of Alzheimer's disease patients and preprocessing the sMRI image data;
S2: performing feature extraction on the preprocessed sMRI image data to obtain image features and morphological features of the patients' brain regions;
S3: constructing a plurality of cross-modal hypergraphs from the image features and morphological features of the patients' brain regions;
S4: establishing a hypergraph attention neural network model, and training it with the cross-modal hypergraphs to obtain a trained hypergraph attention neural network model;
S5: acquiring sMRI image data of the brain of a patient to be diagnosed and obtaining a plurality of cross-modal hypergraphs of the patient to be diagnosed; inputting the plurality of cross-modal hypergraphs of the patient to be diagnosed into the trained hypergraph attention neural network model for classification, and obtaining the Alzheimer's disease classification result of the patient to be diagnosed and the attention weights corresponding to the cross-modal hypergraphs of the patient to be diagnosed.
2. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 1, wherein in step S1, the specific method for acquiring and preprocessing the sMRI image data of the brains of the plurality of Alzheimer's disease patients is as follows:
acquiring sMRI image data of the brains of a plurality of Alzheimer's disease patients and respectively carrying out image preprocessing and morphological preprocessing;
the specific method of image preprocessing is as follows:
sequentially performing spatial segmentation, skull stripping, registration to the standard Montreal Neurological Institute (MNI) space, and image smoothing on the sMRI image data to obtain smoothed sMRI image data;
the specific method of morphological preprocessing is as follows:
sequentially performing skull stripping, intensity normalization, volume labeling, white matter segmentation, smoothing and flattening, cortical parcellation, statistics and mapping on the sMRI image data to obtain morphological indexes of 210 brain regions;
jointly storing the smoothed sMRI image data and the morphological indexes of all brain regions as the preprocessed sMRI image data.
3. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 2, wherein the morphological indexes of a brain region comprise: the mean thickness of the brain region, the standard deviation of thickness, gray matter volume, area, fold index, curvature, mean curvature, and Gaussian curvature.
4. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 3, wherein in step S2, the specific method for performing feature extraction on the preprocessed sMRI image data to obtain the image features and morphological features of the patients' brain regions is as follows:
aligning the smoothed sMRI image data with a preset Brainnetome template and extracting 4 hippocampal regions of interest, specifically: the left rostral hippocampal brain region, the right rostral hippocampal brain region, the left caudal hippocampal brain region, and the right caudal hippocampal brain region;
extracting deep brain-region features of all hippocampal regions of interest with a trained three-dimensional convolutional neural network, and storing the extracted deep features as the image features of the patients' brain regions;
jointly storing all morphological indexes of the 210 brain regions as candidate morphological features, and sequentially carrying out normalization and feature selection on the candidate morphological features to obtain the morphological features of the patients' brain regions.
5. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 4, wherein the specific method of feature selection is as follows: feature selection is performed by a chi-square test: for each brain region, the score corresponding to the normalized candidate morphological features is calculated, and the K morphological indexes with the highest scores among all candidate morphological features are selected as the morphological features of the patients' brain regions.
6. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 5, wherein in step S3, the specific method for constructing a plurality of cross-modal hypergraphs from the image features and morphological features of the patients' brain regions is as follows:
combining the image features and morphological features of the patients' brain regions to obtain cross-modal features of the 4 hippocampal regions of interest;
for each hippocampal region of interest, constructing a cross-modal hypergraph from the corresponding cross-modal features, the specific method being:
for the cross-modal features of each hippocampal region of interest, constructing a cross-modal hypergraph by the K-nearest-neighbour method, specifically:
selecting one patient as a central vertex and taking the other patients as the other vertices, calculating the cross-modal feature differences between the central vertex and the other vertices by the Euclidean distance, and constructing a hyperedge centered on the central vertex, wherein the hyperedge connects the k other vertices with the smallest cross-modal feature differences;
if there are n patients, constructing n central vertices and repeating the above to obtain a cross-modal hypergraph containing n hyperedges;
repeating the above steps to obtain 4 cross-modal hypergraphs each containing n hyperedges.
7. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 6, wherein before calculating the cross-modal feature differences between the central vertex and the other vertices by the Euclidean distance, the method further comprises converting the hyperedge weight between the central vertex and the other vertices into a value smaller than 1, specifically:
calculating the over-edge weight W between the ith center vertex and the jth other vertices according to the following formula i,j
Wherein D is i,j The distance between the ith center vertex and the jth other vertex is the cross-modal feature distance, and delta is the average cross-modal feature distance between the center vertex and the other vertices.
8. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 7, wherein the hypergraph attention neural network model established in step S4 comprises a plurality of hypergraph convolution layers arranged in parallel, followed in sequence by a first attention layer, a dynamic hypergraph construction layer, a second attention layer and a decision layer; the output of the first attention layer is also connected to the input of the second attention layer.
9. The Alzheimer's disease classification method based on the multi-modal hypergraph attention network according to claim 8, wherein the dynamic hypergraph construction layer dynamically updates the cross-modal hypergraph features fused by the first attention layer using a k-NN algorithm and a k-means clustering algorithm, and generates new cross-modal hypergraph features.
10. An Alzheimer's disease classification system based on a multi-modal hypergraph attention network, applying the Alzheimer's disease classification method based on the multi-modal hypergraph attention network as claimed in any one of claims 1-9, and comprising:
a preprocessing unit: configured to acquire sMRI image data of the brains of a plurality of Alzheimer's disease patients and preprocess the data;
a feature extraction unit: configured to perform feature extraction on the preprocessed sMRI image data to obtain image features and morphological features of the patients' brain regions;
a hypergraph construction unit: configured to construct a plurality of cross-modal hypergraphs from the image features and morphological features of the patients' brain regions;
a model training unit: configured to establish a hypergraph attention neural network model and train it with the cross-modal hypergraphs to obtain a trained hypergraph attention neural network model;
a classification prediction unit: configured to acquire sMRI image data of the brain of a patient to be diagnosed and obtain a plurality of cross-modal hypergraphs of the patient to be diagnosed, and to input the plurality of cross-modal hypergraphs of the patient to be diagnosed into the trained hypergraph attention neural network model for classification, obtaining the Alzheimer's disease classification result of the patient to be diagnosed and the attention weights corresponding to the cross-modal hypergraphs of the patient to be diagnosed.
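The K-nearest-neighbour hypergraph construction of claims 6 and 7 can be sketched in Python as follows. Because the weight formula referenced in claim 7 appears only as an image in the published text, the sketch substitutes a common Gaussian-style mapping exp(-D_{i,j}^2/Δ^2), which likewise yields values below 1; this mapping, the function name and the default k are illustrative assumptions, not the patent's prescription.

import numpy as np

def build_cross_modal_hypergraph(features: np.ndarray, k: int = 10):
    # Sketch of the construction in claims 6-7 (under the assumptions stated
    # above): each patient in turn is a central vertex, and one hyperedge
    # connects it to the k other patients with the smallest Euclidean distance
    # in cross-modal feature space.
    # features: [n_patients, feat_dim] cross-modal features of one hippocampal ROI.
    # Returns the incidence matrix H [n, n] (vertices x hyperedges) and the
    # hyperedge weight matrix W [n, n].
    n = features.shape[0]
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))            # pairwise distances D_{i,j}
    delta = dist[~np.eye(n, dtype=bool)].mean()         # mean distance, self-distances excluded
    weight = np.exp(-(dist ** 2) / (delta ** 2))        # assumed mapping into (0, 1]
    H = np.zeros((n, n))
    W = np.zeros((n, n))
    for i in range(n):                                  # hyperedge i, centered on patient i
        neighbours = [j for j in np.argsort(dist[i]) if j != i][:k]
        H[i, i] = 1.0                                   # central vertex belongs to its hyperedge
        for j in neighbours:
            H[j, i] = 1.0
            W[j, i] = weight[i, j]
    return H, W

Running this once per hippocampal region of interest yields the 4 cross-modal hypergraphs, each with n hyperedges, described in claim 6.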
CN202310565619.6A 2023-05-18 2023-05-18 Alzheimer's disease classification method and system based on multi-mode hypergraph attention network Pending CN116597214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310565619.6A CN116597214A (en) 2023-05-18 2023-05-18 Alzheimer's disease classification method and system based on multi-mode hypergraph attention network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310565619.6A CN116597214A (en) 2023-05-18 2023-05-18 Alzheimer's disease classification method and system based on multi-mode hypergraph attention network

Publications (1)

Publication Number Publication Date
CN116597214A true CN116597214A (en) 2023-08-15

Family

ID=87604120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310565619.6A Pending CN116597214A (en) 2023-05-18 2023-05-18 Alzheimer's disease classification method and system based on multi-mode hypergraph attention network

Country Status (1)

Country Link
CN (1) CN116597214A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541844A (en) * 2023-09-27 2024-02-09 合肥工业大学 Weak supervision histopathology full-section image analysis method based on hypergraph learning


Similar Documents

Publication Publication Date Title
Loddo et al. Deep learning based pipelines for Alzheimer's disease diagnosis: a comparative study and a novel deep-ensemble method
CN110236543B (en) Alzheimer disease multi-classification diagnosis system based on deep learning
Ashtari-Majlan et al. A multi-stream convolutional neural network for classification of progressive MCI in Alzheimer’s disease using structural MRI images
CN114999629B (en) AD early prediction method, system and device based on multi-feature fusion
EP3654343A1 (en) Application of deep learning for medical imaging evaluation
US20230005251A1 (en) Diagnostic assistance apparatus and model generation apparatus
Rahim et al. Prediction of Alzheimer's progression based on multimodal deep-learning-based fusion and visual explainability of time-series data
CN113962930B (en) Alzheimer disease risk assessment model establishing method and electronic equipment
Abdullah et al. Multi-sectional views textural based SVM for MS lesion segmentation in multi-channels MRIs
Peixoto et al. Automatic classification of pulmonary diseases using a structural co-occurrence matrix
Jia et al. Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer's disease
Na et al. Retinal vascular segmentation using superpixel‐based line operator and its application to vascular topology estimation
CN116597214A (en) Alzheimer's disease classification method and system based on multi-mode hypergraph attention network
Zhang et al. THAN: task-driven hierarchical attention network for the diagnosis of mild cognitive impairment and Alzheimer’s disease
Wu et al. Identification of invisible ischemic stroke in noncontrast CT based on novel two‐stage convolutional neural network model
Pan et al. Multi-classification prediction of Alzheimer’s disease based on fusing multi-modal features
Xi et al. Brain Functional Networks with Dynamic Hypergraph Manifold Regularization for Classification of End-Stage Renal Disease Associated with Mild Cognitive Impairment.
Li et al. Medical image identification methods: A review
US11712192B2 (en) Biomarker for early detection of alzheimer disease
Li et al. Ensemble of convolutional neural networks and multilayer perceptron for the diagnosis of mild cognitive impairment and Alzheimer's disease
Guida et al. Improving knee osteoarthritis classification using multimodal intermediate fusion of X-ray, MRI, and clinical information
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey
WO2020099941A1 (en) Application of deep learning for medical imaging evaluation
CN114983341A (en) Multi-modal feature fusion based multi-classification prediction system for Alzheimer's disease
Biswas et al. DFU_XAI: a deep learning-based approach to diabetic foot ulcer detection using feature explainability

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination