CN116861210A - AD feature extraction method and system based on MRI (magnetic resonance imaging) bimodal self-adaptive weighting feature fusion - Google Patents
- Publication number
- CN116861210A (application CN202310606550.7A)
- Authority
- CN
- China
- Prior art keywords
- feature
- brain
- node
- extraction method
- mri
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/055: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- A61B5/4088: Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
- A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267: Classification of physiological signals or data involving training the classification device
- G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253: Fusion techniques of extracted features
Abstract
The invention relates to an AD feature extraction method and system based on MRI bimodal adaptive weighted feature fusion. The method acquires functional MRI data and structural MRI data of a patient's brain region; obtains the features of the brain functional modality with a graph-theory parameter feature extraction method; obtains the features of the brain structural modality with a voxel-value feature extraction method; performs adaptive weighted feature fusion of the two modal features through an information-integration strategy to form a bimodal one-dimensional feature vector; extracts, with a multi-task feature extraction method, the feature subsets that are related to the AD, MCI and normal-elderly target outcomes and effectively reflect the differences between categories; and outputs the feature subset closely related to each task as the optimal feature set. The method fuses the bimodal features with an adaptive weighted fusion algorithm based on information integration, performs multi-task learning on the three classification tasks of AD, MCI and normal elderly subjects with a shared input and an overall loss function, and selects the optimal feature set for each target.
Description
Technical Field
The invention relates to a medical image processing technology, in particular to an AD feature extraction method and system based on MRI (magnetic resonance imaging) bimodal self-adaptive weighting feature fusion.
Background
Alzheimer's disease (AD) is a progressive neurodegenerative disease with an insidious onset, manifested as persistent cognitive decline and behavioral impairment; it is highly correlated with age, and its course is irreversible. In one survey, 62% of medical practitioners regarded dementia as a normal part of aging, so many elderly people with AD symptoms go undiagnosed. Clinically, the progression of AD is generally divided into three stages: cognitively normal (CN) subjects, mild cognitive impairment (MCI) and AD patients, where MCI is a transitional state between normal cognition and AD. There is currently no cure for AD, treatment options in the middle and late stages are very limited, and MCI is often misdiagnosed as normal aging; studies have found that 44% of MCI cases may eventually convert to AD within a few years. The MCI stage is therefore the most suitable window for treatment, and in particular for early classification and prediction of AD.
Magnetic resonance imaging (MRI) is one of the important clinical examination tools in AD diagnosis. Its greatest advantage is its high spatial resolution; the clarity of image detail matters for disease diagnosis, and MRI permits quantitative and qualitative analysis of structural changes at different stages of the disease. Biomarkers from multiple modalities have been demonstrated to identify AD and MCI. Common markers include structural MRI, which measures changes in brain structure, and functional MRI, which tracks changes in brain metabolic rate.
Most existing MRI-based AD classification approaches fall into the following categories:
1. Feature extraction using only one modality (e.g., functional MRI, structural MRI or diffusion tensor imaging), as in Chinese patent CN111938644A. Because the information that single-modality data can provide is limited, the classification performance of such approaches is not ideal.
2. Fusion of features from multiple modalities, but with a relatively simple fusion scheme: low-dimensional features of different modalities are concatenated into a high-dimensional fused feature without an adaptive weighted fusion method, as in Chinese patent CN113616184A. This leaves considerable redundant information in the features, and the subsequent feature extraction still uses a traditional single-task testing method, as in Chinese patent CN103093087A, ignoring the connections among tasks, so the model is more complex and the classification accuracy is poor.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing an AD feature extraction method and system based on MRI bimodal adaptive weighted feature fusion, which fuses the bimodal features with an adaptive weighted fusion algorithm based on information integration and performs multi-task learning on the three classification tasks of AD, MCI and normal elderly subjects, sharing the input and an overall loss function, so that optimal feature sets are selected for different targets and high classification accuracy is effectively ensured.
The invention aims at realizing the following technical scheme:
in one aspect, an AD feature extraction method based on MRI bi-modal adaptive weighting feature fusion comprises the following steps:
s1, acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
s2, obtaining the characteristics of the brain functional mode by using a graph theory parameter characteristic extraction method based on the functional MRI data;
s3, obtaining the characteristics of the brain structural mode by using a voxel value characteristic extraction method based on structural MRI data;
s4, carrying out self-adaptive weighted feature fusion on the two modal features obtained in the S2 and the S3 respectively through an information integration strategy to form a bimodal one-dimensional feature vector;
s5, extracting, with a multi-task feature extraction method, a feature subset that is related to the AD, MCI and normal-elderly target outcomes and effectively reflects the differences between categories;
s6, outputting the feature subset closely related to each task as an optimal feature set.
In step S2, the graph theory parameter feature extraction method includes the following steps:
s21, dividing the cerebral cortex with a brain template to partition the regions of the brain;
s22, obtaining the time series of each voxel from the functional MRI data, and computing the mean time series over all voxels of each brain region;
s23, calculating time sequence correlation coefficients among all brain areas by using the Pearson correlation coefficient;
s24, using the magnitude of the inter-region time-series correlation coefficients as the basis for assigning edges between nodes in the brain network, obtaining the topological structure of the brain functional connectivity network;
s25, from the obtained topological structure of the brain functional connectivity network, extracting graph-theory parameters of the brain network, including node strength, node efficiency and node betweenness centrality, as the features of the brain functional modality.
In step S23, the time-series correlation coefficient is calculated as follows:

c_{x,y}(μ) = Cov_{x,y}(μ) / √(var(x) · var(y))

where c_{x,y}(μ) is the time-series correlation coefficient of any two brain regions x and y; var(x) and var(y) represent the variance of the time series of all voxels in brain regions x and y, respectively; and Cov_{x,y}(μ) represents the covariance of the mean time series of the two brain regions x and y.
In step S25, the node strength is expressed as the sum of the weights of the edges connecting the node to all other nodes in the network; the node strength D_i is calculated as:

D_i = Σ_{j∈N} w_ij

where w_ij is the weight of the edge between node i and node j, and N is the set of nodes in the network;
the node efficiency is expressed as the average of the reciprocals of the shortest path lengths between any two nodes in the network formed by the node's neighborhood; the node efficiency E_i is calculated as:

E_i = (1 / (N_{G_i} (N_{G_i} − 1))) Σ_{j≠k∈G_i} 1 / L_jk

where G_i is the network formed by the neighborhood of node i; N_{G_i} is the number of nodes in G_i; j and k are two nodes in the neighborhood network; and L_jk is the shortest path length between node j and node k in the neighborhood network;
the node betweenness centrality is expressed as the ratio of the number of shortest paths of the whole network that pass through the node to the total number of shortest paths of the whole network; the node betweenness centrality B_i is calculated as:

B_i = Σ_{j≠i≠k} σ_jk(i) / σ_jk

where σ_jk is the number of shortest paths from any node j to any node k, and σ_jk(i) is the number of those shortest paths that pass through node i.
In step S3, the voxel value feature extraction method includes the following steps:
s31, dividing structural MRI data by using a brain template to divide each region of the brain;
s32, calculating gray voxel values in each dividing area;
s33, scaling the gray-matter voxel values through normalization, and taking the normalized voxel value X* as the feature of the brain structural modality; the calculation formula is as follows:

X* = (X − μ) / σ

where X is the gray-matter voxel value, μ is its mean, and σ is its standard deviation.
In step S4, the method for adaptively weighting feature fusion includes the following steps:
s41, mapping the obtained features of the brain functional modality, including node strength, node efficiency and node betweenness centrality, together with the obtained features of the brain structural modality, to a binary space through modality-specific hash codes;
s42, performing maximum-variance projection with principal component analysis (PCA) in each local model to obtain the projection result v_i of the maximum-variance space, removing unstable feature points through a residual matching constraint, and adaptively determining the local vector dimension;
s43, fusing the features through a global constraint algorithm: the global constraint is normalized through feature anti-control, the final global weight of each local vector is determined adaptively, and feature fusion is performed by combining each local feature weight with its local vector on the basis of the final global weights.
In step S43, the feature-fusion weights are calculated as follows: w_i is the weight of each local feature; c is a preset hyperparameter; and n is the number of local features.
In step S5, the optimal feature set is extracted using the following calculation formula:

min_W Σ_{j=1}^{t} ||Y_j − ω_j^T V_j||² + ρ_1 ||W||_1

where t is the number of tasks; T denotes the transpose operation; ω_j is the weight vector of the j-th task; V_j is the full feature set of the j-th task; Y_j ∈ {1, −1} is the class label; ρ_1 is a regularization factor; W = {ω_1, ω_2, ..., ω_t}; and ||W||_1 is the L1 regularization term;

minimizing this objective yields the V_j corresponding to the minimum of the objective function, and the features with non-zero weights in each task are selected, i.e., the feature subset that has strong expressive power and is closely related to each task.
On the other hand, an extraction system of the AD feature extraction method based on MRI bi-modal self-adaptive weighting feature fusion comprises:
an MRI data acquisition module for acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
the feature acquisition module of the brain functional mode acquires the features of the brain functional mode by using a graph theory parameter feature extraction method based on the functional MRI data;
the feature acquisition module of the brain structural mode utilizes a voxel value feature extraction method based on structural MRI data to acquire the features of the brain structural mode;
the self-adaptive weighting feature fusion module is used for carrying out self-adaptive weighting feature fusion on the two mode features obtained respectively through an information integration strategy so as to form a bimodal one-dimensional feature vector;
the multi-task feature extraction module, which uses a multi-task feature extraction method to extract the feature subsets that are related to the AD, MCI and normal-elderly target outcomes and effectively reflect the differences between categories;
and the optimal feature set output module is used for outputting the feature subset closely related to each task as an optimal feature set.
The AD feature extraction method and system based on MRI bimodal adaptive weighted feature fusion of the invention have the following beneficial effects:
1. the bimodal features are fused by adaptive weighted fusion, and the cross-modal hash coding method can mine correlations of high-level semantics among different modalities, which avoids the problem of insufficient information in single-modality data while removing uncorrelated and redundant features;
2. a global constraint algorithm determines each local weight; compared with other methods, this retains the features with higher correlation and improves the reliability of the model;
3. compared with traditional single-task feature selection, the multi-task feature extraction method uses complementary information to learn and classify simultaneously, retains the optimal feature set, and can effectively improve classification accuracy.
Drawings
FIG. 1 is a general flow diagram of an AD feature extraction method based on MRI bimodal adaptive weighting feature fusion of the present invention;
FIG. 2 is a block flow diagram of the graph theory parameter feature extraction method of the present invention;
FIG. 3 is a flow chart of the voxel value feature extraction method of the present invention;
FIG. 4 is a flow chart of the adaptive weighted feature fusion method of the present invention;
fig. 5 is a flow chart of the optimal feature set extraction method based on multitasking of the present invention.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the accompanying drawings.
The AD feature extraction method based on MRI bi-modal self-adaptive weighting feature fusion is shown in figure 1, and specifically comprises the following steps:
s1, acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
s2, obtaining the characteristics of the brain functional mode by using a graph theory parameter characteristic extraction method based on the functional MRI data;
s3, obtaining the characteristics of the brain structural mode by using a voxel value characteristic extraction method based on structural MRI data;
s4, carrying out self-adaptive weighted feature fusion on the two modal features obtained in the S2 and the S3 respectively through an information integration strategy to form a bimodal one-dimensional feature vector;
s5, extracting, with a multi-task feature extraction method, a feature subset that is related to the AD, MCI and normal-elderly target outcomes and effectively reflects the differences between categories;
s6, outputting the feature subset closely related to each task as an optimal feature set.
In step S2, the graph theory parameter feature extraction method is shown in fig. 2, and specifically includes the following steps:
s21, dividing the cerebral cortex with a brain template to partition the regions of the brain; as an example, the AAL (Automated Anatomical Labeling) brain template may be used to divide the brain into 90 regions, so that 90 brain regions are obtained as the nodes of the brain network;
s22, obtaining the time series of each voxel from the functional MRI data, and computing the mean time series over all voxels of each brain region (e.g., each of the 90 regions);
s23, calculating time sequence correlation coefficients among all brain areas by using the Pearson correlation coefficient;
s24, using the magnitude of the inter-region time-series correlation coefficients as the basis for assigning edges between nodes in the brain network, obtaining the topological structure of the brain functional connectivity network; specifically, when the correlation coefficient between two brain regions exceeds a threshold, a functional connection is deemed to exist between them and an edge is assigned to the pair; the threshold is a hyperparameter that can be set manually according to the actual situation;
s25, the topological structure of the brain functional connectivity network obtained through s21 to s24 is Graph = (V, E), where V is the set of nodes (brain regions) and E is the set of edges between nodes; from it, graph-theory parameters of the brain network, including node strength, node efficiency and node betweenness centrality, can be extracted as the features of the brain functional modality.
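As an illustration of steps s21 to s24, the sketch below builds a weighted adjacency matrix from an inter-region correlation matrix. It is a toy with numpy as the only dependency; the example values, and the choice to threshold on the absolute correlation while keeping the correlation as the edge weight, are assumptions of this sketch, not details fixed by the text.

```python
import numpy as np

def build_brain_network(corr, threshold):
    """Assign an edge between two regions when their time-series
    correlation exceeds the threshold (a manually chosen hyperparameter,
    as in step s24); edge weights keep the correlation value."""
    corr = np.asarray(corr, dtype=float)
    adj = np.where(np.abs(corr) > threshold, corr, 0.0)
    np.fill_diagonal(adj, 0.0)   # no self-loops in the brain network
    return adj

# toy 3-region correlation matrix
corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])
A = build_brain_network(corr, threshold=0.3)
```

The resulting symmetric matrix A is the Graph = (V, E) of step s25, with non-zero entries marking the edges E.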
In the above step S23, the time-series correlation coefficient is calculated as follows:

c_{x,y}(μ) = Cov_{x,y}(μ) / √(var(x) · var(y))

where c_{x,y}(μ) is the time-series correlation coefficient of any two brain regions x and y; var(x) and var(y) represent the variance of the time series of all voxels in brain regions x and y, respectively; and Cov_{x,y}(μ) represents the covariance of the mean time series of the two brain regions x and y.
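The correlation computation of step S23 can be sketched in numpy as follows; the input layout (one row per region, holding its voxel-averaged time series) and the toy values are assumptions for illustration.

```python
import numpy as np

def region_correlation(ts):
    """Pearson correlation between region-mean time series.

    ts: array of shape (n_regions, n_timepoints), each row the
    voxel-averaged series of one brain region. Returns the
    (n_regions, n_regions) matrix of c_{x,y} values computed as
    Cov(x, y) / sqrt(var(x) * var(y)).
    """
    ts = np.asarray(ts, dtype=float)
    centered = ts - ts.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / ts.shape[1]   # covariance matrix
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

# toy series: region 1 tracks region 0, region 2 is its mirror image
ts = np.array([[1.0, 2.0, 3.0, 4.0],
               [2.0, 4.0, 6.0, 8.0],
               [4.0, 3.0, 2.0, 1.0]])
C = region_correlation(ts)
```

The matrix C is what step S24 thresholds to assign edges between brain-region nodes.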
In step S25, the node strength is expressed as the sum of the weights of the edges connecting the node to all other nodes in the network; the node strength D_i is calculated as:

D_i = Σ_{j∈N} w_ij

where w_ij is the weight of the edge between node i and node j, and N is the set of nodes in the network;
the node efficiency is expressed as the average of the reciprocals of the shortest path lengths between any two nodes in the network formed by the node's neighborhood; the node efficiency E_i is calculated as:

E_i = (1 / (N_{G_i} (N_{G_i} − 1))) Σ_{j≠k∈G_i} 1 / L_jk

where G_i is the network formed by the neighborhood of node i; N_{G_i} is the number of nodes in G_i; j and k are two nodes in the neighborhood network; and L_jk is the shortest path length between node j and node k in the neighborhood network;
the node betweenness centrality is expressed as the ratio of the number of shortest paths of the whole network that pass through the node to the total number of shortest paths of the whole network; the node betweenness centrality B_i is calculated as:

B_i = Σ_{j≠i≠k} σ_jk(i) / σ_jk

where σ_jk is the number of shortest paths from any node j to any node k, and σ_jk(i) is the number of those shortest paths that pass through node i.
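Two of the three graph-theory parameters of step S25 can be sketched as follows (node efficiency can be computed analogously on each node's neighborhood subgraph). The betweenness computation uses Brandes' shortest-path accumulation on an unweighted graph, which is a simplifying assumption of this sketch, as is the toy path network.

```python
import numpy as np
from collections import deque

def node_strength(W):
    """Node strength D_i: sum of the weights of edges incident to node i."""
    return np.asarray(W, dtype=float).sum(axis=1)

def betweenness(adj):
    """Node betweenness B_i = sum over pairs (j, k) of sigma_jk(i)/sigma_jk,
    via Brandes' accumulation on an unweighted undirected graph."""
    n = len(adj)
    B = [0.0] * n
    for s in range(n):
        dist, sigma = [-1] * n, [0] * n
        dist[s], sigma[s] = 0, 1
        q, order = deque([s]), []
        while q:                              # BFS shortest-path counting
            v = q.popleft()
            order.append(v)
            for w in range(n):
                if adj[v][w]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
        delta = [0.0] * n
        for v in reversed(order):             # back-propagate dependencies
            for w in range(n):
                if adj[v][w] and dist[w] == dist[v] + 1:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if v != s:
                B[v] += delta[v]
    return [b / 2 for b in B]                 # undirected: each pair counted twice

# toy path network 0 - 1 - 2: node 1 lies on the only shortest path 0..2
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
D = node_strength(adj)
B = betweenness(adj)
```

On the toy network, the middle node has the highest strength and all of the betweenness, matching the intuition that hub regions dominate both measures.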
In step S3, the voxel value feature extraction method is shown in fig. 3, and specifically includes the following steps:
s31, dividing the structural MRI data with a brain template to partition the regions of the brain; the AAL brain template can be used to obtain 90 brain regions;
s32, calculating gray voxel values in each dividing area;
s33, because the gray-matter voxel values of different regions span a wide range and are unevenly distributed, they are scaled by normalization, and the normalized voxel value X* is taken as the feature of the brain structural modality; the calculation formula is:

X* = (X − μ) / σ

where X is the gray-matter voxel value, μ is its mean, and σ is its standard deviation.
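Step S33 is a standard z-score; a minimal numpy sketch (the toy values are illustrative):

```python
import numpy as np

def zscore(values):
    """Normalize X* = (X - mu) / sigma, where mu is the mean and sigma the
    standard deviation of the regional gray-matter voxel values (step s33)."""
    x = np.asarray(values, dtype=float)
    return (x - x.mean()) / x.std()

# toy regional gray-matter voxel values
z = zscore([10.0, 20.0, 30.0])
```

After scaling, every region's feature has zero mean and unit spread, so regions with different value ranges become comparable before fusion.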
In step S4, the method for adaptively fusing weighted features is shown in fig. 4, and specifically includes the following steps:
s41, mapping the obtained features of the brain functional modality, including node strength, node efficiency and node betweenness centrality, together with the obtained features of the brain structural modality, to a binary space through modality-specific hash codes;
s42, performing maximum-variance projection with principal component analysis (PCA, Principal Component Analysis) in each local model to obtain the projection result v_i of the maximum-variance space, removing unstable feature points through a residual matching constraint, and adaptively determining the local vector dimension;
s43, fusing the features through a global constraint algorithm: the global constraint is normalized through feature anti-control, the final global weight of each local vector is determined adaptively, and feature fusion is performed by combining each local feature weight with its local vector on the basis of the final global weights.
In step S43, the feature-fusion weights are calculated as follows: w_i is the weight of each local feature; c is a preset hyperparameter; and n is the number of local features.
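The patent's exact weight formula is not reproduced in the source text, so the sketch below is only an illustrative stand-in for the adaptive weighting of step S43: it derives each local weight w_i from the variance of its feature block through a softmax controlled by the hyperparameter c (both the softmax form and the variance criterion are assumptions), then concatenates the weighted blocks into the bimodal one-dimensional fused vector.

```python
import numpy as np

def fuse_features(local_blocks, c=1.0):
    """Illustrative adaptive weighted fusion: weight each local feature
    block by a softmax over block variances (assumed criterion), then
    concatenate into one one-dimensional fused vector."""
    variances = np.array([np.var(v) for v in local_blocks])
    w = np.exp(c * variances)
    w = w / w.sum()                  # weights normalized to sum to 1
    fused = np.concatenate([wi * v for wi, v in zip(w, local_blocks)])
    return fused, w

functional = np.array([0.2, 0.5, 0.9])   # e.g. graph-theory features
structural = np.array([1.2, -0.3])       # e.g. normalized voxel values
fused, weights = fuse_features([functional, structural])
```

The hyperparameter c plays the role of the preset super-parameter in the text: larger c makes the weighting more selective toward the higher-variance modality.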
In step S5, the method for obtaining the optimal feature set is shown in fig. 5, and specifically adopts the following calculation formula:
min_W Σ_{j=1}^{t} ||Y_j − ω_j^T V_j||² + ρ_1 ||W||_1

where t is the number of tasks; T denotes the transpose operation; ω_j is the weight vector of the j-th task; V_j is the full feature set of the j-th task; Y_j ∈ {1, −1} is the class label; ρ_1 is a regularization factor; W = {ω_1, ω_2, ..., ω_t}; and ||W||_1 is the L1 regularization term;

minimizing this objective yields the V_j corresponding to the minimum of the objective function, and the features with non-zero weights in each task are selected, i.e., the feature subset that has strong expressive power and is closely related to each task.
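A minimal numerical sketch of this multi-task L1 objective: each task's weight vector is fitted with ISTA (proximal gradient with soft-thresholding), and features with non-zero weights form that task's selected subset. The solver, learning rate, toy data, and selection threshold are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

def multitask_l1_select(V, Y, rho=0.1, lr=0.01, iters=2000):
    """Solve min_W (1/n)||Y - V W||^2 + rho ||W||_1 by ISTA.

    V: (n_samples, n_features) shared input; Y: (n_samples, n_tasks)
    task labels. Returns the sparse weight matrix W (n_features, n_tasks).
    """
    n, d = V.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = 2 * V.T @ (V @ W - Y) / n          # gradient of the data fit
        W = W - lr * grad
        # soft-thresholding: the proximal step of the L1 penalty
        W = np.sign(W) * np.maximum(np.abs(W) - lr * rho, 0.0)
    return W

rng = np.random.default_rng(0)
V = rng.normal(size=(100, 5))
# toy tasks: task 0 depends only on feature 0, task 1 only on feature 3
Y = np.stack([3 * V[:, 0], -2 * V[:, 3]], axis=1)
W = multitask_l1_select(V, Y)
selected = [np.flatnonzero(np.abs(W[:, j]) > 0.1) for j in range(2)]
```

The non-zero columns of W recover the per-task feature subsets; sharing the input V across tasks is what makes this a multi-task selection rather than independent single-task tests.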
On the other hand, an extraction system of the AD feature extraction method based on MRI bi-modal self-adaptive weighting feature fusion comprises:
an MRI data acquisition module for acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
the feature acquisition module of the brain functional mode acquires the features of the brain functional mode by using a graph theory parameter feature extraction method based on the functional MRI data;
the feature acquisition module of the brain structural mode utilizes a voxel value feature extraction method based on structural MRI data to acquire the features of the brain structural mode;
the self-adaptive weighting feature fusion module is used for carrying out self-adaptive weighting feature fusion on the two mode features obtained respectively through an information integration strategy so as to form a bimodal one-dimensional feature vector;
the multi-task feature extraction module, which uses a multi-task feature extraction method to extract the feature subsets that are related to the AD, MCI and normal-elderly target outcomes and effectively reflect the differences between categories;
and the optimal feature set output module is used for outputting the feature subset closely related to each task as an optimal feature set.
Since the content and principle of the system are substantially the same as those of the extraction method, detailed description thereof will not be repeated.
In summary, the AD feature extraction method and system based on MRI bi-modal self-adaptive weighted feature fusion use an information-integration-based adaptive weighted fusion algorithm to fuse the bi-modal features, and perform multi-task learning over the three classification tasks of AD, MCI and normal elderly with shared input and an overall loss function, thereby selecting optimal feature sets for the different targets and effectively ensuring high classification accuracy.
It will be appreciated by persons skilled in the art that the above embodiments are provided to illustrate the invention and are not intended to be limiting; changes and modifications to the above-described embodiments that remain within the true scope of the invention are intended to fall within the scope of the appended claims.
Claims (9)
1. An AD feature extraction method based on MRI bi-modal self-adaptive weighting feature fusion is characterized by comprising the following steps:
S1, acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
S2, obtaining the features of the brain functional mode by using a graph-theory parameter feature extraction method based on the functional MRI data;
S3, obtaining the features of the brain structural mode by using a voxel value feature extraction method based on the structural MRI data;
S4, carrying out self-adaptive weighted feature fusion on the two modal features obtained in S2 and S3 through an information integration strategy to form a bimodal one-dimensional feature vector;
S5, extracting, by a multitask-based feature extraction method, a feature subset that is related to the AD, MCI and normal-elderly classification targets and can effectively reflect the category differences;
S6, outputting the feature subset closely related to each task as the optimal feature set.
2. The AD feature extraction method based on MRI bimodal adaptive weighted feature fusion according to claim 1, characterized in that in step S2, the graph-theory parametric feature extraction method comprises the steps of:
S21, segmenting the cerebral cortex using a brain template to divide the brain into regions;
S22, obtaining the time series of each voxel from the functional MRI data, and calculating the mean time series of the voxels of each brain region;
S23, calculating the time series correlation coefficients between all brain regions by using the Pearson correlation coefficient;
S24, using the magnitudes of the time series correlation coefficients between brain regions as the basis for assigning edges between nodes in the brain network, to obtain the brain network topology of the brain functional connectivity;
S25, extracting, from the obtained brain network topology of the brain functional connectivity, graph-theory parameters of the brain network as the features of the brain functional mode, including node strength, node efficiency and node betweenness centrality.
3. The AD feature extraction method based on MRI bimodal adaptive weighting feature fusion according to claim 2, wherein in step S23, the calculation formula of the time series correlation coefficient is as follows:
c_{x,y}(μ) = Cov_{x,y}(μ) / √(var(x) · var(y))
wherein c_{x,y}(μ) is the time series correlation coefficient of any two brain regions x and y among all brain regions; var(x) and var(y) represent the time series variance of all voxels in the two brain regions x and y, respectively; and Cov_{x,y}(μ) represents the covariance of the time series means of the two brain regions x and y.
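For illustration (not claim language), the per-pair computation of claim 3 reduces to a Pearson correlation between the regional mean time series; a minimal NumPy sketch, with the array shapes being assumptions:

```python
import numpy as np

def region_mean_series(voxel_series):
    """voxel_series: (n_voxels, n_timepoints) array of one brain region;
    returns the region's mean time series (step S22)."""
    return np.asarray(voxel_series).mean(axis=0)

def timeseries_correlation(x_mean, y_mean):
    """Pearson correlation c_{x,y} = Cov(x, y) / sqrt(var(x) * var(y))
    between two regional mean time series (step S23)."""
    cov = np.cov(x_mean, y_mean)[0, 1]  # sample covariance (ddof=1)
    return cov / np.sqrt(np.var(x_mean, ddof=1) * np.var(y_mean, ddof=1))
```

Computing this coefficient for every pair of regions yields the matrix that is thresholded into the brain network of step S24.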
4. The AD feature extraction method based on the MRI bimodal adaptive weighting feature fusion according to claim 2, wherein in step S25, the node strength is expressed as the sum of the weights of the edges connecting the node to all other nodes in the network, and the node strength D_i is calculated as:
D_i = Σ_{j∈N} w_{ij}
wherein w_{ij} represents the weight between node i and node j, and N represents the node set of the network;
the node efficiency is expressed as the average of the reciprocals of the shortest path lengths between any two nodes in the network formed by the node's neighborhood, and the node efficiency E_i is calculated as:
E_i = (1 / (N_{G_i}(N_{G_i} − 1))) Σ_{j≠k∈G_i} 1/l_{jk}
wherein G_i is the network formed by the neighborhood of node i; N_{G_i} is the number of nodes in G_i; j and k are two nodes in the neighborhood network; and l_{jk} is the shortest path length between node j and node k in the neighborhood network;
the node betweenness centrality is expressed as the ratio of the number of shortest paths in the whole network that pass through the node to the total number of shortest paths in the whole network, and the node betweenness centrality B_i is calculated as:
B_i = Σ_{j≠i≠k} σ_{jk}(i) / σ_{jk}
wherein σ_{jk} represents the number of all shortest paths from any node j to any node k, and σ_{jk}(i) represents the number of those shortest paths that pass through node i.
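A hedged sketch of the three graph-theory parameters of claim 4, using NetworkX; the efficiency term here uses unweighted (hop-count) shortest paths inside the node's neighborhood subgraph, which is one common reading of the formula — weighted path lengths would be an alternative assumption:

```python
import numpy as np
import networkx as nx

def graph_features(W, threshold=0.0):
    """W: symmetric (N, N) matrix of inter-region correlation weights.
    Edges are assigned where the weight exceeds `threshold` (step S24);
    returns per-node strength, neighborhood efficiency and betweenness."""
    N = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            if W[i, j] > threshold:
                G.add_edge(i, j, weight=float(W[i, j]))
    # node strength D_i: sum of edge weights incident to the node
    strength = dict(G.degree(weight="weight"))
    # node efficiency E_i: global efficiency of the subgraph induced by
    # the node's neighbors (hop-count distances; an assumption here)
    efficiency = {i: nx.global_efficiency(G.subgraph(list(G.neighbors(i))))
                  for i in G}
    # node betweenness centrality B_i: fraction of shortest paths through i
    betweenness = nx.betweenness_centrality(G, normalized=True)
    return strength, efficiency, betweenness
```

For a fully connected triangle of unit weights, each node has strength 2, neighborhood efficiency 1 (its two neighbors are directly linked), and betweenness 0.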
5. The AD feature extraction method based on MRI bimodal adaptive weighted feature fusion according to claim 1, characterized in that in step S3, the voxel value feature extraction method comprises the steps of:
S31, segmenting the structural MRI data using a brain template to divide the brain into regions;
S32, calculating the gray-matter voxel value in each divided region;
S33, scaling the gray-matter voxel values through normalization, and taking the normalized voxel value X* as the feature of the brain structural mode, with the calculation formula:
X* = (X − μ) / σ
wherein X is the gray-matter voxel value; μ is the mean; and σ is the standard deviation.
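The z-score normalization of claim 5 in a few lines of NumPy (applying it per region over the sample is an assumption):

```python
import numpy as np

def zscore(values):
    """X* = (X - mu) / sigma: scale gray-matter voxel values to zero
    mean and unit standard deviation."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()
```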
6. The AD feature extraction method based on MRI bimodal adaptive weighted feature fusion of claim 1, wherein: in step S4, the method for adaptively weighting feature fusion includes the following steps:
S41, mapping the obtained features of the brain functional mode and the brain structural mode to a binary space through modality-specific hash codes;
S42, performing maximum-variance projection via principal component analysis (PCA) in each local model to obtain the projection result v_i in the maximum-variance space, removing unstable feature points through residual-matching constraints, and adaptively determining the local vector dimension;
S43, fusing the features through a global constraint algorithm: normalizing the global constraint over the features, adaptively determining the final global weight of each local vector, and performing feature fusion by combining each local feature weight with the local vectors based on the final global weights.
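An illustrative sketch of steps S42–S43 under simplifying assumptions: the hash-coding and residual-matching stages are omitted, the maximum-variance projection is plain PCA, and the adaptive weights are taken as given and normalized to sum to one:

```python
import numpy as np

def pca_projection(X, k=1):
    """Project samples onto the top-k maximum-variance directions (PCA),
    standing in for the maximum-variance projection of step S42."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]  # largest variance first
    return Xc @ vecs[:, order]

def weighted_fusion(local_vectors, weights):
    """Concatenate local feature vectors scaled by their weights
    (weights normalised to sum to one), forming the fused 1-D vector."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.concatenate([wi * v for wi, v in zip(w, local_vectors)])
```

Fusing a functional-mode vector and a structural-mode vector with weights 1 and 3 thus produces one concatenated vector scaled by 0.25 and 0.75 respectively.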
7. The AD feature extraction method based on MRI bimodal adaptive weighting feature fusion of claim 6, wherein: in step S43, the calculation formula of the feature fusion is as follows:
wherein w_i is the weight of each local feature; c is a preset hyper-parameter; and n is the number of local features.
8. The AD feature extraction method based on MRI bimodal adaptive weighted feature fusion of claim 1, wherein: in step S5, the method for extracting the optimal feature set uses the following calculation formula:
min_W Σ_{j=1}^{t} ||Y_j − ω_j^T V_j||² + p1 · ||W||_1
wherein t is the number of tasks; T represents the transpose operation; ω_j is the weight vector of the j-th task; V_j is the full feature set of the j-th task; Y_j ∈ {1, −1} is the category label; p1 is a regularization factor; and, with W = {ω_1, ω_2, ..., ω_t}, ||W||_1 is the L1 regular term;
By minimizing this objective function, the weights corresponding to its minimum are obtained; the features corresponding to the non-zero weight values in each task are then selected, i.e., the feature subset that has stronger expressive power and is closely related to each task.
9. The extraction system of the AD feature extraction method based on MRI bimodal adaptive weighted feature fusion according to any one of claims 1-8, characterized by comprising:
an MRI data acquisition module for acquiring functional MRI data and structural MRI data of a brain region of a patient obtained by brain nuclear magnetic resonance scanning;
the feature acquisition module of the brain functional mode acquires the features of the brain functional mode by using a graph theory parameter feature extraction method based on the functional MRI data;
the feature acquisition module of the brain structural mode utilizes a voxel value feature extraction method based on structural MRI data to acquire the features of the brain structural mode;
the self-adaptive weighting feature fusion module is used for carrying out self-adaptive weighting feature fusion on the two mode features obtained respectively through an information integration strategy so as to form a bimodal one-dimensional feature vector;
the feature extraction module based on multitasking, which is used for extracting, by a multitask-based feature extraction method, feature subsets that are related to the AD, MCI and normal-elderly classification targets and can effectively reflect the category differences;
and the optimal feature set output module is used for outputting the feature subset closely related to each task as an optimal feature set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310606550.7A CN116861210A (en) | 2023-05-26 | 2023-05-26 | AD feature extraction method and system based on MRI (magnetic resonance imaging) bimodal self-adaptive weighting feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116861210A true CN116861210A (en) | 2023-10-10 |
Family
ID=88229285
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||