CN108345903A - Multi-modal fusion image classification method based on modality distance constraint - Google Patents

Multi-modal fusion image classification method based on modality distance constraint

Info

Publication number
CN108345903A
Authority
CN
China
Prior art keywords
brain
vector
feature
mode
modal
Prior art date
Legal status
Granted
Application number
CN201810073841.3A
Other languages
Chinese (zh)
Other versions
CN108345903B (en)
Inventor
阳洁
刘哲宁
董健
Current Assignee
Second Xiangya Hospital of Central South University
Original Assignee
Second Xiangya Hospital of Central South University
Priority date
Filing date
Publication date
Application filed by Second Xiangya Hospital of Central South University
Priority to CN201810073841.3A
Publication of CN108345903A
Application granted
Publication of CN108345903B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods, based on sparsity criteria, e.g. with an overcomplete basis
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a multi-modal fusion image classification method based on a modality distance constraint, comprising the following steps. Step 1: acquire the rs-fMRI data and DTI data of multiple subjects. Step 2: for each subject, construct a brain functional network feature vector and a brain structural network feature vector. Step 3: filter the feature vectors of the two modalities based on Kendall tau correlation coefficients and an "overlap" rule. Step 4: on top of the standard k-support norm, add a relative distance constraint on the two modalities' feature vectors of the same subject before and after the mapping, build the objective function of the multi-modal feature selection model, and select the optimal feature vector of each modality. Step 5: train a classifier with a multi-kernel support vector machine model, feed the optimal feature vectors of the two modalities of a test subject into the trained classifier, and predict its class label. The invention achieves high classification accuracy.

Description

Multi-modal fusion image classification method based on modality distance constraint
Technical field
The present invention relates to the technical field of image processing, and in particular to a multi-modal fusion image classification method based on a modality distance constraint.
Background technology
Over roughly the past decade, advances in brain imaging technology have brought brain science into a period of rapid development. Magnetic resonance imaging, a non-invasive technique for probing brain function in vivo, offers high resolution without ionizing radiation and has, since its emergence in the 1990s, rapidly become the most widely used brain imaging technology in brain science research. Magnetic resonance imaging techniques include structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), among others. Each neuroimaging technique characterizes brain tissue at a different level and has its own advantages, disadvantages and applicable scenarios. Multi-modal magnetic resonance image fusion can integrate the information that images of different modalities provide about brain tissue, offering a completely new perspective on major neuroscience problems such as systematically exploring the structure and function of the brain and studying the pathogenesis of neuropsychiatric diseases.
The human brain is generally described as an economical and exquisitely organized network. The symptoms of most neuropsychiatric diseases are related to dysregulation of the brain networks responsible for emotion or cognition, and dysregulation of large-scale distributed neural networks can manifest at two levels, the brain functional network and the brain structural network. Moreover, a large body of neuroimaging research has shown that fusing functional and structural features can improve image classification accuracy. Existing multi-modal fusion image classification methods simply concatenate the features extracted from different modalities into one long feature vector for subsequent analysis; they do not take the relationships between the data of different modalities into account, and their classification accuracy leaves room for improvement. A new multi-modal fusion image classification method is therefore needed.
Summary of the invention
The technical problem solved by the invention is to provide a multi-modal fusion method based on a modality distance constraint which, building on the fusion of brain structural network and brain functional network connectivity features, makes full use of the complementary information of neuroimaging data from different modalities to improve classification accuracy.
The technical solution provided by the present invention is as follows:
A multi-modal fusion image classification method based on a modality distance constraint, comprising the following steps:
Step 1, data acquisition: acquire the rs-fMRI data and DTI data of multiple subjects and preprocess them to obtain preprocessed rs-fMRI data and preprocessed DTI data, where the subjects include sample subjects with known class labels and test subjects with unknown class labels;
Step 2, for each subject, construct the feature vectors of the two modalities, namely the brain functional network feature vector and the brain structural network feature vector;
For each subject, the brain functional network feature vector is constructed as follows: first, according to the preprocessed rs-fMRI data, the whole brain excluding the cerebellum is divided into 90 cortical and subcortical nucleus regions, i.e. 90 brain regions, using the automated anatomical labeling template; then each brain region is defined as a node of the subject's brain functional network, and the Pearson correlation coefficient between the mean time series of each pair of brain regions is defined as the connection between the corresponding nodes of the brain functional network (computing the Pearson correlation coefficient of the mean time series of any pair of brain regions is prior art); then, using the connections between the nodes of the brain functional network as matrix elements, a 90 × 90 resting-state brain functional network symmetric matrix is built for the subject; finally, the 90 diagonal elements of the symmetric matrix are removed, and all elements of the lower triangular region of the symmetric matrix are extracted as the brain functional network feature vector, whose dimension is 4005;
For each subject, the brain structural network feature vector is constructed as follows: first, according to the preprocessed DTI data, the whole brain excluding the cerebellum is divided into 90 cortical and subcortical nucleus regions, i.e. 90 brain regions, using the automated anatomical labeling template; each brain region is defined as a node of the subject's brain structural network, and the number of white-matter fibers between each pair of brain regions is defined as the connection between the corresponding nodes of the brain structural network (the fiber count between any pair of nodes, i.e. brain regions, is traced with the deterministic FACT (fiber assignment by continuous tracking) algorithm, which is prior art); then, using the connections between the nodes of the brain structural network as matrix elements, a 90 × 90 resting-state brain structural network symmetric matrix is built for the subject; finally, the 90 diagonal elements of the symmetric matrix are removed, and all elements of the lower triangular region of the symmetric matrix are extracted as the brain structural network feature vector, whose dimension is 4005;
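As an illustration of the two constructions above, the following minimal sketch builds both feature vectors with NumPy; it assumes the 90-region mean time series and the 90 × 90 fiber-count matrix have already been obtained from the preprocessed data, and the array and function names are illustrative rather than taken from the patent.

```python
import numpy as np

def functional_feature_vector(region_ts):
    """region_ts: array of shape (T, 90) holding the mean rs-fMRI time series
    of the 90 AAL regions; returns the 4005-dimensional feature vector."""
    corr = np.corrcoef(region_ts.T)            # 90 x 90 Pearson correlation matrix
    rows, cols = np.tril_indices(90, k=-1)     # lower triangle, diagonal excluded
    return corr[rows, cols]                    # 90 * 89 / 2 = 4005 connections

def structural_feature_vector(fiber_counts):
    """fiber_counts: symmetric 90 x 90 matrix of white-matter fiber counts
    between AAL regions (FACT tractography); returns the 4005-dim vector."""
    rows, cols = np.tril_indices(90, k=-1)
    return fiber_counts[rows, cols]
```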
Step 3, feature filtering: filter the feature vectors of the two modalities of each subject based on Kendall tau correlation coefficients and an "overlap" rule, obtaining new feature vectors of the two modalities of the subject;
Step 4, feature selection: on top of the standard k-support norm, add a relative distance constraint on the two modalities' feature vectors of the same subject before and after the mapping, build the objective function of the multi-modal feature selection model, and perform feature selection on the new feature vectors of the two modalities of each subject to obtain the optimal feature vectors of the two modalities;
Step 5, multi-modal classification analysis: based on the optimal feature vectors of the two modalities of the training samples, train a classifier with a multi-kernel support vector machine model; feed the optimal feature vectors of the two modalities of a test subject into the trained classifier and predict its class label.
Further, the feature filtering in step 3 specifically comprises the following steps:
Step 3.1: first obtain the patient-group and normal-control-group sample data, and let x_ij^e and x_hj^e denote the j-th feature of the e-th modality feature vector of the i-th sample subject and of the h-th sample subject (i.e. the j-th connection of the corresponding brain functional network or brain structural network), that is, the j-th element of the e-th modality feature vector of the i-th and the h-th sample subject, where e = 1, 2 (the 1st modality is the brain functional network, the 2nd modality the brain structural network) and j = 1, 2, ..., 4005; let y_i and y_h respectively denote the class labels of the i-th and the h-th sample subject, a class label of 1 indicating a patient-group sample subject and a class label of -1 indicating a normal-control-group sample subject; then compute the Kendall tau correlation coefficient according to formula (1):
where Γ_j^e denotes the Kendall tau correlation coefficient of the j-th feature of the e-th modality feature vectors, m and n are respectively the numbers of sample subjects in the patient group and the normal control group, and the two counts in formula (1) are respectively the numbers of concordant and discordant pairs of the e-th modality; pairs of sample subjects belonging to the same group need not be considered, so the total number of sample-subject pairs is m × n; a concordant pair is defined by formula (2):
A discordant pair is defined by formula (3):
where sgn is the sign function, and the concordant and discordant pair counts are the numbers of sample-subject pairs i-h satisfying formula (2) and formula (3), respectively;
A positive Γ_j^e indicates that the j-th feature of the e-th modality feature vectors is significantly enhanced in the patient group compared with the normal control group, while a negative Γ_j^e indicates that it is significantly weakened in the patient group compared with the normal control group.
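The images of formulas (1) to (3) are not reproduced in this text. A plausible reconstruction from the definitions above, writing K_c^e and K_d^e for the numbers of concordant and discordant pairs (these symbol names are assumptions, not the patent's original notation), is:

```latex
% Reconstruction of formulas (1)-(3); K_c^e and K_d^e denote the numbers of
% concordant and discordant cross-group pairs of modality e for feature j.
\Gamma_j^e = \frac{K_c^{e} - K_d^{e}}{m \times n}                     \tag{1}
% a patient/control pair (i,h) is concordant when
\operatorname{sgn}\bigl(x_{ij}^{e}-x_{hj}^{e}\bigr)\cdot
\operatorname{sgn}\bigl(y_i-y_h\bigr) > 0                             \tag{2}
% and discordant when
\operatorname{sgn}\bigl(x_{ij}^{e}-x_{hj}^{e}\bigr)\cdot
\operatorname{sgn}\bigl(y_i-y_h\bigr) < 0                             \tag{3}
```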
Then the |Γ_j^e|, j = 1, 2, ..., 4005, are sorted by magnitude, and the feature dimensions of the e-th modality feature vector whose |Γ_j^e| exceeds a given threshold are selected. Note that in the above steps the brain functional network features and the brain structural network features are processed separately. The threshold is set by the user and is preferably determined from the mean and standard deviation of |Γ_j^e|, j = 1, 2, ..., 4005;
Step 3.2: to guarantee the correspondence of the brain network features, the feature dimensions selected in step 3.1 are further screened with an "overlap" rule, which ensures that, for any node pair, the connections of the brain functional network and of the brain structural network either both appear in, or both stay out of, the subsequent feature selection step; the screening rule is: if the j-th feature of the 1st modality feature vector and the j-th feature of the 2nd modality feature vector (i.e. the j-th connection of the brain functional network and the j-th connection of the brain structural network) are both selected in step 3.1, this feature dimension is retained; otherwise, if only one of them, or neither, is selected in step 3.1, this feature dimension is filtered out;
Step 3.3: for the feature vectors of the two modalities of each subject (the brain functional network feature vector and the brain structural network feature vector), only the feature dimensions screened in step 3.2 are retained, which yields the new feature vectors of the two modalities (the new brain functional network feature vector and the new brain structural network feature vector).
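A compact sketch of the filtering in steps 3.1 to 3.3 is given below; the exact threshold rule (mean plus one standard deviation of |Γ|) and the variable names are illustrative assumptions, and the per-modality Kendall tau is computed only over the m × n patient/control pairs as defined above.

```python
import numpy as np

def kendall_tau_scores(X, y):
    """X: (n_samples, 4005) features of one modality; y: labels (+1 patient, -1 control).
    Returns the cross-group Kendall tau coefficient of every feature dimension."""
    patients, controls = X[y == 1], X[y == -1]
    m, n = len(patients), len(controls)
    diff = np.sign(patients[:, None, :] - controls[None, :, :])   # shape (m, n, 4005)
    # sgn(y_i - y_h) = +1 for every patient/control pair, so the concordant count
    # minus the discordant count is simply the sum of the signed feature differences
    return diff.sum(axis=(0, 1)) / (m * n)

def overlap_filter(X_func, X_struct, y, n_std=1.0):
    """Keep only the dimensions whose |tau| exceeds mean + n_std * std in BOTH modalities."""
    keep = np.ones(X_func.shape[1], dtype=bool)
    for X in (X_func, X_struct):
        tau = np.abs(kendall_tau_scores(X, y))
        keep &= tau > tau.mean() + n_std * tau.std()
    return X_func[:, keep], X_struct[:, keep], keep
```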
Further, the principle of the feature selection operation in step 4 is as follows:
(1) For feature selection based on the k-support regularization term, the objective function to be minimized under the k-support norm is expression (4):
where X̃^e is the e-th modality feature matrix of the S training samples, S and l respectively denoting the number of training samples (here the training samples may be the patient-group and normal-control-group samples of step 3.1, or other separately selected samples) and the dimension of the new e-th modality feature vectors obtained in step 3.3; x̃_i^e denotes the new e-th modality feature vector of the i-th training sample obtained in step 3.3; w_e ∈ R^(l×1) is the regression coefficient vector of the e-th modality feature vectors and is the parameter to be optimized (its solution method is prior art); Y = [y_1, y_2, ..., y_i, ..., y_S]^T ∈ R^(S×1) is the class label vector of the S training samples, and every element of Y is 1 or -1; F denotes the Frobenius norm; λ_1 is the regularization parameter controlling the sparsity of the model and is an empirical parameter (for example, 10-fold cross-validation can be performed on the training set, and the parameter giving the best classification result in cross-validation is then applied to the test set of the outer cross-validation); r is the unique integer in {0, ..., k-1} satisfying condition (5); k satisfies k < l; |w_e|_i↓ denotes the i-th largest element of the vector w_e; expression (5) is given below:
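The images of expressions (4) and (5) are likewise not reproduced. Assuming the standard definition of the k-support norm, a reconstruction consistent with the definitions above is:

```latex
% Reconstructed expression (4): squared-loss regression with a squared
% k-support-norm penalty on the regression coefficients of modality e.
\min_{w_e}\; \tfrac{1}{2}\bigl\|Y-\tilde{X}^{e}w_e\bigr\|_{F}^{2}
  + \lambda_{1}\bigl(\|w_e\|_{k}^{sp}\bigr)^{2}                        \tag{4}
% with the k-support norm
\|w_e\|_{k}^{sp} = \Bigl(\sum_{i=1}^{k-r-1}\bigl(|w_e|_{i}^{\downarrow}\bigr)^{2}
  + \tfrac{1}{r+1}\Bigl(\sum_{i=k-r}^{l}|w_e|_{i}^{\downarrow}\Bigr)^{2}\Bigr)^{1/2}
% and reconstructed condition (5) determining the unique r in {0, ..., k-1}:
|w_e|_{k-r-1}^{\downarrow} \;>\; \tfrac{1}{r+1}\sum_{i=k-r}^{l}|w_e|_{i}^{\downarrow}
  \;\ge\; |w_e|_{k-r}^{\downarrow}                                     \tag{5}
```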
(2) The present invention performs feature selection based on an improved k-support regularization term, making full use of the multi-modal complementary information to select the optimal feature vector of each modality: on top of the standard k-support norm, a relative distance constraint on the brain functional network and brain structural network feature vectors of the same training sample before and after the mapping is added, and the calculation formula is expression (6):
where D is the relative distance constraint; x̃_i^e and x̃_i^t are the new e-th modality and t-th modality feature vectors of the i-th training sample obtained in step 3 (i.e. the brain functional and brain structural network feature vectors); F denotes the Frobenius norm;
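The image of expression (6) is not shown. One plausible concrete form of the constraint, which keeps the mapped values of the two modalities of the same subject close and matches the stated role of D (the actual expression in the patent may differ), is:

```latex
% Assumed form of the modality distance constraint D (expression (6)).
D = \sum_{i=1}^{S}\bigl\|(\tilde{x}_{i}^{\,e})^{\mathsf T}w_{e}
      - (\tilde{x}_{i}^{\,t})^{\mathsf T}w_{t}\bigr\|_{F}^{2}          \tag{6}
```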
The objective function of the multi-modal feature selection model is accordingly rewritten as expression (7):
where λ_1 > 0 and λ_2 > 0; λ_1 and λ_2 respectively control the sparsity of the model and the degree to which the relationship between the feature vectors of different modalities is preserved. The objective function is solved to obtain w_e (the solution method of the objective function is prior art); the feature dimensions corresponding to the elements of w_e greater than 0 are determined, and the features of these dimensions are selected from the new e-th modality feature vector of the subject to form its e-th modality optimal feature vector.
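Under the same assumption, the joint objective (7), combining the loss of each modality, the squared k-support-norm penalties and the distance constraint, can be written as:

```latex
\min_{w_1,\,w_2}\; \sum_{e=1}^{2}\tfrac{1}{2}\bigl\|Y-\tilde{X}^{e}w_{e}\bigr\|_{F}^{2}
  + \lambda_{1}\sum_{e=1}^{2}\bigl(\|w_{e}\|_{k}^{sp}\bigr)^{2}
  + \lambda_{2}\,D                                                     \tag{7}
```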
Further, the principle of the multi-modal classification analysis in step 5 is as follows:
The multi-kernel support vector machine must satisfy the objective function given in expression (8):
where q_e denotes the hyperplane normal vector of the e-th modality data; b denotes the bias; ξ_i denotes the non-negative slack variable penalizing the misclassification of data points; C denotes the penalty factor, which balances the loss against the class margin and is an empirical parameter (for example, 10-fold cross-validation can be performed on the training set, and the parameter giving the best classification result in cross-validation is then applied to the test set of the outer cross-validation); φ^e(·) is the nonlinear mapping function; x_i^e is the e-th modality optimal feature vector of training sample x_i; β_e denotes the weight factor of the e-th modality feature vectors and must satisfy Σ_e β_e = 1, where, in the present invention, there are two modality feature vectors in total, i.e. G = 2;
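The image of expression (8) is not reproduced. Assuming the usual multi-kernel SVM primal formulation, a reconstruction from the definitions above is:

```latex
\min_{\{q_e\},\,b,\,\{\xi_i\}}\; \tfrac{1}{2}\sum_{e=1}^{G}\beta_{e}\|q_{e}\|^{2}
  + C\sum_{i=1}^{S}\xi_{i}
\quad\text{s.t.}\quad
y_{i}\Bigl(\sum_{e=1}^{G}\beta_{e}\bigl(q_{e}^{\mathsf T}\phi^{e}(x_{i}^{e})+b\bigr)\Bigr)
  \ge 1-\xi_{i},\qquad \xi_{i}\ge 0                                    \tag{8}
```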
Expression (8) is transformed by Lagrangian duality, and the calculation formula is expression (9):
β_e is set by the user and only needs to satisfy Σ_e β_e = 1.
where a_i and a_p are respectively the Lagrange multipliers corresponding to training samples x_i and x_p; y_i and y_p are respectively the class labels of training samples x_i and x_p; k^e(x_i^e, x_p^e) is the kernel function (a polynomial kernel) of the e-th modality optimal feature vectors of training samples x_i and x_p;
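A reconstruction of the dual problem (9) under the same assumptions:

```latex
\max_{a}\; \sum_{i=1}^{S}a_{i}
  - \tfrac{1}{2}\sum_{i=1}^{S}\sum_{p=1}^{S}a_{i}a_{p}\,y_{i}y_{p}
    \sum_{e=1}^{G}\beta_{e}\,k^{e}\bigl(x_{i}^{e},x_{p}^{e}\bigr)
\quad\text{s.t.}\quad
\sum_{i=1}^{S}a_{i}y_{i}=0,\qquad 0\le a_{i}\le C                      \tag{9}
```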
For a given test subject, its optimal feature vectors are fed into the classifier of formula (10) to obtain its class label:
where F(x) is the classification prediction result for the test subject; sgn(·) denotes the sign function; k^e(x_i^e, x^e) is the kernel function of the e-th modality optimal feature vectors of training sample x_i and test subject x; b is the bias, obtained by training on the training samples.
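The decision function (10) then takes the usual kernel-expansion form:

```latex
F(x)=\operatorname{sgn}\Bigl(\sum_{i=1}^{S}y_{i}\,a_{i}
  \sum_{e=1}^{G}\beta_{e}\,k^{e}\bigl(x_{i}^{e},x^{e}\bigr)+b\Bigr)    \tag{10}
```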
The beneficial effects of the invention are as follows. The invention first extracts brain functional network features and brain structural network features from rs-fMRI (resting-state functional magnetic resonance imaging) data and DTI data respectively, and then filters the two kinds of features separately using Kendall tau correlation coefficients. Second, the multi-modal complementary information is fully exploited to select the optimal feature subset of each modality: on top of the standard k-support norm, a relative distance constraint on the brain functional network and brain structural network feature vectors of the same subject before and after the mapping is added, so that the relationship between the features of different modalities is preserved; by maintaining the relationship between the brain functional and structural network feature vectors, the sparsity of the features selected within each modality is also guaranteed. Finally, a multi-kernel SVM is used to combine the features selected from the different modalities and to predict the image class label. The brain network features selected by the invention simultaneously reflect the correlation of brain function and brain structure with the disease, so they are more trustworthy as disease biomarkers and are of great significance for clinical studies revealing the course of disease progression. Compared with previous multi-modal fusion methods that concatenate the features extracted from different modalities into one long feature vector for subsequent analysis, the present invention takes the relationship between different modalities into account and exploits this latent connection in the feature selection step, improving classification accuracy; the optimal feature subsets selected by the invention at both the functional and structural levels are more trustworthy as disease biomarkers.
Description of the drawings
Fig. 1 is the flow chart of the multi-modal fusion image classification method based on a modality distance constraint according to the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the protection scope of the invention can be defined more clearly.
The principle of the present invention is as follows: brain functional and brain structural network feature vectors are extracted from rs-fMRI data and DTI data respectively; the interaction between the two modalities is fully exploited by adding, on top of the standard k-support norm, a new constraint that preserves the relationship between the features of different modalities, which also guarantees the sparsity of the features of each modality; finally, a multi-kernel SVM is used to combine the features selected from the different modalities and to predict the image class label.
Embodiment 1:
The invention discloses a multi-modal fusion image classification method based on a modality distance constraint. As shown in Fig. 1, the method comprises the following steps:
Step 1, data acquisition: acquire the rs-fMRI data and DTI data of multiple subjects and preprocess them to obtain preprocessed rs-fMRI data and preprocessed DTI data;
Step 2, construct the brain functional network feature vector and the brain structural network feature vector, as detailed below:
The brain functional network feature vector is built from the preprocessed rs-fMRI data, specifically: generate 90 cortical and subcortical nucleus regions with the automated anatomical labeling template and remove the cerebellum; compute, for each subject, the Pearson correlation coefficients between the mean time series of any pair of brain regions; define the nodes of the brain functional network as the 90 cortical and subcortical nucleus regions, and the connections of the brain functional network as the Pearson correlation coefficients between the mean time series of pairs of brain regions; a 90 × 90 resting-state brain functional network symmetric matrix is thus constructed for each subject, and after the 90 diagonal elements of the symmetric matrix are removed, all elements of the lower triangular region are extracted as the brain functional network feature vector. The brain structural network feature vector is built from the preprocessed DTI data, specifically: generate 90 cortical and subcortical nucleus regions with the automated anatomical labeling template and remove the cerebellum; define the nodes of the brain structural network as the 90 cortical and subcortical nucleus regions, and the connections of the brain structural network as the numbers of white-matter fibers between pairs of nodes (i.e. brain regions) traced with the deterministic FACT (fiber assignment by continuous tracking) algorithm; a 90 × 90 resting-state brain structural network symmetric matrix is thus constructed for each subject, and after the 90 diagonal elements of the symmetric matrix are removed, all elements of the lower triangular region are extracted as the brain structural network feature vector;
Step 3, feature filtering: filter the brain functional network feature vector and the brain structural network feature vector separately using Kendall tau correlation coefficients;
Step 4, feature selection: make full use of the multi-modal complementary information to select the optimal feature vector of each modality by adding, on top of the standard k-support norm, the relative distance constraint on the brain functional network and brain structural network feature vectors of the same subject before and after the mapping;
Step 5, multi-modal classification analysis: based on the optimal feature vectors of the two modalities of the training samples, train a classifier with a multi-kernel support vector machine model; feed the optimal feature vectors of the two modalities of a test subject into the trained classifier and predict its class label.
The preprocessing in this embodiment is as follows:
The rs-fMRI data are processed with the SPM8 software and the CONN toolbox. Briefly, the preprocessing of the images includes head-motion correction, spatial correction, registration, normalization to MNI space and spatial smoothing with a kernel of FWHM = 8 mm; subjects whose head motion exceeds 2.5 mm in any direction or whose rotation exceeds 2.5 degrees are excluded; white matter, cerebrospinal fluid and head-motion parameters are treated as confounding factors, and the CompCor (component-based noise correction) method is used to reduce the influence of these non-neural factors on the functional MR signal; whole-brain signal regression is then applied to remove a large number of spurious negative correlations, and the remaining time series are band-pass filtered (0.01-0.08 Hz) to reduce the influence of low- and high-frequency physiological noise; finally, the Pearson correlation coefficients between the seed region and all other voxel time series are computed, and the resulting correlation coefficients are converted to a normal distribution with the Fisher z-transformation.
The DTI data are preprocessed and analyzed with the PANDA toolbox. All diffusion-weighted images are registered to the b = 0 image to correct for head motion and eddy-current distortion. The diffusion tensor elements are then computed with the Stejskal-Tanner equation to obtain the three eigenvalues and the eigenvectors, from which the fractional anisotropy (FA) map is generated.
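For reference, the fractional anisotropy computed from the three tensor eigenvalues λ1, λ2, λ3 (with mean λ̄) is the standard quantity:

```latex
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\;
  \frac{\sqrt{(\lambda_{1}-\bar{\lambda})^{2}+(\lambda_{2}-\bar{\lambda})^{2}
              +(\lambda_{3}-\bar{\lambda})^{2}}}
       {\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}}}
```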
Based on the above data, the specific implementation comprises the following steps:
The construction of the brain functional network feature vector comprises the following steps:
1. Generate 90 cortical and subcortical nucleus regions with the automated anatomical labeling (AAL) template and remove the cerebellum; compute, for each subject, the Pearson correlation coefficients between the mean time series of any pair of brain regions, and generate a 90 × 90 resting-state brain functional network symmetric matrix;
2. Define the nodes of the brain functional network as the 90 cortical and subcortical nucleus regions, and define the connections of the brain functional network as the Pearson correlation coefficients between the mean time series of pairs of brain regions;
3. Remove the 90 diagonal elements of the symmetric matrix;
4. Concatenate the extracted elements (all elements of the lower triangular region of the symmetric matrix) into a one-dimensional vector of length 4005, which is the brain functional network feature vector.
The construction of the brain structural network feature vector comprises the following steps:
1. Generate 90 cortical and subcortical nucleus regions with the automated anatomical labeling (AAL) template and remove the cerebellum; trace the number of white-matter fibers between any pair of brain regions with the FACT algorithm;
2. Define the nodes of the brain structural network as the 90 cortical and subcortical nucleus regions, and define the connections of the brain structural network as the numbers of white-matter fibers between pairs of brain regions;
3. Remove the 90 diagonal elements of the symmetric matrix;
4. Concatenate the extracted elements (all elements of the lower triangular region of the symmetric matrix) into a one-dimensional vector of length 4005, which is the brain structural network feature vector.
Second, the feature filtering procedure specifically comprises the following steps:
Suppose the patient group and the normal control group contain m and n samples respectively. Let x_ij denote the j-th brain functional network feature or brain structural network feature of the i-th sample, and let y_i denote the true label to be predicted (+1 for a patient, -1 for a normal control). The Kendall tau correlation coefficient of a brain functional network feature or brain structural network feature is then computed as:
where Γ_j^e denotes the Kendall tau correlation coefficient of the j-th feature of the e-th modality feature vectors, m and n are respectively the numbers of samples in the patient group and the normal control group, and the two counts in the formula are respectively the numbers of concordant and discordant pairs of the e-th modality; pairs of samples within the same group need not be considered, so the total number of sample pairs is m × n; a concordant pair is defined by:
A discordant pair is defined by:
where sgn is the sign function, and the concordant and discordant pair counts are the numbers of sample pairs i-h satisfying formula (2) and formula (3), respectively;
A positive correlation coefficient Γ_i indicates that the i-th brain functional network feature or brain structural network feature is significantly increased in the patient group compared with the normal control group, while a negative Γ_i indicates that the i-th brain functional network feature or brain structural network feature is significantly decreased in the patient group. The features are then sorted according to the absolute value of their Kendall tau correlation coefficients, and the brain functional network features and brain structural network features whose coefficients exceed a certain threshold are selected for the next step. Note that in the above steps the brain functional network features and the brain structural network features are processed separately.
Step 3.2: to guarantee the correspondence of the brain network features, the brain functional network feature vector and the brain structural network feature vector are screened with the "overlap" rule, which ensures that the brain functional and brain structural connections between any pair of brain regions enter the subsequent feature selection step together.
Third, the feature selection operation specifically comprises the following steps:
The objective function of the multi-modal feature selection model is built as:
where X̃^e is the e-th modality feature matrix of the S training samples; S and l respectively denote the number of training samples and the dimension of the new e-th modality feature vectors obtained in step 3.3; x̃_i^e denotes the new e-th modality feature vector of the i-th training sample obtained in step 3.3; w_e ∈ R^(l×1) is the regression coefficient vector of the e-th modality feature vectors and is the parameter to be optimized; Y = [y_1, y_2, ..., y_i, ..., y_S]^T ∈ R^(S×1) is the class label vector of the S training samples, and every element of Y is 1 or -1; the subscript F denotes the Frobenius norm; λ_1 > 0 and λ_2 > 0 are the regularization parameters respectively controlling the sparsity of the model and the degree to which the relationship between the feature vectors of different modalities is preserved; r is the unique integer in {0, ..., k-1} satisfying condition (5); k satisfies k < l; |w_e|_i↓ is the i-th largest element of the vector w_e; D is the relative distance constraint, defined according to expression (6), in which x̃_i^e and x̃_i^t are the new e-th modality and t-th modality feature vectors of the i-th training sample obtained in step 3;
The objective function of the multi-modal feature selection model is solved to obtain w_e; the features of the dimensions corresponding to the elements of w_e greater than 0 are selected to form the e-th modality optimal feature vector.
Fourth, the multi-modal classification analysis:
Step 5.1: build the following classifier based on the multi-kernel support vector machine model:
where F(x) is the class label of the test subject x; sgn(·) denotes the sign function; y_i is the class label of training sample x_i; a_i is the Lagrange multiplier corresponding to training sample x_i and is a parameter to be optimized; β_e denotes the weight factor of the e-th modality feature vectors and must satisfy Σ_e β_e = 1; k^e(x_i^e, x^e) is the kernel function of the e-th modality optimal feature vectors of training sample x_i and test subject x; b is the bias, obtained by training on the training samples;
Step 5.2: obtain the value of parameter a_i by solving the following objective function:
where k^e(x_i^e, x_p^e) is the kernel function of the e-th modality optimal feature vectors of training samples x_i and x_p;
Step 5.3: substitute the value of parameter a_i obtained by solving the objective function in step 5.2 into the classifier of step 5.1; for a given test subject, feed the optimal feature vectors of its two modalities into the classifier to obtain its class label. If F(x) = 1, the test subject x is a patient; otherwise, the test subject x is normal.
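A minimal sketch of steps 5.1 to 5.3 using a precomputed combined kernel with scikit-learn is given below; the polynomial degree, the equal weights β1 = β2 = 0.5 and the penalty C are illustrative choices, not values fixed by the patent.

```python
from sklearn.svm import SVC
from sklearn.metrics.pairwise import polynomial_kernel

def combined_kernel(feats_a, feats_b, betas=(0.5, 0.5), degree=2):
    """feats_a, feats_b: lists [functional, structural] of feature matrices.
    Returns the beta-weighted sum of the per-modality polynomial kernels."""
    return sum(b * polynomial_kernel(Xa, Xb, degree=degree)
               for b, Xa, Xb in zip(betas, feats_a, feats_b))

def train_and_predict(X_func_tr, X_struct_tr, y_tr, X_func_te, X_struct_te, C=1.0):
    """Train the multi-kernel SVM on the selected optimal feature vectors of the
    training subjects and predict the class labels (+1 patient, -1 control)
    of the test subjects."""
    K_train = combined_kernel([X_func_tr, X_struct_tr], [X_func_tr, X_struct_tr])
    clf = SVC(C=C, kernel='precomputed').fit(K_train, y_tr)
    K_test = combined_kernel([X_func_te, X_struct_te], [X_func_tr, X_struct_tr])
    return clf.predict(K_test)
```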

Claims (4)

1. A multi-modal fusion image classification method based on a modality distance constraint, characterized by comprising the following steps:
Step 1, data acquisition: acquire the rs-fMRI data and DTI data of multiple subjects and preprocess them to obtain preprocessed rs-fMRI data and preprocessed DTI data, where the subjects include sample subjects with known class labels and test subjects with unknown class labels;
Step 2, for each subject, construct the feature vectors of the two modalities, namely the brain functional network feature vector and the brain structural network feature vector;
For each subject, the brain functional network feature vector is constructed as follows: first, according to the preprocessed rs-fMRI data, divide the whole brain excluding the cerebellum into 90 cortical and subcortical nucleus regions, i.e. 90 brain regions, using the automated anatomical labeling template; then define each brain region as a node of the subject's brain functional network, and define the Pearson correlation coefficient between the mean time series of each pair of brain regions as the connection between the corresponding nodes of the brain functional network; then, using the connections between the nodes of the brain functional network as matrix elements, build a 90 × 90 resting-state brain functional network symmetric matrix for the subject; finally, remove the 90 elements on the diagonal of the symmetric matrix and extract all elements of the lower triangular region of the symmetric matrix as the brain functional network feature vector, whose dimension is 4005;
For each subject, the brain structural network feature vector is constructed as follows: first, according to the preprocessed DTI data, divide the whole brain excluding the cerebellum into 90 cortical and subcortical nucleus regions, i.e. 90 brain regions, using the automated anatomical labeling template; define each brain region as a node of the subject's brain structural network, and define the number of white-matter fibers between each pair of brain regions as the connection between the corresponding nodes of the brain structural network; then, using the connections between the nodes of the brain structural network as matrix elements, build a 90 × 90 resting-state brain structural network symmetric matrix for the subject; finally, remove the 90 elements on the diagonal of the symmetric matrix and extract all elements of the lower triangular region of the symmetric matrix as the brain structural network feature vector, whose dimension is 4005;
Step 3, feature filtering: filter the feature vectors of the two modalities of each subject based on Kendall tau correlation coefficients and an "overlap" rule, obtaining new feature vectors of the two modalities of the subject;
Step 4, feature selection: on top of the standard k-support norm, add a relative distance constraint on the two modalities' feature vectors of the same subject before and after the mapping, build the objective function of the multi-modal feature selection model, and perform feature selection on the new feature vectors of the two modalities of each subject to obtain the optimal feature vectors of the two modalities;
Step 5, multi-modal classification analysis: based on the optimal feature vectors of the two modalities of the training samples, train a classifier with a multi-kernel support vector machine model; feed the optimal feature vectors of the two modalities of a test subject into the trained classifier and predict its class label.
2. The multi-modal fusion image classification method based on a modality distance constraint according to claim 1, characterized in that the feature filtering in step 3 specifically comprises the following steps:
Step 3.1: first obtain the patient-group and normal-control-group sample data, and let x_ij^e and x_hj^e denote the j-th feature of the e-th modality feature vector of the i-th sample subject and of the h-th sample subject, i.e. the j-th element of the e-th modality feature vector of the i-th and the h-th sample subject, where e = 1, 2, the 1st modality being the brain functional network and the 2nd modality the brain structural network, and j = 1, 2, ..., 4005; let y_i and y_h respectively denote the class labels of the i-th and the h-th sample subject, a class label of 1 indicating a patient-group sample subject and a class label of -1 indicating a normal-control-group sample subject; then compute the Kendall tau correlation coefficient according to formula (1):
where Γ_j^e denotes the Kendall tau correlation coefficient of the j-th feature of the e-th modality feature vectors, m and n are respectively the numbers of sample subjects in the patient group and the normal control group, and the two counts in formula (1) are respectively the numbers of concordant and discordant pairs of the e-th modality; a concordant pair is defined by:
A discordant pair is defined by:
where sgn is the sign function and y_i ≠ y_h; the concordant and discordant pair counts are the numbers of sample-subject pairs i-h satisfying formula (2) and formula (3), respectively;
Then the |Γ_j^e| are sorted by magnitude, and the feature dimensions of the e-th modality feature vector whose |Γ_j^e| exceeds a given threshold are selected;
Step 3.2: to guarantee the correspondence of the brain network features, the features chosen in step 3.1 are further screened with an "overlap" rule, which ensures that, for any node pair, the connections of the brain functional network and of the brain structural network either both appear in, or both stay out of, the subsequent feature selection step; the screening rule is: if the j-th feature of the 1st modality feature vector and the j-th feature of the 2nd modality feature vector are both selected in step 3.1, this feature dimension is retained; otherwise, if only one of them, or neither, is selected in step 3.1, this feature dimension is filtered out;
Step 3.3: for the feature vectors of the two modalities of each subject, only the feature dimensions screened in step 3.2 are retained, which yields the new feature vectors of the two modalities.
3. The multi-modal fusion image classification method based on a modality distance constraint according to claim 2, characterized in that the objective function of the multi-modal feature selection model built in step 4 is:
where X̃^e is the e-th modality feature matrix of the S training samples; S and l respectively denote the number of training samples and the dimension of the new e-th modality feature vectors obtained in step 3.3; x̃_i^e denotes the new e-th modality feature vector of the i-th training sample obtained in step 3.3; w_e ∈ R^(l×1) is the regression coefficient vector of the e-th modality feature vectors and is the parameter to be optimized; Y = [y_1, y_2, ..., y_i, ..., y_S]^T ∈ R^(S×1) is the class label vector of the S training samples, and every element of Y is 1 or -1; the subscript F denotes the Frobenius norm; λ_1 > 0 and λ_2 > 0 are the regularization parameters respectively controlling the sparsity of the model and the degree to which the relationship between the feature vectors of different modalities is preserved; r is the unique integer in {0, ..., k-1} satisfying condition (5); k satisfies k < l; |w_e|_i↓ is the i-th largest element of the vector w_e; D is the relative distance constraint, defined according to expression (6);
The objective function of the multi-modal feature selection model is solved to obtain w_e; the feature dimensions corresponding to the elements of w_e greater than 0 are determined, and the features of these dimensions are selected from the new e-th modality feature vector of the subject to form its e-th modality optimal feature vector.
4. The multi-modal fusion image classification method based on a modality distance constraint according to claim 3, characterized in that the multi-modal classification analysis in step 5 specifically comprises the following steps:
Step 5.1: build the following classifier based on the multi-kernel support vector machine model:
where F(x) is the class label of the test subject x; sgn(·) denotes the sign function; y_i is the class label of training sample x_i; a_i is the Lagrange multiplier corresponding to training sample x_i and is a parameter to be optimized; β_e denotes the weight factor of the e-th modality feature vectors and must satisfy Σ_e β_e = 1; k^e(x_i^e, x^e) is the kernel function of the e-th modality optimal feature vectors of training sample x_i and test subject x; b is the bias, obtained by training on the training samples;
Step 5.2: obtain the value of parameter a_i by solving the following objective function:
where C is the penalty factor, and k^e(x_i^e, x_p^e) is the kernel function of the e-th modality optimal feature vectors of training samples x_i and x_p;
Step 5.3: substitute the value of parameter a_i obtained by solving the objective function in step 5.2 into the classifier of step 5.1; for a given test subject, feed the optimal feature vectors of its two modalities into the classifier to obtain its class label.
CN201810073841.3A 2018-01-25 2018-01-25 Multi-modal fusion image classification method based on modality distance constraint Active CN108345903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810073841.3A CN108345903B (en) Multi-modal fusion image classification method based on modality distance constraint


Publications (2)

Publication Number Publication Date
CN108345903A true CN108345903A (en) 2018-07-31
CN108345903B CN108345903B (en) 2019-06-28

Family

ID=62961602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810073841.3A Active CN108345903B (en) Multi-modal fusion image classification method based on modality distance constraint

Country Status (1)

Country Link
CN (1) CN108345903B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103325119A (en) * 2013-06-27 2013-09-25 中国科学院自动化研究所 Default state brain network center node detecting method based on modality fusion
CN105046709A (en) * 2015-07-14 2015-11-11 华南理工大学 Nuclear magnetic resonance imaging based brain age analysis method
US20170270832A1 (en) * 2016-03-15 2017-09-21 Lauren Hill Brain exhibit or model, and method of using or deploying same
CN105957047A (en) * 2016-05-06 2016-09-21 中国科学院自动化研究所 Supervised multimodal brain image fusion method
CN106667490A (en) * 2017-01-09 2017-05-17 北京师范大学 Magnetic resonance brain image based subject individual difference data relation analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
梁夏 (Liang Xia) et al.: "人脑连接组研究: 脑结构网络和脑功能网络" [Human brain connectome research: brain structural networks and brain functional networks], 《科学通报》 (Chinese Science Bulletin) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241903A (en) * 2018-08-30 2019-01-18 平安科技(深圳)有限公司 Sample data cleaning method, device, computer equipment and storage medium
CN109241903B (en) * 2018-08-30 2023-08-29 平安科技(深圳)有限公司 Sample data cleaning method, device, computer equipment and storage medium
CN109344889A (en) * 2018-09-19 2019-02-15 深圳大学 Brain disease classification method, device and user terminal
CN111210467A (en) * 2018-12-27 2020-05-29 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109770932A (en) * 2019-02-21 2019-05-21 河北工业大学 Processing method of multi-modal brain neuroimaging features
CN109770932B (en) * 2019-02-21 2022-04-29 河北工业大学 Processing method of multi-modal brain nerve image features
CN110263791A (en) * 2019-05-31 2019-09-20 京东城市(北京)数字科技有限公司 Method and apparatus for identifying functional areas
CN110210403A (en) * 2019-06-04 2019-09-06 电子科技大学 SAR image target recognition method based on feature construction
CN110210403B (en) * 2019-06-04 2022-10-14 电子科技大学 SAR image target identification method based on feature construction
CN110298364A (en) * 2019-06-27 2019-10-01 安徽师范大学 Multi-task feature selection method for functional brain networks under multiple thresholds
CN110473635A (en) * 2019-08-14 2019-11-19 电子科技大学 Analysis method of a relation model between adolescent brain structural networks and brain functional networks
CN110473635B (en) * 2019-08-14 2023-02-28 电子科技大学 Analysis method of relation model of teenager brain structure network and brain function network
CN113616184A (en) * 2021-06-30 2021-11-09 北京师范大学 Brain network modeling and individual prediction method based on multi-mode magnetic resonance image
CN113616184B (en) * 2021-06-30 2023-10-24 北京师范大学 Brain network modeling and individual prediction method based on multi-mode magnetic resonance image
WO2023108712A1 (en) * 2021-12-18 2023-06-22 深圳先进技术研究院 Structural-functional brain network bidirectional mapping model construction method and brain network bidirectional mapping model

Also Published As

Publication number Publication date
CN108345903B (en) 2019-06-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant