CN114419015A - Brain function fusion analysis method based on multi-modal registration - Google Patents

Brain function fusion analysis method based on multi-modal registration

Info

Publication number
CN114419015A
Authority
CN
China
Prior art keywords
registration
training
fmri
image
brain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210084461.6A
Other languages
Chinese (zh)
Inventor
杨金柱
孙齐浓
曹鹏
吴雪
孙奇
袁玉亮
李洪赫
瞿明军
冯朝路
覃文军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202210084461.6A priority Critical patent/CN114419015A/en
Publication of CN114419015A publication Critical patent/CN114419015A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/337 Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30016 Brain


Abstract

A brain function fusion analysis method based on multi-modal registration, relating to the fields of medicine, magnetic resonance imaging and computer vision. A brain function analysis pipeline based on multi-modal registration is designed: the fMRI images are first preprocessed, and the clear structural information of the sMRI images is fused with the functional information provided by the fMRI time-series signals. The deformation field is trained after sampling the fMRI time-series information. The functional information of the original fMRI is preserved as far as possible, and a recursive scheme is used to address the high registration difficulty caused by the large resolution difference between fMRI and sMRI. The fused image carries both the functional and the structural information of the original images, and is subjected to functional analysis using standard fMRI analysis methods. The similarity between the fused image and the original structural sMRI image is higher than with existing methods, while the functional information of the original fMRI is well fused. The fused image provides data support for subsequent brain structure and function analysis, with structural partitions and functional partitions in one-to-one correspondence under the same coordinates.

Description

Brain function fusion analysis method based on multi-modal registration
Technical Field
The invention relates to the fields of medicine, nuclear magnetic resonance imaging and computer vision, in particular to a brain function fusion analysis method based on multi-modal registration.
Background
fMRI (functional magnetic resonance imaging) and sMRI (structural magnetic resonance imaging) are two common categories of magnetic resonance images. Both are imaging technologies based on the principles of magnetic resonance and are commonly used in brain-science research. Many studies indicate that some mental or neurological diseases cause changes in brain structure and brain function; in the early stage of such diseases no significant change in brain structure is observed, yet the dynamic characteristics of functional connectivity change significantly, owing to abnormal connections between the whole brain and local networks in patients with neuropsychiatric diseases.
In recent years, voxel-based fMRI activation detection has been widely applied to functional brain mapping, where methods of studying brain function are primarily task- or stimulus-driven. From the relative change in the blood-oxygen-level-dependent (BOLD) signal during task performance, or as a response to stimulation, one infers which regions of the brain are activated: put simply, vigorous activity in a brain region necessarily consumes more energy, and therefore more oxygen. sMRI is conventional medical magnetic resonance imaging with high spatial resolution; its advantage is that the patient's lesion location can be seen clearly, which benefits medical diagnosis. fMRI continuously captures the same location over a short period within a time sequence, with high temporal resolution; however, owing to the inherent limitations of magnetic resonance technology, this comes at the cost of reduced spatial resolution. Its advantage is that the level of brain oxygen activity at a given moment can be known accurately, and by comparison and calculation according to the experimental design, a conclusion about which brain areas participate in the corresponding activity can be obtained.
In current multi-modal registration schemes, the anatomical structural partitions and the functional activity partitions of the brain cannot be put in one-to-one correspondence. Most commonly, a single-frame fMRI image is scaled to an sMRI image and the spatial information is then restored, involving various noise-reduction methods. The registration result obtained under such a scheme cannot retain the four-dimensional time-series information of functional MRI. Meanwhile, the large resolution difference between the fMRI and sMRI images produces large differences in the deformation field, making inter-modality registration difficult. Registration methods fall into conventional image-processing-based methods and deep-learning-based methods, and conventional registration is currently being replaced by deep-learning registration: conventional registration is usually very computationally intensive, since the similarity must be recalculated and iterated for each pair of images, and the registration patterns shared within the same data set are not exploited, resulting in low speed and efficiency.
Disclosure of Invention
Aiming at the above technical defects, the invention provides a brain function fusion analysis method based on multi-modal registration.
a brain function fusion analysis method based on multi-modal registration specifically comprises the following steps:
step 1: inputting fMRI and sMRI original images into the registration network model, and training the registration network model; the input original image format is a nifti file;
step 2: preprocessing the sMRI original image; obtaining sMRI image data that are aligned in the standard space, carry LPBA probability brain atlas labels, and have size 128 × 128 × 128;
step 2.1: firstly, aligning sMRI to a standard MNI space by using a statistical parameter mapping tool SPM;
step 2.2: preparing an LPBA probability brain atlas in a space corresponding to the sMRI as a label, and using the LPBA probability brain atlas as a reference comparison standard result during the training of the registration network model;
step 2.3: then down-sampling the aligned sMRI to 128 × 128 × 128;
step 3: preprocessing the fMRI original image; obtaining fMRI image data from which machine interference has been removed, which are skull-stripped and aligned in the standard space, and which have size 128 × 128 × 128;
step 3.1: removing the first X frames of volume data from the fMRI;
step 3.2: uniformly sampling 10 volumes from the remaining fMRI frames;
step 3.3: removing skull from the sampled fMRI image data by using FSL;
step 3.4: determining the position of an origin by using a statistical parameter mapping tool SPM and aligning the position to a standard space;
step 3.5: upsampling the processed fMRI to 128 × 128 × 128 using an interpolation method;
step 4: training the registration network model and the deformation field based on TensorFlow and a GPU architecture; training the registration network model with the training data set and outputting a multi-modal registration pre-training model; loading the multi-modal registration pre-training model with the test data set and calculating the evaluation indexes; training the deformation field with the function fusion analysis data in combination with the training data set and the test data set;
step 4.1: training a registration network model by using a training data set, configuring training parameters and outputting a multi-mode registration pre-training model; the training data set comprises preprocessed sMRI images and corresponding fMRI data, and is used for training a registration network model;
the multi-modal registration network is obtained by optimizing the existing VoxelMorph network structure: a localization network from the STN (spatial transformer network) is added to complete a one-step rigid-body transformation, and the number of network layers is deepened so that the final feature-map size is 4 × 4 × 4;
step 4.1.1: writing the preprocessed fMRI and sMRI images and LPBA labels into an h5 file, and inputting the h5 file into a registration network model;
step 4.1.2: loading and training parameter configuration of a registration network model;
step 4.1.3: outputting a multi-mode registration pre-training model obtained by training;
step 4.2: loading the multi-modal registration pre-training model with the test data set, and calculating the evaluation indexes, namely the Dice and Jacobian coefficients; the test data set comprises preprocessed sMRI images and the corresponding LPBA brain atlas labels, and is used to fine-tune the multi-modal registration pre-training model parameters;
step 4.2.1: writing the input fMRI image and LPBA label into an h5 file, and inputting an h5 file into a registration network model;
step 4.2.2: loading a multi-modal registration pre-training model trained using a training data set;
step 4.2.3: loading parameter configuration of a test data set and calculating evaluation indexes of the test data set;
step 4.2.4: outputting the evaluation indexes obtained on the test data set; mutual information is used as the main body of the loss function; the evaluation index compares the similarity of moved with the original fixed; the specific calculation is as follows:
the fixed and moved mutual-information term is calculated as the correlation coefficient:

Corr(F, M) = Cov(F, M) / (√Var(F) · √Var(M))

where F denotes fixed; M denotes moved; Corr(F, M) denotes the correlation between fixed and moved; Cov(F, M) denotes their covariance; E(F) denotes the expectation of fixed; Var(F) denotes the variance of fixed;
the calculation formula of the similarity between moved and original fixed is as follows:
Lsim = 1 − Corr(F, M)
the spatial smoothness term for the deformation field trained by the multi-modal registration network model is calculated as:

R(u) = ‖Du‖²

where ‖Du‖² denotes the squared norm of the spatial gradient Du of the predicted displacement field u, regularizing the overall smoothness of the deformation field;
the loss function formula of the registration network model structure is as follows:
Ltotal = Lsim + R(u)
step 4.3: training the deformation field with the function fusion analysis data in combination with the training data set and the test data set; the function fusion data comprise preprocessed sMRI images and the corresponding fMRI data, with all 200 fMRI volumes preprocessed;
to address the large differences in the deformation field, a recursive method based on the existing cascade-network idea is used for optimization, decomposing one difficult deformation task into several simple subtasks;
step 5: adjusting the training parameters of the training data set according to the evaluation indexes calculated in step 4.2, inputting the training data set again to train the multi-modal registration pre-training model, and obtaining the multi-modal registration model;
step 6: loading the multi-modal registration model obtained in the step 5, and using the function fusion analysis data to make a brain structure function fusion image;
step 6.1: inputting functional fusion analysis data into the multi-modal registration model;
step 6.2: loading the multi-modal registration model parameters and configuration file, and completing deformation registration on single-frame fMRI;
step 6.3: outputting a deformation field and a registration result;
step 6.4: combining all registration results to manufacture a structural function fusion image;
step 7: analyzing the brain structure and the brain function according to the fusion image obtained in step 6;
performing brain function partitioning on the fused image, and performing functional analysis with methods such as independent component analysis (ICA) when no region of interest is preset; the structural partitions in the standard space can be obtained directly from the fused image;
step 7.1: performing brain function analysis on the fused image by using ICA or GroupICA to obtain a brain activation region and a corresponding functional partition;
step 7.2: and performing brain structure analysis by using the fusion image corresponding to the MNI template.
The invention has the beneficial effects that: the invention provides a brain function fusion analysis method based on multi-modal registration, which uses a deep-learning network to complete multi-modal registration fusion and addresses the problems of large resolution differences and the risk of information loss. It provides data support for subsequent brain structure and function analysis and for the diagnosis of mental diseases.
The invention provides a brain function analysis method based on multi-modal registration, which addresses the current inability to fuse and align multi-modal structural and functional images. The difficulty of fusing three-dimensional structural information with four-dimensional functional information lies both in the large dimensionality gap and in the large resolution difference of single-frame data; the network training difficulty is decomposed using the cascade-network idea. Meanwhile, an end-to-end network is designed to reduce the preprocessing burden. Multi-modal functional and structural MRI registration is realized, and the image results are fused to complete the brain function analysis.
Drawings
FIG. 1 is a flow chart of a process flow of an embodiment of the present invention.
Fig. 2 is an overall structural diagram of a multi-modal registration network according to an embodiment of the present invention.
FIG. 3 is a flowchart of recursive network input according to an embodiment of the present invention.
FIG. 4 illustrates the preprocessed data-set format input in an embodiment of the present invention; panel (a): a single fMRI frame after preprocessing; panel (b): sMRI after preprocessing; panel (c): the LPBA label; the combination of panels (a) and (b) is used to train the registration network, and panel (c) serves as an evaluation reference, directly outputting a probabilistic brain atlas.
FIG. 5 illustrates deformation fields and registration results according to an embodiment of the present invention.
FIG. 6 results of brain function analysis according to the embodiment of the present invention.
Fig. 7 is a network structure diagram corresponding to the elastic deformation subnetwork.
FIG. 8 illustrates the convergence speed of Loss during training according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples;
the invention provides a brain function fusion analysis method based on multi-modal registration, which uses fMRI (functional magnetic resonance imaging) and sMRI (structural magnetic resonance imaging) as input, the size of an original image is generally 64 × 64 × 40 × 200 for fMRI, 256 × 256 × 200 for sMRI, and a deformation field adapting to single frame fMRI and sMRI is trained to obtain a fused image and perform structure and function analysis. As shown in fig. 1, the present invention comprises the following steps:
step 1: inputting fMRI and sMRI original images into the registration network model, and training the registration network model; the input original image format is a nifti file;
step 2: preprocessing the sMRI original image; obtaining sMRI image data that are aligned in the standard space, carry LPBA probability brain atlas labels, and have size 128 × 128 × 128;
step 2.1: firstly, aligning sMRI to a standard MNI space by using a statistical parameter mapping tool SPM;
step 2.2: preparing an LPBA probability brain atlas in a space corresponding to the sMRI as a label, and using the LPBA probability brain atlas as a reference comparison standard result during the training of the registration network model;
step 2.3: then down-sampling the aligned sMRI to 128 × 128 × 128;
step 3: preprocessing the fMRI original image; obtaining fMRI image data from which machine interference has been removed, which are skull-stripped and aligned in the standard space, and which have size 128 × 128 × 128;
step 3.1: removing the first 30 frames of volume data from the fMRI;
step 3.2: uniformly sampling 10 volumes from the remaining fMRI frames;
step 3.3: removing skull from the sampled fMRI image data by using FSL;
step 3.4: determining the position of an origin by using a statistical parameter mapping tool SPM and aligning the position to a standard space;
step 3.5: upsampling the processed fMRI to 128 × 128 × 128 using an interpolation method;
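The resampling in steps 2.3 and 3.5 can be sketched as follows. This is a minimal illustration, not the patent's code; the function name and the use of scipy's trilinear `zoom` are our assumptions:

```python
# Minimal sketch of resampling a volume to 128 x 128 x 128 (step 3.5).
# The function name and the choice of scipy.ndimage.zoom are assumptions.
import numpy as np
from scipy.ndimage import zoom

def resample_to_cube(volume: np.ndarray, target: int = 128) -> np.ndarray:
    """Resample a 3-D volume to (target, target, target) voxels."""
    factors = [target / s for s in volume.shape]
    return zoom(volume, factors, order=1)  # order=1 -> trilinear interpolation

frame = np.random.rand(64, 64, 40).astype(np.float32)  # one raw fMRI frame
print(resample_to_cube(frame).shape)  # (128, 128, 128)
```

The same helper covers the sMRI downsampling of step 2.3, since the zoom factors are simply below 1 there.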
step 4: training the registration network and the deformation field based on TensorFlow and a GPU architecture; training the registration network with the training data set and outputting a multi-modal registration pre-training model; loading the multi-modal registration pre-training model with the test data set and calculating the evaluation indexes; training the deformation field with the function fusion analysis data in combination with the training data set and the test data set; FIG. 4 illustrates the preprocessed data-set format input in an embodiment of the present invention;
step 4.1: training a registration network by using a training data set, configuring training parameters and outputting a multi-mode registration pre-training model; the training data set comprises preprocessed sMRI images and corresponding fMRI data, and is used for training a registration network model;
the multi-modal registration network is obtained by optimizing the existing VoxelMorph network structure: a localization network from the STN (spatial transformer network) is added to complete a one-step rigid-body transformation, and the number of network layers is deepened so that the final feature-map size is 4 × 4 × 4;
fig. 2 is a diagram of the multi-modal registration architecture used in the present invention. The preprocessed sMRI and fMRI are input into the multi-modal registration network structure to obtain a multi-modal registration model that can fuse structural and functional images. The network used by the method is end-to-end: before deformation-field training, the input first passes through a localization network whose main structure is a CNN, with fixed and moving as the inputs of the network structure, and images or feature maps are transformed on a per-voxel basis. The localization network predicts an affine transformation coefficient matrix of rotation, translation and scaling for the rigid-body transformation, and outputs the elements of the transformation matrix. According to the parameters of the localization network, an affine transformation is applied to the input image; the transformation matrix is as follows:
M′_i = [θ11 θ12 θ13 θ14; θ21 θ22 θ23 θ24; θ31 θ32 θ33 θ34] · [x_i, y_i, z_i, 1]ᵀ

where θ denotes the parameters that determine the linear transformation; M_i denotes each voxel of moving; x_i, y_i, z_i denote the position of each point of the three-dimensional data. A regression network is trained in advance to predict the affine transformation coefficient matrix. The affine transformation is executed automatically with the obtained parameters, without human involvement, and the coarse registration result after the affine transformation is the input of the elastic-registration network structure.
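As a concrete illustration of this affine step, the following sketch (names are ours, not the patent's) applies a predicted 3 × 4 coefficient matrix θ to homogeneous voxel coordinates:

```python
# Illustrative sketch: applying a 3 x 4 affine coefficient matrix theta
# (rotation/scaling in the left 3 x 3 block, translation in the last column)
# to voxel positions (x_i, y_i, z_i). Names are hypothetical.
import numpy as np

def apply_affine(theta: np.ndarray, coords: np.ndarray) -> np.ndarray:
    """theta: (3, 4) affine matrix; coords: (N, 3) voxel positions -> (N, 3)."""
    homogeneous = np.hstack([coords, np.ones((coords.shape[0], 1))])  # (N, 4)
    return homogeneous @ theta.T

# Identity rotation/scaling with a translation of (1, 2, 3) voxels:
theta = np.hstack([np.eye(3), np.array([[1.0], [2.0], [3.0]])])
coords = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
print(apply_affine(theta, coords))  # [[ 1.  2.  3.] [11. 12. 13.]]
```

In the actual network the nine left-block entries and three translation entries of θ come from the localization network's regression output rather than being set by hand.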
Step 4.1.1: writing the preprocessed fMRI and sMRI images and the LPBA labels into an h5 file, recording the h5 file path in a json configuration file, and inputting the json file into the network;
step 4.1.2: if the pre-trained multi-mode registration model exists, initializing the pre-trained multi-mode registration model, and if the pre-trained multi-mode registration model does not exist, directly training a registration network;
step 4.1.3: loading the parameter configuration of the pre-trained multi-modal registration model, setting the number of recursions to 2, and training the multi-modal registration model.
Step 4.1.4: outputting the pre-trained multi-modal registration model obtained by training;
step 4.2: loading the pre-trained multi-modal registration model with the test data set, and calculating the evaluation indexes, namely the Dice and Jacobian coefficients; the test data set comprises preprocessed sMRI images and the corresponding LPBA brain atlas labels, and is used to fine-tune the multi-modal pre-trained registration model parameters;
step 4.2.1: writing the input fMRI image and the LPBA label into an h5 file, recording the h5 file path in a json configuration file, and inputting the json file into the registration network;
step 4.2.2: loading a pre-trained multi-modal registration model trained using a training dataset;
step 4.2.3: loading parameter configuration of a test data set and calculating evaluation indexes of the test data set;
step 4.2.4: outputting the evaluation indexes obtained on the test data set; mutual information is used as the main body of the loss function; the evaluation index compares the similarity of moved with the original fixed; the specific calculation is as follows:
the fixed and moved mutual-information term is calculated as the correlation coefficient:

Corr(F, M) = Cov(F, M) / (√Var(F) · √Var(M))

where F denotes fixed; M denotes moved; Corr(F, M) denotes the correlation between fixed and moved; Cov(F, M) denotes their covariance; E(F) denotes the expectation of fixed; Var(F) denotes the variance of fixed;
the calculation formula of the similarity between moved and original fixed is as follows:
Lsim = 1 − Corr(F, M)
the spatial smoothness term for the deformation field trained by the registration network is calculated as:

R(u) = ‖Du‖²

where ‖Du‖² denotes the squared norm of the spatial gradient Du of the predicted displacement field u, regularizing the overall smoothness of the deformation field;
the loss function formula of the registration network structure is as follows:
Ltotal = Lsim + R(u)
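The loss above can be sketched in NumPy. This is an illustration under the assumption that the "mutual information" term is the Pearson correlation coefficient given by the formula; the function names are ours and stand in for the patent's TensorFlow implementation:

```python
# Sketch of L_total = L_sim + R(u): correlation similarity plus a
# smoothness penalty on the displacement field. Names are hypothetical.
import numpy as np

def similarity_loss(fixed: np.ndarray, moved: np.ndarray) -> float:
    """L_sim = 1 - Corr(F, M), with Corr the Pearson correlation."""
    f, m = fixed.ravel(), moved.ravel()
    cov = np.mean((f - f.mean()) * (m - m.mean()))
    return 1.0 - cov / (f.std() * m.std() + 1e-8)

def smoothness_loss(u: np.ndarray) -> float:
    """R(u): mean squared spatial gradient of a displacement field
    of shape (X, Y, Z, 3)."""
    grads = np.gradient(u, axis=(0, 1, 2))
    return float(sum(np.mean(g ** 2) for g in grads))

fixed = np.random.rand(8, 8, 8)
total = similarity_loss(fixed, fixed) + smoothness_loss(np.zeros((8, 8, 8, 3)))
print(total)  # ~0: identical images and a zero displacement field
```

The smoothness term discourages folding and abrupt jumps in the predicted displacement, which is why it is added to the similarity term rather than optimized separately.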
step 4.3: training the deformation field with the function fusion analysis data in combination with the training data set and the test data set; the function fusion data comprise preprocessed sMRI images and the corresponding fMRI data, with all 200 fMRI volumes preprocessed;
to address the large differences in the deformation field, a recursive method based on the existing cascade-network idea is used for optimization, decomposing one difficult deformation task into several simple subtasks;
fig. 3 shows the cascaded network used in the present invention to solve the problem of large deformation-field differences. Fig. 7 is the network structure diagram of the elastic-deformation subnetwork. The coarse registration fixes the relative spatial positions of fixed and moving, but because of the large difference in resolution and morphology between fMRI and sMRI, the registration network is selected and called recursively. The backbone network is built on the VoxelMorph network structure, and each convolutional layer consists of a three-dimensional convolution (Conv3d) and a LeakyReLU. A feature map of size 4 × 4 × 4 is obtained by downsampling. At the 3rd, 4th, 5th and 6th downsampling steps, a 3 × 3 × 3 convolution layer with stride 1 undergoes an upsampling (UpSampling) operation and is concatenated (Concatenate) with the low-dimensional features output by the higher-scale residual unit in the backbone network, yielding the deformation field of each sub-cascade.
The registration task is decomposed into several subtasks, reducing the network registration difficulty and solving the problem of large multi-modal registration deformation fields. The larger deformation task is decomposed into small subtasks, the deformation is completed from the global to the local, the moving image is progressively warped, and the final prediction (which may involve large displacement) is decomposed into a cascade of gradual, small-displacement deformation fields.
Since the network is called many times, the probability brain atlas aligned with the fixed label is used as a network input during prediction, and the final deformation result is predicted, so that the deformation field acts on the probability brain atlas at the same time. The obtained prediction therefore comprises not only the registered image after each fMRI frame is fused, but also the corresponding probability brain atlas and its partitions; these partitions can serve as evaluation indexes for subsequent evaluation.
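The cascade idea, realizing one large deformation as a composition of several small ones, can be shown with a deliberately simple one-dimensional sketch; in the actual network the composed objects are three-dimensional deformation fields, and all names here are ours:

```python
# 1-D illustration of the recursive cascade: a large displacement is
# realized as a sequence of small displacement-field warps. Names hypothetical.
import numpy as np

def warp_1d(signal: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Resample `signal` at positions x + displacement (linear interpolation)."""
    positions = np.arange(signal.size) + displacement
    return np.interp(positions, np.arange(signal.size), signal)

def cascade_warp(signal: np.ndarray, displacements) -> np.ndarray:
    """Apply each small deformation field in turn (the recursive cascade)."""
    for d in displacements:
        signal = warp_1d(signal, d)
    return signal

moving = np.sin(np.linspace(0.0, np.pi, 64))
small_steps = [np.ones(64)] * 4   # four 1-sample shifts
one_big_step = np.full(64, 4.0)   # one 4-sample shift
print(np.allclose(cascade_warp(moving, small_steps),
                  warp_1d(moving, one_big_step)))  # True
```

With constant shifts the composition equals the single big shift exactly; the benefit of the cascade appears with non-uniform fields, where each small, smooth step is far easier for a network to predict than the full deformation at once.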
Step 5: adjusting the training parameters of the training data set according to the evaluation indexes calculated in step 4.2, inputting the training data set again to train the multi-modal registration pre-training model, and obtaining the final multi-modal registration model;
the following table shows the evaluation indexes of the multi-modal registration model and the evaluation indexes of the VoxelMorph registration network before the same data set is input into the multi-modal registration model. FIG. 8 shows the convergence rate of Loss during training.
Figure BDA0003484860480000081
Step 6: loading the multi-modal registration model obtained in the step 5, and using the function fusion analysis data to make a brain structure function fusion image;
step 6.1: inputting function fusion analysis data into a registration network;
step 6.2: loading the multi-modal registration model parameters and configuration file, and completing deformation registration on single-frame fMRI;
step 6.3: outputting a deformation field and a registration result; FIG. 5 is a deformation field and registration results;
step 6.4: combining all registration results to manufacture a structural function fusion image;
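Step 6.4 amounts to stacking the per-frame registration results back into one four-dimensional structure-function fusion volume; a minimal sketch (function name hypothetical):

```python
# Sketch of step 6.4: combining the registered 128^3 frames into one
# 4-D fusion volume of shape (X, Y, Z, T). The function name is ours.
import numpy as np

def fuse_frames(registered_frames) -> np.ndarray:
    """Stack registered 3-D frames along a new trailing time axis."""
    return np.stack(registered_frames, axis=-1)

frames = [np.random.rand(128, 128, 128).astype(np.float32) for _ in range(10)]
fused = fuse_frames(frames)
print(fused.shape)  # (128, 128, 128, 10)
```

The ten frames correspond to the volumes sampled in step 3.2; in practice the stacked result would be written back to a NIfTI file for the analysis of step 7.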
step 7: analyzing the brain structure and the brain function according to the fusion image obtained in step 6; FIG. 6 shows the results of brain function analysis according to an embodiment of the present invention;
performing brain function partitioning on the fused image, and performing functional analysis with methods such as independent component analysis (ICA) when no region of interest is preset; the structural partitions in the standard space can be obtained directly from the fused image;
step 7.1: performing brain function analysis on the fused image by using ICA or GroupICA to obtain a brain activation region and a corresponding functional partition;
step 7.2: and performing brain structure analysis by using the fusion image corresponding to the MNI template.
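Step 7.1's functional analysis can be illustrated with scikit-learn's FastICA as a stand-in for the ICA/GroupICA named in the text; the data here are synthetic, and reshaping a real fused image to a (voxels, time) matrix is assumed:

```python
# Stand-in sketch of ICA-based functional analysis (step 7.1) using
# scikit-learn's FastICA on synthetic (voxels x time) data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_components = 500, 40, 5

# Stand-in for the fused 4-D image reshaped to (voxels, time):
data = rng.standard_normal((n_voxels, n_timepoints))

ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(data)   # (voxels, components): activation maps
time_courses = ica.mixing_               # (time, components): component signals
print(spatial_maps.shape, time_courses.shape)  # (500, 5) (40, 5)
```

Each column of `spatial_maps`, reshaped back to the 128 × 128 × 128 grid, would correspond to one candidate functional partition, with its time course in the matching column of `time_courses`.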
With the brain function analysis method based on multi-modal registration, multi-modal functional and structural MRI registration is realized by a deep-learning method; the similarity between the fused image and the original structural sMRI image is higher than with conventional methods, and the functional information of the original fMRI is well fused. The results show that the method overcomes the high difficulty of multi-modal image registration. The method is simple, remedies the current inability to fuse structural and functional information, has an end-to-end network structure as a whole, requires no manual interaction during processing, and meets the application requirements.

Claims (8)

1. A brain function fusion analysis method based on multi-modal registration is characterized by comprising the following steps:
step 1: inputting fMRI and sMRI original images into the registration network model, and training the registration network model; the input original image format is a NIfTI file;
step 2: preprocessing the sMRI original image to obtain sMRI image data aligned in the standard space, carrying LPBA probabilistic brain atlas labels, with a size of 128 × 128 × 128;
step 3: preprocessing the fMRI original image to obtain fMRI image data with a size of 128 × 128 × 128, with machine interference eliminated, the skull removed, and aligned in the standard space;
step 4: training the registration network model and the deformation field based on TensorFlow and a GPU architecture; training the registration network model with the training data set and outputting a multi-modal registration pre-training model; loading the multi-modal registration pre-training model with the test data set and calculating the evaluation indexes; training the deformation field using the functional fusion analysis data together with the training data set and the test data set;
step 5: adjusting the training parameters of the training data set according to the evaluation indexes calculated in step 4, inputting the training data set again to train the multi-modal registration pre-training model, and obtaining the multi-modal registration model;
step 6: loading the multi-modal registration model obtained in step 5, and using the functional fusion analysis data to produce a brain structure-function fusion image;
step 7: analyzing the brain structure and brain function according to the fusion image obtained in step 6;
performing brain function partitioning on the fused image, and performing functional analysis with methods such as independent component analysis (ICA) when no region of interest is preset; the structural partition in the standard space can be obtained directly from the fused image.
2. The brain function fusion analysis method based on multi-modal registration according to claim 1, wherein the step 2 is specifically:
step 2.1: firstly, aligning sMRI to a standard MNI space by using a statistical parameter mapping tool SPM;
step 2.2: preparing an LPBA probability brain atlas in a space corresponding to the sMRI as a label, and using the LPBA probability brain atlas as a reference comparison standard result during the training of the registration network model;
step 2.3: the aligned sMRI is then down-sampled to 128 × 128 × 128.
3. The brain function fusion analysis method based on multi-modal registration according to claim 1, wherein the step 3 is specifically:
step 3.1: removing the first X frames of volume data from the fMRI;
step 3.2: uniformly sampling 10 volumes from the remaining fMRI frames;
step 3.3: removing the skull from the sampled fMRI image data using FSL;
step 3.4: determining the origin position using the statistical parametric mapping tool SPM and aligning to the standard space;
step 3.5: the processed fMRI is up-sampled to 128 × 128 × 128 using an interpolation method.
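Steps 3.1, 3.2, and 3.5 can be sketched as follows; `n_drop=5` stands in for the unspecified X frames and the function name is illustrative (the FSL skull stripping and SPM alignment of steps 3.3–3.4 rely on external tools and are omitted here):

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_fmri(volumes, n_drop=5, n_keep=10, target=(128, 128, 128)):
    """Drop the first n_drop frames, sample n_keep volumes evenly from
    the remainder, and up-sample each to the target grid by linear
    interpolation. n_drop=5 is an illustrative value; the claim leaves
    the number of dropped frames (X) unspecified."""
    rest = volumes[n_drop:]
    idx = np.linspace(0, len(rest) - 1, n_keep).astype(int)  # even sampling
    out = []
    for v in (rest[i] for i in idx):
        factors = [t / s for t, s in zip(target, v.shape)]   # per-axis scale
        out.append(zoom(v, factors, order=1))
    return out
```
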
4. The brain function fusion analysis method based on multi-modal registration according to claim 1, wherein the step 4 is specifically:
step 4.1: training the registration network model with the training data set, configuring the training parameters, and outputting a multi-modal registration pre-training model; the training data set comprises preprocessed sMRI images and corresponding fMRI data, and is used for training the registration network model;
the multi-modal registration network is obtained by optimizing the existing VoxelMorph network structure: a localization network from the spatial transformer network (STN) is added to complete a one-step rigid-body transformation, and the number of network layers is deepened so that the final feature map size is 4 × 4 × 4;
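The one-step rigid-body transformation supplied by the STN localization network can be illustrated as below. This sketch applies a given rotation and translation to a volume with SciPy rather than predicting them with a network; the function name and single-axis rotation are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import affine_transform

def rigid_warp(volume, angle, translation):
    """Apply a rigid-body transform (rotation about one axis plus a
    translation) to a 3D volume, as a one-step initial alignment before
    deformable registration. Rotation is performed about the volume centre."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    centre = (np.array(volume.shape) - 1) / 2.0
    # Offset keeps the rotation centred and adds the translation.
    offset = centre - rot @ centre - np.asarray(translation)
    return affine_transform(volume, rot, offset=offset, order=1)
```

In the patent's network the angle and translation would be regressed by the STN localization branch instead of being passed in by hand.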
step 4.2: loading the multi-modal registration pre-training model with the test data set and calculating the evaluation indexes, namely the DICE and Jacobian coefficients; the test data set comprises preprocessed sMRI images and corresponding LPBA brain atlas labels, and is used for fine-tuning the multi-modal registration pre-training model parameters;
step 4.3: training the deformation field using the functional fusion analysis data together with the training data set and the test data set; the functional fusion data comprise preprocessed sMRI images and corresponding fMRI data, where all 200 fMRI volumes are preprocessed;
to address the problem of large deformation-field differences, the existing cascade network is optimized with a recursive method, decomposing one difficult deformation task into multiple simple subtasks.
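The recursive decomposition of one difficult deformation into simple subtasks amounts to composing per-stage displacement fields. A minimal sketch, assuming each cascade stage outputs a dense displacement field and using linear resampling for the composition (function names are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_flows(flow_a, flow_b):
    """Compose two displacement fields of shape (3, D, H, W): the result
    warps first by flow_a, then by flow_b, i.e.
    u(x) = u_b(x) + u_a(x + u_b(x))."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in flow_a.shape[1:]],
                                indexing="ij"))
    coords = grid + flow_b                       # where flow_b sends each voxel
    warped_a = np.stack([map_coordinates(flow_a[c], coords, order=1,
                                         mode="nearest") for c in range(3)])
    return warped_a + flow_b

def recursive_cascade(stage_flows):
    """Fold the per-stage flows into one overall deformation field."""
    total = stage_flows[0]
    for f in stage_flows[1:]:
        total = compose_flows(total, f)
    return total
```

Composing the stages this way lets each subtask learn a small, easy deformation while the chain still expresses the large overall one.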
5. The brain function fusion analysis method based on multi-modal registration according to claim 4, wherein the step 4.1 is specifically:
step 4.1.1: writing the preprocessed fMRI and sMRI images and LPBA labels into an h5 file, and inputting the h5 file into a registration network model;
step 4.1.2: loading the registration network model and configuring the training parameters;
step 4.1.3: and outputting the multi-mode registration pre-training model obtained by training.
6. The brain function fusion analysis method based on multi-modal registration according to claim 4, wherein the step 4.2 is specifically:
step 4.2.1: writing the input fMRI image and LPBA label into an h5 file, and inputting the h5 file into the registration network model;
step 4.2.2: loading a multi-modal registration pre-training model trained using a training data set;
step 4.2.3: loading parameter configuration of a test data set and calculating evaluation indexes of the test data set;
step 4.2.4: outputting the evaluation indexes obtained on the test data set, with mutual information as the main body of the loss function; calculating the evaluation indexes and comparing the similarity between moved and the original fixed; the specific calculation method is as follows:
the fixed and moved mutual information calculation formula is as follows:
Corr(F, M) = Cov(F, M) / √(Var(F) · Var(M))
wherein F represents fixed; M represents moved; Corr(F, M) represents the mutual information between fixed and moved; Cov(F, M) = E[(F − E(F))(M − E(M))] represents the covariance between fixed and moved; E(F) represents the expectation of fixed; Var(F) represents the variance of fixed;
the calculation formula of the similarity between moved and original fixed is as follows:
Lsim = 1 − Corr(F, M)
the spatial smoothing term calculation formula of the deformation field trained by the multi-mode registration network model is as follows:
R(u) = ‖Du‖²
wherein ‖Du‖² represents the squared norm of the gradient Du of the deformation field u, which regularizes the predicted displacement deformation field toward overall smoothness;
the loss function formula of the registration network model structure is as follows:
Ltotal=Lsim+R(u)。
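The loss of claim 6 can be sketched numerically as below: Lsim = 1 − Corr(F, M) with Corr computed from the covariance and variances, plus the smoothness term R(u) = ‖Du‖² approximated by forward differences of the deformation field. The small epsilon is added for numerical stability and is not part of the patent's formula:

```python
import numpy as np

def corr_similarity(fixed, moved, eps=1e-8):
    """Lsim = 1 - Corr(F, M), where
    Corr(F, M) = Cov(F, M) / sqrt(Var(F) * Var(M))."""
    f = fixed - fixed.mean()
    m = moved - moved.mean()
    corr = (f * m).mean() / np.sqrt(f.var() * m.var() + eps)
    return 1.0 - corr

def smoothness(flow):
    """R(u) = ||Du||^2: mean squared forward difference of the
    deformation field (3, D, H, W) along each spatial axis."""
    grads = [np.diff(flow, axis=a) for a in (1, 2, 3)]
    return sum((g ** 2).mean() for g in grads)

def total_loss(fixed, moved, flow):
    """Ltotal = Lsim + R(u)."""
    return corr_similarity(fixed, moved) + smoothness(flow)
```

A perfectly registered pair with a zero deformation field yields a loss near zero, which matches the intent of both terms.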
7. the brain function fusion analysis method based on multi-modal registration according to claim 1, wherein the step 6 is specifically:
step 6.1: inputting the functional fusion analysis data into the multi-modal registration model;
step 6.2: loading the multi-modal registration model parameters and configuration file, and completing deformable registration on each single-frame fMRI;
step 6.3: outputting the deformation field and registration result;
step 6.4: combining all the registration results to produce a structure-function fusion image.
8. The brain function fusion analysis method based on multi-modal registration according to claim 1, wherein the step 7 is specifically as follows:
step 7.1: performing brain function analysis on the fused image by using ICA or GroupICA to obtain a brain activation region and a corresponding functional partition;
step 7.2: performing brain structure analysis using the fusion image in correspondence with the MNI template.
CN202210084461.6A 2022-01-24 2022-01-24 Brain function fusion analysis method based on multi-modal registration Pending CN114419015A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210084461.6A CN114419015A (en) 2022-01-24 2022-01-24 Brain function fusion analysis method based on multi-modal registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210084461.6A CN114419015A (en) 2022-01-24 2022-01-24 Brain function fusion analysis method based on multi-modal registration

Publications (1)

Publication Number Publication Date
CN114419015A true CN114419015A (en) 2022-04-29

Family

ID=81277684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210084461.6A Pending CN114419015A (en) 2022-01-24 2022-01-24 Brain function fusion analysis method based on multi-modal registration

Country Status (1)

Country Link
CN (1) CN114419015A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078692A (en) * 2023-10-13 2023-11-17 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN117078692B (en) * 2023-10-13 2024-02-06 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion

Similar Documents

Publication Publication Date Title
CN111798462A (en) Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112837274B (en) Classification recognition method based on multi-mode multi-site data fusion
US20230342918A1 (en) Image-driven brain atlas construction method, apparatus, device and storage medium
WO2022121100A1 (en) Darts network-based multi-modal medical image fusion method
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN110838140A (en) Ultrasound and nuclear magnetic image registration fusion method and device based on hybrid supervised learning
CN114037714B (en) 3D MR and TRUS image segmentation method for prostate system puncture
CN114202545A (en) UNet + + based low-grade glioma image segmentation method
CN113139974B (en) Focus segmentation model training and application method based on semi-supervised learning
CN112785593A (en) Brain image segmentation method based on deep learning
CN115578427A (en) Unsupervised single-mode medical image registration method based on deep learning
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN114581453A (en) Medical image segmentation method based on multi-axial-plane feature fusion two-dimensional convolution neural network
CN114266939A (en) Brain extraction method based on ResTLU-Net model
CN117274599A (en) Brain magnetic resonance segmentation method and system based on combined double-task self-encoder
CN116823625A (en) Cross-contrast magnetic resonance super-resolution method and system based on variational self-encoder
CN113269774B (en) Parkinson disease classification and lesion region labeling method of MRI (magnetic resonance imaging) image
CN114419015A (en) Brain function fusion analysis method based on multi-modal registration
CN114241240A (en) Method and device for classifying brain images, electronic equipment and storage medium
CN112164447B (en) Image processing method, device, equipment and storage medium
CN116823613A (en) Multi-mode MR image super-resolution method based on gradient enhanced attention
CN116309754A (en) Brain medical image registration method and system based on local-global information collaboration
CN115565671A (en) Atrial fibrillation auxiliary analysis method based on cross-model mutual teaching semi-supervision
CN114119354A (en) Medical image registration training and using method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination