CN110378980B - Multichannel magnetic resonance image reconstruction method based on deep learning - Google Patents

Multichannel magnetic resonance image reconstruction method based on deep learning

Info

Publication number
CN110378980B
CN110378980B (application CN201910641089.2A)
Authority
CN
China
Prior art keywords
magnetic resonance
resonance image
channel
network model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910641089.2A
Other languages
Chinese (zh)
Other versions
CN110378980A (en)
Inventor
屈小波 (Xiaobo Qu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201910641089.2A priority Critical patent/CN110378980B/en
Publication of CN110378980A publication Critical patent/CN110378980A/en
Application granted granted Critical
Publication of CN110378980B publication Critical patent/CN110378980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A multi-channel magnetic resonance image reconstruction method based on deep learning relates to a multi-channel magnetic resonance image reconstruction method. The method comprises the following steps: acquiring a multichannel magnetic resonance image, and forming a training set by a sensitivity mapping image, an under-sampled zero-filling multichannel magnetic resonance image and a fully sampled synthetic image; establishing a multi-channel deep learning network model for magnetic resonance image reconstruction; constructing a loss function of the network; training multi-channel magnetic resonance image reconstruction network model parameters; reconstructing a target under-sampled multi-channel magnetic resonance image; obtaining an end-to-end mapping function from the undersampled multi-channel image to the complete magnetic resonance image and a network model loss function corresponding to the network model by adopting a residual connection mode for the iteration block; training parameters of a magnetic resonance image reconstruction network model of a residual error connection mode; and reconstructing the target under-sampled multi-channel magnetic resonance image by using the network model connected by the residual errors. The method has the advantages of high reconstruction speed and good reconstruction effect.

Description

Multichannel magnetic resonance image reconstruction method based on deep learning
Technical Field
The invention relates to a method for reconstructing a multi-channel magnetic resonance image, in particular to a method for reconstructing a multi-channel magnetic resonance image based on deep learning by using an iterative network.
Background
Magnetic Resonance Imaging (MRI) is an important clinical medical diagnostic tool. Because it is radiation-free and offers good soft-tissue contrast, it is widely used in the diagnosis of brain, cardiovascular and abdominal diseases.
In clinical diagnosis, high-resolution magnetic resonance images can reveal tiny lesions and thus help in the discovery and treatment of disease. However, high-resolution magnetic resonance images generally require longer scan times, which increases patient discomfort during the scan. Accelerating the scan by means of multi-channel parallel imaging and sparse sampling has therefore become the approach used in the clinic. Common multi-channel parallel imaging methods include sensitivity encoding (Klaas P. Pruessmann, Markus Weiger, Markus B. Scheidegger, and Peter Boesiger. SENSE: Sensitivity encoding for fast MRI. Magnetic Resonance in Medicine, 42:952-962, 1999). In sparse sampling, the complete magnetic resonance image is often recovered by applying a sparsity constraint to the under-sampled acquired data (63(9):1850-1861, 2016). In recent years, with the rapid development of deep learning in the field of computer vision, the application scenarios of artificial intelligence have become more and more extensive (ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. The IEEE Conference on Computer Vision and Pattern Recognition, 1828-1837, 2018; Xiaobo Qu, Yihui Huang, Hengfa Lu, Tianyu Qiu, Di Guo, Tatiana Agback, Vladislav Orekhov, and Zhong Chen. Accelerated nuclear magnetic resonance spectroscopy with deep learning. arXiv:1904.05168v2, 2019). Wang et al. (Shanshan Wang, Zhenghang Su, Leslie Ying, Xi Peng, Shun Zhu, Feng Liang, Dagan Feng, and Dong Liang. Accelerating magnetic resonance imaging via deep learning. 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), 514-517, 2016) first established an end-to-end mapping from an under-sampled magnetic resonance image to a complete magnetic resonance image using a convolutional neural network, shortening the image reconstruction time. However, the method proposed by Wang et al. does not perform reconstruction with multi-channel magnetic resonance images and is therefore not widely applicable to current commercial devices.
Disclosure of Invention
The invention aims to provide a deep learning-based multi-channel magnetic resonance image reconstruction method which is high in reconstruction speed, good in reconstruction effect and high in commercial value.
The invention comprises the following steps:
1) acquiring a multichannel magnetic resonance image, and forming a training set by a sensitivity mapping image, an under-sampled zero-filling multichannel magnetic resonance image and a fully sampled synthetic image;
2) establishing a multi-channel deep learning network model for magnetic resonance image reconstruction;
3) constructing a loss function of the network;
4) training multi-channel magnetic resonance image reconstruction network model parameters;
5) reconstructing a target under-sampled multi-channel magnetic resonance image;
6) obtaining an end-to-end mapping function from the undersampled multi-channel image to the complete magnetic resonance image and a network model loss function corresponding to the network model by adopting a residual connection mode for the iteration block;
7) training parameters of a magnetic resonance image reconstruction network model of a residual error connection mode;
8) and reconstructing the target under-sampled multi-channel magnetic resonance image by using the network model connected by the residual errors.
In step 1), the specific method of acquiring the multi-channel magnetic resonance images and forming a training set from the sensitivity maps, the under-sampled zero-filled multi-channel magnetic resonance images and the fully sampled composite images may be:
Acquire the complete multi-channel magnetic resonance image X = [x_1, ..., x_j, ..., x_J] and the sensitivity maps C = [C_1, ..., C_j, ..., C_J] from the magnetic resonance imaging instrument, where x_j ∈ ℂ^N denotes the vectorized complete magnetic resonance image of the j-th channel, c_j ∈ ℂ^N denotes the vectorized sensitivity map of the j-th channel, C_j is the sensitivity map of the j-th channel represented as a diagonal matrix whose diagonal elements are the elements of c_j in order, ℂ denotes the complex field, and N and J denote the number of image points and the number of channels in X or C. Then the image of each channel in X is under-sampled in Fourier space with the under-sampling matrix U ∈ ℝ^{M×N}, where M denotes the number of sampled points and ℝ denotes the real field, to obtain the under-sampled zero-filled multi-channel magnetic resonance image X_u = [x_{u,1}, ..., x_{u,j}, ..., x_{u,J}]; the subscript u denotes Fourier-space under-sampling of a single channel, and x_{u,j}, the vectorized magnetic resonance image of the j-th channel obtained by zero-filling the under-sampled Fourier space, is defined as:
x_{u,j} = F^H U^T U F x_j,    (1)
where F ∈ ℂ^{N×N} denotes the discrete Fourier transform, "H" denotes the complex conjugate transpose and "T" denotes the transpose. From all x_j (j = 1, 2, ..., J) and c_j (j = 1, 2, ..., J), the fully sampled composite image x_combined is obtained as
x_combined = (Σ_{j=1}^{J} C_j^H C_j)^{-1} Σ_{j=1}^{J} C_j^H x_j,
where the superscript "-1" denotes matrix inversion. C, X_u and x_combined together form the training set.
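As an illustration of step 1), the under-sampling of equation (1) and the composite image x_combined may, for example, be generated as in the following sketch, in which the images are kept as 2-D NumPy arrays rather than vectors, the matrix U is represented by a binary Fourier-space mask, and the small constant eps is added only for numerical safety (all names are illustrative):
```python
import numpy as np

def undersample_zero_fill(x, mask):
    """Equation (1): x_u = F^H U^T U F x, with U represented by a binary
    Fourier-space sampling mask of the same shape as the image x."""
    k_space = np.fft.fft2(x)                 # F x
    k_space_sampled = k_space * mask         # U^T U F x (zero-filled sampling)
    return np.fft.ifft2(k_space_sampled)     # F^H (...)

def sense_combine(channel_images, sensitivity_maps, eps=1e-8):
    """x_combined = (sum_j C_j^H C_j)^(-1) sum_j C_j^H x_j.
    Because each C_j is diagonal, the inversion reduces to an
    element-wise division; eps avoids division by zero."""
    numerator = np.sum(np.conj(sensitivity_maps) * channel_images, axis=0)
    denominator = np.sum(np.abs(sensitivity_maps) ** 2, axis=0) + eps
    return numerator / denominator

# Example with random data: J = 12 channels of a 256 x 186 image.
J, H, W = 12, 256, 186
x = np.random.randn(J, H, W) + 1j * np.random.randn(J, H, W)  # full channel images x_j
c = np.random.randn(J, H, W) + 1j * np.random.randn(J, H, W)  # sensitivity maps c_j
mask = (np.random.rand(H, W) < 0.33).astype(float)            # ~33% sampling template
x_u = np.stack([undersample_zero_fill(x[j], mask) for j in range(J)])  # X_u
x_combined = sense_combine(x, c)                                       # training label
```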
In step 2), the deep learning network model constructs an end-to-end mapping function A(X_u, C; Θ) from the under-sampled multi-channel image to the complete magnetic resonance image, where Θ denotes the training parameters of the network model. The network model consists of S iteration blocks connected in series, and the output x̂_S of the last iteration block is the complete magnetic resonance image reconstructed by the network. The index of the s-th iteration block is denoted by s, an integer greater than or equal to 1. Each iteration block consists of four parts connected in sequence: a data check module, a forward transform convolution block P_s, a soft threshold operator T_{γ_s λ_s} and an inverse transform convolution block Q_s.
the data verification module performs data verification on the output of the last iteration block by using the data acquired from the undersampled data, and the result r of the data verificationsFrom equation (2):
Figure BDA0002131886890000039
where s denotes the index of the s-th iteration block, γsIs the step size, γs∈ theta, the notation "∈" indicates that the training parameters theta contain gammas(ii) a When s is equal to 1, the first step is carried out,
Figure BDA00021318868900000310
the forward transform volume block PsThe convolution layer comprises L convolution layers, 1 nonlinear function activation layer sigma is connected behind each convolution layer except the L-th convolution layer, each convolution layer comprises K convolution kernels, the size of each convolution kernel is h × h, and the convolution kernel of the first forward convolution layer is formed by Wf,lIt is shown that,
Figure BDA0002131886890000041
Wf,l∈ theta, forward transform module checks result r of data checking modulesNon-linear mapping to zs,zs=Ps(rs);
The soft threshold operator outputs z to the forward transform convolution blocksPerforms a soft threshold operation on each element of
Figure BDA0002131886890000042
γsλsIs the threshold value, γ, of the s-th iteration blocksλs∈ theta and the definition of the soft threshold operation is given by the vector zsIf the vector z issElement z of the v-ths,vAbsolute value of (a) | zs,v|≤γsλsThen z iss,v0; if | zs,v|>γsλsThen z iss,v=sgn(zs,v)(|zs,v|-γsλs) Wherein sgn (·) is a sign function; note zsAfter passing through the soft threshold operator, the output is
Figure BDA0002131886890000043
The reverse conversion volume block QsThe convolution filter comprises L convolution layers, 1 nonlinear function active layer sigma is connected behind each convolution layer except the L-th convolution layer, each convolution layer comprises K convolution kernels, the size of each convolution kernel is h × h, and the convolution kernel of the L-th reverse convolution layer is formed by Wb,lIt is shown that,
Figure BDA0002131886890000044
Wb,l∈ theta, and the reverse transformation module outputs the soft threshold operator
Figure BDA0002131886890000045
Mapping to an image domain
Figure BDA0002131886890000046
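A compact sketch of one iteration block follows, with the data check implemented as the SENSE-type gradient step of equation (2); the forward and inverse transforms P_s and Q_s are passed in as callables and the images are kept as 2-D arrays rather than vectors (the function and argument names are illustrative, not a restatement of the claims):
```python
import numpy as np

def data_check(x_prev, x_u, sens, mask, gamma):
    """Data-consistency update of equation (2):
    r_s = x_prev - gamma * sum_j C_j^H (F^H U^T U F C_j x_prev - x_{u,j})."""
    gradient = np.zeros_like(x_prev)
    for c_j, x_u_j in zip(sens, x_u):
        forward = np.fft.ifft2(np.fft.fft2(c_j * x_prev) * mask)  # F^H U^T U F C_j x_prev
        gradient += np.conj(c_j) * (forward - x_u_j)              # C_j^H (... - x_{u,j})
    return x_prev - gamma * gradient

def soft_threshold(z, threshold):
    """Element-wise soft threshold: z -> sgn(z) * max(|z| - threshold, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)

def iteration_block(x_prev, x_u, sens, mask, gamma, lam, P_s, Q_s):
    """Data check -> forward transform P_s -> soft threshold -> inverse transform Q_s."""
    r_s = data_check(x_prev, x_u, sens, mask, gamma)
    z_s = P_s(r_s)                             # forward transform convolution block
    z_s = soft_threshold(z_s, gamma * lam)     # threshold gamma_s * lambda_s
    return Q_s(z_s)                            # back to the image domain: x_hat_s
```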
In step 3), the loss function of the network sums, over the I training samples, the reconstruction error between the network output x̂_{i,S} and the fully sampled composite image x_{combined,i}, together with a constraint term weighted by β_a that is evaluated at the data check results r_{i,s} of the S iteration blocks. Here i denotes the index of the vectorized image in the training set, I denotes the number of samples in the training set, s denotes the s-th iteration block, r_{i,s} denotes the data check result of the i-th sample in the s-th iteration block, and β_a is a number greater than or equal to 0 that can be set according to different training sets. x̂_{i,S} is the i-th complete magnetic resonance image reconstructed by the network, expressed through the network model function as x̂_{i,S} = A(X_{u,i}, C_i; Θ).
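One possible concrete form of this loss, assuming an ISTA-Net-style formulation in which the reconstruction error toward x_{combined,i} is combined with a constraint that Q_s approximately inverts P_s at the data check results (this pairing is an assumption made for illustration, not necessarily the exact patented expression), is:
```latex
\mathcal{L}_a(\Theta) = \sum_{i=1}^{I} \Big( \big\lVert \hat{x}_{i,S} - x_{\mathrm{combined},i} \big\rVert_2^2
  + \beta_a \sum_{s=1}^{S} \big\lVert Q_s\big(P_s(r_{i,s})\big) - r_{i,s} \big\rVert_2^2 \Big),
\qquad \hat{x}_{i,S} = A(X_{u,i}, C_i; \Theta).
```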
In step 4), the specific method of training the parameters of the multi-channel magnetic resonance image reconstruction network model may be: estimate the optimal value Θ̂ of the parameters Θ in the mapping function A by minimizing the loss function. The method of minimizing the loss function may be the Adam algorithm in deep learning (Kingma D, Ba J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014).
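A minimal PyTorch-style training sketch for step 4) is given below; the model interface, the data loader and the ISTA-Net-style constraint term are illustrative assumptions, while the Adam optimizer, the learning rate of 0.0001 and the 100 training rounds follow the embodiment described later:
```python
import torch

def train(model, loader, epochs=100, lr=1e-4, beta_a=0.1, device="cuda"):
    """Estimate the optimal parameters by minimizing the loss with Adam.
    `model(x_u, sens)` is assumed to return the reconstruction x_hat and a
    list of per-block pairs (r_s, Q_s(P_s(r_s))) for the constraint term."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_u, sens, x_combined in loader:
            x_u, sens, x_combined = x_u.to(device), sens.to(device), x_combined.to(device)
            x_hat, block_pairs = model(x_u, sens)
            loss = torch.sum(torch.abs(x_hat - x_combined) ** 2)
            for r_s, qp_r_s in block_pairs:              # constraint term (assumed form)
                loss = loss + beta_a * torch.sum(torch.abs(qp_r_s - r_s) ** 2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```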
In step 5), the specific method of reconstructing the target under-sampled multi-channel magnetic resonance image may be: take the optimal value Θ̂ obtained by training in step 4) as the parameters Θ in A; input the vectorized under-sampled target multi-channel magnetic resonance image Y_u and its corresponding sensitivity maps C_Y into the model, and after one forward propagation the corresponding complete magnetic resonance image ŷ_S is reconstructed, where ŷ_s denotes the reconstructed magnetic resonance image of the s-th iteration block and ŷ_S is formulated as ŷ_S = A(Y_u, C_Y; Θ̂).
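Reconstruction in step 5) then reduces to a single forward pass through the trained model (names and interface as in the training sketch above, i.e. illustrative):
```python
import torch

@torch.no_grad()
def reconstruct(model, y_u, sens_y, device="cuda"):
    """One forward propagation of the under-sampled target image Y_u and its
    sensitivity maps C_Y through the trained network model."""
    model.eval()
    x_hat, _ = model(y_u.to(device), sens_y.to(device))
    return x_hat.cpu()
```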
in step 6), the specific method for obtaining the end-to-end mapping function from the under-sampled multi-channel image to the complete magnetic resonance image and the network model loss function corresponding to the network model by using the residual connection mode for the iteration block may be:
adopting a residual error connection mode for the iteration block on the basis of the step 2), wherein the end-to-end mapping function from the undersampled multi-channel image to the complete magnetic resonance image corresponding to the network model is B (X)uC, C; Θ), while the loss function of the network becomes:
Figure BDA0002131886890000054
wherein I represents the index of the vectorized image in the training set, I represents the number of samples in the training set, s represents the s-th iteration block,
Figure BDA0002131886890000055
representing the output of the ith sample as it passes through the s-th iteration block, ri,sIndicating the data check result of the ith sample in the s-th iteration block, βbIs a number with the value more than or equal to 0, can be set according to different training sets,
Figure BDA0002131886890000056
Figure BDA0002131886890000057
the ith complete magnetic resonance image is reconstructed by the network and expressed by the function of a network model
Figure BDA0002131886890000058
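The exact wiring of the residual connection is not detailed above; one common choice, assumed here for illustration, is a skip connection that adds the data check result back to the output of the inverse transform block, so that the convolution blocks learn a correction term (reusing data_check and soft_threshold from the earlier sketch):
```python
def residual_iteration_block(x_prev, x_u, sens, mask, gamma, lam, P_s, Q_s):
    """Iteration block with an assumed residual (skip) connection around the
    forward transform, soft threshold and inverse transform."""
    r_s = data_check(x_prev, x_u, sens, mask, gamma)
    z_s = soft_threshold(P_s(r_s), gamma * lam)
    return r_s + Q_s(z_s)   # skip connection: output = r_s + correction (assumption)
```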
In step 7), the specific method of training the parameters of the residual-connected magnetic resonance image reconstruction network model may be: estimate the optimal value Θ̂ of the parameters Θ in the mapping function B by minimizing the loss function. The method of minimizing the loss function is the Adam algorithm in deep learning.
In step 8), the specific method of reconstructing the target under-sampled multi-channel magnetic resonance image with the residual-connected network model may be: take the optimal value Θ̂ of the network model obtained by training in step 7) as the parameters Θ in B; input the vectorized under-sampled target multi-channel magnetic resonance image Y_u and its corresponding sensitivity maps C_Y into the network model, and after one forward propagation the corresponding complete magnetic resonance image ỹ_S is reconstructed, where ỹ_s denotes the reconstructed magnetic resonance image of the s-th iteration block and ỹ_S is formulated as ỹ_S = B(Y_u, C_Y; Θ̂).
the invention provides a method which can reconstruct a complete magnetic resonance image from an undersampled multi-channel magnetic resonance image. The method comprises the steps of firstly acquiring complete and undersampled multi-channel magnetic resonance images as a training set, then establishing a deep learning network model reconstructed by the multi-channel magnetic resonance images, then training the deep learning network model by using the training set to obtain a trained network, and finally inputting the undersampled multi-channel magnetic resonance images into the network to reconstruct the complete magnetic resonance images. The deep learning multi-channel magnetic resonance image reconstruction method has the characteristics of high reconstruction speed and good reconstruction effect. Compared with the prior art, the invention can obtain the complete magnetic resonance image after carrying out one-time forward propagation on the undersampled multi-channel image by utilizing the network model obtained by training, thereby greatly improving the reconstruction speed of the undersampled image, and simultaneously, the details of the image can be more completely recovered after adding the residual connection.
Drawings
Fig. 1 shows a training process of the proposed multi-channel magnetic resonance image reconstruction network model.
Fig. 2 shows the reconstruction process of a target under-sampled multi-channel magnetic resonance image through the proposed network model.
Fig. 3 shows the training process of the proposed residual-connected multi-channel magnetic resonance image reconstruction network model.
Fig. 4 shows the reconstruction process of a target under-sampled multi-channel magnetic resonance image through the proposed residual-connected network model.
Fig. 5 is a sampling template used in the training set and the test set. In fig. 5, (a) is a sampling template of the training set, and (b) is a sampling template of the test set.
Figure 6 is a reconstruction of a multi-channel undersampled magnetic resonance image. In fig. 6, (a) is the complete multi-channel magnetic resonance image reconstruction result, (b) the under-sampled multi-channel magnetic resonance image reconstruction result of the proposed network, (c) the under-sampled multi-channel magnetic resonance image reconstruction result of the proposed network with residual connection, (d) the reconstruction result of the under-sampled multi-channel magnetic resonance image without network, (e) the error map of the under-sampled multi-channel magnetic resonance image reconstruction result of the proposed network, and (f) the error map of the under-sampled multi-channel magnetic resonance image reconstruction result of the proposed network with residual connection.
Detailed Description
The present invention will be further described in the following embodiments with reference to the drawings, and the embodiments of the present invention are a specific process for reconstructing multi-channel brain data by using a deep learning method.
The embodiment of the invention comprises the following steps:
the first step is as follows: acquiring multi-channel magnetic resonance images as training set
The brain images of 4 volunteers scanned by the magnetic resonance instrument are taken as a training set and a testing set of the network, wherein the training set comes from the brain images of 12.4 volunteers scanned by the magnetic resonance instrumentA test set of 120 magnetic resonance images from 120 magnetic resonance images of another volunteer was then Fourier-spatially undersampled using a 33% sampling template from 360 magnetic resonance images of 3 volunteers, where each channel image of the training set samples was 256 × 186 in size and represented as a vector by a vector
Figure BDA0002131886890000071
Each channel image of the test set samples has a size of 232 × 186, represented as a vector
Figure BDA0002131886890000072
So that each multi-channel image sample of the training set can be represented as X ═ X1,...,xj,...,x12]Each multi-channel image sample of the test set may be represented as Y ═ Y1,...,yj,...,y12]And acquiring a respective sensitivity map C ═ C from the magnetic resonance instrument1,...,Cj,...,C12]And CY=[CY,1,...,CY,j,...,CY,12]. Using undersampled matrices U and U for each channel of X and YYSeparately undersampling the Fourier space, U and UYThe corresponding sampling templates are respectively shown in figure 5, and an undersampled multi-channel magnetic resonance image X is obtainedu=[xu,1,...,xu,j,...,xu,12]And Yu=[yu,1,...,yu,j,...,yu,12]. This process is defined by the formula:
xu,j=FHUTUFxj, (7)
Figure BDA0002131886890000073
wherein the content of the first and second substances,
Figure BDA0002131886890000074
denotes a discrete fourier transform, "H" denotes a complex conjugate transpose, "T" denotes a transpose.From XuC and xcombinedForm training set samples, Yu、CYAnd ycombinedA test set sample is composed of, among other things,
Figure BDA0002131886890000075
the superscript "-1" indicates the matrix inversion.
The second step: establish the deep learning network model for parallel reconstruction of multi-channel magnetic resonance images. The network model constructs an end-to-end mapping function A(X_u, C; Θ) from the under-sampled multi-channel image to the complete magnetic resonance image, where Θ denotes the training parameters of the network model. The network model consists of 10 iteration blocks connected in series, and the output of the 10th iteration block is taken as the complete magnetic resonance image x̂_S reconstructed by the network. The details of the s-th iteration block are explained here; the iteration block comprises a data check module, a forward transform convolution block P_s, a soft threshold operator T_{γ_s λ_s} and an inverse transform convolution block Q_s, four parts in total:
Data check module: this module performs data checking on the output x̂_{s-1} of the (s-1)-th iteration block using the data acquired from the under-sampled acquisition; the result r_s of the data check is obtained from equation (9):
r_s = x̂_{s-1} − γ_s Σ_{j=1}^{12} C_j^H (F^H U^T U F C_j x̂_{s-1} − x_{u,j}),    (9)
where γ_s is the step size, γ_s ∈ Θ, the notation "∈" indicating that the training parameters Θ contain γ_s, and the initial value of γ_s is 1. When s = 1, x̂_0 is the initial image formed from the under-sampled zero-filled multi-channel magnetic resonance image X_u and the sensitivity maps C.
forward transform volume block: forward converting rolling block PsIs composed of 3 convolutional layers, and except the 3 rd convolutional layer, after each convolutional layerEach convolution layer contains 24 convolution kernels, The size of each convolution kernel is 3 × 3, The input channel number of The convolution kernels of other convolution layers is 24 except The input channel number of The convolution kernel of The 1 st convolution layer, The convolution kernel of The first forward convolution layer is formed by Wf,lIt is shown that,
Figure BDA0002131886890000082
Wf,l∈ theta here, the forward transform module first checks the result of the data check module
Figure BDA0002131886890000083
The real part and the imaginary part of (A) are respectively stored as
Figure BDA0002131886890000084
1 real number channel of then rreal,sThen passing through the convolution layer in the forward conversion module to obtain the output zs,zs=Ps(rs)。
Soft threshold operator: this module performs a soft threshold operation on each element of the output z_s of the forward transform convolution block, with threshold γ_s λ_s for the s-th iteration block, γ_s λ_s ∈ Θ; the threshold of each iteration block is initialized to 0.001. The soft threshold operation is defined on the vector z_s as follows: if the absolute value of the v-th element z_{s,v} of z_s satisfies |z_{s,v}| ≤ γ_s λ_s, then z_{s,v} = 0; if |z_{s,v}| > γ_s λ_s, then z_{s,v} = sgn(z_{s,v})(|z_{s,v}| − γ_s λ_s), where sgn(·) is the sign function. The output of z_s after the soft threshold operator is denoted T_{γ_s λ_s}(z_s).
Inverse transform convolution block: the inverse transform convolution block Q_s consists of 3 convolution layers; except for the 3rd convolution layer, each convolution layer is followed by one nonlinear activation layer ReLU. Except for the 3rd convolution layer, which contains 2 convolution kernels, each convolution layer contains 24 convolution kernels; the size of each convolution kernel is 3 × 3 and the number of input channels of each convolution kernel is 24. The convolution kernels of the l-th inverse convolution layer are denoted W_{b,l}, W_{b,l} ∈ Θ. The inverse transform module maps the output T_{γ_s λ_s}(z_s) of the soft threshold operator back to the image domain: the 1st channel of the inverse transform convolution block output is taken as the real part of x̂_s and the 2nd channel as the imaginary part, giving the output of the s-th iteration block x̂_s = Q_s(T_{γ_s λ_s}(z_s)).
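A PyTorch sketch of the forward and inverse transform convolution blocks described above (3 convolution layers, 24 kernels of size 3 × 3, ReLU after all but the last layer, complex images carried as 2 real channels); padding and weight initialization are not specified above and are assumptions:
```python
import torch
import torch.nn as nn

def conv_block(in_channels, out_channels, hidden=24, kernel=3):
    """Three convolution layers with ReLU after all but the last layer."""
    return nn.Sequential(
        nn.Conv2d(in_channels, hidden, kernel, padding=kernel // 2), nn.ReLU(),
        nn.Conv2d(hidden, hidden, kernel, padding=kernel // 2), nn.ReLU(),
        nn.Conv2d(hidden, out_channels, kernel, padding=kernel // 2),
    )

class TransformPair(nn.Module):
    """Forward transform P_s (complex image -> 24 feature maps) and inverse
    transform Q_s (24 feature maps -> complex image), with the real and
    imaginary parts stored as 2 real-valued channels."""
    def __init__(self):
        super().__init__()
        self.P = conv_block(in_channels=2, out_channels=24)   # 1st layer takes real + imaginary
        self.Q = conv_block(in_channels=24, out_channels=2)   # last layer outputs real + imaginary

    def forward_transform(self, r_s):                         # r_s: complex tensor (B, H, W)
        r_real = torch.stack([r_s.real, r_s.imag], dim=1)     # (B, 2, H, W)
        return self.P(r_real)                                 # z_s: (B, 24, H, W)

    def inverse_transform(self, z_s):
        out = self.Q(z_s)                                     # (B, 2, H, W)
        return torch.complex(out[:, 0], out[:, 1])            # 1st channel real, 2nd imaginary
```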
the third step: a loss function for the network is constructed. The loss function of the network is defined as:
Figure BDA00021318868900000813
where I denotes the index of the vectorized image in the training set, I is the number of samples in the training set, where a total of 360 samples are obtained from the training set obtained in the first step, so I is 360. r isi,sIs shown asData check result of i samples in the s-th iteration block, βaThe setting is made to be 0.1,
Figure BDA0002131886890000091
Figure BDA00021318868900000917
the ith complete magnetic resonance image is reconstructed by the network and expressed by the function of a network model
Figure BDA0002131886890000093
The fourth step: train the parameters of the multi-channel magnetic resonance image reconstruction network model. Training the model means estimating the optimal value Θ̂ of the parameters Θ in the mapping function A by minimizing the loss function. The method of minimizing the loss function is the Adam algorithm in deep learning, with the learning rate set to 0.0001. Training with the training set of the first step for 100 rounds gives the optimal value Θ̂ of the parameters Θ. The training process of the network model is shown in Fig. 1.
The fifth step: reconstruct the target under-sampled multi-channel magnetic resonance image. Take the optimal value Θ̂ of the model obtained by training in the fourth step as the parameters Θ in A. Here Y_u in the test set generated in the first step is selected as the under-sampled multi-channel magnetic resonance image to be reconstructed; Y_u and the corresponding C_Y are input into the model, and after one forward propagation the corresponding complete magnetic resonance image ŷ_S is reconstructed, as shown in Fig. 6(b). The corresponding fully sampled reference image is shown in Fig. 6(a). ŷ_s denotes the reconstructed magnetic resonance image of the s-th iteration block, and ŷ_S is formulated as ŷ_S = A(Y_u, C_Y; Θ̂). The reconstruction process of the target under-sampled multi-channel magnetic resonance image Y_u through the network model is shown in Fig. 2.
The sixth step: adopt a residual connection for the iteration blocks on the basis of the second step; the loss function of the network then becomes the sum, over the I training samples, of the reconstruction error between the network output x̃_{i,S} and the fully sampled composite image x_{combined,i}, together with a constraint term weighted by β_b that is evaluated at the outputs x̃_{i,s} and the data check results r_{i,s} of the iteration blocks. Here i denotes the index of the vectorized image in the training set and I is the number of samples in the training set; the training set obtained in the first step contains a total of 360 samples, so I = 360. x̃_{i,s} denotes the output of the i-th sample after the s-th iteration block, r_{i,s} denotes the data check result of the i-th sample in the s-th iteration block, and β_b is set to 0.1. x̃_{i,S} is the i-th complete magnetic resonance image reconstructed by the network, expressed through the network model function as x̃_{i,S} = B(X_{u,i}, C_i; Θ).
The seventh step: train the parameters of the residual-connected magnetic resonance image reconstruction network model. Training the model means estimating the optimal value Θ̂ of the parameters Θ in the mapping function B by minimizing the loss function. The method of minimizing the loss function is the Adam algorithm in deep learning, with the learning rate set to 0.0001. Training with the training set of the first step for 100 rounds gives the optimal value Θ̂ of the parameters Θ. The training process of the residual-connected magnetic resonance image reconstruction network model is shown in Fig. 3.
The eighth step: reconstruct the target under-sampled multi-channel magnetic resonance image with the residual-connected network model. Take the optimal value Θ̂ of the model obtained by training in the seventh step as the parameters Θ in B. Here Y_u in the test set generated in the first step is selected as the under-sampled multi-channel magnetic resonance image to be reconstructed; Y_u and the corresponding C_Y are input into the model, and after one forward propagation the corresponding complete magnetic resonance image ỹ_S is reconstructed, as shown in Fig. 6(c). ỹ_s denotes the reconstructed magnetic resonance image of the s-th iteration block, and ỹ_S is formulated as ỹ_S = B(Y_u, C_Y; Θ̂). The reconstruction process of the target under-sampled multi-channel magnetic resonance image Y_u through the residual-connected network model is shown in Fig. 4.
Compared with the prior art, the complete magnetic resonance image is obtained after a single forward propagation of the under-sampled multi-channel image through the trained network model, which greatly improves the reconstruction speed of the under-sampled image. Fig. 6(e) is the error map of the reconstruction result of the under-sampled multi-channel magnetic resonance image for the proposed network, and Fig. 6(f) is the error map of the reconstruction result for the proposed network with residual connection; a darker error map indicates a smaller reconstruction error. As can be seen from Fig. 6(e) and (f), after the residual connection is added the error of the magnetic resonance image reconstruction result is smaller and the details are recovered more completely.

Claims (5)

1. A multi-channel magnetic resonance image reconstruction method based on deep learning is characterized by comprising the following steps:
1) acquiring multi-channel magnetic resonance images, wherein the sensitivity maps, the under-sampled zero-filled multi-channel magnetic resonance images and the fully sampled composite images together form a training set, the specific method being:
acquiring the complete multi-channel magnetic resonance image X = [x_1, ..., x_j, ..., x_J] and the sensitivity maps C = [C_1, ..., C_j, ..., C_J] from the magnetic resonance imaging instrument, wherein x_j ∈ ℂ^N denotes the vectorized complete magnetic resonance image of the j-th channel, c_j ∈ ℂ^N denotes the vectorized sensitivity map of the j-th channel, C_j is the sensitivity map of the j-th channel represented as a diagonal matrix whose diagonal elements are the elements of c_j in order, ℂ denotes the complex field, and N and J denote the number of image points and the number of channels in X or C; then under-sampling the image of each channel in X in Fourier space with the under-sampling matrix U ∈ ℝ^{M×N}, wherein M denotes the number of sampled points and ℝ denotes the real field, to obtain the under-sampled zero-filled multi-channel magnetic resonance image X_u = [x_{u,1}, ..., x_{u,j}, ..., x_{u,J}], the subscript u denoting Fourier-space under-sampling of a single channel and x_{u,j}, the vectorized magnetic resonance image of the j-th channel obtained by zero-filling the under-sampled Fourier space, being defined as:
x_{u,j} = F^H U^T U F x_j,    (1)
wherein F ∈ ℂ^{N×N} denotes the discrete Fourier transform, "H" denotes the complex conjugate transpose and "T" denotes the transpose; obtaining from all x_j (j = 1, 2, ..., J) and c_j (j = 1, 2, ..., J) the fully sampled composite image
x_combined = (Σ_{j=1}^{J} C_j^H C_j)^{-1} Σ_{j=1}^{J} C_j^H x_j,
wherein the superscript "-1" denotes matrix inversion; and forming the training set from C, X_u and x_combined together;
2) establishing a deep learning network model for multi-channel magnetic resonance image reconstruction that constructs an end-to-end mapping function A(X_u, C; Θ) from the under-sampled multi-channel image to the complete magnetic resonance image, wherein Θ is a training parameter in the network model; the network model is composed of S iteration blocks connected in series, the output x̂_S of the last iteration block being the complete magnetic resonance image reconstructed by the network; the index of the s-th iteration block is represented by s, and the value of s is an integer greater than or equal to 1; each iteration block is composed of a data check module, a forward transform convolution block P_s, a soft threshold operator T_{γ_s λ_s} and an inverse transform convolution block Q_s, four parts in total connected in sequence;
the data verification module performs data verification on the output of the last iteration block by using the data acquired from the undersampled data, and the result r of the data verificationsFrom equation (2):
Figure FDA0002461083580000021
where s denotes the index of the s-th iteration block, γsIs the step size, γs∈ theta, the notation "∈" indicates that the training parameters theta contain gammas(ii) a When s is equal to 1, the first step is carried out,
Figure FDA0002461083580000022
the forward transform volume block PsThe convolution layer comprises L convolution layers, 1 nonlinear function activation layer sigma is connected behind each convolution layer except the L-th convolution layer, each convolution layer comprises K convolution kernels, the size of each convolution kernel is h × h, and the convolution kernel of the first forward convolution layer is formed by Wf,lIt is shown that,
Figure FDA0002461083580000023
Wf,l∈ theta, forward transform module checks result r of data checking modulesNon-linear mapping to zs,zs=Ps(rs);
The soft threshold operator outputs z to the forward transform convolution blocksEach element of (1)Performing soft threshold operations
Figure FDA0002461083580000024
γsλsIs the threshold value, γ, of the s-th iteration blocksλs∈ theta and the definition of the soft threshold operation is given by the vector zsIf the vector z issElement z of the v-ths,vAbsolute value of (a) | zs,v|≤γsλsThen z iss,v0; if | zs,v|>γsλsThen z iss,v=sgn(zs,v)(|zs,v|-γsλs) Wherein sgn (·) is a sign function; note zsAfter passing through the soft threshold operator, the output is
Figure FDA0002461083580000025
The reverse conversion volume block QsThe convolution filter comprises L convolution layers, 1 nonlinear function active layer sigma is connected behind each convolution layer except the L-th convolution layer, each convolution layer comprises K convolution kernels, the size of each convolution kernel is h × h, and the convolution kernel of the L-th reverse convolution layer is formed by Wb,lIt is shown that,
Figure FDA0002461083580000026
Wb,l∈ theta, and the reverse transformation module outputs the soft threshold operator
Figure FDA0002461083580000027
Mapping to an image domain
Figure FDA0002461083580000028
Figure FDA0002461083580000029
3) constructing a loss function of the network, wherein the loss function of the network sums, over the I training samples, the reconstruction error between the network output x̂_{i,S} and the fully sampled composite image x_{combined,i}, together with a constraint term weighted by β_a that is evaluated at the data check results r_{i,s} of the S iteration blocks, wherein i denotes the index of the vectorized image in the training set, I denotes the number of samples in the training set, s denotes the s-th iteration block, r_{i,s} denotes the data check result of the i-th sample in the s-th iteration block, and β_a is a number greater than or equal to 0 that can be set according to different training sets; x̂_{i,S} is the i-th complete magnetic resonance image reconstructed by the network, expressed through the network model function as x̂_{i,S} = A(X_{u,i}, C_i; Θ);
4) Training multi-channel magnetic resonance image reconstruction network model parameters;
5) reconstructing a target under-sampled multi-channel magnetic resonance image;
6) obtaining, by adopting a residual connection for the iteration blocks, an end-to-end mapping function from the under-sampled multi-channel image to the complete magnetic resonance image and a corresponding network model loss function, the specific method being:
adopting a residual connection for the iteration blocks on the basis of step 2), wherein the end-to-end mapping function from the under-sampled multi-channel image to the complete magnetic resonance image corresponding to the network model is B(X_u, C; Θ), and the loss function of the network becomes the sum, over the I training samples, of the reconstruction error between the network output x̃_{i,S} and the fully sampled composite image x_{combined,i}, together with a constraint term weighted by β_b that is evaluated at the outputs x̃_{i,s} and the data check results r_{i,s} of the S iteration blocks, wherein i denotes the index of the vectorized image in the training set, I denotes the number of samples in the training set, s denotes the s-th iteration block, x̃_{i,s} denotes the output of the i-th sample after the s-th iteration block, r_{i,s} denotes the data check result of the i-th sample in the s-th iteration block, and β_b is a number greater than or equal to 0 that can be set according to different training sets; x̃_{i,S} is the i-th complete magnetic resonance image reconstructed by the network, expressed through the network model function as x̃_{i,S} = B(X_{u,i}, C_i; Θ);
7) Training parameters of a magnetic resonance image reconstruction network model of a residual error connection mode;
8) and reconstructing the target under-sampled multi-channel magnetic resonance image by using the network model connected by the residual errors.
2. The deep-learning-based multi-channel magnetic resonance image reconstruction method according to claim 1, wherein in step 4) the specific method of training the multi-channel magnetic resonance image reconstruction network model parameters is: estimating the optimal value Θ̂ of the parameters Θ in the mapping function A by minimizing the loss function, the method of minimizing the loss function being the Adam algorithm in deep learning.
3. The deep-learning-based multi-channel magnetic resonance image reconstruction method according to claim 1, wherein in step 5) the specific method of reconstructing the target under-sampled multi-channel magnetic resonance image is: taking the optimal value Θ̂ of the model obtained by training in step 4) as the parameters Θ in A; inputting the vectorized under-sampled target multi-channel magnetic resonance image Y_u and its corresponding sensitivity maps C_Y into the model, and reconstructing the corresponding complete magnetic resonance image ŷ_S after one forward propagation, wherein ŷ_s denotes the reconstructed magnetic resonance image of the s-th iteration block and ŷ_S is formulated as ŷ_S = A(Y_u, C_Y; Θ̂).
4. The deep-learning-based multi-channel magnetic resonance image reconstruction method according to claim 1, wherein in step 7) the specific method of training the parameters of the residual-connected magnetic resonance image reconstruction network model is: estimating the optimal value Θ̂ of the parameters Θ in the mapping function B by minimizing the loss function, the method of minimizing the loss function being the Adam algorithm in deep learning.
5. The deep-learning-based multi-channel magnetic resonance image reconstruction method according to claim 1, wherein in step 8) the specific method of reconstructing the target under-sampled multi-channel magnetic resonance image with the residual-connected network model is: taking the optimal value Θ̂ of the network model obtained by training in step 7) as the parameters Θ in B; inputting the vectorized under-sampled target multi-channel magnetic resonance image Y_u and its corresponding sensitivity maps C_Y into the network model, and reconstructing the corresponding complete magnetic resonance image ỹ_S after one forward propagation, wherein ỹ_s denotes the reconstructed magnetic resonance image of the s-th iteration block and ỹ_S is formulated as ỹ_S = B(Y_u, C_Y; Θ̂).
CN201910641089.2A 2019-07-16 2019-07-16 Multichannel magnetic resonance image reconstruction method based on deep learning Active CN110378980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910641089.2A CN110378980B (en) 2019-07-16 2019-07-16 Multichannel magnetic resonance image reconstruction method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910641089.2A CN110378980B (en) 2019-07-16 2019-07-16 Multichannel magnetic resonance image reconstruction method based on deep learning

Publications (2)

Publication Number Publication Date
CN110378980A CN110378980A (en) 2019-10-25
CN110378980B true CN110378980B (en) 2020-07-03

Family

ID=68253477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910641089.2A Active CN110378980B (en) 2019-07-16 2019-07-16 Multichannel magnetic resonance image reconstruction method based on deep learning

Country Status (1)

Country Link
CN (1) CN110378980B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110988763B (en) * 2019-11-05 2021-12-17 深圳先进技术研究院 Magnetic resonance imaging method, magnetic resonance imaging device, server and storage medium
CN111091604B (en) * 2019-11-18 2022-02-01 中国科学院深圳先进技术研究院 Training method and device of rapid imaging model and server
CN111047660B (en) * 2019-11-20 2022-01-28 深圳先进技术研究院 Image reconstruction method, device, equipment and storage medium
CN110942496B (en) * 2019-12-13 2022-02-11 厦门大学 Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN111353935A (en) * 2020-01-03 2020-06-30 首都医科大学附属北京友谊医院 Magnetic resonance imaging optimization method and device based on deep learning
CN112634385B (en) * 2020-10-15 2024-05-10 浙江工业大学 Rapid magnetic resonance imaging method based on deep Laplace network
CN112489150B (en) * 2020-10-19 2024-05-10 浙江工业大学 Multi-scale sequential training method of deep neural network for rapid MRI
CN112336337B (en) * 2020-11-06 2022-09-02 深圳先进技术研究院 Training method and device for magnetic resonance parameter imaging model, medium and equipment
CN112946545B (en) * 2021-01-28 2022-03-18 杭州电子科技大学 PCU-Net network-based fast multi-channel magnetic resonance imaging method
CN113256749B (en) * 2021-04-20 2022-12-06 南昌大学 Rapid magnetic resonance imaging reconstruction algorithm based on high-dimensional correlation prior information
CN113538616B (en) * 2021-07-09 2023-08-18 浙江理工大学 Magnetic resonance image reconstruction method combining PUGAN with improved U-net
CN113920211B (en) * 2021-09-16 2023-08-04 中国人民解放军总医院第一医学中心 Quick magnetic sensitivity weighted imaging method based on deep learning
CN113920212B (en) * 2021-09-27 2022-07-05 深圳技术大学 Magnetic resonance reconstruction model training method, computer device and storage medium
CN113854995B (en) * 2021-10-19 2023-11-24 复旦大学 Single excitation-based diffusion weighted imaging scanning reconstruction method and system
CN114010180B (en) * 2021-11-05 2024-04-26 清华大学 Magnetic resonance rapid imaging method and device based on convolutional neural network
CN114219843B (en) * 2021-12-16 2022-11-01 河南工业大学 Method and system for constructing terahertz spectrum image reconstruction model and application
CN115861472B (en) * 2023-02-27 2023-05-23 广东工业大学 Image reconstruction method, device, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335339A (en) * 2018-04-08 2018-07-27 朱高杰 A kind of magnetic resonance reconstruction method based on deep learning and convex set projection
CN108459289A (en) * 2018-01-30 2018-08-28 奥泰医疗系统有限责任公司 A kind of multiple excitation Diffusion weighted MR imaging method based on data consistency
CN108828481A (en) * 2018-04-24 2018-11-16 朱高杰 A kind of magnetic resonance reconstruction method based on deep learning and data consistency

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107064846A (en) * 2016-12-12 2017-08-18 国网北京市电力公司 The sensitivity detection method and device of live testing apparatus for local discharge
CN108535675B (en) * 2018-04-08 2020-12-04 朱高杰 Magnetic resonance multi-channel reconstruction method based on deep learning and data self-consistency

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459289A (en) * 2018-01-30 2018-08-28 奥泰医疗系统有限责任公司 A kind of multiple excitation Diffusion weighted MR imaging method based on data consistency
CN108335339A (en) * 2018-04-08 2018-07-27 朱高杰 A kind of magnetic resonance reconstruction method based on deep learning and convex set projection
CN108828481A (en) * 2018-04-24 2018-11-16 朱高杰 A kind of magnetic resonance reconstruction method based on deep learning and data consistency

Also Published As

Publication number Publication date
CN110378980A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN110378980B (en) Multichannel magnetic resonance image reconstruction method based on deep learning
Ghodrati et al. MR image reconstruction using deep learning: evaluation of network structure and loss functions
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
Wen et al. Transform learning for magnetic resonance image reconstruction: From model-based learning to building neural networks
WO2020151355A1 (en) Deep learning-based magnetic resonance spectroscopy reconstruction method
WO2020134826A1 (en) Parallel magnetic resonance imaging method and related equipment
US10627470B2 (en) System and method for learning based magnetic resonance fingerprinting
WO2020114329A1 (en) Fast magnetic resonance parametric imaging and device
CN111870245B (en) Cross-contrast-guided ultra-fast nuclear magnetic resonance imaging deep learning method
CN104739410B (en) A kind of iterative reconstruction approach of magnetic resonance image (MRI)
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN106618571A (en) Nuclear magnetic resonance imaging method and system
CN111353935A (en) Magnetic resonance imaging optimization method and device based on deep learning
CN110942496B (en) Propeller sampling and neural network-based magnetic resonance image reconstruction method and system
CN114299185A (en) Magnetic resonance image generation method, magnetic resonance image generation device, computer equipment and storage medium
CN111784792A (en) Rapid magnetic resonance reconstruction system based on double-domain convolution neural network and training method and application thereof
CN114255291A (en) Reconstruction method and system for magnetic resonance parameter quantitative imaging
Fan et al. An interpretable MRI reconstruction network with two-grid-cycle correction and geometric prior distillation
CN112819740A (en) Medical image fusion method based on multi-component low-rank dictionary learning
CN110598579B (en) Hypercomplex number magnetic resonance spectrum reconstruction method based on deep learning
Gan et al. SS-JIRCS: Self-supervised joint image reconstruction and coil sensitivity calibration in parallel MRI without ground truth
CN113838105B (en) Diffusion microcirculation model driving parameter estimation method, device and medium based on deep learning
CN115471580A (en) Physical intelligent high-definition magnetic resonance diffusion imaging method
CN114299174A (en) Multi-echo undersampling reconstruction-water-fat separation method based on deep unsupervised learning
CN114331849A (en) Cross-mode nuclear magnetic resonance hyper-resolution network and image super-resolution method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant