CN114820849A - Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning - Google Patents


Info

Publication number
CN114820849A
Authority
CN
China
Prior art keywords
CEST, channel, data, image, magnetic resonance
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210410394.2A
Other languages
Chinese (zh)
Inventor
张祎
徐健平
祖涛
Current Assignee (listed assignees may be inaccurate)
Zhejiang University (ZJU)
Original Assignee
Zhejiang University (ZJU)
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210410394.2A priority Critical patent/CN114820849A/en
Publication of CN114820849A publication Critical patent/CN114820849A/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/003 — Reconstruction from projections, e.g. tomography
    • G06T 11/006 — Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 — Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 — Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 — Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01R — MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 — Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 — Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R 33/44 — Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R 33/48 — NMR imaging systems
    • G01R 33/54 — Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G01R 33/56 — Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R 33/561 — Image enhancement or correction by reduction of the scanning time, i.e. fast acquiring systems, e.g. using echo-planar pulse sequences
    • G01R 33/5619 — Image enhancement or correction by reduction of the scanning time by temporal sharing of data, e.g. keyhole, block regional interpolation scheme for k-Space [BRISK]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/048 — Activation functions
    • G06N 3/08 — Learning methods


Abstract

The invention discloses a deep learning-based magnetic resonance CEST image reconstruction method, device, medium and equipment. The method comprises the following steps: acquiring undersampled magnetic resonance CEST k-space data of an object to be imaged; acquiring a trained deep neural network composed of a data sharing module and a plurality of iteration modules; and reconstructing the CEST source images using the neural network. The method combines coil sensitivity encoding with a neural network prior and fully exploits the redundant information along the CEST frequency dimension, further increasing CEST imaging speed while preserving image quality. In addition, the invention provides a multi-channel CEST data simulation method, which greatly reduces the method's dependence on acquiring large amounts of training data.

Description

Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning
Technical Field
The invention belongs to the field of magnetic resonance imaging, and particularly relates to a rapid chemical exchange saturation transfer imaging technology combining a deep neural network and parallel imaging reconstruction.
Background
Chemical exchange saturation transfer (CEST) imaging selectively saturates specific endogenous protons and exploits the exchange between water protons and those endogenous protons to accumulate and amplify the magnetic resonance (MR) signals they generate. It can thereby sensitively detect the spatial distribution of low-concentration metabolites in vivo at the molecular level, enabling imaging of important physiological parameters such as metabolite concentration and pH. CEST imaging techniques currently show great potential in the diagnosis and treatment of various diseases: for example, amide proton transfer (APT) imaging has been used for tumor diagnosis, glutamate CEST for epilepsy diagnosis, glycosaminoglycan CEST for osteoarthritis diagnosis, and so on.
However, the rich information provided by CEST imaging comes at the cost of long imaging times: to ensure quality and reliability, multiple image frames must typically be acquired over a wide range of saturation offset frequencies. Thus, despite its promising applications, the long imaging time of CEST severely restricts its development and clinical adoption.
To address this problem, parallel imaging methods represented by variable-acceleration sensitivity encoding (vSENSE) have been used to accelerate CEST imaging. By exploiting the spatial sensitivity differences of a multi-channel coil, such methods effectively reduce CEST imaging time, but the reconstruction quality is easily degraded by factors such as noise amplification, and the achievable acceleration factor is limited. Compressed sensing (CS) reconstruction based on image sparsity reduces noise in CEST images, but it requires a time-consuming nonlinear iterative reconstruction process and the reconstructed images often lose detail information; these drawbacks limit the application of CS algorithms in the CEST imaging field. In recent years, a deep learning algorithm based on CEST-PROPELLER undersampling has been used to accelerate CEST imaging, but this method only yields a CEST contrast map, which can hardly meet the requirements of practical quantitative CEST analysis.
In addition, the structural information contained in CEST source images at different frequencies is highly similar, yet existing image reconstruction algorithms fail to fully exploit this redundant information along the frequency dimension. How to fully utilize the data redundancy and achieve faster CEST imaging while guaranteeing image quality and reconstruction efficiency therefore remains an urgent problem.
Disclosure of Invention
The invention aims to provide a magnetic resonance CEST image reconstruction method, device, medium and equipment with short imaging time, good reconstruction quality and high clinical and commercial value, so as to solve the problem of long CEST imaging times.
The invention adopts the following specific technical scheme:
in a first aspect, the present invention provides a magnetic resonance CEST image reconstruction method based on deep learning, which includes:
s1, aiming at a target object to be subjected to magnetic resonance CEST imaging, acquiring multichannel undersampled k-space data of the target object and a corresponding coil sensitivity map; the multichannel undersampled k-space data consists of acquired undersampled k-space data frames under all CEST saturation offset frequencies;
s2, acquiring a trained deep neural network; the input of the deep neural network is multi-channel undersampled k-space data and a corresponding coil sensitivity map, and the output is a CEST source image reconstructed by the network;
and S3, inputting the multichannel undersampled k-space data acquired in the S1 and the corresponding coil sensitivity maps into the trained deep neural network to obtain a reconstructed CEST source image.
Preferably, the deep neural network in S2 is composed of a data sharing module and a plurality of iteration modules cascaded after the data sharing module;
wherein, for the missing portions of each input frame of undersampled k-space data, the data sharing module fills in the values at the corresponding positions of the adjacent frames' undersampled k-space data to obtain filled k-space data; it then applies an inverse Fourier transform and multi-channel combination to the filled k-space data, and the resulting aliased image is taken as the module output $S_1$;
each iteration module has the same structure; the input of any $k$-th iteration module comprises the output $S_k$ of the preceding cascaded module, the multi-channel undersampled k-space data, and the coil sensitivity maps, and its output $S_{k+1}$ is:

$$S_{k+1} = S_k - \sum_{i=1}^{N_v} \tilde{D}_i^k \, \phi_i'^{\,k}\!\left(D_i^k S_k\right) - \gamma_k E^*\!\left(E S_k - Y\right)$$

in the formula: $k = 1, \dots, K$; $\gamma_k$ is a network-learnable weight coefficient; the encoding operator $E = MFC$, where $M$ denotes the k-space undersampling mask matrix, $F$ the Fourier encoding matrix, and $C$ the coil sensitivity map matrix from the network input; $E^*$ denotes the adjoint of $E$; $Y$ denotes the multi-channel undersampled k-space data from the network input; $D_i^k$ and $\tilde{D}_i^k$ denote the $i$-th group of learnable three-dimensional convolution and deconvolution kernels in the $k$-th iteration module, respectively; and $\phi_i'^{\,k}$ is a learnable activation function applied between $D_i^k$ and $\tilde{D}_i^k$.
As a preferred aspect of the first aspect, in S2, the deep neural network is trained in advance using a multi-channel CEST dataset generated by simulation, and the method for acquiring the multi-channel CEST dataset generated by simulation is as follows:
s11, acquiring a multichannel MR structural image data set, and performing data preprocessing on each MR structural image to obtain multichannel MR structural images with uniform size and corresponding merged MR structural images merged through channels; meanwhile, a multi-pool Bloch-McConnell mathematical model based on a water proton pool and an amide proton pool is established, values of model parameters are traversed in a corresponding preset range, and a z spectrum set containing a large number of z spectrums is generated through numerical simulation, wherein each z spectrum covers N spectrums ω Frequency points;
s12, respectively generating a binary mask of a tumor region and a non-tumor region by image segmentation by using each merged MR structure image obtained in S11, and then adding different random weak textures into the two masks to fuse to form a textured mask;
s13, traversing each pixel in each textured mask acquired in S12, retrieving a z spectrum corresponding to the gray value of the pixel from a z spectrum set generated in S11 according to a mapping relation table between the gray value of the pixel and the solute concentration of an amide proton pool, pairing the z spectrum and the corresponding pixel, and correspondingly forming a three-dimensional z spectrum matching image by the paired z spectra of all the pixels in each textured mask;
s14, aiming at each multi-channel MR structural image acquired in S11, each channel image is counted by the dimension number N of the z spectrum ω Copy N ω Stacking the layers, performing point multiplication operation on each stacked channel image and a corresponding z spectrum matching image obtained in S13 to form a CEST source image corresponding to each channel, finally obtaining a group of multichannel CEST source images corresponding to each multichannel MR structure image, and combining the multichannel CEST source images through the channels to form a fully sampled CEST source image;
s15, obtaining corresponding multi-channel fully-sampled k-space data through Fourier transform aiming at each group of multi-channel CEST source images, and performing undersampling on the multi-channel fully-sampled k-space data by using a pre-generated k-space undersampling mask to obtain multi-channel undersampled k-space data; meanwhile, calculating a corresponding coil sensitivity map according to undersampled k-space data;
and S16, each group of multi-channel undersampled k-space data, the corresponding coil sensitivity map and the corresponding full-sampling CEST source image jointly form a sample in the multi-channel CEST data set generated by simulation.
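The z-spectrum generation step can be sketched in code. The fragment below is an illustrative simplification, not the patent's implementation: analytical Lorentzian lineshapes stand in for the full Bloch-McConnell numerical model, and all function names, amplitudes and linewidths are hypothetical.

```python
import numpy as np

def lorentzian(offsets_ppm, center_ppm, fwhm_ppm, amplitude):
    """Lorentzian lineshape as a function of saturation offset (ppm)."""
    half = fwhm_ppm / 2.0
    return amplitude * half ** 2 / (half ** 2 + (offsets_ppm - center_ppm) ** 2)

def simulate_z_spectrum(offsets_ppm, amide_amplitude):
    """Toy two-pool z-spectrum: direct water saturation at 0 ppm plus an
    amide CEST dip at +3.5 ppm. The amide amplitude stands in for the
    effect of solute concentration in the full Bloch-McConnell model."""
    water = lorentzian(offsets_ppm, 0.0, 2.0, 0.9)
    amide = lorentzian(offsets_ppm, 3.5, 1.0, amide_amplitude)
    return 1.0 - water - amide

offsets = np.linspace(-6, 6, 49)          # N_omega = 49 frequency points
z_set = [simulate_z_spectrum(offsets, a)  # sweep the amide "concentration"
         for a in np.linspace(0.0, 0.1, 11)]
```

Sweeping the amide amplitude over a preset range mirrors the parameter traversal of step S11, yielding a set of z-spectra that can be paired with mask gray values in step S13.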
Preferably, each learnable activation function is a weighted combination of $N_w$ Gaussian radial basis functions; the $i$-th activation function in the $k$-th iteration module takes the form:

$$\phi_i'^{\,k}(z) = \sum_{j=1}^{N_w} w_{i,j}^{k} \exp\!\left(-\frac{(z-\delta_j)^2}{2\sigma^2}\right)$$

where $z$ denotes the input of the activation function, the fixed parameters $\delta_j$ and $\sigma$ control the shape of each basis function, and the combination weights $w_{i,j}^{k}$ of the basis functions are set as learnable parameters, $i = 1, \dots, N_v$, $j = 1, \dots, N_w$.
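A minimal sketch of such an activation follows; the function name, the number of centers, and the example weights (chosen here to roughly imitate tanh) are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def rbf_activation(z, weights, centers, sigma):
    """Learnable activation as a weighted sum of N_w Gaussian radial basis
    functions: phi'(z) = sum_j w_j * exp(-(z - delta_j)^2 / (2 sigma^2)).
    `weights` are the trainable combination weights; `centers` (delta_j)
    and `sigma` are fixed shape parameters."""
    z = np.asarray(z, dtype=float)
    # shape (..., N_w): one Gaussian bump per fixed center delta_j
    bumps = np.exp(-((z[..., None] - centers) ** 2) / (2.0 * sigma ** 2))
    return bumps @ weights

centers = np.linspace(-1.0, 1.0, 31)  # N_w = 31 fixed, evenly spaced centers
sigma = centers[1] - centers[0]       # width tied to the center spacing
weights = np.tanh(centers)            # example weights approximating tanh
```

During training only `weights` would be updated, which is what lets the network learn a different nonlinearity for every kernel in every iteration module.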
As a preference of the first aspect, the loss function used for training the deep neural network is:

$$\mathcal{L}(\Theta) = \sum_{n=1}^{N_n} \left\| \hat{S}_n - S_n \right\|_2^2 + \mu(e) \sum_{n=1}^{N_n} \sum_{m=1}^{N_M} \left\| \mathrm{MTR}_{\mathrm{asym}}^{m}\!\left(\hat{S}_n\right) - \mathrm{MTR}_{\mathrm{asym}}^{m}\!\left(S_n\right) \right\|_2^2$$

where $\Theta$ denotes the set of all learnable network parameters, including the convolution kernel weight coefficients $d_k$ of the $k$-th iteration module; $n$ counts the training samples and $N_n$ is the total number of samples in the training set; $\hat{S}_n$ denotes the $n$-th group of CEST source images output by the deep neural network and $S_n$ the $n$-th fully sampled CEST source image label; $\mathrm{MTR}_{\mathrm{asym}}^{m}(\hat{S}_n)$ and $\mathrm{MTR}_{\mathrm{asym}}^{m}(S_n)$ denote the magnetization transfer ratio asymmetry intensity maps at the $m$-th offset frequency computed from $\hat{S}_n$ and $S_n$, respectively; $N_M$ is the total number of positive/negative frequency pairs; and $\mu(e)$ is a weight coefficient that depends on the training epoch $e$.
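The CEST-effect term of the loss requires computing the magnetization transfer ratio asymmetry from a z-spectrum. A minimal sketch of that quantity, $\mathrm{MTR}_{\mathrm{asym}}(\Delta\omega) = Z(-\Delta\omega) - Z(+\Delta\omega)$, is given below; the function name and the assumption that the offsets come in symmetric positive/negative pairs are illustrative.

```python
import numpy as np

def mtr_asym(z_spectrum, offsets_ppm):
    """Magnetization transfer ratio asymmetry: Z(-dw) - Z(+dw) for each
    positive/negative offset pair. Assumes offsets symmetric about 0."""
    z = np.asarray(z_spectrum)
    offsets = np.asarray(offsets_ppm)
    pos_offsets = offsets[offsets > 0]
    asym = []
    for dw in pos_offsets:
        z_pos = z[np.isclose(offsets, dw)][0]    # Z(+dw)
        z_neg = z[np.isclose(offsets, -dw)][0]   # Z(-dw)
        asym.append(z_neg - z_pos)
    return pos_offsets, np.array(asym)
```

Evaluating this pixel-wise on both the network output and the label, and summing the squared differences over the $N_M$ frequency pairs, gives the second term of the loss above.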
Preferably, in the data sharing module, all missing data points of each undersampled k-space data frame are traversed; for each missing data point, it is determined whether acquired data exist at the same position in an adjacent frame; if so, those data are filled into the missing point, and if not, no filling is performed.
Preferably, in the first aspect, the coil sensitivity maps in S1 are acquired directly or calculated by using the multi-channel undersampled k-space data.
In a second aspect, the invention provides a deep learning-based magnetic resonance CEST image reconstruction data processing apparatus, comprising a memory and a processor;
the memory for storing a computer program;
the processor is configured to, when executing the computer program, implement the method for magnetic resonance CEST image reconstruction based on deep learning according to any aspect of the first aspect.
In a third aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the deep learning-based magnetic resonance CEST image reconstruction method according to any aspect of the first aspect.
In a fourth aspect, the invention provides a magnetic resonance imaging apparatus comprising a magnetic resonance scanner and a control unit;
the magnetic resonance scanner is used for obtaining CEST multi-channel undersampled k-space data of a target object by a parallel imaging method;
the control unit has stored therein a computer program for implementing a method for deep learning based magnetic resonance CEST image reconstruction according to any of the aspects of the first aspect when the computer program is executed.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method combines a deep learning algorithm with a parallel imaging algorithm. On the one hand, by drawing on the neural network prior together with multi-channel spatial sensitivity difference information, it can markedly improve CEST imaging speed while guaranteeing image quality, accelerating CEST imaging by at least a factor of 4 compared with existing methods. On the other hand, the method can be used directly to reconstruct clinically acquired multi-channel CEST source images, giving it good clinical and commercial potential.
(2) The invention fully guarantees the reliability of the reconstruction result. The method extracts and optimizes information along the CEST frequency dimension by means of time-frequency convolution kernels, and constrains model training by including the reconstruction error of the CEST effect as part of the loss function. It therefore not only reconstructs the spatial information of the images with high quality, but also faithfully reconstructs the CEST characteristics represented by the z-spectrum along the frequency dimension; compared with existing methods, reconstructions based on this method enable more reliable CEST analysis.
(3) The proposed multi-channel CEST data simulation method can generate network training datasets simply and effectively, improves the generalization capability of the deep neural network algorithm, greatly reduces its dependence on acquiring large amounts of training data, and ensures the practicality of the algorithm.
Drawings
Fig. 1 is a flowchart of the steps of a magnetic resonance CEST image reconstruction method based on deep learning;
FIG. 2 is a diagram of a deep neural network architecture;
FIG. 3 is a flow chart of a multi-channel CEST data simulation;
FIG. 4 is a flowchart illustrating an overall implementation of an embodiment of the present invention;
FIG. 5 shows APTw image reconstruction results for a brain tumor patient in the example. Panel (a) shows, from left to right, the fully sampled APTw reference image and the reconstruction results of the GRAPPA algorithm, the BCS algorithm, the VN algorithm, and the proposed method; panel (b) shows the k-space undersampling mask used in the experiment, with an acceleration factor of 4; panel (c) shows, from left to right, the error maps of the GRAPPA, BCS, VN, and proposed reconstructions relative to the fully sampled reference image, together with the corresponding normalized root-mean-square error (nRMSE) values.
FIG. 6 shows the 3.5 ppm source-image reconstruction results and CEST analysis results for a brain tumor patient in the example. From left to right are the reconstruction results of the GRAPPA, BCS, and VN algorithms and of the proposed method at an acceleration factor R = 4. The first row shows the 3.5 ppm source image reconstructed by each algorithm; the second row shows the corresponding reconstruction error maps; the third and fourth rows show, respectively, the z-spectrum and MTRasym spectrum of a region of interest computed from each algorithm's reconstruction, where solid lines denote the reference values and dashed lines the corresponding reconstructions. The quantitative indices in the figure include peak signal-to-noise ratio (PSNR), normalized root-mean-square error (nRMSE), and mean absolute error (MAE).
Detailed Description
The invention will be further elucidated and described with reference to the drawings and the detailed description. The technical features of the embodiments of the present invention can be combined correspondingly without mutual conflict.
As shown in fig. 1, as a preferred implementation manner of the present invention, a method for fast magnetic resonance CEST image reconstruction based on deep learning is provided, which includes the following steps:
s1, acquiring multichannel undersampled k-space data of a target object and a corresponding coil sensitivity map aiming at the target object to be subjected to magnetic resonance CEST imaging; the multichannel undersampled k-space data consists of acquired undersampled k-space data frames at all CEST saturation offset frequencies.
It should be noted that in step S1, the acquisition mode of the multi-channel under-sampled k-space data of the target object may be off-line or on-line. For the offline mode, it is only necessary to read the data from the storage device storing the data; in the on-line method, a CEST imaging scan of the target object by the magnetic resonance imaging apparatus is required to acquire these data.
It should be noted that, in step S1, the coil sensitivity maps may be acquired directly or calculated by using corresponding multi-channel undersampled k-space data, which is not limited in this respect.
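One common way to calculate coil sensitivity maps from the undersampled data themselves (the patent does not fix a specific algorithm, so this is one illustrative choice) is to inverse-transform the fully sampled low-frequency centre of k-space and normalize by the root-sum-of-squares combination:

```python
import numpy as np

def estimate_sens_maps(kspace, center=8, eps=1e-8):
    """Estimate coil sensitivity maps from the central (autocalibration)
    region of multi-channel k-space.
    kspace: (Nc, Nx, Ny) complex, fftshifted so DC is at the centre;
    `center` is the assumed side length of the calibration region."""
    nc, nx, ny = kspace.shape
    cx, cy = nx // 2, ny // 2
    calib = np.zeros_like(kspace)
    sl_x = slice(cx - center // 2, cx + center // 2)
    sl_y = slice(cy - center // 2, cy + center // 2)
    calib[:, sl_x, sl_y] = kspace[:, sl_x, sl_y]   # keep only the ACS block
    low_res = np.fft.ifft2(np.fft.ifftshift(calib, axes=(-2, -1)), norm="ortho")
    rss = np.sqrt(np.sum(np.abs(low_res) ** 2, axis=0)) + eps
    return low_res / rss                            # maps with unit RSS norm
```

By construction the returned maps have (near-)unit root-sum-of-squares at every pixel, which is the normalization typically assumed when forming the matrix C of the encoding operator.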
It is noted that in step S1, the target object may be any object capable of being imaged by magnetic resonance CEST, for example, a brain of a patient.
And S2, acquiring the trained deep neural network, and performing image reconstruction on the data obtained in S1 to realize rapid multichannel CEST imaging.
Before the structure of the deep neural network is described in detail, the basic theory is briefly introduced to facilitate understanding of the principle of the deep neural network involved in the present invention, which is specifically as follows:
the CEST source image reconstruction problem to which the present invention relates can be described by the following optimization model:
Figure BDA0003603436510000071
wherein
Figure BDA0003603436510000072
For the CEST source image to be solved,
Figure BDA0003603436510000073
for practically acquired multi-channel undersampled k-space data, N x And N y Representing the CEST source image space size, N ω Representing the total number of CEST saturation offset frequencies, N c Representing the number of channels of the coil; the encoding operator E is MFC, M represents a k-space undersampling mask matrix, F represents a Fourier encoding matrix, and C represents a coil sensitivity map; r is a regularization term, and lambda is a regularization weight coefficient. The regularization used by the invention is a generalized form of sparse regularization, and the specific structure is as follows:
Figure BDA0003603436510000074
where D denotes the sparse convolution kernel, φ denotes the nonlinear potential function, subscript i is used for the different sparse convolution kernels and the counting of the potential function, N v The total number of sparse convolution kernels or potential functions.
Equation (1) above can be solved by a gradient descent algorithm:
Figure BDA0003603436510000075
wherein E * The companion matrix of the representation is,
Figure BDA0003603436510000076
represents the transpose of the sparse convolution kernel D, phi i ' is the activation function (i.e. the first derivative of the potential function), alpha k Is the step size of the kth iteration.
Since the regularization term weight coefficient λ in the above equation (3) can be implicitly learned by the activation function, the above equation can be collated to obtain the following form:
Figure BDA0003603436510000077
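The encoding operator E = MFC, its adjoint, and one gradient step of the data-fidelity term can be sketched as follows. This is a toy single-frame version under stated assumptions: the function names are hypothetical, the FFT uses "ortho" normalization so that F is unitary and the adjoint is exact, and the regularizer gradient is left as a pluggable placeholder rather than the learned convolution/activation stack.

```python
import numpy as np

def apply_E(s, mask, smaps):
    """Encoding operator E = M F C for one source-image frame.
    s: (Nx, Ny) complex image; smaps: (Nc, Nx, Ny); mask: (Nx, Ny) of 0/1."""
    return mask * np.fft.fft2(smaps * s, norm="ortho")

def apply_E_adj(k, mask, smaps):
    """Adjoint E* = C* F^{-1} M (M is real diagonal; F unitary with 'ortho')."""
    return np.sum(np.conj(smaps) * np.fft.ifft2(mask * k, norm="ortho"), axis=0)

def grad_step(s, y, mask, smaps, alpha, reg_grad=lambda s: 0):
    """One gradient-descent update of equation (3)/(4): step along the
    data-fidelity gradient E*(E s - y) plus a pluggable regularizer gradient."""
    data_grad = apply_E_adj(apply_E(s, mask, smaps) - y, mask, smaps)
    return s - alpha * (data_grad + reg_grad(s))
```

With a single uniform coil and full sampling, E is unitary and one step with unit step size recovers the true image exactly, which is a convenient sanity check on the operator pair.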
therefore, based on the theory, the structure of the deep neural network can be designed.
Specifically, equation (4) is unrolled into a deep neural network, as shown in fig. 2. The entire deep neural network is formed by cascading a data sharing module and K iteration modules in sequence. The data sharing module exploits the correlated information between adjacent frames; each subsequent iteration module executes one gradient descent step of the form of equation (4), so that together they iteratively optimize the image and finally reconstruct the CEST source images.
For the whole deep neural network, the input is the multi-channel undersampled k-space data Y of the magnetic resonance CEST imaging target and the corresponding coil sensitivity map C, and the output is the network-reconstructed fully sampled CEST source images. The data sharing module takes the multi-channel undersampled k-space data Y as input and outputs the channel-combined aliased image $S_1$ after k-space data sharing. The output $S_1$ of the data sharing module serves as one input of the first iteration module, the output $S_2$ of the first iteration module serves as one input of the second iteration module, and so on: the output of each iteration module serves as one input of the next, and the output $S_{K+1}$ of the last iteration module is the reconstruction result of the whole network, i.e., the fully sampled CEST source images. Note that, in addition to the output of the preceding module, each iteration module also takes the multi-channel undersampled k-space data Y and the corresponding coil sensitivity map C as inputs.
The functions and implementation details of the data sharing module and the iteration module are described below:
(1) The data sharing module fills the missing portions of each input frame of undersampled k-space data with the values at the corresponding positions of the adjacent frames' undersampled k-space data, yielding filled k-space data. In the specific implementation, the data sharing module traverses all missing data points of each undersampled k-space data frame and checks whether acquired data exist at the corresponding position in the two adjacent frames: 1) if both adjacent frames have data at that position, the average of the two values is filled into the missing point; 2) if only one adjacent frame has data at that position, that frame's value is filled into the missing point; 3) if neither has data, no filling is performed.
It should be noted that the first and last frames each have only one adjacent frame, so their data sharing is performed with that single neighbour.
After this is done, the filled k-space data are subjected to an inverse Fourier transform and multi-channel combination, and the resulting channel-combined aliased image is taken as the output $S_1$ of the data sharing module.
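The filling rule of the data sharing module can be sketched as follows. This is an illustrative simplification: zero entries are used to mark unacquired k-space points (in practice the acquisition mask would be carried separately), and the function name is hypothetical.

```python
import numpy as np

def data_sharing(kspace_frames):
    """Fill missing (zero) k-space points of each frame from the same
    position in the adjacent frames: average when both neighbours have
    data, copy when only one does, leave empty otherwise.
    kspace_frames: (N_frames, Nx, Ny) complex; zeros mark unacquired points."""
    filled = kspace_frames.copy()
    n_frames = kspace_frames.shape[0]
    empty = np.zeros_like(kspace_frames[0])
    for t in range(n_frames):
        missing = kspace_frames[t] == 0
        prev_f = kspace_frames[t - 1] if t > 0 else empty            # edge frames
        next_f = kspace_frames[t + 1] if t < n_frames - 1 else empty # have one neighbour
        has_prev, has_next = prev_f != 0, next_f != 0
        both = missing & has_prev & has_next
        only_prev = missing & has_prev & ~has_next
        only_next = missing & ~has_prev & has_next
        filled[t][both] = (prev_f[both] + next_f[both]) / 2
        filled[t][only_prev] = prev_f[only_prev]
        filled[t][only_next] = next_f[only_next]
    return filled
```

The filled frames would then be inverse-Fourier-transformed and channel-combined to form the aliased image $S_1$ described above.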
(2) For the iteration modules, without loss of generality, let $k$ ($k = 1, \dots, K$) denote the index of an arbitrary iteration module. The input of the $k$-th iteration module is the output $S_k$ of the preceding cascaded module, the multi-channel undersampled k-space data $Y$, and the coil sensitivity map $C$; its output is denoted $S_{k+1}$, and the computation it performs is shown in equation (5):

$$S_{k+1} = S_k - \sum_{i=1}^{N_v} \tilde{D}_i^k \, \phi_i'^{\,k}\!\left(D_i^k S_k\right) - \gamma_k E^*\!\left(E S_k - Y\right) \tag{5}$$

in the formula: $\gamma_k$ is a network-learnable weight coefficient; the encoding operator $E$ and its adjoint $E^*$ are as defined above; $Y$ denotes the multi-channel undersampled k-space data from the network input; $D_i^k$ and $\tilde{D}_i^k$ denote the $i$-th group of learnable three-dimensional convolution and deconvolution kernels in the $k$-th iteration module, where $D_i^k$ is a time-frequency convolution kernel and $\tilde{D}_i^k$ a time-frequency deconvolution kernel; and $\phi_i'^{\,k}$ is a learnable activation function applied between $D_i^k$ and $\tilde{D}_i^k$.
Note that although $D_i$, $\tilde{D}_i$, and $\phi_i'$ have the same structure in different iteration modules, their learnable parameters are not shared across modules. For this reason, a superscript $k$ is added to the learnable structures in equation (5) for easy distinction: $D_i^k$, $\tilde{D}_i^k$, and $\phi_i'^{\,k}$ denote the $D_i$, $\tilde{D}_i$, and $\phi_i'$ of the $k$-th iteration module, respectively.
It should be noted that the preceding cascaded module differs between iteration modules: for the first iteration module (k = 1) it is the data sharing module, whereas for the remaining iteration modules (k = 2, …, K) it is the (k−1)-th iteration module.
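The update performed by an iteration module can be sketched numerically. The fragment below is a NumPy sketch under stated assumptions: orthonormal FFTs, a binary undersampling mask M, and the learned regularizer gradient (the sum over convolution, activation and deconvolution) abstracted as a callable `reg_grad`; it implements the encoding operator E = M·F·C, its adjoint E*, and one iteration step of equation (5).

```python
import numpy as np

def apply_E(S, coils, mask):
    """Encoding operator E = M·F·C: weight the image by each coil
    sensitivity, Fourier transform, then undersample."""
    return mask * np.fft.fft2(coils * S, axes=(-2, -1), norm="ortho")

def apply_E_adj(Y, coils, mask):
    """Adjoint E*: re-mask, inverse Fourier transform, and combine the
    channels with the conjugate coil sensitivities."""
    img = np.fft.ifft2(mask * Y, axes=(-2, -1), norm="ortho")
    return np.sum(np.conj(coils) * img, axis=0)

def vn_iteration(S_k, Y, coils, mask, gamma_k, reg_grad):
    """One iteration module (Eq. 5):
    S_{k+1} = S_k - gamma_k * E*(E S_k - Y) - reg_grad(S_k)."""
    data_grad = apply_E_adj(apply_E(S_k, coils, mask) - Y, coils, mask)
    return S_k - gamma_k * data_grad - reg_grad(S_k)
```

With `norm="ortho"` the FFT is unitary, so `apply_E_adj` is the exact adjoint of `apply_E`; in the actual network `reg_grad` is realized by the learnable kernels D_i^k, D̃_i^k and activations φ′_i^k.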
In addition, D_i, D̃_i, φ′_i and γ_k (i = 1, …, N_v) in each iteration module are all learnable parts of the network; their concrete forms are as follows:
1) Convolution kernel D_i and deconvolution kernel D̃_i: in order to extract and reconstruct the spatial-dimension and frequency-dimension information of the source images simultaneously, the convolution kernel D_i in the invention adopts a three-dimensional time-frequency convolution kernel of size n_x × n_y × n_ω, where n_x, n_y and n_ω are the kernel-size parameters, whose specific values can be optimized according to the actual situation. To reconstruct complex-valued source images, D_i contains two channels that process the real and imaginary parts of the source-image matrix, respectively.
Furthermore, the deconvolution kernel D̃_i can be implemented directly as the transpose of D_i and is not described further. Each iteration module contains N_v groups of convolution kernels D_i and deconvolution kernels D̃_i, and the weight coefficients d of all the convolution kernels they contain are set as learnable.
2) Activation function φ′_i (i = 1, …, N_v): the nonlinear activation function φ′_i is placed between the convolution kernel D_i and the deconvolution kernel D̃_i and promotes an efficient representation of the image information in the sparse domain. In practice, φ′_i can be obtained as a weighted combination of N_w Gaussian radial basis functions. Taking the i-th (i = 1, …, N_v) activation function φ′_i^k in the k-th iteration module as an example, its form is:

$$\varphi_i'^{k}(z) = \sum_{j=1}^{N_w} w_{ij}^{k} \exp\!\left(-\frac{(z-\delta_j)^2}{2\sigma^2}\right) \tag{6}$$
where z denotes the input of the activation function; the fixed parameters δ_j and σ control the shape of each basis function; and the combination weights w_{ij}^k of the basis functions are set as learnable parameters.
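As an illustration of this Gaussian-radial-basis form, the sketch below evaluates such an activation; the names and array shapes are illustrative, and in the actual network the weights w would be trainable tensors rather than fixed arrays.

```python
import numpy as np

def rbf_activation(z, w, centers, sigma):
    """phi'(z) = sum_j w_j * exp(-(z - delta_j)^2 / (2 sigma^2)).

    z: input array; w: combination weights (N_w,);
    centers: fixed basis centers delta_j (N_w,); sigma: fixed width.
    """
    diff = z[..., None] - centers              # broadcast over N_w bases
    basis = np.exp(-diff ** 2 / (2 * sigma ** 2))
    return basis @ w                           # weighted combination
```

Because only the weights `w` are learned while the centers and width stay fixed, the activation remains a smooth, flexible scalar function that can be trained jointly with the convolution kernels.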
3) Data-term weight coefficient γ_k:
The initial value of γ_k is set to 1 before training, and no other constraint is imposed on it during training.
So far, the computation of equation (5) performed in each iteration module can be represented by the following parameterized nonlinear mapping f:

$$S_{k+1} = f_k(S_k;\, \theta_k), \tag{7}$$

where S_k and S_{k+1} are respectively the input and output of the k-th iteration module f_k, and θ_k = {d^k, w^k, γ_k} denotes the trainable parameters contained in the k-th iteration module (d^k denoting the set of convolution-kernel weight coefficients and w^k the set of basis-function combination weights in the k-th iteration module). On this basis, the reconstruction process of the whole deep neural network F can be expressed as:

$$\hat{S} = F(Y, C;\, \Theta) = f_K\!\big(f_{K-1}(\cdots f_1(S_1;\, \theta_1)\cdots ;\, \theta_{K-1});\, \theta_K\big), \tag{8}$$

where Θ = {θ_1, …, θ_K} denotes the set of all trainable parameters in the network.
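The cascade in equation (8) is simply the composition of the K iteration-module mappings applied to the data-sharing output; a minimal sketch, with each module abstracted as a callable that encapsulates its own parameters θ_k:

```python
def reconstruct(S1, iter_modules):
    """Eq. (8): apply the K cascaded iteration modules f_1 ... f_K in
    sequence, starting from the data-sharing output S1."""
    S = S1
    for f in iter_modules:   # each f_k carries its own trained theta_k
        S = f(S)
    return S
```

In the real network each `f_k` would also receive Y and C for its data-consistency term; they are omitted here to show only the cascading structure.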
Before the deep neural network is used for image reconstruction, its parameters must be optimized in advance. The network parameters can be optimized following the conventional training procedure for deep neural networks, i.e. a loss function is set and an optimizer is used to optimize the network's trainable parameter set Θ until the optimal parameter set Θ̂ is obtained.
The loss function used for network training can be adjusted and optimized in practice; the composite loss function proposed by the invention is:

$$L(\Theta) = \sum_{n=1}^{N_n}\left( \left\| \hat{S}_n - S_n \right\|_2^2 + \mu(e) \sum_{m=1}^{N_M} \left\| \widehat{\mathrm{MTR}}_{\mathrm{asym}}^{\,n,m} - \mathrm{MTR}_{\mathrm{asym}}^{\,n,m} \right\|_2^2 \right), \tag{9}$$

where Θ denotes the set of all learnable network parameters; n indexes the training samples and N_n is the total number of samples in the training set; Ŝ_n denotes the n-th group of CEST source images output by the deep neural network and S_n the n-th group of fully sampled CEST source-image labels; MTR̂_asym^{n,m} and MTR_asym^{n,m} denote the magnetization-transfer-ratio asymmetry (MTR_asym) spectra at the m-th offset frequency computed from Ŝ_n and S_n, respectively; N_M is the number of positive-negative frequency pairs; and μ(e) is a weight coefficient that depends on the number of network training epochs e.
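A minimal numerical sketch of a composite loss of this shape follows. The squared-ℓ2 form of both terms and the normalization of MTR_asym by the unsaturated image S0 are assumptions made for illustration, not the invention's exact definition.

```python
import numpy as np

def mtr_asym(S, pos_idx, neg_idx, S0):
    """MTR_asym spectra: magnitude difference between the frames at -w_m
    and +w_m, normalized by the unsaturated image S0 (assumed form)."""
    return (np.abs(S[neg_idx]) - np.abs(S[pos_idx])) / np.abs(S0)

def composite_loss(S_hat, S, S0, pos_idx, neg_idx, mu):
    """Image-fidelity term plus MTR_asym-consistency term weighted by mu."""
    img_term = np.mean(np.abs(S_hat - S) ** 2)
    cest_term = np.mean((mtr_asym(S_hat, pos_idx, neg_idx, S0)
                         - mtr_asym(S, pos_idx, neg_idx, S0)) ** 2)
    return img_term + mu * cest_term
```

The second term penalizes errors that survive into the asymmetry analysis, which is the quantity of interest in CEST, rather than only raw image error.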
Once the optimal network parameter set Θ̂ is obtained, the network containing the optimal parameters can be used to reconstruct CEST source images from actual undersampled data.
S3, the multichannel undersampled data Y of the target object obtained in S1 and the corresponding coil sensitivity map are input into the deep neural network trained in S2 to obtain the reconstructed fully sampled CEST source image Ŝ. This process can be expressed as:

$$\hat{S} = F(Y, C;\, \hat{\Theta}). \tag{10}$$
The fully sampled CEST source images reconstructed by the network can then be further processed and subjected to CEST analysis according to actual requirements.
In addition, training the deep neural network requires constructing a large-scale sample dataset, and such samples can in principle be acquired through magnetic resonance experiments. However, the multichannel CEST data currently available in practice are relatively scarce and usually insufficient for network training. To address this problem, the invention further provides a method for generating a multichannel CEST training dataset: using readily available multichannel MR structural image datasets, multichannel CEST source-image data are obtained by numerical simulation of a Bloch-McConnell model, which greatly reduces the dependence of the deep neural network algorithm on acquiring large amounts of training data and ensures the practicality of the algorithm.
In another preferred embodiment of the present invention, the deep neural network is trained in advance using a multi-channel CEST dataset generated by simulation. The simulation process is shown in fig. 3, and the specific steps are as follows:
s11, acquiring a multi-channel MR structure image data set, and performing necessary data preprocessing such as cutting and zooming on each MR structure image to obtain a multi-channel MR structure image (including a plurality of channel images of different channels) with uniform size and a corresponding combined MR structure image combined through a channel; meanwhile, a multi-pool Bloch-McConnell mathematical model based on a water proton pool and an amide proton pool is established, and the model is arranged in a corresponding preset range (the specific range of each model parameter can be set according to the actual condition)Traversing values of model parameters, wherein each group of model parameters can be used as a group of basic data of numerical simulation for generating a corresponding z spectrum, and a z spectrum set comprising a large number of z spectrums is generated through numerical simulation, wherein each z spectrum covers N ω Frequency points; (ii) a
And S12, respectively generating binary masks of a tumor region and a non-tumor region through image segmentation by using each merged MR structural image acquired in S11, and then adding different random weak textures into the two masks to fuse to form a textured mask.
The generation of the binary mask may be implemented by manual segmentation, or may be automatically implemented by using a clustering algorithm such as k-means.
In addition, the random weak textures can be derived from a large number of natural-scene images containing rich information, or can be generated randomly, with natural-scene images being preferred. The specific way of generating the random weak texture is as follows: a natural-scene image with abundant texture features is first acquired and converted to grayscale; the grayscale image is then smoothed by filtering and finally superimposed onto the binary mask to form the textured mask.
S13, each pixel of every textured mask acquired in S12 is traversed; according to a mapping table between pixel gray value and amide proton pool solute concentration, the z-spectrum corresponding to the pixel's gray value is retrieved from the z-spectrum set generated in S11 and paired with that pixel, so that each pixel of the textured mask is matched with one z-spectrum; the paired z-spectra of all pixels of each textured mask then form a three-dimensional z-spectrum matching map.
S14, for each multi-channel MR structural image acquired in S11, each channel image is replicated N_ω times according to the dimension N_ω of the z-spectrum and stacked; the stacked channel images are point-multiplied with the z-spectrum matching map obtained in S13 to form the CEST source image of each channel, finally yielding a group of multi-channel CEST source images for each multi-channel MR structural image, which are combined across channels to form a fully sampled CEST source image.
It should be noted that, since the multi-channel MR structural image and the merged MR structural image are matched in groups, and each z spectrum matching image also corresponds to one merged MR structural image, when performing the dot multiplication operation in step S14, the z spectrum matching images corresponding to the group of multi-channel MR structural image and the merged MR structural image need to be taken for performing the dot multiplication with the stacked channel images.
S15, obtaining corresponding multi-channel fully-sampled k-space data through Fourier transform aiming at each group of multi-channel CEST source images, and performing undersampling on the multi-channel fully-sampled k-space data by using a pre-generated k-space undersampling mask to obtain multi-channel undersampled k-space data; meanwhile, calculating a corresponding coil sensitivity map according to undersampled k-space data;
It should be noted that, in order to make full use of the redundant information in the frequency dimension of CEST during image reconstruction, the k-space undersampling masks of different frames should differ in their undersampling patterns. The unsampled k-space portion is filled with 0s, thereby simulating the actual k-space data undersampling process.
And S16, each group of multi-channel undersampled k-space data, the corresponding coil sensitivity map and the corresponding full-sampling CEST source image jointly form a sample in the multi-channel CEST data set generated by simulation.
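Steps S14 and S15 above can be sketched as follows. This is a NumPy sketch in which the array shapes and the zero-filling representation of undersampling are illustrative assumptions, and the coil-sensitivity estimation of S15 is omitted.

```python
import numpy as np

def simulate_sample(channel_imgs, z_map, masks):
    """Build one simulated CEST training sample from a structural image.

    channel_imgs: (n_coil, ny, nx) multi-channel MR structural image.
    z_map: (n_freq, ny, nx) pixel-wise z-spectra (z-spectrum matching map).
    masks: (n_freq, ny, nx) binary k-space undersampling masks, one per frame.
    """
    # S14: replicate each channel image n_freq times and modulate it with
    # the z-spectrum matching map (point-wise Hadamard product)
    src = channel_imgs[:, None] * z_map[None]      # (n_coil, n_freq, ny, nx)
    # S15: Fourier transform to k-space, then zero-fill-undersample each
    # frame with its own (frame-varying) mask
    full_k = np.fft.fft2(src, axes=(-2, -1))
    under_k = full_k * masks[None]
    return src, full_k, under_k
```

Together with a coil sensitivity map, `under_k` and the channel-combined `src` would form one training sample as described in S16.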
In the following, the methods described in S1 to S3 and S11 to S16 above are applied to a specific example to demonstrate the technical effects they achieve, so that those skilled in the art can better understand the essence of the invention.
Examples
The deep-learning-based fast magnetic resonance CEST image reconstruction method described in S1 to S3 above is applied in an embodiment to demonstrate the technical effects it achieves. In addition, to illustrate the effectiveness of the proposed multi-channel CEST dataset simulation method, this embodiment trains the network using CEST datasets generated by simulation according to steps S11 to S16. The overall workflows of the methods in S1 to S3 and S11 to S16 are as described above and are not repeated in this embodiment; the specific implementation details and technical effects of each step are shown below. The complete flow of this embodiment is shown in fig. 4.
1. Data preparation
1.1 training data simulation
1) Preparation: on the one hand, the fastMRI open-source structural image dataset is obtained, and each structural image undergoes the necessary preprocessing, such as cropping and scaling, yielding multi-channel MR structural images of uniform size and the corresponding merged MR structural images combined across channels; on the other hand, a three-pool Bloch-McConnell model comprising a water proton pool, an amide proton pool and a magnetization transfer (MT) pool is established, and by traversing the simulation parameter values within preset ranges, a z-spectrum set containing a large number of z-spectra is generated by numerical simulation, each z-spectrum covering 53 frequency points within ±6 ppm.
2) Identification, segmentation and texturing mask generation of tumors: identifying and dividing the tumor region in each merged MR structural image obtained in the step 1) by utilizing an improved k-means clustering algorithm, respectively generating binary masks of the tumor region and the non-tumor region, and then respectively adding different random weak textures into the two masks to obtain the textured mask. Wherein the texture is derived from a large number of natural scene images containing rich information.
3) Generating the z-spectrum matching maps: each pixel of every textured mask is traversed; according to the mapping table between pixel gray value and amide proton pool solute concentration, the z-spectrum corresponding to the pixel's gray value is retrieved from the z-spectrum set generated in 1) and paired with that pixel; the paired z-spectra of all pixels of each textured mask then form a three-dimensional z-spectrum matching map.
4) Generating multichannel CEST source images: each group of multi-channel MR structural images from 1) is taken as the unsaturated image (S_0), and the z-spectrum matching maps generated in 3) are used to apply saturation modulation to the unsaturated image, yielding multichannel CEST source images saturated at each frequency within ±6 ppm. The saturation modulation is performed as follows: each channel image of the multi-channel MR structural image is first replicated into N_ω = 53 layers according to the dimension of the z-spectrum, and the stacked channel images are then multiplied point by point (Hadamard product) with the z-spectrum matching map corresponding to that structural image. Fourier transforming the multichannel CEST source images then yields the corresponding multichannel fully sampled k-space data.
Following the above steps, a total of 9200 groups of multi-channel fully sampled k-space data were generated by simulation in this embodiment.
1.2 MRI data acquisition
In order to test generalization performance of the model on real data, evaluate effectiveness of the CEST simulation data set and monitor a model training state in real time, the embodiment obtains actually-measured CEST data as a sample in a test set through an MR experiment.
Specifically, the brains of 2 glioma patients were scanned on a 3 Tesla Siemens scanner (MAGNETOM Prisma, Siemens Healthcare, Erlangen, Germany) with a 16-channel head coil, using a 2D turbo spin echo (TSE) CEST imaging sequence with the following acquisition parameters: saturation pulse duration 1.0 s, intensity 2 μT, flip angle FA = 90°; echo time (TE) = 6.7 ms; repetition time (TR) = 3 s; field of view (FOV) = 212 × 186 mm²; resolution 2.2 × 2.2 mm²; slice thickness 5 mm; turbo factor 96. A total of 54 frequency-offset frames were acquired, including the unsaturated frame S_0 and frames saturated at 0, ±0.25, ±0.5, ±0.75, ±1, ±1.5, ±2(2), ±2.5(2), ±3(2), ±3.25(2), ±3.5(6), ±3.75(2), ±4(2), ±4.5, ±5, ±6 ppm (the numbers in parentheses denote the number of repeated acquisitions of the corresponding frequency-point frames).
For B0 field inhomogeneity correction, this embodiment computes the B0 map by water saturation shift referencing (WASSR). The WASSR sequence used a TR of 2 s and a saturation pulse intensity of 0.5 μT, acquiring 26 frequency points equidistantly distributed within −1.5 to 1.5 ppm; the other parameters were consistent with the CEST imaging sequence.
1.3 data preprocessing and data set construction
In order to construct a training set, a verification set and a test set of a network, preprocessing such as down-sampling is required to be performed on each fully-sampled k-space data acquired by simulation or MRI experiments.
In this embodiment, each k-space dataset is undersampled 4-fold using the one-dimensional variable-density Cartesian undersampling pattern shown in fig. 5(b), yielding multichannel undersampled k-space data. The corresponding coil sensitivity maps are then computed from the undersampled k-space data using the ESPIRiT algorithm. In addition, the fully sampled k-space data are reconstructed with the sensitivity encoding (SENSE) algorithm to obtain the corresponding complex-valued fully sampled CEST source images.
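A one-dimensional variable-density Cartesian pattern of this kind can be sketched as follows. This is a NumPy sketch whose quadratic density function, 8% fully sampled center and parameter names are illustrative assumptions; the embodiment does not specify the exact density used.

```python
import numpy as np

def vd_cartesian_mask(ny, accel=4, center_frac=0.08, seed=0):
    """1-D variable-density Cartesian undersampling: fully sample a central
    band of phase-encode lines and pick the remaining lines with a
    probability that decays away from the k-space center, for an overall
    acceleration of roughly `accel`."""
    rng = np.random.default_rng(seed)
    center = int(ny * center_frac)
    mask = np.zeros(ny, dtype=bool)
    lo = ny // 2 - center // 2
    mask[lo:lo + center] = True                  # fully sampled center band
    n_extra = max(ny // accel - center, 0)       # budget for the outer lines
    dist = np.abs(np.arange(ny) - ny / 2)
    p = (1 - dist / dist.max()) ** 2             # density decays from center
    p[mask] = 0                                  # don't re-pick center lines
    p = p / p.sum()
    extra = rng.choice(ny, size=n_extra, replace=False, p=p)
    mask[extra] = True
    return mask
```

Drawing a mask with a different seed per frequency frame gives the frame-varying undersampling patterns recommended for exploiting the frequency-dimension redundancy.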
To this end, each independent data sample comprises 1) multichannel undersampled k-space data, 2) a corresponding coil sensitivity map, 3) a complex-valued fully-sampled CEST source image, wherein the former two are to be used as inputs of the neural network, and the complex-valued fully-sampled CEST source image is to be used as a label (Ground Truth) of the sample. For 9200 groups of samples generated by simulation, 7000 groups are randomly selected as a training set, and the rest are taken as a verification set. A test set of models was constructed from the collected samples of the MRI experiment.
2. Model building and training
A deep neural network as shown in fig. 2 is constructed using the deep learning framework PyTorch, with the number of network iteration modules K set to 10, the time-frequency convolution kernel size in each iteration module set to 7 × 7, the number of convolution groups (and activation functions) N_v set to 48, and the number of Gaussian radial basis functions N_w set to 31.
The network is trained using the simulation-generated multi-channel CEST dataset. During training, an Adam optimizer is used to optimize the network parameter set Θ by minimizing the loss-function value given in equation (9), yielding the relatively optimal network parameters Θ̂. The weight coefficient μ(e) in the loss function is taken as the constant 15, and all neural networks are trained for 40 epochs on 4 NVIDIA RTX 2080 Ti GPUs. During training, the generalization ability of the current model is checked with the validation set to judge whether the network has reached the convergence condition.
3. Image reconstruction and post-processing
After the network is trained, it can be used for image reconstruction of actually acquired data. Specifically, in this embodiment, the undersampled k-space data of brain tumor patients and the corresponding coil sensitivity maps are taken as the network input, and the network output is the reconstructed fully sampled CEST source image. Meanwhile, to compare and evaluate the performance of the algorithm, the same undersampled data are also reconstructed using the GRAPPA (GeneRalized Autocalibrating Partially Parallel Acquisitions) algorithm, the BCS (Blind Compressed Sensing) algorithm combined with parallel imaging, and the unmodified VN (Variational Network) algorithm.
In addition, in order to eliminate the influence of B0 field inhomogeneity, the CEST source images reconstructed by the various algorithms are first B0-corrected using the WASSR method, after which the APTw image is computed by equation (11):

$$\mathrm{APTw} = \mathrm{MTR}_{\mathrm{asym}}(3.5\,\mathrm{ppm}) = \frac{S(-3.5\,\mathrm{ppm}) - S(+3.5\,\mathrm{ppm})}{S_0}, \tag{11}$$
and CEST analysis was performed using the APTw image as an example.
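Equation (11) reduces to per-pixel arithmetic on three frames; a minimal sketch follows, under the assumption that repeated acquisitions at ±3.5 ppm (of which the protocol above collects six) have already been averaged into the two saturated frames.

```python
import numpy as np

def aptw(s_neg35, s_pos35, s0):
    """APT-weighted image per Eq. (11):
    APTw = MTR_asym(3.5 ppm) = (S(-3.5 ppm) - S(+3.5 ppm)) / S0."""
    return (s_neg35 - s_pos35) / s0
```

All three arguments are images of the same shape; the result is the APTw map used for the CEST analysis below.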
4. Analysis of results
Fig. 5 shows the result of the reconstruction of CEST image of an example of brain tumor patient by the present algorithm. The experimental result shows that in the 4-time undersampling mode, compared with advanced image reconstruction algorithms such as GRAPPA, BCS, VN and the like, the reconstruction algorithm can achieve obviously higher CEST image quality, and the difference between the reconstructed image and the full sampling image is extremely small.
Fig. 6 shows a CEST source image at +3.5ppm reconstructed by the present algorithm, and a z spectrum and a MTRasym spectrum of a region of interest calculated based on the reconstruction result. The experimental result shows that compared with other advanced algorithms, the algorithm can better reconstruct the detail information of the image, has stronger capability of removing image artifacts, and simultaneously, the z spectrum and the MTRasym spectrum obtained by the algorithm are closer to the reference situation, so that the source image reconstructed based on the algorithm can be subjected to more reliable CEST analysis.
In addition, since this embodiment of the invention uses a network trained purely on simulated data and nevertheless achieves high-quality reconstruction of actually acquired CEST images, this demonstrates, on the one hand, that the network has excellent generalization performance and, on the other hand, indirectly demonstrates the effectiveness of the multi-channel CEST dataset simulation method provided by the invention, which can well satisfy the training requirements of deep neural networks.
Similarly, based on the same inventive concept, another preferred embodiment of the present invention further provides a magnetic resonance CEST imaging data processing apparatus based on deep learning, which corresponds to the magnetic resonance CEST image reconstruction method based on deep learning provided in the foregoing embodiment, and which includes a memory and a processor;
the memory for storing a computer program;
the processor is configured to, when executing the computer program, implement the deep learning based magnetic resonance CEST image reconstruction method as described above.
Also, based on the same inventive concept, another preferred embodiment of the present invention further provides a computer-readable storage medium corresponding to the magnetic resonance CEST image reconstruction method based on deep learning provided in the above embodiments, where the storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the magnetic resonance CEST image reconstruction method based on deep learning as described above is implemented.
It is understood that the storage medium may include a Random Access Memory (RAM) and a Non-Volatile Memory (NVM), such as at least one disk memory. The storage medium may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. Of course, with the wide application of cloud servers, the software program may also be deployed on a cloud platform to provide corresponding services; therefore, the computer-readable storage medium is not limited to local hardware.
It is understood that the Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
It should be further noted that, as will be clearly understood by those skilled in the art, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again. In the embodiments provided in the present application, the division of the steps or modules in the apparatus and method is only one logical function division, and in actual implementation, there may be another division manner, for example, multiple modules or steps may be combined or may be integrated together, and one module or step may also be split.
In addition, the logic instructions in the memory may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
Also, based on the same inventive concept, another preferred embodiment of the present invention further provides a magnetic resonance imaging apparatus corresponding to the magnetic resonance CEST image reconstruction method based on deep learning provided by the above embodiment, which includes a magnetic resonance scanner and a control unit. Wherein:
the magnetic resonance scanner is used for obtaining a k-space data undersampled frame, a k-space undersampled mask and a corresponding coil sensitivity map of a target object under all frequency offsets by a parallel imaging method;
and in which a computer program is stored which, when being executed, is adapted to carry out a deep learning based magnetic resonance CEST image reconstruction method as described above.
It should be noted that the magnetic resonance imaging apparatus can be any magnetic resonance scanner capable of implementing the parallel imaging method, the structure of the magnetic resonance imaging apparatus belongs to the prior art, mature commercial products can be adopted, and the specific model is not limited. In addition, the control unit of the magnetic resonance imaging apparatus should have, in addition to the above-mentioned computer program, an imaging sequence and other software programs necessary for enabling CEST imaging.
Of course, the control unit may be an independent control unit or a control unit of the magnetic resonance scanner itself, that is, the magnetic resonance CEST image reconstruction method based on the deep learning may be integrated in the control unit of the magnetic resonance imaging apparatus in the form of a data processing program, so that the reconstruction result may be directly output by the magnetic resonance scanner without an additional control unit.
The above-described embodiments are merely preferred embodiments of the present invention, and are not intended to limit the present invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.

Claims (10)

1. A magnetic resonance CEST image reconstruction method based on deep learning is characterized by comprising the following steps:
s1, aiming at a target object to be subjected to magnetic resonance CEST imaging, acquiring multichannel undersampled k-space data of the target object and a corresponding coil sensitivity map; the multichannel undersampled k-space data consists of acquired undersampled k-space data frames under all CEST saturation offset frequencies;
s2, acquiring a trained deep neural network; the input of the deep neural network is multi-channel undersampled k-space data and a corresponding coil sensitivity map, and the output is a CEST source image reconstructed by the network;
and S3, inputting the multichannel undersampled k-space data acquired in the S1 and the corresponding coil sensitivity map into the trained deep neural network to obtain a reconstructed CEST source image.
2. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 1, wherein the deep neural network in S2 is composed of one data sharing module and several iteration modules cascaded after the data sharing module;
the data sharing module fills the missing portions of each input undersampled k-space frame with the measured data at the corresponding positions in the adjacent undersampled k-space frames to obtain filled k-space data, then performs an inverse Fourier transform and multichannel combination on the filled k-space data, and takes the resulting aliased image as the output S_1 of the module;
the iteration modules all have the same structure; the input of any k-th iteration module comprises the output S_k of the preceding cascaded module, the multi-channel undersampled k-space data and the coil sensitivity maps, and its output S_{k+1} is:
$$S_{k+1} = S_k - \gamma_k E^*\!\left(E S_k - Y\right) - \sum_{i=1}^{N_v} \tilde{D}_i^{k}\, \varphi_i'^{k}\!\left(D_i^{k} S_k\right)$$

in the formula: k = 1, …, K; γ_k is a network-learnable weight coefficient; the encoding operator E = MFC, where M denotes the k-space undersampling mask matrix, F denotes the Fourier encoding matrix, and C denotes the coil sensitivity map matrix in the network input; E^* denotes the adjoint matrix of E; Y denotes the multi-channel undersampled k-space data in the network input; D_i^k and D̃_i^k respectively denote the i-th group of learnable three-dimensional convolution kernels and three-dimensional deconvolution kernels in the k-th iteration module; φ′_i^k is a learnable activation function placed between D_i^k and D̃_i^k.
3. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 1, wherein in S2, the deep neural network is trained in advance with a simulation-generated multi-channel CEST dataset, which is obtained by:
s11, acquiring a multi-channel MR structure image data set, and performing data preprocessing on each MR structure image to obtain multi-channel MR structure images with uniform size and corresponding merged MR structure images merged through channels; meanwhile, a multi-pool Bloch-McConnell mathematical model based on a water proton pool and an amide proton pool is established, values of model parameters are traversed in a corresponding preset range, and a z spectrum set containing a large number of z spectrums is generated through numerical simulation, wherein each z spectrum covers N spectrums ω Frequency points;
s12, respectively generating a binary mask of a tumor region and a non-tumor region by image segmentation by using each merged MR structure image obtained in S11, and then adding different random weak textures into the two masks to fuse to form a textured mask;
s13, traversing each pixel in each textured mask acquired in S12, retrieving a z spectrum corresponding to the gray value of the pixel from a z spectrum set generated in S11 according to a mapping relation table between the gray value of the pixel and the solute concentration of an amide proton pool, pairing the z spectrum and the corresponding pixel, and correspondingly forming a three-dimensional z spectrum matching image by the paired z spectra of all the pixels in each textured mask;
s14, aiming at each multi-channel MR structural image acquired in S11, each channel image is counted by the dimension number N of the z spectrum ω Copy N ω Stacking the layers, and matching the stacked channel images with the corresponding z-spectra obtained in S13Performing point multiplication operation on the matching images to form a CEST source image corresponding to each channel, finally obtaining a group of multi-channel CEST source images corresponding to each multi-channel MR structure image, and combining the multi-channel CEST source images through the channels to form a fully sampled CEST source image;
s15, obtaining corresponding multi-channel fully-sampled k-space data through Fourier transform aiming at each group of multi-channel CEST source images, and performing undersampling on the multi-channel fully-sampled k-space data by using a pre-generated k-space undersampling mask to obtain multi-channel undersampled k-space data; meanwhile, calculating a corresponding coil sensitivity map according to undersampled k-space data;
and S16, each group of multi-channel undersampled k-space data, the corresponding coil sensitivity map and the corresponding full-sampling CEST source image jointly form a sample in the multi-channel CEST data set generated by simulation.
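A minimal NumPy sketch of the k-space simulation in step S15 — the array shapes, the centered-FFT convention, and the function name are assumptions made for illustration:

```python
import numpy as np

def undersample_kspace(source_images, mask):
    """Transform multi-channel CEST source images to k-space and undersample.

    source_images: complex array of shape (n_coils, n_freq, ny, nx)
    mask:          binary array of shape (n_freq, ny, nx), 1 = sampled
    """
    # Centered 2D Fourier transform of every coil image (S15, first half)
    kspace = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(source_images, axes=(-2, -1)),
                    axes=(-2, -1)),
        axes=(-2, -1))
    # Keep only the sampled k-space locations (S15, second half)
    return kspace * mask[np.newaxis, ...]
```

Every k-space point where the mask is zero comes out as zero, which is the "missing data" that the reconstruction network is later trained to recover.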
4. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 2, wherein the learnable activation function consists of a weighted combination of N_w Gaussian radial basis functions, the ith activation function in the kth iteration module, φ_i^(k), taking the form:

φ_i^(k)(z) = Σ_{j=1}^{N_w} w_{i,j}^(k) · exp(−(z − δ_j)² / (2σ²))

where z denotes the input of the activation function, and the fixed parameters δ_j and σ control the shape of each basis function; the combination weights w_{i,j}^(k) of the basis functions are set as learnable parameters, with i = 1, …, N_v and j = 1, …, N_w.
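The weighted Gaussian radial-basis form above can be evaluated with a few lines of NumPy; the function name and the array conventions are illustrative assumptions:

```python
import numpy as np

def rbf_activation(z, weights, deltas, sigma):
    """phi(z) = sum_j weights[j] * exp(-(z - deltas[j])**2 / (2 * sigma**2)).

    weights: (N_w,) learnable combination weights
    deltas:  (N_w,) fixed basis-function centers
    sigma:   fixed basis-function width
    """
    z = np.asarray(z, dtype=float)[..., np.newaxis]   # broadcast over j
    basis = np.exp(-(z - deltas) ** 2 / (2.0 * sigma ** 2))
    return basis @ weights
```

Because only the weights are learnable while the centers δ_j and width σ are fixed, the activation is linear in its trainable parameters, which keeps the learning problem well-behaved.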
5. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 4, wherein the loss function used during training of the deep neural network is:

L(Θ) = (1/N_n) Σ_{n=1}^{N_n} [ ‖Ŝ_n − S_n‖₂² + μ(e) · Σ_{m=1}^{N_M} ‖MTR_asym^(m)(Ŝ_n) − MTR_asym^(m)(S_n)‖₂² ]

where Θ denotes the set of all learnable parameters of the network, including the convolution kernel weight coefficients of each iteration module; n indexes the training samples and N_n is the total number of samples in the training set; Ŝ_n denotes the nth group of CEST source images output by the deep neural network, and S_n denotes the nth group of fully sampled CEST source image labels; MTR_asym^(m)(Ŝ_n) denotes the magnetization transfer ratio asymmetry intensity map at the mth offset frequency computed from Ŝ_n, and MTR_asym^(m)(S_n) the corresponding map computed from S_n; N_M is the number of positive/negative offset-frequency pairs; μ(e) denotes a weight coefficient that depends on the number of training epochs e.
6. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 2, wherein in the data sharing module, for any undersampled k-space data frame, all missing data points in the frame are traversed, and for each missing data point it is determined whether data exists at the same position in an adjacent frame; if so, that data is filled into the missing data point, and if not, no filling is performed.
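The frame-by-frame completion rule of the data sharing module might be sketched as follows. The function name and the restriction to the two immediately adjacent frames are illustrative assumptions; note that only originally acquired data is copied, never previously filled data:

```python
import numpy as np

def share_adjacent_frames(kspace, mask):
    """Fill missing k-space points of each frame from neighbouring frames.

    kspace: complex array (n_freq, ny, nx), zero at unsampled points
    mask:   binary array  (n_freq, ny, nx), 1 where sampled
    Returns the data-shared k-space; a point missing in the frame and in
    both neighbours is left unfilled (zero).
    """
    shared = kspace.copy()
    n = kspace.shape[0]
    for f in range(n):
        missing = mask[f] == 0
        for g in (f - 1, f + 1):                  # adjacent frames only
            if 0 <= g < n:
                fill = missing & (mask[g] == 1)   # acquired in neighbour
                shared[f][fill] = kspace[g][fill]
                missing &= ~fill                  # point is now complete
    return shared
```

Because the check is against the original sampling mask rather than the partially filled result, the completion is independent of the order in which frames are processed.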
7. The deep learning-based magnetic resonance CEST image reconstruction method according to claim 1, wherein the coil sensitivity maps in S1 are either acquired directly or calculated from the multi-channel undersampled k-space data.
8. A deep learning-based magnetic resonance CEST imaging data processing apparatus, comprising a memory and a processor;
the memory is used for storing a computer program;
the processor is configured, when executing the computer program, to implement the deep learning-based magnetic resonance CEST image reconstruction method according to any one of claims 1 to 7.
9. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the deep learning-based magnetic resonance CEST image reconstruction method according to any one of claims 1 to 7.
10. A magnetic resonance imaging apparatus, comprising a magnetic resonance scanner and a control unit;
the magnetic resonance scanner is used for acquiring CEST multi-channel undersampled k-space data of a target object by a parallel imaging method;
the control unit stores a computer program which, when executed, implements the deep learning-based magnetic resonance CEST image reconstruction method according to any one of claims 1 to 7.
CN202210410394.2A 2022-04-19 2022-04-19 Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning Pending CN114820849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210410394.2A CN114820849A (en) 2022-04-19 2022-04-19 Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning

Publications (1)

Publication Number Publication Date
CN114820849A true CN114820849A (en) 2022-07-29

Family

ID=82504889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210410394.2A Pending CN114820849A (en) 2022-04-19 2022-04-19 Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning

Country Status (1)

Country Link
CN (1) CN114820849A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024103414A1 (en) * 2022-11-16 2024-05-23 中国科学院深圳先进技术研究院 Method and apparatus for reconstructing magnetic resonance image
CN116188612A (en) * 2023-02-20 2023-05-30 信扬科技(佛山)有限公司 Image reconstruction method, electronic device and storage medium
CN115984406A (en) * 2023-03-20 2023-04-18 始终(无锡)医疗科技有限公司 SS-OCT compression imaging method for deep learning and spectral domain and spatial domain combined sub-sampling
CN115984406B (en) * 2023-03-20 2023-06-20 始终(无锡)医疗科技有限公司 SS-OCT compression imaging method for deep learning and spectral domain airspace combined sub-sampling
CN118483633A (en) * 2024-07-01 2024-08-13 自贡市第一人民医院 Quick chemical exchange saturation transfer imaging and reconstructing method and system

Similar Documents

Publication Publication Date Title
Ghodrati et al. MR image reconstruction using deep learning: evaluation of network structure and loss functions
Wang et al. DIMENSION: dynamic MR imaging with both k‐space and spatial prior knowledge obtained via multi‐supervised network training
Quan et al. Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss
Tezcan et al. MR image reconstruction using deep density priors
CN114820849A (en) Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning
Wen et al. Transform learning for magnetic resonance image reconstruction: From model-based learning to building neural networks
Lam et al. Constrained magnetic resonance spectroscopic imaging by learning nonlinear low-dimensional models
US11816767B1 (en) Method and system for reconstructing magnetic particle distribution model based on time-frequency spectrum enhancement
Luo et al. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models
US11170543B2 (en) MRI image reconstruction from undersampled data using adversarially trained generative neural network
Zhou et al. Parallel imaging and convolutional neural network combined fast MR image reconstruction: Applications in low‐latency accelerated real‐time imaging
CN112991483B (en) Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method
Pawar et al. A deep learning framework for transforming image reconstruction into pixel classification
CN117223028A (en) System and method for magnetic resonance image reconstruction with denoising
Shen et al. Rapid reconstruction of highly undersampled, non‐Cartesian real‐time cine k‐space data using a perceptual complex neural network (PCNN)
Wang et al. Denoising auto-encoding priors in undecimated wavelet domain for MR image reconstruction
Chaithya et al. Optimizing full 3d sparkling trajectories for high-resolution magnetic resonance imaging
Zhang et al. High-dimensional embedding network derived prior for compressive sensing MRI reconstruction
CN115471580A (en) Physical intelligent high-definition magnetic resonance diffusion imaging method
Zhou et al. Spatial orthogonal attention generative adversarial network for MRI reconstruction
Kleineisel et al. Real‐time cardiac MRI using an undersampled spiral k‐space trajectory and a reconstruction based on a variational network
Jafari et al. GRASPNET: fast spatiotemporal deep learning reconstruction of golden‐angle radial data for free‐breathing dynamic contrast‐enhanced magnetic resonance imaging
Cheng et al. Model-based deep medical imaging: the roadmap of generalizing iterative reconstruction model using deep learning
Fan et al. An interpretable MRI reconstruction network with two-grid-cycle correction and geometric prior distillation
CN117635479A (en) Magnetic particle image denoising method, system and equipment based on double-stage diffusion model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination