CN113892938A - Improved sensitivity encoding reconstruction method based on non-local low-rank constraint - Google Patents


Info

Publication number
CN113892938A
CN113892938A (application number CN202111174817.7A)
Authority
CN
China
Prior art keywords
coil
image
matrix
iteration
operator
Prior art date
Legal status
Granted
Application number
CN202111174817.7A
Other languages
Chinese (zh)
Other versions
CN113892938B (en)
Inventor
段继忠
潘婷
Current Assignee
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202111174817.7A priority Critical patent/CN113892938B/en
Publication of CN113892938A publication Critical patent/CN113892938A/en
Application granted granted Critical
Publication of CN113892938B publication Critical patent/CN113892938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/05: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation

Abstract

The invention relates to an improved sensitivity encoding reconstruction method based on a non-local low-rank constraint, and belongs to the technical field of magnetic resonance imaging. Parallel magnetic resonance imaging (MRI) reconstruction has been a research focus in recent years, and the sensitivity encoding (SENSE) model is a widely used parallel MRI reconstruction model that reconstructs images from multi-coil undersampled k-space data by directly using coil sensitivity information. Building on the SENSE reconstruction model, the invention provides a parallel MRI reconstruction algorithm based on a non-local low-rank (NLR) constraint on the image, named NLR-SENSE, which uses the weighted nuclear norm as a surrogate for the rank function and efficiently solves the resulting objective optimization problem with the alternating direction method of multipliers (ADMM). Compared with the LpTV-SENSE algorithm, which combines an Lp pseudo-norm total variation (TV) constraint, simulation results show that the NLR-SENSE algorithm obtains clearer reconstructed images and is an effective improvement of SENSE.

Description

Improved sensitivity encoding reconstruction method based on non-local low-rank constraint
Technical Field
The invention relates to an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, and belongs to the technical field of magnetic resonance imaging.
Background
Magnetic resonance imaging (MRI) is one of the few radiation-free medical diagnostic imaging methods available at present and has wide application in clinical medicine. However, magnetic resonance imaging suffers from long acquisition times and susceptibility to imaging artifacts, and therefore cannot always meet the requirements of efficient and accurate medical diagnosis. Research on fast magnetic resonance imaging techniques and high-quality magnetic resonance image reconstruction is therefore of great practical significance.
Parallel imaging is a common method for accelerating MRI: a multi-channel phased-array coil acquires the magnetic resonance signals simultaneously, and the spatial sensitivity differences of the individual coils provide an additional sensitivity encoding of the spatial information. Sensitivity encoding (SENSE) is a widely used parallel imaging technique that recovers the image by directly using the coil sensitivity information.
The process of reconstructing an image from undersampled k-space data is often expressed as an objective optimization problem. However, this reconstruction is usually an ill-posed problem, and a direct solution may contain severe aliasing artifacts. To obtain better reconstruction results, previous research has proposed adding a regularization term carrying prior information about the data to the objective optimization problem, which can effectively improve the performance of the reconstruction algorithm. Existing parallel MRI reconstruction algorithms based on the SENSE model include: the L1-SENSE algorithm, which adds an L1-norm regularization term and uses the sparsity of the image in a projection subspace (e.g., the wavelet domain) as prior information; the TV-SENSE algorithm, which adds a total variation (TV) regularization term and uses the local similarity of the image as prior information; and the LpTV-SENSE algorithm, which adds an Lp pseudo-norm total variation (LpTV) regularization term, where the Lp pseudo-norm can further improve algorithm performance. However, images reconstructed by these algorithms still contain noticeable artifacts, and their reconstruction performance still needs to be improved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, which can further improve the quality of a parallel magnetic resonance imaging reconstruction image.
The purpose of the invention is realized by the following technical scheme: an improved sensitivity encoding reconstruction method based on a non-local low-rank constraint, comprising the following steps:

S0: Initialization: let $X^0 = (UFS)^H Y$, $Z^0 = 0$, $z^0 = 0$, $D^0 = 0$, $d^0 = 0$, $k = 0$;

where the superscript "0" denotes an initial value before iteration and the superscript "H" denotes the conjugate transpose operator. $X \in \mathbb{C}^{N \times 1}$ denotes the column-vectorized image to be reconstructed, $N = N_h \times N_v$ is the number of pixels of the single-coil magnitude image, $N_h$ and $N_v$ are respectively the numbers of rows and columns of the single-coil magnitude image, and $X^0$ is the initial value of $X$. $Y = [Y_1^T, \ldots, Y_C^T]^T \in \mathbb{C}^{MC \times 1}$ denotes the column-vectorized undersampled multi-coil k-space data, where the superscript "T" denotes the transpose operator, $M$ denotes the number of undersampled data points of a single-coil k-space, and $C$ denotes the number of receive coils used for parallel imaging; the data of the $c$-th coil of $Y$ is denoted $Y_c \in \mathbb{C}^{M \times 1}$, $c = 1, \ldots, C$ is the coil index, and $Y_1$ and $Y_C$ denote the 1st and the $C$-th coil data of $Y$, respectively. $F = I_C \otimes (F_h \otimes F_v)$ denotes the Fourier operator and $U = I_C \otimes P$ denotes the undersampling operator, where $I_C$ is a $C \times C$ identity matrix, $F_h \in \mathbb{C}^{N_h \times N_h}$ and $F_v \in \mathbb{C}^{N_v \times N_v}$ are the $N_h$-point and $N_v$-point discrete Fourier transform matrices whose Kronecker product implements the two-dimensional discrete Fourier transform, $P \in \mathbb{R}^{M \times N}$ is the matrix selecting the sampled locations from the k-space grid, and $\otimes$ denotes the Kronecker product. $S = [S_1^T, \ldots, S_C^T]^T \in \mathbb{C}^{NC \times N}$ denotes the sensitivity operator; the sensitivity information of the $c$-th coil is denoted $S_c \in \mathbb{C}^{N \times N}$, which is a diagonal matrix, and $S_1$ and $S_C$ denote the sensitivity information of the 1st and the $C$-th coil, respectively. $Z \in \mathbb{C}^{NC \times 1}$ denotes an auxiliary variable with initial value $Z^0$, and $z \in \mathbb{C}^{NC \times 1}$ denotes the corresponding Lagrange multiplier with initial value $z^0$.

Define the block-matching operator $V_i: \mathbb{C}^{N \times 1} \to \mathbb{C}^{n \times m}$, which performs block matching on $X$ as follows: divide $X$ into $N_p$ overlapping reference image blocks of size $\sqrt{n} \times \sqrt{n}$, where $N_p$ is the number of reference image blocks, $n$ is the number of pixels of an image block, $m$ is the number of similar blocks, and $i = 1, \ldots, N_p$ is the block index. For each reference image block, within a neighborhood search window of size $s \times s$ (e.g., $40 \times 40$), select the $m$ image blocks with the smallest Euclidean distance to the reference block, and column-vectorize each selected block to form a column of the similar-block group $V_i(X) \in \mathbb{C}^{n \times m}$. $D_i \in \mathbb{C}^{n \times m}$ denotes the auxiliary variable corresponding to the similar-block group $V_i(X)$ of the $i$-th reference image block; all $\{D_i\}$ are horizontally concatenated in order into a matrix $D = [D_1, \ldots, D_{N_p}]$, and $D^0$ is the initial value of $D$. $d_i \in \mathbb{C}^{n \times m}$ is the Lagrange multiplier corresponding to $D_i$; all $\{d_i\}$ are horizontally concatenated in order into a matrix $d = [d_1, \ldots, d_{N_p}]$, and $d^0$ is the initial value of $d$.

S1: Initialize $i = 1$;

S2: Judge whether mod(k, T) = 0; if so, perform similar-block matching on the reconstruction $X^k$ of the $k$-th iteration to obtain the similar-block groups $V_i(X^k)$ and then go to S3; otherwise go to S3 directly;

where mod(·) denotes the remainder operation and $T$ is the block-matching update interval.
S3: Perform singular value decomposition on $V_i(X^k) + d_i^k/\gamma_1$ to obtain

$V_i(X^k) + d_i^k/\gamma_1 = \Upsilon \Sigma \Gamma^H$;

wherein $\Sigma$ is the diagonal matrix of singular values, whose $j$-th diagonal element is the $j$-th singular value $\sigma_j$; $j = 1, \ldots, \min(m, n)$ is the singular value index, and all singular values form a row vector $\sigma = [\sigma_1, \ldots, \sigma_{\min(m,n)}]$, where $\sigma_1$ and $\sigma_{\min(m,n)}$ denote respectively the first and the last singular value. $\Upsilon$ is a unitary matrix whose columns are the left singular vectors, $\Gamma$ is a unitary matrix whose columns are the right singular vectors, and $d_i^k$ denotes the Lagrange multiplier corresponding to the auxiliary variable $D_i^k$ in the $k$-th iteration.
S4: Calculate the weight vector $w = [w_1, \ldots, w_{\min(m,n)}]$, where $w_1$ and $w_{\min(m,n)}$ denote the first and the last value of $w$, respectively; $w$ is calculated as

$w_j = \dfrac{1}{\sigma_j + \varepsilon}, \quad j = 1, \ldots, \min(m, n)$;

wherein $\varepsilon = 10^{-16}$ ensures that the denominator is not 0.
S5: Compute the auxiliary variable $D_i^{k+1}$ of the $(k+1)$-th iteration by the formula:

$D_i^{k+1} = \Upsilon\,\mathrm{diag}(\mathrm{soft}(\sigma, \tau w))\,\Gamma^H$;

wherein $\mathrm{soft}(\sigma, \tau w) = \max(\sigma - \tau w, 0)$ is the soft-threshold operator, $\tau = \mu/\gamma_1$, and $\mu > 0$ and $\gamma_1 > 0$ are parameters.
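For clarity, the soft-threshold operator of S5 acts elementwise on the singular-value vector; a minimal Python sketch (the function name is illustrative):

```python
import numpy as np

def soft(sigma, thresh):
    """Elementwise soft threshold: soft(sigma, tau*w) = max(sigma - tau*w, 0).

    sigma  : vector of singular values (non-negative)
    thresh : vector of per-singular-value thresholds tau * w
    """
    return np.maximum(sigma - thresh, 0.0)
```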
S6: Judge whether all $N_p$ auxiliary variables $D_i^{k+1}$ have been computed: when $i = N_p$, proceed to step S7; otherwise set $i = i + 1$ and return to S2;
S7: Calculate the auxiliary variable $Z^{k+1}$ of the $(k+1)$-th iteration by the formula:

$Z^{k+1} = F^H \left( U^H U + \gamma_2 I \right)^{-1} \left( U^H Y + F(\gamma_2 S X^k + z^k) \right)$;

wherein the superscript "$-1$" denotes matrix inversion, $U^H U \in \mathbb{R}^{NC \times NC}$ is a diagonal matrix, $I \in \mathbb{R}^{NC \times NC}$ denotes the identity matrix, $z^k$ is the Lagrange multiplier corresponding to the auxiliary variable $Z^k$ in the $k$-th iteration, and $\gamma_2 > 0$ is a parameter.
S8: Calculate the image $X^{k+1}$ to be reconstructed of the $(k+1)$-th iteration by the formula:

$X^{k+1} = \left( \gamma_1 W + \gamma_2 S^H S \right)^{-1} \left[ \sum_{i=1}^{N_p} V_i^{*}\left( \gamma_1 D_i^{k+1} - d_i^k \right) + S^H\left( \gamma_2 Z^{k+1} - z^k \right) \right]$;

wherein $V_i^{*}$ denotes the adjoint operator of $V_i$, which puts the entries of a similar-block group back into their corresponding positions in the image; $W = \sum_i V_i^{*} V_i$ is a diagonal matrix, each diagonal element of which equals the number of times the corresponding pixel appears in all similar-block groups $\{V_i(X^k)\}$; and $S^H S$ is a diagonal matrix.
S9: Update the Lagrange multiplier $z^{k+1}$ of the $(k+1)$-th iteration by the formula:

$z^{k+1} = z^k + \eta_2 \left( S X^{k+1} - Z^{k+1} \right)$;

wherein $\eta_2 > 0$ is the parameter for updating the Lagrange multiplier $z$;
S10: Initialize $i = 1$;

S11: Update the Lagrange multiplier $d_i^{k+1}$ of the $(k+1)$-th iteration by the formula:

$d_i^{k+1} = d_i^k + \eta_1 \left( V_i(X^{k+1}) - D_i^{k+1} \right)$;

wherein $\eta_1 > 0$ is the parameter for updating the Lagrange multipliers $\{d_i\}$;
S12: Judge whether all $N_p$ Lagrange multipliers $d_i^{k+1}$ have been updated: when $i = N_p$, proceed to step S13; otherwise set $i = i + 1$ and return to S11;

S13: Judge whether the maximum iteration number is reached: if $k \ge K$ is satisfied, where $K$ is the maximum number of iterations, proceed to step S14; otherwise set $k = k + 1$ and return to S2;

S14: Output the final reconstructed image $X^{k+1}$.
The invention has the beneficial effects that: the SENSE model is a parallel magnetic resonance imaging reconstruction algorithm that directly uses coil sensitivity information, and it converts the process of reconstructing an image from multi-coil undersampled k-space data into the solution of a least-squares optimization problem. By combining a non-local low-rank (NLR) regularization constraint, the invention provides a high-quality improved SENSE algorithm, called the NLR-SENSE algorithm, which can further improve the performance of parallel magnetic resonance imaging reconstruction. For the objective optimization problem of the proposed algorithm, the invention uses the alternating direction method of multipliers (ADMM) and solves it using the weighted nuclear norm as a surrogate function for the rank. Simulation experiments show that, compared with the LpTV-SENSE algorithm, the proposed algorithm achieves higher parallel magnetic resonance image reconstruction performance.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is an image of a brain (i.e., dataset 1) obtained using fully sampled eight-channel receive coils;
FIG. 3 is a two-dimensional Poisson-disk undersampling mask with a 5× acceleration factor and a 24 × 24 fully sampled central auto-calibration region;
FIGS. 4 and 5 show images reconstructed from the undersampled dataset 1 data (undersampled with the mask of FIG. 3) using the LpTV-SENSE algorithm and the NLR-SENSE algorithm, respectively;
FIGS. 6 and 7 show the error images corresponding to FIGS. 4 and 5, respectively, i.e., the error images resulting from the LpTV-SENSE and NLR-SENSE reconstructions.
Detailed Description
The technical scheme of the invention is further described in detail in the following with reference to the attached drawings.
Example 1: based on the SENSE framework, the invention provides an improved sensitivity encoding reconstruction method using a non-local low-rank constraint, which comprises the following steps:
1) SENSE framework:
suppose that
Figure BDA0003294791770000051
Representing an image to be reconstructed with column vectorization, N ═ Nh×NvNumber of pixel points, N, representing a single-coil amplitude imagehAnd NvThe number of rows and columns, respectively, of the single-coil amplitude image.
Figure BDA0003294791770000052
Under-sampled multi-coil k-space data representing column vectorization, the superscript "T" representing the transpose operator, and M representing single-coil k-space undersamplingThe number of sample data points, C, represents the number of receive coils used for parallel imaging. For c-th coil data of Y
Figure BDA0003294791770000053
Denotes, C1, C denotes a coil index, e.g., Y1And YCThe 1 st coil data and the C th coil data of the column-vectorized under-sampled multi-coil k-space data Y are respectively represented.
Figure BDA0003294791770000054
And
Figure BDA0003294791770000055
are each NhAnd NvA two-dimensional discrete fourier transform matrix of points,
Figure BDA0003294791770000056
a matrix representing the selection of sample point locations from a k-space grid.
Figure BDA0003294791770000057
And
Figure BDA0003294791770000058
the fourier operator and the undersampling operator are indicated separately.
Figure BDA0003294791770000061
Denotes the kronecker product, ICIs a C × C identity matrix.
In the SENSE reconstruction model, the sensitivity operator is
Figure BDA0003294791770000062
Indicating sensitivity information of the c-th coil, ScIs a diagonal matrix. S1Indicating sensitivity information of the 1 st coil, SCRepresenting the C-th coil sensitivity information. The relationship between the image X to be reconstructed and the k-space undersampled image Y can then be found as:
Figure BDA0003294791770000063
the solution of the SENSE model is often written as the following least squares problem:
Figure BDA0003294791770000064
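The encoding relation (1) and its adjoint, which also gives the zero-filled initialization $(UFS)^H Y$, can be illustrated with a small NumPy sketch; the coil-first array layout, the orthonormal FFT convention, and the function names are assumptions of this sketch, with a binary mask standing in for $U$:

```python
import numpy as np

def sense_forward(X, sens, mask):
    """Apply the SENSE encoding operator U F S to an image.

    X    : 2-D complex image (N_h x N_v)
    sens : coil sensitivities, shape (C, N_h, N_v); each S_c acts pixelwise
    mask : binary k-space sampling pattern, shape (N_h, N_v)

    Returns undersampled multi-coil k-space data, shape (C, N_h, N_v),
    with unsampled locations zeroed.
    """
    coil_images = sens * X[None, :, :]               # S X
    kspace = np.fft.fft2(coil_images, norm="ortho")  # F S X
    return kspace * mask[None, :, :]                 # U F S X

def sense_adjoint(Y, sens, mask):
    """Adjoint operator (U F S)^H, e.g. for the zero-filled initial image."""
    coil_images = np.fft.ifft2(Y * mask[None, :, :], norm="ortho")
    return np.sum(np.conj(sens) * coil_images, axis=0)
```

The pair satisfies the adjoint relation <Ax, y> = <x, A^H y>, which is a convenient correctness check for such operator implementations.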
2) and (3) algorithm derivation:
although the SENSE algorithm combined with the regularization methods of L1, TV, and LpTV can improve the reconstruction quality to some extent, there are still many artifacts in the image reconstructed using these algorithms, and there is still a large room for improvement in the image reconstruction performance of the SENSE technique. In order to further improve the reconstruction quality, the invention combines non-local low rank regularization (NLR) with a SENSE model to provide an NLR-SENSE model, and obtains the following optimization problem:
Figure BDA0003294791770000065
wherein
Figure BDA0003294791770000066
In the case of a data fidelity item,
Figure BDA0003294791770000067
is a regularization term, rank (V)i(X)) represents ViRank of (X), μ > 0, represents a regularization parameter.
Figure BDA0003294791770000068
Is a block matching operator, the step of block matching X is as follows: dividing X into overlapping NpEach size is
Figure BDA0003294791770000069
Reference image block of, NpRepresenting the number of reference image blocks, N representing the number of pixel points of the image blocks, i ═ 1pIs a block index. Assume that the reference image block is
Figure BDA00032947917700000610
In a neighborhood search window (for example, setting the size of a local window to be 40 × 40), m image similar blocks with the minimum Euclidean distance from a reference block are found, m is the number of the similar blocks, each image block column is vectorized and forms a similar block group ViAnd (X) column.
Introduce the auxiliary variables $D_i \in \mathbb{C}^{n \times m}$ and $Z \in \mathbb{C}^{NC \times 1}$ and the corresponding Lagrange multipliers $d_i \in \mathbb{C}^{n \times m}$ and $z \in \mathbb{C}^{NC \times 1}$. Using the ADMM technique, problem (3) can be transformed into the following subproblems, which are solved with alternating updates:

$\{D_i^{k+1}\} = \arg\min_{\{D_i\}} \sum_{i=1}^{N_p} \left[ \mu\, \mathrm{rank}(D_i) + \frac{\gamma_1}{2} \left\| V_i(X^k) - D_i + d_i^k/\gamma_1 \right\|_F^2 \right]$ (4)

$Z^{k+1} = \arg\min_{Z} \frac{1}{2} \| UFZ - Y \|_2^2 + \frac{\gamma_2}{2} \left\| S X^k - Z + z^k/\gamma_2 \right\|_2^2$ (5)

$X^{k+1} = \arg\min_{X} \frac{\gamma_1}{2} \sum_{i=1}^{N_p} \left\| V_i(X) - D_i^{k+1} + d_i^k/\gamma_1 \right\|_F^2 + \frac{\gamma_2}{2} \left\| S X - Z^{k+1} + z^k/\gamma_2 \right\|_2^2$ (6)

$z^{k+1} = z^k + \eta_2 \left( S X^{k+1} - Z^{k+1} \right)$ (7)

$d_i^{k+1} = d_i^k + \eta_1 \left( V_i(X^{k+1}) - D_i^{k+1} \right)$ (8)

In formulas (4) to (8), the superscripts "k+1" and "k" of the variables denote the $(k+1)$-th and the $k$-th iteration of the algorithm. In formula (4), $\{D_i\}$ contains $N_p$ submatrices; all $\{D_i\}$ are horizontally concatenated in order into a matrix $D = [D_1, \ldots, D_{N_p}] \in \mathbb{C}^{n \times m N_p}$, which is the auxiliary variable of the $(k+1)$-th iteration, $d_i^k$ is the Lagrange multiplier corresponding to $D_i^k$ in the $k$-th iteration, and $\gamma_1 > 0$ is a parameter. In formula (5), $Z^{k+1}$ is the auxiliary variable of the $(k+1)$-th iteration, $z^k$ is the Lagrange multiplier corresponding to $Z^k$ in the $k$-th iteration, and $\gamma_2 > 0$ is a parameter. In formula (6), $X^{k+1}$ is the image to be reconstructed of the $(k+1)$-th iteration. In formula (7), $z^{k+1}$ is the Lagrange multiplier corresponding to $Z^{k+1}$ in the $(k+1)$-th iteration, and $\eta_2 > 0$ is the parameter for updating the Lagrange multiplier $z$. In formula (8), $\{d_i\}$ contains $N_p$ submatrices; all $\{d_i\}$ are horizontally concatenated in order into a matrix $d = [d_1, \ldots, d_{N_p}] \in \mathbb{C}^{n \times m N_p}$, $d_i^{k+1}$ is the Lagrange multiplier corresponding to $D_i^{k+1}$ in the $(k+1)$-th iteration, and $\eta_1 > 0$ is the parameter for updating the Lagrange multipliers $\{d_i\}$.
1) Solving the subproblem with respect to $\{D_i\}$, a low-rank approximate minimization problem: the low-rank approximation problem (4) is a non-deterministic polynomial-time hard (NP-hard) problem. Using the weighted nuclear norm as a surrogate function for the rank, the problem can be rewritten as:

$D_i^{k+1} = \arg\min_{D_i} \ \tau \| D_i \|_{w,*} + \frac{1}{2} \left\| D_i - \left( V_i(X^k) + d_i^k/\gamma_1 \right) \right\|_F^2$ (9)

wherein $\tau = \mu/\gamma_1$, and $\|\cdot\|_{w,*}$ denotes the weighted nuclear norm, defined as:

$\| A \|_{w,*} = \sum_j w_j \sigma_j$ (10)

wherein $\sigma_j$ is the $j$-th singular value of the matrix $A$ and $w_j$ denotes the weight corresponding to $\sigma_j$.
Suppose $\Upsilon \Sigma \Gamma^H$ is the singular value decomposition (SVD) of $V_i(X^k) + d_i^k/\gamma_1$, where $\Sigma$ is the diagonal matrix of singular values and the superscript "H" denotes the conjugate transpose operator. All singular values form a row vector $\sigma = [\sigma_1, \ldots, \sigma_{\min(m,n)}]$, whose $j$-th element is the $j$-th singular value $\sigma_j$; $j = 1, \ldots, \min(m, n)$ denotes the singular value index, and $\sigma_1$ and $\sigma_{\min(m,n)}$ denote respectively the first and the last singular value. $\Upsilon$ and $\Gamma$ are unitary matrices whose columns are the left and right singular vectors, respectively. Then the solution of the weighted nuclear norm minimization problem (9) is obtained as:

$D_i^{k+1} = \Upsilon\, \mathrm{soft}(\Sigma, \tau w)\, \Gamma^H$ (11)

where $\mathrm{soft}(\Sigma, \tau w) = \max(\Sigma - \mathrm{diag}(\tau w), 0)$ denotes the soft-threshold operator and $w = [w_1, w_2, \ldots, w_{\min(m,n)}]$ denotes the weights corresponding to the singular values, with $w_1$ and $w_{\min(m,n)}$ the first and the last value of $w$. $w$ is calculated as:

$w_j = \frac{1}{\sigma_j + \varepsilon}$ (12)

wherein $\varepsilon = 10^{-16}$ ensures that the denominator is not 0.
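The weighted singular-value thresholding of formulas (11) and (12) can be sketched as follows; this is an illustrative NumPy implementation under the weight choice $w_j = 1/(\sigma_j + \varepsilon)$, and the function name is an assumption:

```python
import numpy as np

def weighted_svt(Q, tau, eps=1e-16):
    """Solve  min_D  tau*||D||_{w,*} + 0.5*||D - Q||_F^2  by weighted
    singular-value thresholding with weights w_j = 1/(sigma_j + eps).

    Q : matrix to be approximated, e.g. V_i(X^k) + d_i^k / gamma_1.
    """
    U, sigma, Vh = np.linalg.svd(Q, full_matrices=False)
    w = 1.0 / (sigma + eps)                       # larger singular values are shrunk less
    sigma_new = np.maximum(sigma - tau * w, 0.0)  # soft(sigma, tau*w)
    return U @ np.diag(sigma_new) @ Vh
```

Because the weights are inversely proportional to the singular values, dominant components of a similar-block group are preserved while small (noise-like) components are suppressed.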
2) Solving the subproblem with respect to $Z$: according to equation (5), the solution for $Z$ is:

$Z^{k+1} = F^H \left( U^H U + \gamma_2 I \right)^{-1} \left( U^H Y + F(\gamma_2 S X^k + z^k) \right)$ (13)

where the superscript "$-1$" denotes matrix inversion, $U^H U \in \mathbb{R}^{NC \times NC}$ is a diagonal matrix, and $I \in \mathbb{R}^{NC \times NC}$ denotes the identity matrix.
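Because $U^H U$ is diagonal, the inverse in (13) reduces to an elementwise division in k-space; an illustrative NumPy sketch under assumed array shapes (coil-first layout, orthonormal FFT):

```python
import numpy as np

def update_Z(X, z, Y, sens, mask, gamma2):
    """Z-subproblem of the ADMM iteration:
        Z^{k+1} = F^H (U^H U + gamma2*I)^{-1} (U^H Y + F(gamma2*S X + z)).

    X is (N_h, N_v); Y, z, sens are (C, N_h, N_v); mask is (N_h, N_v) binary.
    """
    SX = sens * X[None, :, :]                                          # S X^k
    rhs = mask[None, :, :] * Y + np.fft.fft2(gamma2 * SX + z, norm="ortho")
    return np.fft.ifft2(rhs / (mask[None, :, :] + gamma2), norm="ortho")
```

The division by `mask + gamma2` is exactly the diagonal inverse $(U^H U + \gamma_2 I)^{-1}$ applied per k-space location.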
3) Solving the subproblem with respect to $X$: according to equation (6), the solution for $X$ is given by:

$X^{k+1} = \left( \gamma_1 W + \gamma_2 S^H S \right)^{-1} \left[ \sum_{i=1}^{N_p} V_i^{*} \left( \gamma_1 D_i^{k+1} - d_i^k \right) + S^H \left( \gamma_2 Z^{k+1} - z^k \right) \right]$ (14)

wherein $V_i^{*}$ is the adjoint operator of $V_i$ (the superscript "$*$" denotes the adjoint), which puts the entries of a similar-block group back into their corresponding positions in the image; $W = \sum_i V_i^{*} V_i$ is a diagonal matrix, each diagonal element of which equals the number of times the corresponding pixel appears in all similar-block groups $\{V_i(X^k)\}$; and $S^H S$ is a diagonal matrix.
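Since $\gamma_1 W + \gamma_2 S^H S$ is diagonal, update (14) is a pixelwise division after scatter-adding the block entries back into the image; an illustrative NumPy sketch in which each $V_i$ is represented by an index array of the pixels it extracts (this representation and all names are assumptions of the sketch):

```python
import numpy as np

def update_X(groups, D, d, Z, z, sens, gamma1, gamma2, shape):
    """X-subproblem of the ADMM iteration:
        X^{k+1} = (gamma1*W + gamma2*S^H S)^{-1}
                  [ sum_i V_i^*(gamma1*D_i - d_i) + S^H(gamma2*Z - z) ].

    groups : list of (n, m) integer arrays; groups[i][:, j] holds the
             flattened pixel indices of the j-th column of V_i(X), so the
             adjoint V_i^* scatter-adds block entries back to those pixels.
    D, d   : lists of (n, m) auxiliary matrices D_i and multipliers d_i.
    Z, z, sens have shape (C, N_h, N_v); shape = (N_h, N_v).
    """
    N = shape[0] * shape[1]
    num = np.zeros(N, dtype=complex)
    W = np.zeros(N)                        # occurrence counts: the diagonal of W
    for idx, Di, di in zip(groups, D, d):
        np.add.at(num, idx.ravel(), (gamma1 * Di - di).ravel())
        np.add.at(W, idx.ravel(), 1.0)
    num += (np.conj(sens) * (gamma2 * Z - z)).sum(axis=0).ravel()   # S^H(gamma2*Z - z)
    den = gamma1 * W + gamma2 * (np.abs(sens) ** 2).sum(axis=0).ravel()
    return (num / den).reshape(shape)
```

`np.add.at` is used for the scatter-add because repeated pixel indices (pixels shared by several blocks) must accumulate rather than overwrite.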
By cyclically and alternately solving (4) to (8), the final reconstructed image can be obtained, yielding a new SENSE parallel MRI reconstruction algorithm with NLR regularization, namely the NLR-SENSE algorithm. Because the block-matching step has a high computational complexity, the NLR-SENSE algorithm updates the block-matching operator only once every T iterations, that is, block matching is performed when mod(k, T) = 0, where mod(·) denotes the remainder operation.
The specific flow is shown in fig. 1, wherein the steps are as follows:
S0: Initialization: let $X^0 = (UFS)^H Y$, $Z^0 = 0$, $z^0 = 0$, $D^0 = 0$, $d^0 = 0$, $k = 0$;

where the superscript "0" denotes the initial value before iteration; $X^0$ denotes the initial value of the image $X$ to be reconstructed, $Z^0$ and $D^0$ are the initial values of the auxiliary variables $Z$ and $D$, respectively, and $z^0$ and $d^0$ are the initial values of the Lagrange multipliers $z$ and $d$ corresponding to $Z$ and $D$, respectively.
S1: Initialize $i = 1$;
S2: Judge whether mod(k, T) = 0; if so, perform similar-block matching on the reconstruction $X^k$ of the $k$-th iteration to obtain the similar-block groups $V_i(X^k)$; otherwise go to S3;
S3: Perform singular value decomposition on $V_i(X^k) + d_i^k/\gamma_1$ to obtain $\Upsilon \Sigma \Gamma^H$;
S4: Calculate the weight $w$ by equation (12);
S5: Compute the auxiliary variable $D_i^{k+1}$ of the $(k+1)$-th iteration by formula (11);
S6: Judge whether all $N_p$ auxiliary variables $D_i^{k+1}$ have been computed: when $i = N_p$, proceed to step S7; otherwise set $i = i + 1$ and return to S2;
S7: Calculate the auxiliary variable $Z^{k+1}$ of the $(k+1)$-th iteration by formula (13);
S8: Calculate the image $X^{k+1}$ to be reconstructed of the $(k+1)$-th iteration by formula (14);
S9: Update the Lagrange multiplier $z^{k+1}$ of the $(k+1)$-th iteration by formula (7);
S10: Initialize $i = 1$;
S11: Update the Lagrange multiplier $d_i^{k+1}$ of the $(k+1)$-th iteration by formula (8);
S12: Judge whether all $N_p$ Lagrange multipliers $d_i^{k+1}$ have been updated: when $i = N_p$, proceed to step S13; otherwise set $i = i + 1$ and return to S11;
S13: Judge whether the maximum iteration number is reached: if $k \ge K$ is satisfied, where $K$ is the maximum number of iterations, proceed to step S14; otherwise set $k = k + 1$ and return to S2;
S14: Output the final reconstructed image $X^{k+1}$.
The experimental results are as follows:
the NLR-SENSE method was compared with the LpTV-SENSE method in experiments to verify the reconstruction performance of the algorithm. All algorithms were implemented with MATLAB. All experiments were performed on a laptop configured to: intel core i7-5500U @2.4GHz dual core, 12GB memory and Windows 7 operating system (64 bits).
Reconstruction with the SENSE model relies on coil sensitivity information, but accurate sensitivity information is not readily available in practical applications. In the experiments, the coil sensitivities were estimated using the sensitivity estimation method provided by the ESPIRiT model, which relies on auto-calibration signals (ACS) and estimates the sensitivities by eigenvector decomposition; it is a relatively accurate sensitivity estimation method. It should be noted that any sensitivity estimation method may be used with the algorithm provided by the invention to obtain the sensitivity information.
A simulation experiment was performed using one brain magnetic resonance imaging dataset acquired with 8 channels, named dataset 1 (shown in FIG. 2). To generate the test data, the fully sampled dataset was retrospectively undersampled using a two-dimensional Poisson-disk undersampling mask with a 5× acceleration factor and a 24 × 24 fully sampled central auto-calibration region, yielding the undersampled k-space data; FIG. 3 illustrates the undersampling mask used.
In the following experiment, the quality of the reconstructed image was evaluated using the signal-to-noise ratio (SNR) as a numerical index; a higher SNR indicates better image reconstruction quality. In the experiment, the SNR index was calculated only within the region of interest (ROI) of the image, and the parameters of all compared algorithms were tuned for the best SNR. SNR (dB) is defined as:

$\mathrm{SNR} = 10 \log_{10} \left( \frac{\mathrm{Var}}{\mathrm{MSE}} \right)$

where $\mathrm{Var}$ denotes the variance of the reference image $X$ and $\mathrm{MSE}$ is the mean squared error between the reference image $X$ and the reconstructed image $\hat{X}$.
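The SNR index defined above can be computed as follows (an illustrative sketch; the function name is not from the original):

```python
import numpy as np

def snr_db(ref, recon):
    """SNR(dB) = 10*log10(Var / MSE): Var is the variance of the reference
    image, MSE the mean squared error of the reconstruction against it."""
    mse = np.mean(np.abs(recon - ref) ** 2)
    var = np.var(ref)   # works for real or complex reference images
    return 10.0 * np.log10(var / mse)
```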
For the fully sampled multi-coil dataset 1, undersampled data were obtained using the undersampling mask with the 5× acceleration factor, and all compared algorithms were used to reconstruct the undersampled data. To compare the reconstruction effects of the different algorithms, the experiments show the images reconstructed by each algorithm and the error maps between each reconstructed image and the original image (white dots indicate errors; the more white dots, the larger the error). Specifically, FIGS. 4 and 5 show the images reconstructed using the LpTV-SENSE algorithm and the NLR-SENSE algorithm, respectively, and FIGS. 6 and 7 show the error images corresponding to FIGS. 4 and 5. From the reconstructed images and the error images, it can be seen that the image reconstructed by the LpTV-SENSE method has more artifacts and larger errors, while the NLR-SENSE reconstruction has fewer artifacts and smaller reconstruction errors, so its reconstructed image is clearer and closer to the original image.
In addition, the SNR values of the images reconstructed by the two methods were analyzed. The SNR of the LpTV-SENSE reconstruction (FIG. 4) was 21.27 dB, and the SNR of the NLR-SENSE reconstruction (FIG. 5) was 22.71 dB. The NLR-SENSE method proposed by the invention thus achieves a higher SNR, which is consistent with the visual comparison of the reconstructed images and the error images.
In conclusion, in experiments with the different algorithms on the selected test set, the NLR-SENSE method based on non-local low-rank regularization is clearly superior to the LpTV-SENSE algorithm in both the SNR index and the visual comparison.
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.

Claims (1)

1. A method for improving sensitivity coding reconstruction based on non-local low rank constraint is characterized by comprising the following steps:
S0: initialization: let
X^0 = S^H F^H P_C^T Y, Z^0 = 0, z^0 = 0, D^0 = 0, d^0 = 0, k = 0;
wherein the superscript "0" denotes an initial value before iteration and the superscript "H" denotes the conjugate transpose operator; X ∈ C^(N×1) denotes the column-vectorized image to be reconstructed, N = N_h × N_v is the number of pixel points of the single-coil magnitude image, N_h and N_v are respectively the numbers of rows and columns of the single-coil magnitude image, and X^0 is the initial value of X; Y = [Y_1^T, ..., Y_C^T]^T ∈ C^(MC×1) denotes the column-vectorized undersampled multi-coil k-space data, wherein the superscript "T" denotes the transpose operator, M is the number of undersampled data points of a single coil's k-space, C is the number of receiving coils used for parallel imaging, the data of the c-th coil of Y is denoted Y_c ∈ C^(M×1) with coil index c = 1, ..., C, and Y_1 and Y_C respectively denote the 1st-coil and C-th-coil data of Y; F = I_C ⊗ (F_h ⊗ F_v) ∈ C^(NC×NC) denotes the multi-coil Fourier operator and P_C = I_C ⊗ P ∈ R^(MC×NC) denotes the multi-coil undersampling operator, wherein I_C is the C × C identity matrix, F_h ∈ C^(N_h×N_h) and F_v ∈ C^(N_v×N_v) are respectively the N_h-point and N_v-point discrete Fourier transform matrices of the two-dimensional transform, P ∈ R^(M×N) is the matrix that selects the sampled point locations from the k-space grid, and ⊗ denotes the Kronecker product; S = [S_1^T, ..., S_C^T]^T ∈ C^(NC×N) denotes the sensitivity operator, wherein the sensitivity information of the c-th coil is the diagonal matrix S_c ∈ C^(N×N), S_1 denoting the sensitivity information of the 1st coil and S_C the sensitivity information of the C-th coil; Z ∈ C^(NC×1) denotes an auxiliary variable with initial value Z^0, and z ∈ C^(NC×1) denotes the corresponding Lagrange multiplier with initial value z^0;
defining the block matching operator V_i: C^(N×1) → C^(n×m), which performs block matching on X as follows: X ∈ C^(N×1) is divided into N_p overlapping reference image blocks, each of size √n × √n, wherein N_p is the number of reference image blocks, n is the number of pixels of an image block, m is the number of similar blocks, and i = 1, ..., N_p is the block index; within a neighborhood search window of size s × s centered on a reference image block x_i ∈ C^(n×1), the m image blocks with the smallest Euclidean distance to the reference block are selected, and each selected image block is column-vectorized to form one column of the similar block group V_i(X) ∈ C^(n×m); D_i ∈ C^(n×m) denotes the auxiliary variable corresponding to the similar block group V_i(X) of the i-th reference image block, all {D_i} being horizontally concatenated in sequence into the matrix D = [D_1, ..., D_{N_p}] ∈ C^(n×mN_p) with initial value D^0; d_i ∈ C^(n×m) is the Lagrange multiplier corresponding to D_i, all {d_i} being horizontally concatenated in sequence into the matrix d = [d_1, ..., d_{N_p}] ∈ C^(n×mN_p) with initial value d^0;
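As a concrete illustration of the block-matching operator V_i just defined, the following NumPy sketch forms one similar-block group by an exhaustive scan of the search window; the function name, parameter defaults, and boundary handling are assumptions of this sketch, not part of the claim:

```python
import numpy as np

def block_match(X, ref_pos, b=4, m=8, s=16):
    """Form the similar-block group V_i(X) for one reference block.

    X: 2-D image; ref_pos: top-left (row, col) of the reference block;
    b: block side length (n = b*b pixels); m: number of similar blocks;
    s: side of the square search window centered on the reference block.
    Defaults are illustrative, not the patent's parameter values.
    """
    H, W = X.shape
    r0, c0 = ref_pos
    ref = X[r0:r0 + b, c0:c0 + b]
    # candidate top-left corners inside the s x s search window
    rows = range(max(0, r0 - s // 2), min(H - b, r0 + s // 2) + 1)
    cols = range(max(0, c0 - s // 2), min(W - b, c0 + s // 2) + 1)
    cands = []
    for r in rows:
        for c in cols:
            blk = X[r:r + b, c:c + b]
            cands.append((np.sum(np.abs(blk - ref) ** 2), r, c))
    cands.sort(key=lambda t: t[0])        # smallest Euclidean distance first
    picks = cands[:m]
    # column-vectorize each chosen block; stack as columns -> n x m group
    G = np.stack([X[r:r + b, c:c + b].reshape(-1) for _, r, c in picks], axis=1)
    pos = [(r, c) for _, r, c in picks]
    return G, pos
```

Calling it once per reference block (i = 1, ..., N_p) and caching the returned positions yields both V_i(X) and the index sets that V_i and its adjoint reuse until the next block-matching refresh in S2.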
S1: initialization: set i = 1;
S2: judge whether mod(k, T) = 0, wherein mod(·) denotes the remainder operation and T is the number of iterations between block-matching updates; if so, perform similar-block matching on the reconstruction X^k of the k-th iteration to obtain the similar block groups V_i(X^k); otherwise go to S3;
S3: perform singular value decomposition on V_i(X^k) + d_i^k/γ_1:
V_i(X^k) + d_i^k/γ_1 = Υ Σ Γ^H;
wherein Σ is the diagonal matrix of singular values, whose j-th diagonal element is the j-th singular value σ_j, j = 1, ..., min(m, n) being the singular value index, all singular values being collected into the row vector σ = [σ_1, ..., σ_{min(m,n)}]; Υ ∈ C^(n×n) is a unitary matrix whose columns are the left singular vectors, and Γ ∈ C^(m×m) is a unitary matrix whose columns are the right singular vectors; d_i^k denotes the Lagrange multiplier corresponding to the auxiliary variable D_i^k in the k-th iteration;
S4: calculate the weight vector w = [w_1, ..., w_{min(m,n)}], wherein w_1 and w_{min(m,n)} respectively denote the first and last values of w, and w is calculated as:
w_j = 1/(σ_j + ε), j = 1, ..., min(m, n);
wherein ε = 10^{-16} is used to ensure that the denominator is not 0;
S5: calculate the auxiliary variable D_i^{k+1} of the (k+1)-th iteration by the formula:
D_i^{k+1} = Υ diag(soft(σ, τw)) Γ^H;
wherein soft(σ, τw) = max(σ − τw, 0) is the soft-threshold operator applied element-wise, τ = μ/γ_1, and μ > 0 and γ_1 > 0 are parameters;
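Steps S3–S5 together amount to a weighted singular-value soft-thresholding of each shifted block group. A minimal NumPy sketch, assuming the common log-det-style weights w_j = 1/(σ_j + ε) (the claim fixes ε = 10^{-16} and the soft(σ, τw) form; the exact weight expression used here is an assumption):

```python
import numpy as np

def wsvt(B, tau, eps=1e-16):
    """Weighted singular-value soft-thresholding of one block group.

    B plays the role of V_i(X^k) + d_i^k / gamma_1 (an n x m matrix);
    tau = mu / gamma_1. The weights w_j = 1/(sigma_j + eps) shrink large
    singular values less, preserving the dominant low-rank structure
    (assumed weight formula, see lead-in).
    """
    U, sig, Vh = np.linalg.svd(B, full_matrices=False)  # S3: SVD
    w = 1.0 / (sig + eps)                  # S4: larger sigma -> smaller weight
    shrunk = np.maximum(sig - tau * w, 0)  # S5: soft(sigma, tau * w)
    return (U * shrunk) @ Vh               # Upsilon diag(.) Gamma^H
```

Because the weights decrease with σ_j, small (noise-dominated) singular values are zeroed while large ones are barely shrunk, which is the low-rank prior on each similar-block group.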
S6: judge whether all N_p auxiliary variables D_i^{k+1} have been computed: when i = N_p, proceed to step S7; otherwise set i = i + 1 and return to S2;
S7: calculate the auxiliary variable Z^{k+1} of the (k+1)-th iteration by the formula:
Z^{k+1} = F^H (P_C^T P_C + γ_2 I_{NC})^{-1} (P_C^T Y + F(γ_2 S X^k + z^k));
wherein the superscript "−1" denotes the matrix inversion operator, P_C^T P_C ∈ R^(NC×NC) is a diagonal matrix (so the inversion is element-wise), I_{NC} denotes the NC × NC identity matrix, z^k is the Lagrange multiplier corresponding to the auxiliary variable Z^k in the k-th iteration, and γ_2 > 0 is a parameter;
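A per-coil sketch of an S7-style update, under the assumptions that the Fourier operator is an orthonormal FFT and the sampling pattern is a binary mask, so that the normal matrix is diagonal in k-space and the matrix inverse reduces to element-wise division; the function name and array layout are illustrative:

```python
import numpy as np

def update_Z(X, z, Y, mask, sens, gamma2):
    """S7-style update assuming an orthonormal FFT and a 0/1 mask.

    X: (Nh, Nv) image estimate; z: (C, Nh, Nv) multiplier; Y: (C, Nh, Nv)
    zero-filled undersampled k-space; mask: (Nh, Nv) binary sampling
    pattern (so P^T P is diagonal); sens: (C, Nh, Nv) coil sensitivities.
    Solves (F^H P^T P F + gamma2 I) Z = F^H P^T Y + gamma2 S X + z
    coil-wise by moving to k-space, where the system is diagonal.
    """
    rhs = gamma2 * sens * X + z                        # image-domain part
    k_rhs = mask * Y + np.fft.fft2(rhs, norm="ortho")  # full right-hand side
    Zk = k_rhs / (mask + gamma2)                       # diagonal inverse
    return np.fft.ifft2(Zk, norm="ortho")
```

Sanity check of the design choice: at sampled locations the solution blends measured data with the current coil images; at unsampled locations it simply keeps γ_2 S X^k + z^k rescaled, so no explicit NC × NC inversion is ever formed.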
S8: calculate the image X^{k+1} to be reconstructed of the (k+1)-th iteration by the formula:
X^{k+1} = (γ_1 Σ_{i=1}^{N_p} V_i^* V_i + γ_2 S^H S)^{-1} (Σ_{i=1}^{N_p} V_i^* (γ_1 D_i^{k+1} − d_i^k) + S^H (γ_2 Z^{k+1} − z^k));
wherein V_i^*: C^(n×m) → C^(N×1) denotes the adjoint operator of V_i, which puts the blocks of an n × m matrix back into their corresponding positions in the image; Σ_{i=1}^{N_p} V_i^* V_i ∈ R^(N×N) is a diagonal matrix whose each diagonal element equals the number of times the corresponding pixel point appears in all the similar block groups {V_i(X^k)}; and S^H S ∈ C^(N×N) is a diagonal matrix, so the system matrix is inverted element-wise;
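The adjoint V_i^* and the occurrence-count diagonal Σ_i V_i^* V_i appearing in S8 can be sketched as a put-back-and-accumulate pass over the matched block positions; names and data layout are assumptions of this sketch:

```python
import numpy as np

def aggregate(groups, positions, shape, b):
    """Adjoint-style aggregation used in the S8 update.

    groups: list of n x m block groups (e.g. gamma1 * D_i^{k+1} - d_i^k);
    positions: per-group lists of (row, col) top-left corners from block
    matching; shape: image shape; b: block side length (n = b*b).
    Returns sum_i V_i^*(G_i) and the diagonal of sum_i V_i^* V_i,
    i.e. the per-pixel occurrence count.
    """
    num = np.zeros(shape, dtype=complex)   # sum_i V_i^*(G_i)
    cnt = np.zeros(shape)                  # diagonal of sum_i V_i^* V_i
    for G, pos in zip(groups, positions):
        for j, (r, c) in enumerate(pos):
            num[r:r + b, c:c + b] += G[:, j].reshape(b, b)  # put block back
            cnt[r:r + b, c:c + b] += 1                      # count overlap
    return num, cnt
```

With both Σ_i V_i^* V_i and S^H S diagonal, the S8 solve then reduces to an element-wise division of the accumulated numerator (plus the S^H(γ_2 Z^{k+1} − z^k) term) by γ_1·cnt plus the diagonal of γ_2·S^H S.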
S9: update the Lagrange multiplier z^{k+1} of the (k+1)-th iteration by the formula:
z^{k+1} = z^k + η_2 (S X^{k+1} − Z^{k+1});
wherein η_2 > 0 is the step-size parameter for updating the Lagrange multiplier z;
S10: initialization: set i = 1;
S11: update the Lagrange multiplier d_i^{k+1} of the (k+1)-th iteration by the formula:
d_i^{k+1} = d_i^k + η_1 (V_i(X^{k+1}) − D_i^{k+1});
wherein η_1 > 0 is the step-size parameter for updating the Lagrange multipliers {d_i};
S12: judge whether all N_p Lagrange multipliers d_i^{k+1} have been updated: when i = N_p, proceed to step S13; otherwise set i = i + 1 and return to S11;
S13: judge whether the maximum number of iterations K has been reached: if k = K, proceed to step S14; otherwise set k = k + 1 and return to S2;
S14: output the final reconstructed image X^{k+1}.
CN202111174817.7A 2021-10-09 2021-10-09 Improved sensitivity coding reconstruction method based on non-local low-rank constraint Active CN113892938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111174817.7A CN113892938B (en) 2021-10-09 2021-10-09 Improved sensitivity coding reconstruction method based on non-local low-rank constraint


Publications (2)

Publication Number Publication Date
CN113892938A true CN113892938A (en) 2022-01-07
CN113892938B CN113892938B (en) 2023-10-27

Family

ID=79190519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111174817.7A Active CN113892938B (en) 2021-10-09 2021-10-09 Improved sensitivity coding reconstruction method based on non-local low-rank constraint

Country Status (1)

Country Link
CN (1) CN113892938B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933683A (en) * 2015-06-09 2015-09-23 南昌大学 Non-convex low-rank reconstruction method for rapid magnetic resonance (MR) imaging
WO2018099321A1 (en) * 2016-11-30 2018-06-07 华南理工大学 Generalized tree sparse-based weighted nuclear norm magnetic resonance imaging reconstruction method
US20190086501A1 (en) * 2017-09-21 2019-03-21 Centre National De La Recherche Scientifique (Cnrs) Method and magnetic resonance apparatus for image reconstruction with trimmed autocalibrating k-space estimation based on structured matrix completion
US20190355157A1 (en) * 2018-05-21 2019-11-21 The Board Of Trustees Of The Leland Stanford Junior University Motion robust reconstruction of multi-shot diffusion-weighted images without phase estimation via locally low-rank regularization
CN112617798A (en) * 2020-12-31 2021-04-09 昆明理工大学 Parallel magnetic resonance imaging reconstruction method based on Lp norm combined total variation
CN112991483A (en) * 2021-04-26 2021-06-18 昆明理工大学 Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method


Also Published As

Publication number Publication date
CN113892938B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US11327137B2 (en) One-dimensional partial Fourier parallel magnetic resonance imaging method based on deep convolutional network
CN107274462B (en) Classified multi-dictionary learning magnetic resonance image reconstruction method based on entropy and geometric direction
Montalt-Tordera et al. Machine learning in magnetic resonance imaging: image reconstruction
WO2020134826A1 (en) Parallel magnetic resonance imaging method and related equipment
CN112991483B (en) Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method
Kelkar et al. Prior image-constrained reconstruction using style-based generative models
CN111754598B (en) Local space neighborhood parallel magnetic resonance imaging reconstruction method based on transformation learning
CN112819949B (en) Magnetic resonance fingerprint image reconstruction method based on structured low-rank matrix
Nakarmi et al. M-MRI: A manifold-based framework to highly accelerated dynamic magnetic resonance imaging
Ke et al. Deep manifold learning for dynamic MR imaging
CN109934884B (en) Iterative self-consistency parallel imaging reconstruction method based on transform learning and joint sparsity
Manimala et al. Convolutional neural network for sparse reconstruction of MR images interposed with gaussian noise
CN109920017B (en) Parallel magnetic resonance imaging reconstruction method of joint total variation Lp pseudo norm based on self-consistency of feature vector
Babu et al. Fast low rank column-wise compressive sensing for accelerated dynamic MRI
CN106991651B (en) Fast imaging method and system based on synthesis analysis deconvolution network
Xiao et al. SR-Net: a sequence offset fusion net and refine net for undersampled multislice MR image reconstruction
CN114820849A (en) Magnetic resonance CEST image reconstruction method, device and equipment based on deep learning
CN109188327B (en) Magnetic resonance image fast reconstruction method based on tensor product complex small compact framework
CN112617798A (en) Parallel magnetic resonance imaging reconstruction method based on Lp norm combined total variation
US20230342886A1 (en) Method and system for low-field mri denoising with a deep complex-valued convolutional neural network
Zong et al. Fast reconstruction of highly undersampled MR images using one and two dimensional principal component analysis
He et al. Dynamic MRI reconstruction exploiting blind compressed sensing combined transform learning regularization
Liu et al. High-Fidelity MRI Reconstruction Using Adaptive Spatial Attention Selection and Deep Data Consistency Prior
Bhatia et al. Fast reconstruction of accelerated dynamic MRI using manifold kernel regression
CN113892938B (en) Improved sensitivity coding reconstruction method based on non-local low-rank constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant