CN113892938A - Improved sensitivity encoding reconstruction method based on non-local low-rank constraint - Google Patents
- Publication number: CN113892938A
- Application number: CN202111174817.7A
- Authority: CN (China)
- Prior art keywords: coil, image, matrix, iteration, operator
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
Abstract
The invention relates to an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, and belongs to the technical field of magnetic resonance imaging. Parallel magnetic resonance imaging (MRI) reconstruction has been a research focus in recent years, and the sensitivity encoding (SENSE) model is a widely used parallel MRI reconstruction model that reconstructs images from multi-coil undersampled k-space data by directly using coil sensitivity information. Building on the SENSE reconstruction model, the invention provides a parallel MRI reconstruction algorithm based on a non-local low-rank (NLR) constraint on the image, named NLR-SENSE, which uses the weighted nuclear norm as a proxy function for the rank and efficiently solves the objective function with the alternating direction method of multipliers (ADMM). Compared with the LpTV-SENSE algorithm, which combines an Lp pseudo-norm total variation (TV) constraint, simulation results show that the NLR-SENSE algorithm obtains clearer reconstructed images and is an effective improvement of SENSE.
Description
Technical Field
The invention relates to an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, and belongs to the technical field of magnetic resonance imaging.
Background
Magnetic resonance imaging (MRI) is one of the few radiation-free medical diagnostic imaging methods available today and is widely used in clinical medicine. However, MRI suffers from long scan times and imaging artifacts, and so cannot always meet the requirements of efficient and accurate medical diagnosis. Research on fast magnetic resonance imaging and high-quality magnetic resonance reconstruction techniques is therefore of high practical significance.
Parallel imaging is a common method for accelerating MRI. It uses a multi-channel phased-array coil to acquire magnetic resonance signals simultaneously and exploits the spatial sensitivity differences of the coils to encode spatial information. Sensitivity encoding (SENSE) is a widely used parallel imaging technique that recovers image information by directly using the coil sensitivities.
The process of reconstructing an image from undersampled k-space data is often expressed as an objective optimization problem. Data reconstruction is usually an ill-posed problem, however, and a direct solution may exhibit severe aliasing artifacts. To obtain better reconstruction results, previous research has proposed adding a regularization term carrying prior information about the data to the objective optimization problem, which can effectively improve the performance of the reconstruction algorithm. Existing parallel MRI reconstruction algorithms based on the SENSE model include: the L1-SENSE algorithm, which adds an L1-norm regularization term and uses the sparsity of the image in a projection subspace (e.g. the wavelet domain) as prior information; the TV-SENSE algorithm, which adds a total variation (TV) regularization term and uses the local similarity of the image as prior information; and the LpTV-SENSE algorithm, which adds an Lp pseudo-norm total variation (LpTV) regularization term, where the Lp pseudo-norm can further improve performance. However, images reconstructed by these algorithms still contain noticeable artifacts, and their reconstruction performance still needs to be improved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, which can further improve the quality of a parallel magnetic resonance imaging reconstruction image.
The purpose of the invention is realized by the following technical scheme: an improved sensitivity encoding reconstruction method based on non-local low-rank constraint, comprising the following steps:
Where the superscript "0" denotes the initial value before iteration and the superscript "H" denotes the conjugate transpose operator. $X \in \mathbb{C}^N$ denotes the column-vectorized image to be reconstructed, $N = N_h \times N_v$ denotes the number of pixels of a single-coil magnitude image, $N_h$ and $N_v$ are the numbers of rows and columns of the single-coil magnitude image, respectively, and $X^0$ is the initial value of $X$. $Y = [Y_1^T, \ldots, Y_C^T]^T$ denotes the column-vectorized undersampled multi-coil k-space data, where the superscript "T" denotes the transpose operator, $M$ denotes the number of undersampled data points of a single coil's k-space, and $C$ denotes the number of receive coils used for parallel imaging. The data of the c-th coil of $Y$ is denoted $Y_c$, $c = 1, \ldots, C$ is the coil index, and $Y_1$ and $Y_C$ denote the 1st and C-th coil data of $Y$, respectively. $F = I_C \otimes (F_h \otimes F_v)$ denotes the Fourier operator and $U = I_C \otimes P$ denotes the undersampling operator, where $I_C$ is a $C \times C$ identity matrix, $F_h$ and $F_v$ are the $N_h$-point and $N_v$-point discrete Fourier transform matrices forming the two-dimensional transform, $P$ is a matrix that selects the sampled point locations from the k-space grid, and $\otimes$ denotes the Kronecker product. $S = [S_1^T, \ldots, S_C^T]^T$ denotes the sensitivity operator, where $S_c$, the sensitivity information of the c-th coil, is a diagonal matrix; $S_1$ denotes the sensitivity information of the 1st coil and $S_C$ that of the C-th coil. $Z$ denotes an auxiliary variable with initial value $Z^0$, and $z$ denotes the corresponding Lagrange multiplier with initial value $z^0$.
Define the block matching operator $V_i: \mathbb{C}^N \to \mathbb{C}^{n \times m}$, which performs block matching on $X$ as follows: divide $X \in \mathbb{C}^N$ into $N_p$ overlapping reference image blocks of size $\sqrt{n} \times \sqrt{n}$, where $N_p$ is the number of reference image blocks, $n$ denotes the number of pixels of an image block, $m$ is the number of similar blocks, and $i = 1, \ldots, N_p$ is the block index. For a reference image block, within a neighborhood search window of size $s \times s$ (e.g. 40 × 40), select the $m$ image blocks with the smallest Euclidean distance to the reference block, vectorize each image block into a column, and form the columns of the similar-block group $V_i(X)$. $D_i$ denotes the auxiliary variable corresponding to the similar-block group $V_i(X)$ of the i-th reference image block; all $\{D_i\}$ are horizontally concatenated in order into a matrix $D = [D_1, \ldots, D_{N_p}]$, with $D^0$ the initial value of $D$. $d_i$ is the Lagrange multiplier corresponding to $D_i$; all $\{d_i\}$ are horizontally concatenated in order into a matrix $d = [d_1, \ldots, d_{N_p}]$, with $d^0$ the initial value of $d$.
S1: Initialize; set i = 1;
S2: Judge whether mod(k, T) = 0; if so, perform similar-block matching on the k-th iteration reconstruction $X^k$ to obtain the similar-block groups $V_i(X^k)$; otherwise go to S3;
where mod(·) denotes the remainder calculation.
S3: Compute the singular value decomposition $\Upsilon \Sigma \Gamma^H$ of $V_i(X^k) + d_i^k$; wherein $\Sigma$ is the singular value diagonal matrix whose j-th diagonal element is the j-th singular value $\sigma_j$; $j = 1, \ldots, \min(m, n)$ is the singular value index; all singular values form the row vector $\sigma = [\sigma_1, \ldots, \sigma_{\min(m,n)}]$, where $\sigma_1$ and $\sigma_{\min(m,n)}$ denote the first and last singular values of $\sigma$, respectively; $\Upsilon$ is a unitary matrix whose columns are the left singular vectors, $\Gamma$ is a unitary matrix whose columns are the right singular vectors, and $d_i^k$ denotes the Lagrange multiplier corresponding to the auxiliary variable $D_i^k$ in the k-th iteration.
S4: calculating the weight w ═ w1,...,wmin(m,n)]Wherein w is1And wmin(m,n)Representing the first and last values of w, respectively. w is calculated as:
wherein ε is 10-16To ensure that the denominator is not 0.
S5: computing an auxiliary variable D for the (k + 1) th iterationk+1The calculation formula is as follows:
Dk+1=Υdiag(soft(σ,τw))ΓH;
wherein: soft (σ, τ w) max (σ - τ w,0) is a soft threshold operator, τ μ/γ1Mu > 0 and gamma1> 0 is a parameter.
S6: judging whether N is completedpAnWhen i is equal to NpThe routine proceeds to step S7; otherwise, returning to S2 after i is equal to i + 1;
S7: Compute the auxiliary variable $Z^{k+1}$ of the (k+1)-th iteration by:

$Z^{k+1} = F^H \left(U^H U + \gamma_2 I\right)^{-1}\left[U^H Y + \gamma_2 F (S X^k + z^k)\right]$;

where the superscript "−1" denotes the matrix inversion operator, $U^H U$ is a diagonal matrix, $I$ denotes the identity matrix, $z^k$ is the Lagrange multiplier corresponding to the auxiliary variable $Z^k$ in the k-th iteration, and $\gamma_2 > 0$ is a parameter.
S8: Compute the image to be reconstructed $X^{k+1}$ of the (k+1)-th iteration by:

$X^{k+1} = \left(\gamma_2 S^H S + \gamma_1 \sum_{i=1}^{N_p} V_i^* V_i\right)^{-1}\left[\gamma_2 S^H (Z^{k+1} - z^k) + \gamma_1 \sum_{i=1}^{N_p} V_i^* (D_i^{k+1} - d_i^k)\right]$;

where $V_i^*: \mathbb{C}^{n \times m} \to \mathbb{C}^N$ denotes the adjoint operator of $V_i$, which puts the blocks of $D_i$ back to their corresponding positions in the image; $\sum_i V_i^* V_i$ is a diagonal matrix whose each diagonal element equals the number of times the corresponding pixel appears in all similar-block groups $\{V_i(X^k)\}$; and $S^H S$ is a diagonal matrix.
S9: Update the Lagrange multiplier $z^{k+1}$ of the (k+1)-th iteration by:

$z^{k+1} = z^k + \eta_2 (S X^{k+1} - Z^{k+1})$;

where $\eta_2 > 0$ is the parameter for updating the Lagrange multiplier $z$;
S10: Initialize; set i = 1;
S11: Update the Lagrange multiplier $d_i^{k+1}$ of the (k+1)-th iteration by:

$d_i^{k+1} = d_i^k + \eta_1 \left(V_i(X^{k+1}) - D_i^{k+1}\right)$;

where $\eta_1 > 0$ is the parameter for updating the Lagrange multipliers $\{d_i\}$;
S12: Judge whether all $N_p$ similar-block groups have been processed; when i = $N_p$, proceed to step S13; otherwise set i = i + 1 and return to S11;
S13: Judge whether the maximum iteration number is reached; if k = K is satisfied, proceed to step S14; otherwise set k = k + 1 and return to S2;
S14: Output the final reconstructed image $X^{k+1}$.
The invention has the following beneficial effects: the SENSE model is a parallel MRI reconstruction algorithm that directly uses coil sensitivity information and converts the reconstruction of an image from multi-coil undersampled k-space data into the solution of a least-squares optimization problem. The invention combines a non-local low-rank (NLR) regularization constraint with this model to provide a high-quality improved SENSE algorithm, called the NLR-SENSE algorithm, which can further improve the performance of parallel MRI reconstruction. For the objective optimization problem of the proposed algorithm, the invention uses the alternating direction method of multipliers (ADMM) and solves it using the weighted nuclear norm as a proxy function for the rank. Simulation experiments show that, compared with the LpTV-SENSE algorithm, the proposed algorithm achieves higher parallel magnetic resonance image reconstruction performance.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is an image of a brain (i.e., dataset 1) obtained using a full sampling of eight channel receive coils;
FIG. 3 is a two-dimensional Poisson-disk undersampling mask with a 5× acceleration factor and a 24 × 24 fully sampled central auto-calibration region;
FIGS. 4 and 5 show images reconstructed from dataset 1 undersampled with the two-dimensional Poisson-disk mask of FIG. 3 using the LpTV-SENSE algorithm and the NLR-SENSE algorithm, respectively;
FIGS. 6 and 7 show the error images corresponding to FIGS. 4 and 5, respectively, i.e. the error images resulting from the LpTV-SENSE and NLR-SENSE reconstructions.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the attached drawings.
Example 1: the invention provides an improved sensitivity encoding reconstruction method based on non-local low-rank constraint based on a SENSE framework, which comprises the following steps:
1) SENSE framework:
Suppose $X \in \mathbb{C}^N$ denotes the column-vectorized image to be reconstructed, where $N = N_h \times N_v$ is the number of pixels of a single-coil magnitude image and $N_h$ and $N_v$ are its numbers of rows and columns, respectively. $Y = [Y_1^T, \ldots, Y_C^T]^T$ denotes the column-vectorized undersampled multi-coil k-space data, where the superscript "T" denotes the transpose operator, $M$ denotes the number of undersampled data points of a single coil's k-space, and $C$ denotes the number of receive coils used for parallel imaging. The data of the c-th coil of $Y$ is denoted $Y_c$, with $c = 1, \ldots, C$ the coil index; for example, $Y_1$ and $Y_C$ denote the 1st and C-th coil data, respectively. $F_h$ and $F_v$ are the $N_h$-point and $N_v$-point discrete Fourier transform matrices forming the two-dimensional transform, and $P$ is a matrix that selects the sampled point locations from the k-space grid. $F = I_C \otimes (F_h \otimes F_v)$ and $U = I_C \otimes P$ denote the Fourier operator and the undersampling operator, respectively. $\otimes$ denotes the Kronecker product and $I_C$ is a $C \times C$ identity matrix.
In the SENSE reconstruction model, the sensitivity operator is $S = [S_1^T, \ldots, S_C^T]^T$, where $S_c$ denotes the sensitivity information of the c-th coil and is a diagonal matrix; $S_1$ denotes the sensitivity information of the 1st coil and $S_C$ that of the C-th coil. The relationship between the image $X$ to be reconstructed and the undersampled k-space data $Y$ can then be written as:

$Y = UFSX \quad (1)$
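As an illustration, the encoding relation $Y = UFSX$ can be applied coil-by-coil without ever forming the large matrices explicitly. The following Python/NumPy sketch is an assumption for clarity (the patent's experiments used MATLAB; function names, array layout, and the orthonormal FFT convention are choices made here), applying the forward operator $UFS$ and its adjoint $S^H F^H U^H$:

```python
import numpy as np

def sense_forward(x, sens, mask):
    """Apply the SENSE encoding operator E = U F S coil-wise.

    x    : (Nh, Nv) complex image
    sens : (C, Nh, Nv) coil sensitivity maps (diagonals of the S_c)
    mask : (Nh, Nv) binary k-space sampling mask (diagonal of U^H U)
    Returns (C, Nh, Nv) undersampled multi-coil k-space data.
    """
    coil_imgs = sens * x[None, :, :]                # S X
    kspace = np.fft.fft2(coil_imgs, norm="ortho")   # F S X
    return mask[None, :, :] * kspace                # U F S X

def sense_adjoint(y, sens, mask):
    """Adjoint operator E^H = S^H F^H U^H."""
    imgs = np.fft.ifft2(mask[None, :, :] * y, norm="ortho")
    return np.sum(np.conj(sens) * imgs, axis=0)     # sum over coils
```

A quick sanity check for such operators is the adjoint identity $\langle Ex, y\rangle = \langle x, E^H y\rangle$, which must hold for random inputs.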
the solution of the SENSE model is often written as the following least squares problem:
2) Algorithm derivation:
Although SENSE algorithms combined with L1, TV, and LpTV regularization can improve reconstruction quality to some extent, images reconstructed by these algorithms still contain many artifacts, and there is still considerable room for improving the reconstruction performance of the SENSE technique. To further improve reconstruction quality, the invention combines non-local low-rank (NLR) regularization with the SENSE model to obtain the NLR-SENSE model, leading to the following optimization problem:

$\hat{X} = \arg\min_{X} \frac{1}{2}\left\|UFSX - Y\right\|_2^2 + \mu \sum_{i=1}^{N_p} \mathrm{rank}\left(V_i(X)\right) \quad (3)$
where $\frac{1}{2}\|UFSX - Y\|_2^2$ is the data fidelity term, $\mu \sum_i \mathrm{rank}(V_i(X))$ is the regularization term, $\mathrm{rank}(V_i(X))$ denotes the rank of $V_i(X)$, and $\mu > 0$ is the regularization parameter. $V_i: \mathbb{C}^N \to \mathbb{C}^{n \times m}$ is the block matching operator; block matching on $X$ proceeds as follows: divide $X$ into $N_p$ overlapping reference image blocks of size $\sqrt{n} \times \sqrt{n}$, where $N_p$ denotes the number of reference image blocks, $n$ denotes the number of pixels of an image block, and $i = 1, \ldots, N_p$ is the block index. For a reference image block, within a neighborhood search window (for example, a local window of size 40 × 40), find the $m$ image blocks with the smallest Euclidean distance to the reference block, where $m$ is the number of similar blocks; vectorize each block into a column and form the columns of the similar-block group $V_i(X)$.
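A minimal sketch of the block-matching step for a single reference block, in Python/NumPy. This is an illustrative assumption, not the patent's implementation: the block side length, window size, and exhaustive window scan are choices made here for clarity.

```python
import numpy as np

def block_match(X, i_pos, n_side=6, m=30, search=40):
    """Build the similar-block group V_i(X) for one reference block.

    X      : (Nh, Nv) image
    i_pos  : (row, col) top-left corner of the reference block
    n_side : block side length, i.e. sqrt(n)
    m      : number of similar blocks kept
    search : side length s of the neighborhood search window
    Returns an (n, m) matrix whose columns are vectorized similar blocks,
    sorted by increasing Euclidean distance to the reference block.
    """
    Nh, Nv = X.shape
    r0, c0 = i_pos
    ref = X[r0:r0 + n_side, c0:c0 + n_side].ravel()
    half = search // 2
    # candidate top-left corners inside the s x s window, clipped to the image
    rows = range(max(0, r0 - half), min(Nh - n_side, r0 + half) + 1)
    cols = range(max(0, c0 - half), min(Nv - n_side, c0 + half) + 1)
    cands = []
    for r in rows:
        for c in cols:
            blk = X[r:r + n_side, c:c + n_side].ravel()
            cands.append((np.linalg.norm(blk - ref), blk))
    cands.sort(key=lambda t: t[0])   # smallest Euclidean distance first
    return np.stack([blk for _, blk in cands[:m]], axis=1)
```

The reference block itself has distance zero, so it always appears as the first column of the group.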
Introduce the auxiliary variables $Z = SX$ and $D_i = V_i(X)$, $i = 1, \ldots, N_p$, and the corresponding Lagrange multipliers $z$ and $d_i$. Using the ADMM technique, problem (3) can be transformed into the following sub-problems, solved with alternating updates:

$D_i^{k+1} = \arg\min_{D_i} \; \mu\,\mathrm{rank}(D_i) + \frac{\gamma_1}{2}\left\|V_i(X^k) + d_i^k - D_i\right\|_F^2, \quad i = 1, \ldots, N_p \quad (4)$

$Z^{k+1} = \arg\min_{Z} \; \frac{1}{2}\left\|UFZ - Y\right\|_2^2 + \frac{\gamma_2}{2}\left\|S X^k + z^k - Z\right\|_2^2 \quad (5)$

$X^{k+1} = \arg\min_{X} \; \frac{\gamma_2}{2}\left\|SX - Z^{k+1} + z^k\right\|_2^2 + \sum_{i=1}^{N_p}\frac{\gamma_1}{2}\left\|V_i(X) - D_i^{k+1} + d_i^k\right\|_F^2 \quad (6)$
$z^{k+1} = z^k + \eta_2 \left(S X^{k+1} - Z^{k+1}\right) \quad (7)$

$d_i^{k+1} = d_i^k + \eta_1 \left(V_i(X^{k+1}) - D_i^{k+1}\right), \quad i = 1, \ldots, N_p \quad (8)$
In formulas (4)–(8), the superscripts "k+1" and "k" of a variable denote the (k+1)-th and k-th iterations of the algorithm. In formula (4), $\{D_i\}$ contains $N_p$ sub-matrices; all $\{D_i\}$ are horizontally concatenated in order into a matrix $D = [D_1, \ldots, D_{N_p}]$; $D_i^{k+1}$ is the auxiliary variable of the (k+1)-th iteration, $d_i^k$ is the Lagrange multiplier corresponding to $D_i^k$ in the k-th iteration, and $\gamma_1 > 0$ is a parameter. In formula (5), $Z^{k+1}$ is the auxiliary variable of the (k+1)-th iteration, $z^k$ is the Lagrange multiplier corresponding to $Z^k$ in the k-th iteration, and $\gamma_2 > 0$ is a parameter. In formula (6), $X^{k+1}$ is the image to be reconstructed at the (k+1)-th iteration. In formula (7), $z^{k+1}$ is the Lagrange multiplier corresponding to $Z^{k+1}$ in the (k+1)-th iteration, and $\eta_2 > 0$ is the parameter that updates the Lagrange multiplier $z$. In formula (8), $\{d_i\}$ contains $N_p$ sub-matrices; all $\{d_i\}$ are concatenated in order into a matrix $d = [d_1, \ldots, d_{N_p}]$; $d_i^{k+1}$ is the Lagrange multiplier corresponding to $D_i^{k+1}$ in the (k+1)-th iteration, and $\eta_1 > 0$ is the parameter that updates the Lagrange multipliers $d_i$.
1) Solving the sub-problem with respect to $\{D_i\}$: the low-rank minimization problem (4) is a non-deterministic polynomial-hard (NP-hard) problem. Using the weighted nuclear norm as a proxy function for the rank, the problem can be rewritten as:

$D_i^{k+1} = \arg\min_{D_i} \; \tau \left\|D_i\right\|_{w,*} + \frac{1}{2}\left\|V_i(X^k) + d_i^k - D_i\right\|_F^2 \quad (9)$
where $\tau = \mu/\gamma_1$ and $\|\cdot\|_{w,*}$ denotes the weighted nuclear norm, defined as:

$\|A\|_{w,*} = \sum_j w_j \sigma_j \quad (10)$

where $\sigma_j$ is the j-th singular value of the matrix $A$ and $w_j$ denotes the weight corresponding to $\sigma_j$.
Suppose $\Upsilon \Sigma \Gamma^H$ is the singular value decomposition (SVD) of $V_i(X^k) + d_i^k$, where $\Sigma$ is the singular value diagonal matrix and the superscript "H" denotes the conjugate transpose operator. All singular values form the row vector $\sigma = [\sigma_1, \ldots, \sigma_{\min(m,n)}]$, where $\sigma_j$ denotes the j-th singular value, $j = 1, \ldots, \min(m, n)$ is the singular value index, and $\sigma_1$ and $\sigma_{\min(m,n)}$ denote the first and last singular values, respectively. $\Upsilon$ and $\Gamma$ are unitary matrices whose columns are the left and right singular vectors, respectively. The solution of the weighted nuclear norm minimization problem (9) is then:
$D_i^{k+1} = \Upsilon\,\mathrm{soft}(\Sigma, \tau w)\,\Gamma^H \quad (11)$
where soft (Σ, τ w) is max (Σ -diag (τ w),0) denotes a soft threshold operator, and w is [ w ═ w [ [ w ]1,w2,...,wmin(m,n)]To representWeight corresponding to the singular value of, w1And wmin(m,n)Respectively the 1 st and last value of w. w is calculated as:
wherein ε is 10-16To ensure that the denominator is not 0.
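The weighted singular value shrinkage of equations (11)–(12) can be sketched in a few lines of Python/NumPy. This is an illustrative sketch: the weight rule $w_j = 1/(\sigma_j + \varepsilon)$ follows the reconstruction of (12) above, so large singular values (which carry the shared structure of the similar-block group) are shrunk less than small, noise-dominated ones.

```python
import numpy as np

def wnn_shrink(A, tau, eps=1e-16):
    """Weighted nuclear norm soft-thresholding (equations (11)-(12)).

    Approximately solves min_D tau*||D||_{w,*} + 0.5*||A - D||_F^2
    with weights w_j = 1/(sigma_j + eps).
    """
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    w = 1.0 / (s + eps)                    # equation (12)
    s_shr = np.maximum(s - tau * w, 0.0)   # soft(sigma, tau * w)
    return (U * s_shr) @ Vh                # equation (11)
```

For example, shrinking $\mathrm{diag}(5, 1)$ with $\tau = 1$ gives weights $(0.2, 1)$, so the large singular value only drops to 4.8 while the small one is zeroed, reducing the rank.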
2) Solving the sub-problem with respect to $Z$: according to equation (5), the solution for $Z$ is:

$Z^{k+1} = F^H \left(U^H U + \gamma_2 I\right)^{-1}\left[U^H Y + \gamma_2 F (S X^k + z^k)\right] \quad (13)$

where the superscript "−1" denotes the matrix inversion operator, $U^H U$ is a diagonal matrix, and $I$ denotes the identity matrix.
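Because $U^H U$ is diagonal, the inversion in the Z-update reduces to an element-wise division in k-space. The sketch below is an assumption about the concrete array layout (coil-stacked 2-D arrays, orthonormal FFTs) rather than the patent's implementation:

```python
import numpy as np

def update_Z(Y, SX, z, mask, gamma2):
    """Z-subproblem: Z = F^H (U^H U + g2 I)^{-1} [U^H Y + g2 F(SX + z)].

    Y      : (C, Nh, Nv) undersampled k-space data (zeros where unsampled)
    SX     : (C, Nh, Nv) sensitivity-weighted image S X
    z      : (C, Nh, Nv) Lagrange multiplier
    mask   : (Nh, Nv) binary sampling mask (the diagonal of U^H U)
    gamma2 : penalty parameter (> 0)
    """
    rhs = mask[None] * Y + gamma2 * np.fft.fft2(SX + z, norm="ortho")
    # (U^H U + gamma2 I)^{-1} is just a per-frequency division
    return np.fft.ifft2(rhs / (mask[None] + gamma2), norm="ortho")
```

As a sanity check, with an all-zero mask (no data acquired) the update collapses to $Z = SX + z$, i.e. the proximal point of the penalty term alone.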
3) Solving the sub-problem with respect to $X$: according to equation (6), the solution for $X$ is given by:

$X^{k+1} = \left(\gamma_2 S^H S + \gamma_1 \sum_{i=1}^{N_p} V_i^* V_i\right)^{-1}\left[\gamma_2 S^H (Z^{k+1} - z^k) + \gamma_1 \sum_{i=1}^{N_p} V_i^* (D_i^{k+1} - d_i^k)\right] \quad (14)$

where $V_i^*: \mathbb{C}^{n \times m} \to \mathbb{C}^N$ is the adjoint operator of $V_i$, which puts the blocks of $D_i$ back to their corresponding positions in the image. $\sum_i V_i^* V_i$ is a diagonal matrix whose each diagonal element equals the number of times the corresponding pixel appears in all similar-block groups $\{V_i(X^k)\}$, and $S^H S$ is a diagonal matrix.
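The adjoint $\sum_i V_i^*$ is a scatter-and-accumulate operation, and the diagonal of $\sum_i V_i^* V_i$ is simply a per-pixel occurrence count. A hedged Python/NumPy sketch (the data layout — one list of block groups plus their top-left positions — is an assumption for illustration):

```python
import numpy as np

def aggregate_blocks(D_groups, positions, shape, n_side):
    """Apply sum_i V_i^*: scatter each vectorized block back to its image
    position and accumulate. Also returns the per-pixel occurrence counts,
    i.e. the diagonal of B = sum_i V_i^* V_i.

    D_groups  : list of (n, m) similar-block groups
    positions : list of per-group lists of (row, col) top-left corners
    shape     : (Nh, Nv) image shape
    n_side    : block side length sqrt(n)
    """
    acc = np.zeros(shape, dtype=complex)
    cnt = np.zeros(shape)
    for D, pos in zip(D_groups, positions):
        for col, (r, c) in zip(D.T, pos):   # one similar block per column
            acc[r:r + n_side, c:c + n_side] += col.reshape(n_side, n_side)
            cnt[r:r + n_side, c:c + n_side] += 1.0
    return acc, cnt
```

Because both $S^H S$ and the count matrix are diagonal, the inversion in (14) is again an element-wise division over pixels.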
By cyclically and alternately solving (4)–(8), the final reconstructed image is obtained, yielding a new SENSE parallel MRI reconstruction algorithm with NLR regularization, namely the NLR-SENSE algorithm. Because block matching has a high computational complexity, the NLR-SENSE algorithm updates the block matching operator only once every T iterations; that is, block matching is performed when mod(k, T) = 0, where mod(·) denotes the remainder operation.
The specific flow is shown in fig. 1, wherein the steps are as follows:
Where the superscript "0" denotes the initial value before iteration: $X^0$ is the initial value of the image $X$ to be reconstructed, $Z^0$ and $D^0$ are the initial values of the auxiliary variables $Z$ and $D$, and $z^0$ and $d^0$ are the initial values of the Lagrange multipliers $z$ and $d$ corresponding to $Z^0$ and $D^0$, respectively.
S1: Initialize; set i = 1;
S2: Judge whether mod(k, T) = 0; if so, perform similar-block matching on the k-th iteration reconstruction $X^k$ to obtain the similar-block groups $V_i(X^k)$; otherwise go to S3;
S3: Compute the singular value decomposition $\Upsilon \Sigma \Gamma^H$ of $V_i(X^k) + d_i^k$;
S4: Calculate the weight $w$ by equation (12);
S5: Compute the auxiliary variable $D_i^{k+1}$ of the (k+1)-th iteration by equation (11);
S6: Judge whether all $N_p$ similar-block groups have been processed; when i = $N_p$, proceed to step S7; otherwise set i = i + 1 and return to S2;
S7: Compute the auxiliary variable $Z^{k+1}$ of the (k+1)-th iteration by equation (13);
S8: Compute the image to be reconstructed $X^{k+1}$ of the (k+1)-th iteration by equation (14);
S9: Update the Lagrange multiplier $z^{k+1}$ of the (k+1)-th iteration by equation (7);
S10: Initialize; set i = 1;
S11: Update the Lagrange multiplier $d_i^{k+1}$ of the (k+1)-th iteration by equation (8);
S12: Judge whether all $N_p$ similar-block groups have been processed; when i = $N_p$, proceed to step S13; otherwise set i = i + 1 and return to S11;
S13: Judge whether the maximum iteration number is reached; if k = K is satisfied, proceed to step S14; otherwise set k = k + 1 and return to S2;
S14: Output the final reconstructed image $X^{k+1}$.
The experimental results are as follows:
The NLR-SENSE method was compared with the LpTV-SENSE method in experiments to verify its reconstruction performance. All algorithms were implemented in MATLAB, and all experiments were performed on a laptop with an Intel Core i7-5500U @ 2.4 GHz dual-core CPU, 12 GB of memory, and the 64-bit Windows 7 operating system.
Reconstruction with the SENSE model relies on coil sensitivity information, but accurate sensitivity information is not readily available in practical applications. The coil sensitivities were estimated with the sensitivity estimation method of the ESPIRiT model, which relies on auto-calibration signals (ACS) and estimates the sensitivities by eigenvector decomposition; it is a relatively accurate sensitivity estimation method. It should be noted that any sensitivity estimation method may be used with the algorithm provided by the invention to obtain the sensitivity information.
A simulation experiment was performed using one brain MRI dataset acquired with 8 channels, named dataset 1 (shown in FIG. 2). To generate the test dataset, the fully sampled data were retrospectively undersampled using a two-dimensional Poisson-disk undersampling mask with a 5× acceleration factor and a 24 × 24 fully sampled central auto-calibration region; FIG. 3 illustrates the undersampling mask used.
In the following experiment, the quality of the reconstructed images was evaluated using the signal-to-noise ratio (SNR) as a numerical index; a higher SNR indicates better reconstruction quality. The SNR was calculated only within the region of interest (ROI) of the image, and the parameters of all compared algorithms were tuned for optimal SNR. SNR (dB) is defined as:
$\mathrm{SNR} = 10 \log_{10}\left(\frac{\mathrm{Var}}{\mathrm{MSE}}\right)$

where Var denotes the variance of the reference image $X$ and MSE denotes the mean square error between the reference image $X$ and the reconstructed image $\hat{X}$.
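The SNR index above can be computed as follows (a sketch; the ROI restriction is assumed to be supplied as a boolean mask, and `np.abs` handles complex images):

```python
import numpy as np

def snr_db(ref, rec, roi=None):
    """SNR in dB between a reference image and a reconstruction,
    optionally restricted to a region of interest (boolean mask)."""
    if roi is not None:
        ref, rec = ref[roi], rec[roi]
    var = np.var(ref)                        # variance of the reference
    mse = np.mean(np.abs(ref - rec) ** 2)    # mean square error
    return 10.0 * np.log10(var / mse)
```

For instance, a reference with unit variance reconstructed with a uniform error of 0.1 gives MSE = 0.01 and hence an SNR of 20 dB.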
Undersampled data were acquired from the fully sampled multi-coil dataset 1 using the undersampling mask with a 5× acceleration factor, and all compared algorithms were used to reconstruct the undersampled data. To compare the reconstruction effects of the different algorithms, the experiment shows the reconstructed images and the error maps between each reconstruction and the original image (white dots indicate errors; the more white dots, the larger the error). Specifically, FIGS. 4 and 5 show the images reconstructed using the LpTV-SENSE algorithm and the NLR-SENSE algorithm, respectively, and FIGS. 6 and 7 show the corresponding error images. From the reconstructed and error images it can be seen that the image reconstructed by the LpTV-SENSE method has more artifacts and larger errors, while the NLR-SENSE reconstruction has fewer artifacts and smaller errors, so its reconstructed image is clearer and closer to the original image.
In addition, the experiment also compared the SNR values of the images reconstructed by the two methods. The SNR of the LpTV-SENSE reconstruction (FIG. 4) was 21.27 dB, while the SNR of the NLR-SENSE reconstruction (FIG. 5) was 22.71 dB. The proposed NLR-SENSE method thus achieves a higher SNR, consistent with the visual comparison of the reconstructed and error images.
In conclusion, across the experiments on the selected test set, the NLR-SENSE method based on non-local low-rank regularization is clearly superior to the LpTV-SENSE algorithm in both the SNR evaluation index and visual comparison.
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.
Claims (1)
1. An improved sensitivity encoding reconstruction method based on non-local low-rank constraint, characterized by comprising the following steps:
Wherein the superscript "0" denotes the initial value before iteration and the superscript "H" denotes the conjugate transpose operator; $X \in \mathbb{C}^N$ denotes the column-vectorized image to be reconstructed, $N = N_h \times N_v$ denotes the number of pixels of a single-coil magnitude image, $N_h$ and $N_v$ are respectively the numbers of rows and columns of the single-coil magnitude image, and $X^0$ is the initial value of $X$; $Y = [Y_1^T, \ldots, Y_C^T]^T$ denotes the column-vectorized undersampled multi-coil k-space data, the superscript "T" denotes the transpose operator, $M$ denotes the number of undersampled data points of a single coil's k-space, $C$ denotes the number of receive coils used for parallel imaging, the data of the c-th coil of $Y$ is denoted $Y_c$, $c = 1, \ldots, C$ denotes the coil index, and $Y_1$ and $Y_C$ respectively denote the 1st and C-th coil data of $Y$; $F = I_C \otimes (F_h \otimes F_v)$ denotes the Fourier operator and $U = I_C \otimes P$ denotes the undersampling operator, $I_C$ is a $C \times C$ identity matrix, $F_h$ and $F_v$ are respectively the $N_h$-point and $N_v$-point discrete Fourier transform matrices forming the two-dimensional transform, $P$ denotes a matrix that selects the sampled point locations from the k-space grid, and $\otimes$ denotes the Kronecker product; $S = [S_1^T, \ldots, S_C^T]^T$ denotes the sensitivity operator, the sensitivity information of the c-th coil is denoted $S_c$, which is a diagonal matrix, $S_1$ denotes the sensitivity information of the 1st coil and $S_C$ denotes the sensitivity information of the C-th coil; $Z$ denotes an auxiliary variable and $Z^0$ is the initial value of $Z$; $z$ denotes the corresponding Lagrange multiplier and $z^0$ is the initial value of $z$;
defining the block matching operator $V_i: \mathbb{C}^N \to \mathbb{C}^{n \times m}$, which performs block matching on $X$ as follows: divide $X \in \mathbb{C}^N$ into $N_p$ overlapping reference image blocks of size $\sqrt{n} \times \sqrt{n}$, where $N_p$ is the number of reference image blocks, $n$ denotes the number of pixels of an image block, $m$ is the number of similar blocks, and $i = 1, \ldots, N_p$ is the block index; for a reference image block, within a neighborhood search window of size $s \times s$, select the $m$ image blocks with the smallest Euclidean distance to the reference block, vectorize each image block into a column, and form the columns of the similar-block group $V_i(X)$; $D_i$ denotes the auxiliary variable corresponding to the similar-block group $V_i(X)$ of the i-th reference image block, all $\{D_i\}$ are horizontally concatenated in order into a matrix $D = [D_1, \ldots, D_{N_p}]$, and $D^0$ is the initial value of $D$; $d_i$ is the Lagrange multiplier corresponding to $D_i$, all $\{d_i\}$ are horizontally concatenated in order into a matrix $d = [d_1, \ldots, d_{N_p}]$, and $d^0$ is the initial value of $d$;
S1: Initialize; set i = 1;
S2: Judge whether mod(k, T) = 0; if so, perform similar-block matching on the k-th iteration reconstruction $X^k$ to obtain the similar-block groups $V_i(X^k)$; otherwise go to S3;
where mod(·) denotes the remainder calculation;
S3: Compute the singular value decomposition $\Upsilon \Sigma \Gamma^H$ of $V_i(X^k) + d_i^k$; wherein $\Sigma$ is the singular value diagonal matrix whose j-th diagonal element is the j-th singular value $\sigma_j$, $j = 1, \ldots, \min(m, n)$ is the singular value index, all singular values form the row vector $\sigma = [\sigma_1, \ldots, \sigma_{\min(m,n)}]$, $\sigma_1$ and $\sigma_{\min(m,n)}$ respectively denote the first and last singular values of $\sigma$, $\Upsilon$ is a unitary matrix whose columns are the left singular vectors, $\Gamma$ is a unitary matrix whose columns are the right singular vectors, and $d_i^k$ denotes the Lagrange multiplier corresponding to the auxiliary variable $D_i^k$ in the k-th iteration;
S4: Calculate the weight vector $w = [w_1, \ldots, w_{\min(m,n)}]$, where $w_1$ and $w_{\min(m,n)}$ respectively denote the first and last values of $w$; $w$ is calculated as $w_j = 1/(\sigma_j + \varepsilon)$, where $\varepsilon = 10^{-16}$ is used to ensure that the denominator is not 0;
S5: Compute the auxiliary variable $D_i^{k+1}$ of the (k+1)-th iteration by:

$D_i^{k+1} = \Upsilon\,\mathrm{diag}(\mathrm{soft}(\sigma, \tau w))\,\Gamma^H$;

wherein $\mathrm{soft}(\sigma, \tau w) = \max(\sigma - \tau w, 0)$ is the soft-threshold operator, $\tau = \mu/\gamma_1$, and $\mu > 0$ and $\gamma_1 > 0$ are parameters;
S6: Judge whether all $N_p$ similar-block groups have been processed; when i = $N_p$, proceed to step S7; otherwise set i = i + 1 and return to S2;
S7: calculate the auxiliary variable Z^(k+1) for the (k+1)-th iteration by the following formula:
Z^(k+1) = F^H (M + γ_2 I)^(-1) [Y + F(γ_2 S X^k + z^k)];
wherein the superscript "-1" denotes matrix inversion, F is the Fourier transform operator, Y is the acquired undersampled k-space data, M is the diagonal undersampling-mask matrix, I denotes the identity matrix, z^k is the Lagrange multiplier corresponding to the auxiliary variable Z^k in the k-th iteration, and γ_2 > 0 is a parameter; since M + γ_2 I is diagonal, the inversion is a pointwise division in k-space;
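A sketch of this coil-image update follows, assuming the S7 solve takes the standard ADMM least-squares form (F^H M F + γ_2 I) Z = F^H Y + γ_2 S X^k + z^k with a 0/1 diagonal sampling mask M, which is consistent with the matrix inversion, diagonal matrix, and identity matrix mentioned in the description; the solve then reduces to a pointwise division in k-space:

```python
import numpy as np

def z_update(Y, M, SX, z, g2):
    """One-coil sketch of the S7 auxiliary-variable update.
    Y:  acquired k-space data (zero off the sampled locations)
    M:  0/1 sampling mask (the diagonal of the mask matrix)
    SX: coil image S * X^k, z: Lagrange multiplier, g2: parameter."""
    rhs = Y + np.fft.fft2(g2 * SX + z, norm="ortho")
    return np.fft.ifft2(rhs / (M + g2), norm="ortho")

# sanity check: if the data are fully consistent with SX and z = 0,
# the update must return SX itself
rng = np.random.default_rng(2)
img = rng.standard_normal((8, 8))
M = (rng.random((8, 8)) < 0.5).astype(float)
Y = M * np.fft.fft2(img, norm="ortho")
Z = z_update(Y, M, img, np.zeros((8, 8)), g2=1.0)
```

With consistent data the right-hand side becomes (M + 1) · F(img), so the division recovers F(img) exactly; this is a useful unit test for any implementation of the step.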
S8: calculate the image X^(k+1) to be reconstructed for the (k+1)-th iteration by the following formula:
X^(k+1) = (γ_1 W + γ_2 S^H S)^(-1) [ Σ_i V_i^*(γ_1 D_i^(k+1) − d_i^k) + S^H(γ_2 Z^(k+1) − z^k) ];
wherein V_i^* denotes the adjoint operator of V_i, which puts the pixels of a similar-block group back to their corresponding positions in the image; W = Σ_i V_i^* V_i is a diagonal matrix whose each diagonal element equals the number of times the corresponding pixel appears in all similar-block groups {V_i(X^k)}; S^H S is also a diagonal matrix, so the inversion is computed pointwise;
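The adjoint operator V_i^* and the count matrix W can be illustrated with a simple patch-aggregation routine; this is an illustrative sketch (the coordinate bookkeeping is an assumption of this example, not taken from the patent):

```python
import numpy as np

def aggregate(groups, coords, patch, shape):
    """Adjoint of block extraction: add every patch in every similar-
    block group back at its original position (V_i^*), and count how
    many times each pixel was covered -- the diagonal of W in S8."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for G, pos_list in zip(groups, coords):
        for col, (t, l) in enumerate(pos_list):
            acc[t:t + patch, l:l + patch] += G[:, col].reshape(patch, patch)
            cnt[t:t + patch, l:l + patch] += 1.0
    return acc, cnt

# sanity check: extracting patches and aggregating them back, then
# dividing by the counts, reproduces the image on the covered pixels
X = np.arange(36, dtype=float).reshape(6, 6)
coords = [[(0, 0), (2, 2)]]  # two overlapping 3x3 patches
G = np.stack([X[t:t + 3, l:l + 3].reshape(-1) for t, l in coords[0]], axis=1)
acc, cnt = aggregate([G], coords, patch=3, shape=X.shape)
```

Dividing the accumulated image by the counts (acc / cnt where cnt > 0) averages overlapping contributions, which is exactly the role the diagonal matrix W plays inside the pointwise inversion of step S8.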
S9: update the Lagrange multiplier z^(k+1) for the (k+1)-th iteration by the following formula:
z^(k+1) = z^k + η_2(S X^(k+1) − Z^(k+1));
wherein η_2 > 0 is the step-size parameter for updating the Lagrange multiplier z;
S10: initialization: set i = 1;
S11: update the Lagrange multiplier d_i^(k+1) for the (k+1)-th iteration by the following formula:
d_i^(k+1) = d_i^k + η_1(V_i(X^(k+1)) − D_i^(k+1));
wherein η_1 > 0 is the step-size parameter for updating the Lagrange multipliers {d_i};
S12: judge whether all N_p Lagrange multipliers have been updated; when i = N_p, go to step S13; otherwise set i = i + 1 and return to S11;
S13: judge whether the maximum iteration number K has been reached; if k ≥ K, go to step S14; otherwise set k = k + 1 and return to S2;
S14: output the final reconstructed image X^(k+1).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111174817.7A CN113892938B (en) | 2021-10-09 | 2021-10-09 | Improved sensitivity coding reconstruction method based on non-local low-rank constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113892938A true CN113892938A (en) | 2022-01-07 |
CN113892938B CN113892938B (en) | 2023-10-27 |
Family
ID=79190519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111174817.7A Active CN113892938B (en) | 2021-10-09 | 2021-10-09 | Improved sensitivity coding reconstruction method based on non-local low-rank constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113892938B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104933683A (en) * | 2015-06-09 | 2015-09-23 | 南昌大学 | Non-convex low-rank reconstruction method for rapid magnetic resonance (MR) imaging |
WO2018099321A1 (en) * | 2016-11-30 | 2018-06-07 | 华南理工大学 | Generalized tree sparse-based weighted nuclear norm magnetic resonance imaging reconstruction method |
US20190086501A1 (en) * | 2017-09-21 | 2019-03-21 | Centre National De La Recherche Scientifique (Cnrs) | Method and magnetic resonance apparatus for image reconstruction with trimmed autocalibrating k-space estimation based on structured matrix completion |
US20190355157A1 (en) * | 2018-05-21 | 2019-11-21 | The Board Of Trustees Of The Leland Stanford Junior University | Motion robust reconstruction of multi-shot diffusion-weighted images without phase estimation via locally low-rank regularization |
CN112617798A (en) * | 2020-12-31 | 2021-04-09 | 昆明理工大学 | Parallel magnetic resonance imaging reconstruction method based on Lp norm combined total variation |
CN112991483A (en) * | 2021-04-26 | 2021-06-18 | 昆明理工大学 | Non-local low-rank constraint self-calibration parallel magnetic resonance imaging reconstruction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||