WO2016108947A1 - Modeling and validation methods for compressed sensing and MRI
- Publication number: WO2016108947A1 (PCT/US2015/028445)
- Authority: WIPO (PCT)
Classifications
- G01R33/561—Image enhancement or correction by reduction of the scanning time, i.e. fast acquiring systems, e.g. using echo-planar pulse sequences
- G01R33/5608—Data processing and visualization specially adapted for MR, e.g. feature analysis and pattern recognition, segmentation, edge contour detection, noise filtering or apodization, deblurring, windowing, zero filling, or generation of gray-scaled, colour-coded or vector-displaying images
- G01R33/5611—Parallel magnetic resonance imaging, e.g. sensitivity encoding [SENSE], simultaneous acquisition of spatial harmonics [SMASH], unaliasing by Fourier encoding of the overlaps using the temporal dimension [UNFOLD], k-t-broad-use linear acquisition speed-up technique [k-t-BLAST], k-t-SENSE
- (All within G—Physics; G01—Measuring; Testing; G01R33/00—Arrangements or instruments for measuring magnetic variables involving magnetic resonance; G01R33/48—NMR imaging systems; G01R33/54—Signal processing systems; G01R33/56—Image enhancement or correction.)
Definitions
- Mathematical modeling based on physics, statistics or other knowledge is essential to performance of signal encoding and decoding.
- Representative methods of the new class include compressed sensing and low-rank matrix completion.
- A case in point is compressed sensing in diagnostic MRI applications. While there are numerous examples demonstrating compressed sensing's success in accelerating MR data acquisition or improving image signal-to-noise ratio (SNR), in reading an image obtained in a specific imaging instance, it is not uncommon for one to have a lingering concern that the underlying imaging scheme might have, in a convoluted and subtle manner, obscured some diagnostically important features.
- Setting up compressed sensing for accelerated MR data acquisition can be a complex and poorly guided process.
- A compressed sensing scheme, with a nonlinear operator at its core that leverages both random sampling and a sparse model to disentangle signals from interference, poses greater challenges to gauging the level of image fidelity.
- Parallel MRI is a subcategory of MRI to which compressed sensing is certainly applicable. Yet owing to the multi-sensor setup, the imaging physics further implies redundancy / structure in the signal. There are significant opportunities for devising and optimizing mathematical modeling and for improving imaging speed, SNR and quality.
- Embodiments of the invention address vital quality control needs in sophisticated encoding and decoding with unique validations and guidance, paving the way for proper and confident deployment of new encoding and decoding in applications demanding high fidelity.
- Embodiments of the invention further exploit signal structures for gains in performance of signal encoding and decoding.
- Embodiments are illustrated for the diagnostic MRI application, and include, for example, assessment and improvement of image quality with self-validations that can be automatically performed on any specific imaging instance itself, without necessitating additional data for validation or comparison.
- Embodiments illustrated also include other assessments and improvements, and parallel MRI that leverages signal structures.
- Compressed sensing and low-rank matrix completion provide a capacity for speeding up data acquisition while keeping aliasing and noise effects subdued.
- Theory and experience, however, have yet to establish robust guidance on random sampling, sparse models and nonlinear solvers to help manage the challenge of using the technology in diagnostic MRI.
- Fidelity tests described in the next several sections are devised to detect quality issues, including ones that are difficult to spot by inspecting the image(s) of an MRI instance alone.
- The foundations for the tests include: a) critical reliance on a nonlinear operator in reconstruction calls for at least a validation of the operator's response at or near the operating point (as set by the acquired data, among others); b) small perturbations to the imaging instance should not significantly impact its result if the instance is intrinsically robust or reliable; c) linear response to image features is essential to contrast fidelity; d) from the perspective of compressed sensing or low-rank matrix completion, random or incoherent sampling implies that an equivalence exists among different sampling patterns (for example, two sets of k-space locations both selected uniformly at random can yield identical reconstruction results).
- The principle of compressed sensing or low-rank matrix completion is a unique facilitating factor for the fidelity assessment of reconstructed images.
- The essential randomness in sampling allows one to compare and evaluate results obtained from equally valid random k-space sampling patterns and thereby provide indications about the fidelity of reconstructed images.
- For any compressed sensing or low-rank matrix completion based scheme that involves randomly acquiring M k-space samples, one can acquire one (or several) extra samples at random to make a total of P samples. During image reconstruction one can then form many equally valid M-sample subsets by randomly leaving out one (or several) samples.
- The differences amongst the reconstruction results that correspond to the subsets contain information on the robustness of the original M-sample based scheme. Statistical tests of the results can be performed to assess the significance of the differences. Various metrics can be derived to serve as fidelity indicators. The extra cost in acquisition time can be negligible in this example.
- L- and S-tests address the quality control challenge from a unique angle.
- These new tests can be invoked to derive fidelity indicators from the data and the reconstruction themselves, and to enable guided further improvements.
- The ingredients of compressed sensing and low-rank matrix completion can accommodate linear response and contrast fidelity for a range of setups.
- The straightforward L-test helps detect if a specific setup is outside that range or the resulting image is unreliable.
- The intrigues of compressed sensing and low-rank matrix completion are exploited to establish unique mechanisms for additional self-validations.
- The S-test helps detect if and where flaws exist in an otherwise clean-looking image.
- Conditioning can be applied to the original raw data samples. This entails transforming the original set of raw data samples into a set of base data samples in the same or an alternate space, e.g., image space, Fourier transform space (k-space) or wavelet transform space, where the set of base data samples exhibits desirable characteristics, e.g., uncorrelated noise, a reduced amount of data and enhanced SNR. It is typically preferable to apply conditioning prior to more sophisticated processing. Fig. 1 shows two examples. If conditioning is not applied, the set of base data samples is identical to the original set of raw data samples.
- In an exemplary embodiment, the covariance matrix of the measurement noise associated with the multiple sensors is acquired, and a linear transform is determined and applied to the multi-sensor raw data samples. For example, eigenvalue decomposition or singular value decomposition of the covariance matrix gives Λ^(-1/2)V^H as the linear transform operator.
- The result is a set of base data samples associated with multiple new sensors that are numerically synthesized by the transform, where the noise covariance matrix of the new sensors is an identity matrix. Rendering noise uncorrelated and similarly distributed helps advanced reconstruction methods manage noise interference effects.
- The new parallel MRI methods described in later sections function with or without conditioning, unless specifically noted.
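- As a concrete illustration of the conditioning step just described, the sketch below whitens multi-sensor data using the eigendecomposition of a measured noise covariance matrix. It is a minimal assumed implementation; the function name, array shapes and use of numpy are illustrative choices, not specified by the text.

```python
import numpy as np

def whiten_channels(raw_data, noise_samples):
    """Condition multi-sensor raw data so that channel noise becomes
    uncorrelated with unit variance (sketch of the Lambda^(-1/2) V^H transform).

    raw_data      : complex array, shape (n_channels, n_samples), acquired data
    noise_samples : complex array, shape (n_channels, n_noise), noise-only data
    """
    # Sample covariance matrix of the measurement noise across channels
    psi = np.cov(noise_samples)
    # Eigendecomposition Psi = V Lambda V^H (Hermitian matrix, so eigh applies)
    lam, v = np.linalg.eigh(psi)
    # Whitening operator Lambda^(-1/2) V^H
    w = np.diag(lam ** -0.5) @ v.conj().T
    # Base data samples of the numerically synthesized "new sensors"
    return w @ raw_data
```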
- The ingredients of compressed sensing can accommodate linear response and contrast fidelity for a range of setups.
- A linear response test (L-test) helps detect if a specific setup is outside an acceptable range or the resulting image is unreliable.
- An exemplary embodiment is hereby described, using a compressed sensing example.
- An indication of an imaging scheme’s inability to linearly track a local signal change shall call into question the accuracy of image contrast/feature at that location.
- An L-test checks the difference between an image in the original MRI instance (base result) and an image reconstructed from perturbed k-space data that correspond to adding a predetermined image feature, in accordance with the framework illustrated by Fig.1A. For example, the test checks the difference between the pair of results reconstructed, respectively, from the measured k-space data and from the k-space data perturbed by adding a predetermined "bump" to the true image around voxel i (Eqn.1).
- Through the under-sampled Fourier transform, the perturbation is inflicted broadly in k-space.
- At voxel i, deviation of the value of the difference from that of the "bump" indicates nonlinearity in local signal recovery.
- Sweeping a region and recording the deviation voxel-by-voxel creates a deviation-from-linearity map that flags contrast fidelity issues in the region.
- Ideally, the reconstruction result responds linearly to the measured data y, which preserves contrast and yields zero deviation.
- The sampling pattern, sparse model and numerical solver involved in a specific imaging instance, however, may situate the signal recovery scheme in a nonlinear regime, causing deviation detectable by an L-test.
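- The following sketch illustrates the L-test logic at a single voxel. The solver `cs_reconstruct`, the bump amplitude and the FFT conventions are assumptions for illustration; a small Hamming-window profile is used as the "bump", following the example given in the detailed description.

```python
import numpy as np

def l_test_deviation(y, mask, cs_reconstruct, voxel, bump_size=5, amp=1.0):
    """Deviation-from-linearity at one voxel (sketch of the L-test).

    y              : under-sampled, zero-filled k-space data (2D complex array)
    mask           : boolean k-space sampling mask of the same shape as y
    cs_reconstruct : callable mapping (k-space data, mask) -> image; assumed solver
    voxel          : (row, col) location where a known "bump" is added
    """
    base = cs_reconstruct(y, mask)

    # Build a small bump (separable Hamming-window profile) centered at the voxel;
    # assumes the voxel is not at the image border.
    w = np.hamming(bump_size)
    bump = np.zeros(base.shape, dtype=complex)
    r, c = voxel
    h = bump_size // 2
    bump[r - h:r + h + 1, c - h:c + h + 1] = amp * np.outer(w, w)

    # Perturb the measured data through the under-sampled Fourier transform
    perturbed = cs_reconstruct(y + mask * np.fft.fft2(bump), mask)

    # Deviation of (perturbed - base) from the bump flags nonlinear recovery
    return np.abs((perturbed - base) - bump)[r, c]
```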
- Fig.3 illustrates, with an MRI example, improved compressed sensing with the introduction of L-tests.
- In acceleration of 2D phase encoding in MRI of a brain, the k-space's 256x256 grid was sampled with patterns that correspond to various acceleration factors, fully-sampled center k-space radii and probability density functions.
- Fig.3 col.1 shows the compressed sensing reconstruction results.
- Fig.3 col.2 displays, over the tested region (marked), the deviation-from-linearity maps from performing L-tests voxel-by-voxel over the region (Eqn.1). Notice the near-zero deviation in Case 1, modestly elevated deviation in Case 2 (higher acceleration) and Case 3 (smaller fully-sampled center k-space), and significant deviation in Case 4 (both higher acceleration and smaller fully-sampled center k-space).
- Sampling Test (S-test)
- The S-test may involve local perturbations to the sampled space.
- An instance of perturbation to the set of base data samples is realized by excluding L selected samples, where L ≥ 1 and the selection strategy can be one of random, deterministic, or hybrid (i.e., deterministic for certain sampling locations and random for the rest).
- Reconstruction based on the perturbed set leads to a result carrying additional, useful information.
- Repeating the perturbation-reconstruction Q times leads to an ensemble of results. When Q is sufficiently large, the bits of additional information carried by the individual results are pooled together to derive metrics that indicate the quality of, or guide the improvement over, the original image.
- An S-test checks variations in reconstruction results amongst an ensemble of compressed sensing reconstructions that each uses a leaving-L-out version of the set of base data samples. For example, the S-test checks the standard deviation of the solutions to an ensemble of magnetic resonance image reconstruction problems (Eqn.2), where the jth problem uses the data and under-sampled Fourier transform that correspond to the jth instance of leaving one k-space data sample out.
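- A sketch of the leaving-one-out S-test follows, producing a per-voxel standard deviation map. As above, `cs_reconstruct` stands in for whatever solver the imaging instance uses, and excluding the fully-sampled center from the leave-out candidates mirrors the Fig.3 example; both are assumptions of this illustration.

```python
import numpy as np

def s_test_std_map(y, mask, cs_reconstruct, protected, n_instances=500, seed=None):
    """Leaving-one-out S-test (sketch): per-voxel standard deviation across an
    ensemble of reconstructions, each omitting one randomly chosen k-space sample.

    y, mask        : zero-filled k-space data and its boolean sampling mask
    cs_reconstruct : callable mapping (data, mask) -> image; assumed solver
    protected      : boolean mask of locations never left out (e.g., center k-space)
    """
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero(mask & ~protected)   # sampled, non-protected locations
    recons = []
    for _ in range(n_instances):
        drop = rng.choice(candidates)
        m_j = mask.copy()
        m_j.flat[drop] = False                       # leave this one sample out
        recons.append(cs_reconstruct(y * m_j, m_j))
    # Large values in the variation map flag voxels whose recovery is fragile
    return np.std(np.abs(np.stack(recons)), axis=0)
```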
- Fig.3 further illustrates improved compressed sensing with the introduction of S-tests.
- The S-test involved a 500-instance ensemble, where each instance used all k-space samples of the case except one sample that was randomly selected outside the fully-sampled center k-space.
- Fig.3 col.3 summarizes the S-test results in the form of standard deviation maps.
- Fig.3 col.4 shows the actual error (the difference between true and reconstructed images). The variations uncovered by the tests (col.3) were rather revealing of the actual error (col.4) in terms of severity level and spatial distribution.
- An alternative framework, as illustrated by Fig.1B, can be used for applying L- and S-tests.
- In this framework the perturbation is applied to the data collection protocol. This may involve acquisition of more data but offers vast flexibility in imposing various test conditions, which allows probing, validating or evaluating more aspects of the encoding and decoding. For example, result variation arising from perturbation to the ordering / timing of data acquisition can be a useful indicator when validating, evaluating or improving the practice of MR fingerprinting.
- Fig.2 includes the arrangement of leaving more than one sample out per parallel receive channel or letting different channels skip the same or different k-space locations.
- S-tests can be applied at multiple levels, in a leaving-L-out hierarchy, as illustrated by Fig.4. Tracking the variation values throughout the hierarchy and analyzing exhibited trends can provide additional assessments. For instance, for any given voxel the variation value as a function of number of samples left out, corresponding to a certain sequence of branches tracing down the hierarchy, may reveal that the reconstruction evolves from working well to breaking down eventually, or behaves unreliably even with the use of all samples, or assumes a deterioration pattern with a critical point.
- The test summary can be in the form of an image, with each voxel value equaling a metric calculated from the voxel's corresponding variation value curve.
- Use of real / virtual phantoms, in which the voxel values corresponding to the phantoms are known or independently measurable, may help derive an approximate scaling factor that relates the S-test predicted error level to the actual error level.
- A scaling factor, for example, is calculated by dividing the deviation of a reconstructed voxel value from the known or independently measured value by the corresponding standard deviation reported by the S-test.
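- A small sketch of the scaling-factor calculation just described; the use of a median to summarize the per-voxel ratios is an assumption of this illustration.

```python
import numpy as np

def s_test_scaling_factor(recon, truth, std_map, phantom_mask):
    """Relate S-test-predicted error level to actual error level (sketch).

    recon, truth : reconstructed image and known / independently measured values
    std_map      : per-voxel standard deviation reported by the S-test
    phantom_mask : boolean mask of voxels whose true values are known
    """
    deviation = np.abs(recon - truth)[phantom_mask]
    predicted = np.maximum(std_map[phantom_mask], 1e-12)  # guard against division by zero
    # Per-voxel ratio of actual deviation to predicted standard deviation,
    # summarized here into one approximate scaling factor
    return np.median(deviation / predicted)
```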
- Fig.5 illustrates, with an MR angiogram simulation example, improved compressed sensing with the introduction of self-validation tests. From left to right Fig.5 shows the compressed sensing results (left column), the standard deviation maps from S-tests (mid column), and the actual error (right column). The top and bottom rows correspond, respectively, to cases employing 12-fold and 20-fold accelerated variable density k-space sampling. Notice again S-tests' power in tracking actual errors. The loss of two low-contrast features in the 20-fold case, for instance, was detected by the standard deviation map.
- Improving Image Reconstruction Using Known Features
- In Eqn.3, g represents a virtual, known feature numerically added. Update of all k-space data samples is through the under-sampled Fourier transform. For simplicity the added feature is positioned in the unoccupied space as demarcated by a mask.
- The second term in Eqn.3 penalizes deviation of the solution from the known feature at applicable voxel locations.
- A natural choice for g is a flat zero-amplitude profile. Other choices may include a profile with distributed local "bumps", a profile corresponding to a significantly sized resolution phantom, a profile with voxel centers assuming i.i.d. random values, and profiles adaptively / iteratively designed with guidance from preliminary reconstruction or self-validation outcome(s).
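- The sketch below evaluates an Eqn.3-style objective: data consistency plus a penalty tying the solution to the known feature g over the masked voxels. The sparsifying transform, the quadratic form of the penalties and the weights lam / mu are assumptions of this illustration.

```python
import numpy as np

def known_feature_objective(x, y, mask, g, feature_mask, psi, lam=1.0, mu=1.0):
    """Objective value for an Eqn.3-style reconstruction (sketch).

    x            : candidate image (2D complex array)
    y, mask      : zero-filled k-space data and boolean sampling mask
    g            : known feature profile (e.g., flat zeros in unoccupied space)
    feature_mask : boolean mask of voxels where g applies
    psi          : callable sparsifying transform (assumed), image -> coefficients
    lam, mu      : weights of the sparsity and known-feature terms (assumed)
    """
    data_term = np.linalg.norm(mask * np.fft.fft2(x) - y) ** 2      # fidelity to samples
    sparsity_term = lam * np.sum(np.abs(psi(x)))                    # L1 sparse model
    feature_term = mu * np.linalg.norm((x - g)[feature_mask]) ** 2  # deviation from g
    return data_term + sparsity_term + feature_term
```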
- Fig.6 illustrates improved compressed sensing with the introduction of known features, with an MR angiogram simulation similar to that shown in Fig.5 but with highly accelerated data acquisition.
- Fig.6 top-right shows the Eqn.3-based reconstruction result.
- Parallel MRI maps transverse magnetization and reconstructs MR images by processing radio-frequency MR signals that are acquired in parallel with multiple receive channels.
- For each receive channel, its spatially varying detection sensitivity causes the channel to sense an intermediate spectrum that results from a convolution of the transverse magnetization's spectrum with a kernel.
- Across the receive channels, the spectra sensed differ from one another only by convolution effects due to the channels' sensing profiles.
- In the parallel acquisition MR signals, i.e., in the spectra sensed by the parallel receive channels, a structure is thereby embedded.
- The receive channels' sensing profiles, arising from radio-frequency B1 fields, usually assume smooth weighting profiles in the image space and narrow-width convolution kernels in the k-space.
- The embedded structure to a certain extent manifests as an intrinsic redundancy, allowing coarse (i.e., lower than full Nyquist rate) sampling of k-space and thereby acceleration of magnetic resonance imaging.
- Central to a conventional parallel MRI method is a full calibration that entails a) sampling of a center k-space region at full Nyquist rate, and b) a reconstruction that subsequently uses the knowledge learned over the region, in the form of image-space sensing profiles or k-space interpolation kernels, to compensate for the coarse sampling and generate MR images.
- There are limitations to this calibration-reconstruction paradigm in terms of accuracy, speed, flexibility and SNR.
- The new parallel MRI methods hereby described exploit the intrinsic signal structures in the sensed spectra and generate full spectra and MR images with a general scheme that overcomes conventional methods' limitations.
- Fig.7 illustrates the concept of parallel acquisition signal structure with an MRI example.
- Fig.7A: A k-space sampling pattern defines, for a multi-channel parallel acquisition, sampled locations 701 and skipped locations 702. Compared to sampling at a full Nyquist rate, skipping of some of the grid point locations allows the acquisition to go faster. Samples at skipped locations are not directly known but potentially reconstructable.
- A parallel acquisition signal structure can be identified and exploited through a conceptual stencil 703 dissecting the k-space.
- Fig.7B and 7C: For any one of the parallel signal acquisition channels, its spatially varying sensitivity causes it to sense an intermediate spectrum 704 that results from a convolution of a shift-invariant kernel 706 with the transverse magnetization's spectrum.
- The stencil 703 in the present example is a block arrow that covers 18 grid points for each channel at each placement location. At one placement location, the result 705 of the convolutions in the stencil-demarcated neighborhood is known. This makes an instance that contributes to signal structure identification, despite a lack of knowledge of the kernels (one per channel) and the relevant portion 707 of the transverse magnetization's spectrum.
- Fig.7D: The convolution principle gives rise to a mathematical expression for describing the relationship between k-space grid samples.
- The placement location illustrated in Fig.7B corresponds to a known d_i.
- A placement location corresponds to a partially known d_i if the stencil-demarcated neighborhood is under-sampled.
- A parallel acquisition signal structure may be revealed and exploited through a conceptual stencil dissecting the k-space.
- For each channel, its spatially varying sensitivity causes it to sense an intermediate spectrum that results from a convolution of a kernel with the transverse magnetization's spectrum.
- A shift-invariant matrix K exists that relates samples of the sensed spectra to samples of the magnetization's spectrum (Eqn.4).
- Eqn.4 arises from the convolution principle and involves no other assumptions.
- The linear equation form is generally valid in describing k-space grid samples.
- The stencil as illustrated in this example case is a block arrow. However, numerous other shapes, including ones extending in three dimensions, can be appropriate choices for a stencil.
- The number of columns in K, denoted n_K, is equal to the number of samples in the support neighborhood where the magnetization's spectrum contributes to the n_s x N samples. Note that r_K, the dimension of the column space of K or the number of independent columns in K, is less than or equal to n_K.
- In the present example, n_K ≤ (l + w - 1)^2.
- The physics of sensitivity weighting gives rise to a projection-invariance property.
- Eqn.4 expresses d_i as a weighted sum of the columns of K.
- Eqn.4 states that d_i belongs to the column space of K, i.e., the vector space spanned by K's column vectors.
- Eqn.4 equivalently states that the projection of d_i onto the column space of K results in d_i.
- This projection invariance may be expressed as: the projection operator for the column space of K, applied to d_i, returns d_i (Eqn.5).
- COMPASS (COMpletion with Parallel Acquisition Signal Structure) represents one class of methods for reconstructing the full spectra from an under-sampled version as acquired by parallel MRI.
- In COMPASS, a signal structure space, or the vector space spanned by the columns of K, is explicitly identified from available samples of the sensed spectra and is then applied to reconstruct the full spectra and MR images.
- U, a basis of the signal structure space, can be determined with, for example, a QR factorization or SVD of the matrix D_ID assembled from known d_i's.
- SVD of the matrix D_ID with thresholding can be handy, where one ranks the singular values and attributes the basis vectors associated with the more significant singular values as the ones spanning the signal structure space.
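- The following sketch identifies the signal structure space from the fully known stencil placements via a thresholded SVD. A simple rectangular stencil, the relative threshold and the array layout are assumptions of this illustration (the patent's example stencil is a block arrow).

```python
import numpy as np

def identify_structure_space(kspace, known_mask, stencil=(3, 3), rel_thresh=1e-3):
    """Identify a basis U of the signal structure space (sketch).

    kspace     : complex array, shape (n_channels, ny, nx), zero-filled
    known_mask : boolean (ny, nx) mask of k-space locations with known samples
    stencil    : extent of a rectangular stencil used here for brevity
    """
    n_ch, ny, nx = kspace.shape
    sy, sx = stencil
    columns = []
    # Assumes the sampling pattern provides some fully sampled stencil placements
    for r in range(ny - sy + 1):
        for c in range(nx - sx + 1):
            if known_mask[r:r + sy, c:c + sx].all():          # fully known placement
                columns.append(kspace[:, r:r + sy, c:c + sx].reshape(-1))
    d_id = np.stack(columns, axis=1)                          # data matrix D_ID
    # Thresholded SVD: keep singular vectors with significant singular values
    u, s, _ = np.linalg.svd(d_id, full_matrices=False)
    return u[:, s > rel_thresh * s[0]]                        # basis U of the structure space
```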
- Fig.8 illustrates example reconstruction formulations.
- Fig.8A: Reconstruction as a problem of finding the least squares solution to a set of equations. This set of equations is formed by pooling together subsets of equations, including one that expresses constraints due to the signal structure, one that expresses constraints due to the measured data samples, and, if applicable, additional ones that capture physics, statistics or other knowledge.
- Fig.8B: Reconstruction as an optimization problem.
- The objective function includes terms that capture the outcome's deviation from the signal structure model, from the measurements, and from additional models.
- The optimization flexibly accommodates regularization / other spatial or temporal models that capture physics, statistics or other knowledge.
- Mathematical conversions can facilitate the computation involved in imposing the full set of constraints due to the identified signal structure.
- Once U is identified, the projection operator UU^H is known. Eqn.7 ties any sample, in a known, shift-invariant fashion, to weighted sums formed in the sample's neighborhoods in the spectra. This therefore gives rise to a set of N constraints in convolution form (Eqn.8).
- The convolution form (Eqn.8) or the weighted superposition form (Eqn.9) expresses the signal structure constraints due to imaging physics.
- Imposing the signal structure constraints by applying Eqn.4, 5 or 7 everywhere is equivalent to imposing the set of N constraints with Eqn.8 or 9.
- Use of Eqn.9 offers a notable advantage in computation efficiency owing to the N^2 low-cost weighting operations. This advantage is amplified when the constraints need to be repeatedly evaluated, as is often the case when the reconstruction employs an iterative numerical solver.
- In the reconstruction equations, y pools the y^(n)'s, z pools the channels' spectra, matrices F and F^-1 represent (N-channel) Fourier and inverse Fourier transforms respectively, b pools the channels' spectra as acquired by parallel receive (zero-filled, i.e., with non-zero entries being the acquired measurements and zero entries being the skipped measurements), I is the identity matrix, and a diagonal matrix denotes the k-grid sampling using 0's and 1's.
- Parallel computing technology readily supports speedup of N-channel Fourier and inverse Fourier transforms, which also helps accomplish a high reconstruction speed.
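- As a simplified stand-in for the stacked least-squares formulation above, the sketch below alternates projection onto the identified structure space with re-insertion of the acquired measurements. This projection-onto-convex-sets style iteration is an assumption of the illustration, not the patent's Eqn.10 / 11 solver, and it reuses the rectangular-stencil convention of the identification sketch.

```python
import numpy as np

def reconstruct_with_structure(kspace0, sampled_mask, u, stencil=(3, 3), n_iter=50):
    """Fill skipped multi-channel k-space samples (sketch).

    kspace0      : zero-filled complex array, shape (n_channels, ny, nx)
    sampled_mask : boolean (ny, nx) mask, True where samples were acquired
    u            : basis of the signal structure space from the identification step
    """
    n_ch, ny, nx = kspace0.shape
    sy, sx = stencil
    proj = u @ u.conj().T                                   # projection operator U U^H
    z = kspace0.copy()
    for _ in range(n_iter):
        acc = np.zeros_like(z)
        cnt = np.zeros((ny, nx))
        # Impose the structure constraint at every stencil placement, then average
        for r in range(ny - sy + 1):
            for c in range(nx - sx + 1):
                d = proj @ z[:, r:r + sy, c:c + sx].reshape(-1)
                acc[:, r:r + sy, c:c + sx] += d.reshape(n_ch, sy, sx)
                cnt[r:r + sy, c:c + sx] += 1
        z = acc / cnt
        # Impose consistency with the acquired measurements
        z[:, sampled_mask] = kspace0[:, sampled_mask]
    return z
```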
- Fig.9 shows a brain MRI example where a 32-channel coil was used for parallel acquisition of in vivo data samples.
- The result from a full-sampling case was available (g) to serve as a reference.
- (a-c) summarize a first acceleration case: (a) 8.7x under-sampling pattern and 8-point stencil, (b) COMPASS result, and (c)
- (d-f) summarize a second acceleration case: (d) 15.1x under-sampling pattern and 8-point stencil, (e) COMPASS result, and (f)
- Eqn.12 may include, for example, a penalty term based on a sparsifying transform.
- Divide-and-conquer strategies are applicable. For example, one can use Eqn.10 to reconstruct the spectra segment-by-segment. With such a scheme one divides the k-space grid into contiguous or overlapped segments, solves smaller reconstruction problems rather than one full-size problem, and stitches the results together to reconstruct the full spectra.
- the smaller reconstruction problems can be solved in parallel using parallel computing technology, each involving a fraction of all the measured data samples and to-be-determined unknowns.
- A scalar weight can be set to emphasize the more reliable of the two constraint sets to benefit image SNR.
- If the first set is very reliable, e.g., due to a robust scheme for signal structure identification or an effective incorporation of other sources of knowledge, choosing a large weight can be beneficial. In such a case one can analytically track noise propagation, predict image noise standard deviation and introduce further optimizations.
- Modifications to the first or second set of constraints can be useful. For example, one may design and apply multiple stencils to optimize structure space identification, to optimize the structure space's anchoring effect on the reconstruction, or to improve the efficiency of k-space sampling.
- The adaptation to Eqns.10 and 11, for example, may give corresponding exemplary equations (Eqns.13 and 14).
- The reconstruction principle supports a more general k-space sampling scheme in which the acquisition channels each have their own sampling pattern; such a scheme, for example, accommodates more flexible deployment of the receive channel resource when the number of receive channels is less than the number of coil ports. Diagnostic applications commonly demand a reconstruction result in the form of a combined image rather than a set of individual channel images.
- A concept of synthesized channel(s) can be introduced into COMPASS to facilitate the creation, regularization and analysis of useful combined image(s).
- A synthesized channel can be designated, and Eqn.4 remains a valid form in describing k-space grid samples from both actual and synthesized channels.
- The identification and reconstruction principles apply, and COMPASS can directly create the combination instead of deferring it to a separate combination step.
- COMPASS proceeds as if there are N+1 parallel receive channels, except for two modifications.
- Identification: The synthesized channel's spectrum (needed as part of the input) is emulated, e.g., by sampling FT{sum of squares of FT^-1{individual spectra acquired by actual channels}} with the same k-space sampling pattern that is in effect for the actual channels.
- Reconstruction: To accommodate the synthesized channel, I, the sampling matrix and b are augmented as if there is an additional receive channel with a skip-all sampling pattern (i.e., the sampling matrix and b are zero-padded).
- Matrix W resulting from the identification then has the effect of substantiating spectra/images of all N+1 channels through Eqn.10 or 11.
- A prediction of a solution's noise covariance or standard deviation can be performed by computing (A^H A)^-1, where A denotes the matrix on the left-hand side of the equation being solved.
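- A small sketch of the (A^H A)^-1-based prediction for a problem small enough that A can be formed explicitly; unit-variance white noise after conditioning is assumed.

```python
import numpy as np

def predicted_noise_std(a, noise_var=1.0):
    """Predict per-unknown noise standard deviation for a least squares solve (sketch).

    a : complex system matrix of the equation being solved, A x = b
    """
    cov = noise_var * np.linalg.inv(a.conj().T @ a)   # solution noise covariance
    return np.sqrt(np.real(np.diag(cov)))
```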
- Maximizing SNR / robustness against measurement noise may entail further optimization of the measurement scheme (e.g., the under-sampling pattern), joint design of stencil(s) and the under-sampling pattern, the identification scheme, or an optional regularization scheme (e.g., L2 or L1 norm-based).
- Guidance from noise behavior prediction is most valuable. It enables assessment and optimization of SNR in a proactive fashion, once there are enough data samples or knowledge for signal structure identification. For example, the integrated, linear computation of the synthesized channel's spectrum (Eqn.15) facilitates prediction of noise covariance, which, in turn, can guide adaptation of k-space sampling for SNR improvement.
- Fig.10 illustrates noise behavior prediction and the scalar weight's impact on COMPASS' SNR behavior with a 1D simulation case.
- The case was set up with magnetization and sensitivity profiles over a line segment extracted from a 2D 8-channel parallel MRI case, and with 4x even under-sampling of k-space.
- The analytical predictions suggested that image noise standard deviation would decrease monotonically as the weight increases from 10^0 to 10^8 (Fig.10).
- Image noise standard deviation independently measured with a Monte Carlo study confirmed this (Fig.10b, overlay of dots), and further indicated that as the noise effect on the structure identification increased, image noise profiles deviated more and more from the analytical predictions starting at several discrete locations.
- COMPASS offers a framework for accuracy, speed, flexibility and SNR improvements.
- There are additional exemplary embodiments, including, for example, embodiments that alternatively formulate the reconstruction.
- COMPASS accommodates random or incoherent k-space sampling and there are embodiments of COMPASS that incorporate compressed sensing to take further advantage of random or incoherent k-space sampling.
- Methods described below employ low-rank matrix completion and emphasize exploitation of random or incoherent k-space sampling. Similar to COMPASS, they eliminate the requirements of a conventional calibration and the determination of profiles and kernels. Nonetheless, they represent the signal structure space as being of low rank but avoid further determination of a basis for the space. Reconstruction of the spectra is accomplished by enforcing the low-rank model with a low-rank matrix completion algorithm, which necessarily relies on an acquisition scheme that samples the k-space grid with an incoherent under-sampling pattern.
- A parallel acquisition scheme is employed to acquire samples over a k-space grid with an incoherent under-sampling pattern.
- The full k-space grid is partitioned into multiple segments that may exhibit overlaps or be of different sizes.
- Sweeping a stencil across a segment causes a data matrix to be assembled.
- The assembled matrix represents an incoherently under-sampled version of a low-rank matrix. Entries of the assembled matrix not filled due to coarse sampling are set to zero initially. These entries are updated and the segment recovered using low-rank matrix completion, with self-validation applied when desired.
- The reconstruction may treat the full k-space grid as one single segment.
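- The sketch below completes an assembled data matrix by alternating a truncated-SVD (rank projection) step with re-insertion of the acquired entries. The patent's example uses a singular value thresholding algorithm; this fixed-rank variant, the iteration count and the zero initialization are assumptions of the illustration.

```python
import numpy as np

def complete_low_rank(d, known, rank, n_iter=100):
    """Fill missing entries of an incoherently under-sampled low-rank matrix (sketch).

    d     : assembled data matrix with unacquired entries set to zero
    known : boolean mask of acquired entries
    rank  : assumed rank of the underlying matrix
    """
    x = d.copy()
    for _ in range(n_iter):
        # Project onto the set of matrices with the assumed rank (truncated SVD)
        u, s, vh = np.linalg.svd(x, full_matrices=False)
        s[rank:] = 0
        x = (u * s) @ vh
        # Re-impose the acquired entries
        x[known] = d[known]
    return x
```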
- Fig.11 illustrates an example of applying low-rank matrix completion in parallel MRI.
- A 200x200 k-space grid was sampled with incoherent under-sampling patterns that correspond to 2.5-fold accelerations over full sampling.
- Fig.11 col.1 displays the result from applying standard sum-of-squares reconstruction to fully sampled 8-channel data.
- The reconstruction partitioned the full grid into up to 25 regions, and employed low-rank matrix completion instead of calibration and profile or kernel determination. Results from the cases were compared to the reference to quantify errors.
- Fig.11 col.2 shows the result from the case where the entire grid was treated as a single segment.
- Fig.11 col.3 shows the result from the case where 25 segments that made up the whole grid were individually reconstructed. Each individual reconstruction was anchored (Eqn.18) by data incoherently under-sampled on a 48x48 grid at center k-space.
- Fig.11 col.4 shows the result from the case where 9 segments that made up the whole grid were individually and independently reconstructed (Eqn.19). Reconstruction of each of the 8 outer k-space segments was self-reliant, without any knowledge of what happened elsewhere.
- Fig.11 col.5 shows the result from the case that was the same as the previous, except for the use of segment-dependent thresholds when a singular value thresholding algorithm was applied to solve the low-rank matrix completion.
- Additional ways to accelerate the reconstruction include incremental SVD update techniques, where a small, yet-to-be-completed data matrix is appended to an already-completed data matrix, and the composite data matrix is completed through a low-cost SVD update.
- Additional ways to accelerate reconstruction also include an iterative scheme involving projection of the incomplete columns of the yet-to-be-completed data matrix onto the column space of the already-completed data matrix and update of the yet-to-be-filled entries in keeping with the structure.
- COMPASS for parallel MRI can benefit from self-validation, in ways similar to those illustrated in earlier sections with examples.
- Low-rank matrix completion can benefit from self-validation as well.
- An S-test checks variations in reconstruction results amongst an ensemble of low-rank matrix completions that each uses a leaving-L-out version of the set of base data samples. This can be clearly appreciated with a parallel MRI example.
- In such an example, an incoherently under-sampled version of the full data matrix is acquired, which is followed by reconstruction of the full data matrix and the underlying image.
- The S-test checks the standard deviation of the solutions to an ensemble of such completion problems, each using a leaving-L-out version of the acquired samples.
- Referring to Fig.12, the major components of an example magnetic resonance imaging (MRI) system 10 incorporating the present invention are shown.
- The operation of the system is controlled from an operator console 12 which includes a keyboard or other input device 13, a control panel 14, and a display screen 16.
- The console 12 communicates through a link 18 with a separate computer system 20 that enables an operator to control the production and display of images on the display screen 16.
- The computer system 20 includes a number of modules which communicate with each other through a backplane 20a. These include an image processor module 22, a CPU module 24 and a memory module 26, known in the art as a frame buffer for storing image data arrays.
- The computer system 20 is linked to disk storage 28 and tape drive 30 for storage of image data and programs, and communicates with a separate system control 32 through a high speed serial link 34.
- The system control 32 includes a set of modules connected together by a backplane 32a. These include a CPU module 36 and a pulse generator module 38 which connects to the operator console 12 through a serial link 40. It is through link 40 that the system control 32 receives commands from the operator to indicate the scan sequence that is to be performed.
- The pulse generator module 38 operates the system components to carry out the desired scan sequence and produces data which indicates, for RF transmit, the timing, strength and shape of the RF pulses produced, and, for RF receive, the timing and length of the data acquisition window.
- The pulse generator module 38 connects to a set of gradient amplifiers 42, to indicate the timing and shape of the gradient pulses that are produced during the scan.
- The pulse generator module 38 can also receive patient data from a physiological acquisition controller 44 that receives signals from a number of different sensors connected to the patient, such as ECG signals from electrodes attached to the patient. And finally, the pulse generator module 38 connects to a scan room interface circuit 46 which receives signals from various sensors associated with the condition of the patient and the magnet system. It is also through the scan room interface circuit 46 that a patient positioning system 48 receives commands to move the patient to the desired position for the scan.
- The gradient waveforms produced by the pulse generator module 38 are applied to the gradient amplifier system 42 having Gx, Gy, and Gz amplifiers.
- Each gradient amplifier excites a corresponding physical gradient coil in a gradient coil assembly generally designated 50 to produce the magnetic field gradients used for spatially encoding acquired signals.
- The gradient coil assembly 50 and a polarizing magnet 54 form a magnet assembly 52.
- An RF coil assembly 56 is placed between the gradient coil assembly 50 and the imaged patient.
- A transceiver module 58 in the system control 32 produces pulses which are amplified by an RF amplifier 60 and coupled to the RF coil assembly 56 by a transmit/receive switch 62.
- The resulting signals emitted by the excited nuclei in the patient may be sensed by the same RF coil assembly 56 and coupled through the transmit/receive switch 62 to a preamplifier module 64.
- The amplified MR signals are demodulated, filtered, and digitized in the receiver section of the transceiver 58.
- The transmit/receive switch 62 is controlled by a signal from the pulse generator module 38 to electrically connect the RF amplifier 60 to the coil assembly 56 during the transmit mode and to connect the preamplifier module 64 to the coil assembly 56 during the receive mode.
- The transmit/receive switch 62 can also enable a separate RF coil (for example, a surface coil) to be used in either the transmit or receive mode.
- The transceiver module 58, the separate RF coil and/or the coil assembly 56 are commonly configured to support parallel acquisition operation.
- The MR signals picked up by the separate RF coil and/or the RF coil assembly 56 are digitized by the transceiver module 58 and transferred to a memory module 66 in the system control 32.
- A scan is complete when an array of raw k-space data has been acquired in the memory module 66.
- This raw k-space data is rearranged into separate k-space data arrays for each image to be reconstructed, and each of these is input to an array processor 68 which operates to Fourier transform the data to combine MR signal data into an array of image data.
- This image data is conveyed through the serial link 34 to the computer system 20 where it is stored in memory, such as disk storage 28.
- This image data may be archived in long-term storage, such as on the tape drive 30, or it may be further processed by the image processor 22 and conveyed to the operator console 12 and presented on the display 16.
Abstract
A computer-implemented method is provided that judiciously applies or manages randomness, incoherence, nonlinearity and structures involved in signal encoding or decoding. The method in a magnetic resonance imaging example comprises acquiring data samples in accordance with a k-space sampling pattern, identifying a signal structure in an assembly of the acquired data samples, and finding a result consistent with both the acquired data samples and the identified signal structure.
Description
MODELING AND VALIDATION FOR COMPRESSED SENSING AND MRI
BACKGROUND
Mathematical modeling based on physics, statistics or other knowledge is essential to performance of signal encoding and decoding.
Built upon unique signal models and reconstruction algorithms, as well as random or incoherent sampling, a new class of methods promises to encode and decode one- or multidimensional signals with a significantly reduced amount of measured, transmitted or stored data samples, and to be effective in managing interference effects due to random noise and aliasing.
Representative methods of the new class include compressed sensing and low-rank matrix completion.
Examples of success abound demonstrating compressed sensing and low-rank matrix completion in various applications. Theoretical analyses offer not only insights into their statistical behavior but also asymptotic performance guarantees under certain conditions. For certain applications, however, assessment of fidelity or reliability of a specific encoding and decoding instance, as opposed to assertion of performance in a general or statistical manner, is crucial.
A case in point is compressed sensing in diagnostic MRI applications. While there are numerous examples demonstrating compressed sensing's success in accelerating MR data acquisition or improving image signal-to-noise ratio (SNR), in reading an image obtained in a specific imaging instance, it is not uncommon for one to have a lingering concern that the underlying imaging scheme might have, in a convoluted and subtle manner, obscured some diagnostically important features. Setting up compressed sensing for accelerated MR data acquisition can be a complex and poorly guided process. Compared to conventional imaging schemes that rely principally on linear operators, a compressed sensing scheme, with a nonlinear operator at its core that leverages both random sampling and a sparse model to disentangle signals from interference, poses greater challenges to gauging the level of image fidelity. This is because the scheme's response to signal, interference and encoding / decoding parameters is difficult to grasp or interpret. Sometimes a false sense of confidence may even be present, as the scheme tends to produce "clean-looking" images, with interference effects spread out and with no conspicuous artifacts alerting to the existence of fidelity issues. When leveraging sparse modeling in diagnostic MRI, therefore, there are vital needs for imaging quality control.
Parallel MRI is a subcategory of MRI to which compressed sensing is certainly applicable. Yet owing to the multi-sensor setup, the imaging physics further implies redundancy / structure in the signal. There are significant opportunities for devising and optimizing mathematical modeling and for improving imaging speed, SNR and quality.
In accordance with the present invention, methods are provided that judiciously apply / manage randomness, incoherence, nonlinearity and structures involved in signal encoding or decoding. In practice, where extra data for validation or comparison is often unavailable, embodiments of the invention address vital quality control needs in sophisticated encoding and decoding with unique validations and guidance, paving the way for proper and confident deployment of new encoding and decoding in applications demanding high fidelity. In multi-sensor practice, embodiments of the invention further exploit signal structures for gains in performance of signal encoding and decoding. Embodiments are illustrated for the diagnostic MRI application, and include, for example, assessment and improvement of image quality with self-validations that can be automatically performed on any specific imaging instance itself, without necessitating additional data for validation or comparison. Embodiments illustrated also include other assessments and improvements, and parallel MRI that leverages signal structures.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A - 1B show high-level flow charts in accordance with aspects of the present invention.
Fig. 2 is an illustration of an exemplary Sampling Test.
Fig. 3 is an illustration of improved compressed sensing with the introduction of self-validation tests.
Fig. 4 is an illustration of an exemplary multi-level Sampling Test.
Fig. 5 is an illustration of improved compressed sensing with the introduction of self-validation tests.
Fig. 6 is an illustration of improved compressed sensing with the introduction of known features.
Fig. 7A - 7D illustrate a parallel acquisition signal structure in an MRI example.
Fig. 8A - 8B illustrate example reconstruction formulations.
Fig. 9 is an illustration of an example of applying COMPASS in a brain MRI.
Fig. 10 is an illustration of predicted and measured noise standard deviation in an example.
Fig. 11 is an illustration of an example of applying low-rank matrix completion in parallel MRI.
Fig. 12 is a schematic block diagram of a magnetic resonance imaging system for use with the present invention.
DETAILED DESCRIPTION
The Prospect of MRI with Accompanying Fidelity Assessment
Compressed sensing and low-rank matrix completion provide a capacity for speeding up data acquisition while keeping aliasing and noise effects subdued. Theory and experience, however, have yet to establish robust guidance on random sampling, sparse models and nonlinear solvers to help manage the challenge of using the technology in diagnostic MRI.
Fidelity tests described in the next several sections are devised to detect quality issues, including ones that are difficult to spot by inspecting the image(s) of an MRI instance alone. The foundations for the tests include: a) critical reliance on a nonlinear operator in reconstruction calls for at least a validation of the operator's response at or near the operating point (as set by the acquired data, among others); b) small perturbations to the imaging instance should not significantly impact its result if the instance is intrinsically robust or reliable; c) linear response to image features is essential to contrast fidelity; d) from the perspective of compressed sensing or low-rank matrix completion, random or incoherent sampling implies that an equivalence exists among different sampling patterns (for example, two sets of k-space locations both selected uniformly at random can yield identical reconstruction results).
Regarding item d, the principle of compressed sensing or low-rank matrix completion, especially the required randomness or incoherence of the k-space sampling, is a unique facilitating factor for the fidelity assessment of reconstructed images. Unlike conventional sampling and reconstruction schemes, the essential randomness in sampling allows one to compare and evaluate results obtained from equally valid random k-space sampling patterns and thereby provide indications about the fidelity of reconstructed images. Here is an intuitive example. For any given compressed sensing or low-rank matrix completion based scheme that involves randomly acquiring M k-space samples, one acquires one (or several) extra samples at random to make a total of P samples. During image reconstruction one can then form many equally valid M-sample subsets by randomly leaving out one (or several) samples. The differences amongst the reconstruction results that correspond to the subsets contain information on the robustness of the original M-sample based scheme. Statistical tests of the results can be performed to assess the significance of the differences. Various metrics can be derived to serve as fidelity indicators. The extra cost in acquisition time can be negligible in this example.
In accordance with aspects of the invention, L- and S-tests address the quality control challenge from a unique angle. In a general setting, or in a particularly relevant setting where additional data for validation/comparison is much desired but not readily available, these new tests can be invoked to derive fidelity indicators from the data and the reconstruction themselves, and to enable guided further improvements. As existing imaging examples have suggested, the ingredients of compressed sensing and low-rank matrix completion can accommodate linear response and contrast fidelity for a range of setups. The straightforward L-test helps detect if a specific setup is outside that range or the resulting image is unreliable. The intrigues of compressed sensing and low-rank matrix completion are exploited to establish unique mechanisms for additional self-validations. With a framework involving one or more levels of randomness, the S-test helps detect if and where flaws exist in an otherwise clean-looking image.
Conditioning
To facilitate a solution or improve performance, one can apply conditioning to the original raw data samples. This entails transforming the original set of raw data samples into a set of base data samples in the same or an alternate space, e.g., image space, Fourier transform space (k-space) or wavelet transform space, where the set of base data samples exhibits desirable characteristics, e.g., uncorrelated noise, a reduced amount of data and enhanced SNR. It is typically preferable to apply conditioning prior to more sophisticated processing. Fig. 1 shows two examples. If conditioning is not applied, the set of base data samples is identical to the original set of raw data samples.
When methods such as compressed sensing and low-rank matrix completion are used in applications that involve data samples from multiple sensors or channels working in parallel, conditioning applied to raw data samples can help improve performance. In an exemplary embodiment, the covariance matrix of the measurement noise associated with the multiple sensors is acquired, and a linear transform is determined and applied to the multi-sensor raw data samples. For example, eigenvalue decomposition or singular value decomposition of the covariance matrix gives Λ^(-1/2)V^H as the linear transform operator. The result is a set of base data samples associated with multiple new sensors that are numerically synthesized by the transform, where the noise covariance matrix of the new sensors is an identity matrix. Rendering noise uncorrelated and similarly distributed helps advanced reconstruction methods manage noise interference effects. The new parallel MRI methods described in later sections function with or without conditioning, unless specifically noted.
Linear Response Test (L-test)
As many imaging examples have suggested, the ingredients of compressed sensing can accommodate linear response and contrast fidelity for a range of setups. Some other methods that involve nonlinear operators, including low-rank matrix completion and MR fingerprinting, also
support linear response and contrast fidelity to a certain extent. A linear response test (L-test) helps detect if a specific setup is outside an acceptable range or the resulting image is unreliable.
An exemplary embodiment is hereby described, using a compressed sensing example. An indication of an imaging scheme’s inability to linearly track a local signal change shall call into question the accuracy of image contrast/feature at that location. To capture such indications an L-test checks the difference between an image in the original MRI instance (base result) and an image reconstructed from perturbed k-space data that correspond to adding a predetermined image feature, in accordance with the framework illustrated by Fig.1A. For example the test checks the difference between a pair of results from, respectively:
where the perturbation represents a "bump" added to the true image around voxel i. An example choice for the bump is a 5x5 Hamming window profile centered at voxel i. Through the under-sampled Fourier transform, the perturbation is spread broadly across k-space. At voxel i, deviation of the value of the difference from that of the "bump" indicates nonlinearity in local signal recovery. Sweeping a region and recording the deviation voxel by voxel creates a deviation-from-linearity map that flags contrast fidelity issues in the region. Ideally the reconstruction result responds linearly to the measured data y, which preserves contrast and yields zero deviation. The sampling pattern, sparse model and numerical solver involved in a specific imaging instance may, however, situate the signal recovery scheme in a nonlinear regime, causing deviation detectable by an L-test.
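A sketch of a voxel-wise L-test under the framework of Fig.1A; `recon` (the compressed sensing solver), `undersampled_ft` (the scheme's under-sampled Fourier transform) and the 5x5 Hamming bump are illustrative stand-ins rather than the exact formulation of Eqn.1:

```python
import numpy as np

def l_test_map(y, base_image, recon, undersampled_ft, test_voxels, bump_width=5):
    """Deviation-from-linearity map: add a small known bump in image space,
    perturb the measured k-space data accordingly, reconstruct, and check how
    far the reconstructed difference departs from the bump at each voxel."""
    profile = np.outer(np.hamming(bump_width), np.hamming(bump_width))
    half = bump_width // 2
    dev_map = np.zeros(base_image.shape)
    for (r, c) in test_voxels:                        # voxels away from image edges
        bump = np.zeros(base_image.shape, dtype=complex)
        bump[r - half:r + half + 1, c - half:c + half + 1] = profile
        perturbed = recon(y + undersampled_ft(bump))  # perturbed reconstruction instance
        dev_map[r, c] = np.abs((perturbed - base_image)[r, c] - bump[r, c])
    return dev_map
```

Near-zero values indicate linear signal recovery at the tested voxels; elevated values flag contrast fidelity issues, as in Cases 2-4 of Fig.3.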
Fig.3 illustrates, with an MRI example, improved compressed sensing with the introduction of L-tests. In acceleration of 2D phase encoding in MRI of a brain, the k-space's 256x256 grid was sampled with patterns that correspond to various acceleration factors, fully-sampled center k-space radii and probability density functions: Case 1, 2-fold acceleration and r=0.32; Case 2, 4-fold acceleration and r=0.32; Case 3, 2-fold acceleration and r=0; and Case 4, 4-fold acceleration and r=0.02. Fig.3 col.1 shows the compressed sensing reconstruction results. Fig.3 col.2 displays, over the tested region (marked), the deviation-from-linearity maps from performing L-tests voxel-by-voxel over the region (Eqn.1). Notice the near zero deviation in Case 1, modestly elevated deviation in Case 2 (higher acceleration) and Case 3 (smaller fully-sampled center k-space), and significant deviation in Case 4 (both higher acceleration and smaller fully-sampled center k-space).

Sampling Test (S-test)
The intrigues of compressed sensing and low-rank matrix completion support unique mechanisms for further self-validations. Aside from the intuitive example given earlier, the notion that more is not necessarily better (think of the challenge a regular under-sampling poses to aliasing effect management) also inspired the idea of excluding one or a few data samples from a specific reconstruction instance. An S-test implements the idea, and can be carried out using the framework illustrated by Fig.1A to detect if and where flaws exist in an otherwise clean-looking original image (base result).
The S-test may involve local perturbations to the sampled space. In particular, an instance of perturbation to the set of base data samples is realized by excluding L selected samples, where L ≥1 and the selection strategy can be one of random, deterministic, or hybrid (i.e., deterministic for certain sampling locations and random for the rest). Reconstruction based on the perturbed set leads to a result carrying additional, useful information. Repeating the perturbation-reconstruction Q times (Fig.2) leads to an ensemble of results. When Q is sufficiently large the bits of additional information carried by the individual results are pooled together to derive metrics that indicate the quality of, or guide the improvement over, the original image.
In an exemplary embodiment of compressed sensing with self-validation, an S-test checks variations in reconstruction results amongst an ensemble of compressed sensing reconstructions that each uses a leaving-L-out version of the set of base data samples. For example the S-test checks standard deviation of the solutions to the following ensemble of magnetic resonance image reconstruction problems:
where the jth problem instance corresponds to leaving one k-space data sample out (i.e., L=1). Conceptually, if M randomly sampled k-space data samples are more than adequate for a full reconstruction, randomly leaving one sample out without altering the original sampling probability density function should hardly impact the reconstruction. Along this line of reasoning, a significant variation detected by an S-test is an indication that the original imaging instance's result is not reliable. From a more general perspective, excessive sensitivity of a reconstruction's
result to perturbation is never a good sign, and a metric that tracks sensitivity and flags excessiveness is most desirable.
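A sketch of the leaving-one-out S-test ensemble described above, again with a hypothetical `recon(samples, locations)` solver; samples inside a protected region (for example the fully-sampled center k-space) are never dropped:

```python
import numpy as np

def s_test_std_map(samples, locations, recon, n_instances=500, protected=None, rng=None):
    """Standard deviation map over an ensemble of reconstructions that each
    leave one randomly chosen k-space sample out (L = 1)."""
    rng = np.random.default_rng(rng)
    protected = np.asarray(protected if protected is not None else [], dtype=int)
    candidates = np.setdiff1d(np.arange(len(samples)), protected)
    ensemble = []
    for _ in range(n_instances):
        drop = rng.choice(candidates)                  # jth leaving-one-out instance
        keep = np.delete(np.arange(len(samples)), drop)
        ensemble.append(np.abs(recon(samples[keep], locations[keep])))
    return np.stack(ensemble).std(axis=0)              # variation map: a fidelity indicator
```

A comparatively high value at a voxel indicates excessive sensitivity to the perturbation and, as Fig.3 suggests, tends to track the actual error there.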
An exemplary embodiment of low-rank matrix completion with self-validation is presented in a later section following discussions of parallel MRI signal structure.
Fig.3 further illustrates improved compressed sensing with the introduction of S-tests. For each of the cases in the same MRI example, the S-test involved a 500-instance ensemble, where each instance used all k-space samples of the case except one sample that was randomly selected outside the fully-sampled center k-space. Fig.3 col.3 summarizes the S-test results in the form of standard deviation maps. Fig.3 col.4 shows the actual error (the difference between true and reconstructed images). The variations uncovered by the tests (col.3) were rather revealing of the actual error (col.4) in terms of severity level and spatial distribution. Notice the power of the quantitative tests in assessing and guiding MRI: in actual practice, where few clues other than the measured data are available, a comparatively high variation value would indicate a quality issue and further guide adjustment of reconstruction/acquisition. In Cases 1-4, the root mean square of the actual error was 2%, 3%, 4% and 7% of peak image intensity respectively, and the root mean square of the standard deviation values was 0.01%, 0.02%, 0.04% and 0.06% of peak image intensity respectively. All plots of Fig.3 used a common gray scale. For each of the cases the lead principal component obtained from SVD of the ensemble of instances (not shown) also appeared to be indicative of the case's actual error pattern.

Further Considerations on L- and S-Tests
An alternative framework, as illustrated by Fig.1B, can be used for applying L- and S-tests. The perturbation is applied to the data collection protocol. This may involve acquisition of more data but offers vast flexibility in imposing various test conditions, which allows probing, validating or evaluating more aspects of the encoding and decoding. For example, result variation arising from perturbation to the ordering/timing of data acquisition can be a useful indicator when validating, evaluating or improving the practice of MR fingerprinting.
For each instance of an S-test, Fig.2 also covers arrangements that leave more than one sample out per parallel receive channel, or that let different channels skip the same or different k-space locations.
Note that S-tests can be applied at multiple levels, in a leaving-L-out hierarchy, as illustrated by Fig.4. Tracking the variation values throughout the hierarchy and analyzing exhibited trends can provide additional assessments. For instance, for any given voxel the variation value as a function of the number of samples left out, corresponding to a certain sequence of branches tracing
down the hierarchy, may reveal that the reconstruction evolves from working well to breaking down eventually, or behaves unreliably even with the use of all samples, or assumes a deterioration pattern with a critical point. The test summary can be in the form of an image, with each voxel value equaling a metric calculated from the voxel’s corresponding variation value curve.
Use of real or virtual phantoms, in which the voxel values corresponding to the phantoms are known or independently measurable, may help derive an approximate scaling factor that relates the S-test predicted error level to the actual error level. Such a scaling factor, for example, is calculated by dividing the deviation of a reconstructed voxel value from the known or independently measured value by the corresponding standard deviation reported by the S-test.
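As a small illustration of the scaling idea, assuming phantom voxels with known true values and an S-test standard deviation map already computed (all variable names are hypothetical):

```python
import numpy as np

def error_scale_from_phantom(recon_vals, true_vals, s_test_std_vals):
    """Average ratio of actual reconstruction error to S-test standard deviation
    over phantom voxels; multiplying the S-test map by this factor gives an
    approximate actual-error map."""
    recon_vals, true_vals = np.asarray(recon_vals), np.asarray(true_vals)
    ratios = np.abs(recon_vals - true_vals) / np.asarray(s_test_std_vals)
    return ratios.mean()
```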
Given the task of imaging a specific, unknown object, no sampling pattern instance is likely to guarantee maximum effectiveness in recovering signals from interference. Rather, an ensemble of reconstructions, each using a random leaving-L-out version of an original set of acquired k-space data, collectively can have a good chance of detecting features that happen to be obscured by a reconstruction using the original set. Accordingly devised schemes can be applied for retrospective or prospective fixes, accomplishing un-distortion or recovery of such features.
Fig.5 illustrates, with an MR angiogram simulation example, improved compressed sensing with the introduction of self-validation tests. From left to right Fig.5 shows the compressed sensing results (left column), the standard deviation maps from S-tests (mid column), and the actual error (right column). The top and bottom rows correspond, respectively, to cases employing 12-fold and 20-fold accelerated variable density k-space sampling. Notice again the S-tests' power in tracking actual errors. The loss of two low-contrast features in the 20-fold case, for instance, was detected by the standard deviation map.

Improving Image Reconstruction Using Known Features
Use of real or virtual, known features or structures can benefit a complex or nonlinear reconstruction. In an exemplary embodiment, one adds real or virtual, known features in the image space. Adding virtual features is accomplished by accordingly updating all k-space data samples. One then uses accurate recovery of the known features as a constraint or target during the reconstruction. The constraint or target tends to regularize the reconstruction, and the feature addition allows some adjustment of the operating point at which the complex or nonlinear reconstruction is situated.
[Exemplary equation 3] In Eqn.3, g represents a virtual, known feature numerically added. Update of all k-space data samples is through the under-sampled Fourier transform. For simplicity the added feature is positioned in the unoccupied space as demarcated by a mask. The second term in Eqn.3 penalizes deviation of the solution from the known feature at applicable voxel locations. A natural choice for g is a flat zero-amplitude profile. Other choices may include a profile with distributed local "bumps", a profile corresponding to a significantly sized resolution phantom, a profile with voxel centers assuming i.i.d. random values, and profiles adaptively or iteratively designed with guidance from preliminary reconstruction or self-validation outcome(s).
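A sketch of how the known-feature constraint might enter a compressed sensing objective; the penalty weights, the `sparsify` transform and `undersampled_ft` are illustrative placeholders, and the expression is a plausible reading of Eqn.3 rather than its exact form:

```python
import numpy as np

def objective(x, y, g, mask, undersampled_ft, sparsify, lam=1e-3, mu=1e-1):
    """Data fidelity + sparsity + deviation-from-known-feature penalty.
    mask is 1 over the unoccupied region where the known feature g is placed."""
    data_term    = np.linalg.norm(undersampled_ft(x) - y) ** 2
    sparse_term  = lam * np.abs(sparsify(x)).sum()
    feature_term = mu * np.linalg.norm(mask * (x - g)) ** 2   # penalize deviation from g
    return data_term + sparse_term + feature_term
```

Any generic solver can minimize such an objective; emphasizing the feature term regularizes the reconstruction and shifts the operating point, as described above.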
Fig.6 illustrates improved compressed sensing with the introduction of known features, with an MR angiogram simulation similar to that shown in Fig.5 but with highly accelerated data acquisition. Fig.6 top-right shows the Eqn.3-based reconstruction result. By emphasizing accurate recovery of the known features (in this case g = flat zero-amplitude profile in the unoccupied space), the encoding and decoding scheme worked well at a remarkable acceleration of 200-fold.

Parallel Acquisition Signal Structure
Owing to its multi-sensor setup, parallel MRI engenders unique characteristics in its signals, providing significant opportunities for devising and optimizing mathematical models and for improving imaging speed, SNR and quality.
Parallel MRI maps transverse magnetization and reconstructs MR images by processing radio-frequency MR signals that are acquired in parallel with multiple receive channels. For any one of the channels, its spatially varying detection sensitivity causes the channel to sense an intermediate spectrum that results from a convolution of the transverse magnetization's spectrum with a kernel. Across the parallel receive channels, the sensed spectra differ from one another only by convolution effects due to the channels' sensing profiles. A structure is thereby embedded in the parallel acquisition MR signals and in the spectra sensed by the parallel receive channels.
The receive channels' sensing profiles, arising from radio-frequency B1 fields, usually assume smooth weighting profiles in the image space and narrow-width convolution kernels in k-space. The embedded structure to a certain extent manifests as an intrinsic redundancy, allowing coarse (i.e., lower than full Nyquist rate) sampling of k-space and thereby acceleration of magnetic resonance imaging.
Central to a conventional parallel MRI method is a full calibration that entails a) sampling of a center k-space region at the full Nyquist rate, and b) a reconstruction that subsequently uses the knowledge learned over the region, in the form of image-space sensing profiles or k-space interpolation kernels, to compensate for the coarse sampling and generate MR images. There are limitations to this calibration-reconstruction paradigm in terms of accuracy, speed, flexibility and SNR.
Based on a concept of parallel acquisition signal structure, the new parallel MRI methods hereby described exploit the intrinsic signal structures in the acquired channel spectra and generate the full spectra and MR images with a general scheme that overcomes conventional methods' limitations.
Fig.7 illustrates the concept of parallel acquisition signal structure with an MRI example. Fig.7A: A k-space sampling pattern defines, for a multi-channel parallel acquisition, sampled locations 701 and skipped locations 702. Compared to sampling at the full Nyquist rate, skipping some of the grid point locations allows the acquisition to go faster. Samples at skipped locations are not directly known but are potentially reconstructable. A parallel acquisition signal structure can be identified and exploited through a conceptual stencil 703 dissecting the k-space.
Fig.7B and C: For any one of the parallel signal acquisition channels, its spatially varying sensitivity causes it to sense an intermediate spectrum 704 that results from a convolution of a shift-invariant kernel 706 with the transverse magnetization’s spectrum. The stencil 703 in the present example is a block arrow that covers 18 grid points for each channel at each placement location. At one placement location, the result 705 of the convolutions in the stencil-demarcated neighborhood is known. This makes an instance that contributes to signal structure identification, despite a lack of knowledge of the kernels (one per channel) and the relevant portion 707 of the transverse magnetization’s spectrum.
Fig.7D: The convolution principle gives rise to a mathematical expression describing the relationship between k-space grid samples. In particular, for any given stencil a shift-invariant matrix K exists that relates samples of the channel spectra to samples of the magnetization's spectrum through the equation di = K xi, where i is a location index for the stencil placement over the k-space grid, di contains known or unknown samples collected by the stencil, and xi contains samples of the magnetization's spectrum in a support neighborhood defined by the stencil and the convolution kernels. The placement location illustrated in Fig.7B corresponds to a known di. A placement location corresponds to a partially known di if the stencil-demarcated neighborhood is under-sampled. With a sufficient number of known di's, one can identify the column space of K, and reconstruct the channel spectra and MR images.
As Fig.7 illustrates, a parallel acquisition signal structure may be revealed and exploited through a conceptual stencil dissecting the k-space. For any one of the parallel signal acquisition channels, its spatially varying sensitivity causes it to sense an intermediate spectrum that results from a convolution of a kernel with the transverse magnetization's spectrum. In accordance with this convolution principle, for any given stencil a shift-invariant matrix K exists that relates samples of the channel spectra to samples of the magnetization's spectrum:
[Exemplary equation 4] where i is a location index for the stencil placement over the k-space grid, di contains samples of the channel spectra collected by the stencil, and xi contains samples of the magnetization's spectrum in a support neighborhood defined by the stencil and the convolution kernels.
Eqn.4 arises from the convolution principle and involves no other assumptions. The linear equation form is generally valid in describing k-space grid samples. The stencil as illustrated in this example case is a block arrow. However numerous other shapes, including ones extending in three dimensions, can be appropriate choices for a stencil.
The number of rows in K is equal to ns x N, where ns is the number of samples the stencil collects from a channel's spectrum at one placement location (ns = 18 in Fig.7), and N is the number of parallel receive channels. The number of columns in K, denoted as nK, is equal to the number of samples in the support neighborhood where the magnetization's spectrum contributes to the ns x N samples. Note that rK, the dimension of the column space of K or the number of independent columns in K, is less than or equal to nK. In an example case where the kernels are as broad as w points across and the dissecting stencil is an l-point wide square stencil, nK ≤ (l + w - 1)^2. The physics of sensitivity weighting gives rise to a projection-invariance property. Expressing di as a weighted sum of the columns of K, Eqn.4 states that di belongs to the column space of K, i.e., the vector space spanned by K's column vectors. Thus Eqn.4 equivalently states that the projection of di onto the column space of K results in di. This projection invariance may be expressed as follows:
[Exemplary equation 5] where matrix P denotes the projection. One way to construct P is through the product P = UU^H, where the columns of matrix U are vectors of an orthonormal basis for the column space of K and H denotes the conjugate (Hermitian) transpose. For parallel MRI it is typically neither possible nor necessary to unambiguously resolve K. Yet identifying the column space and confining the di's to the space can help resolve unknowns and contain noise, exerting anchoring power in the reconstruction.
Parallel MRI: COMPASS
COMpletion with Parallel Acquisition Signal Structure, or COMPASS, represents one class of methods for reconstructing the channel spectra from an under-sampled version as acquired by parallel MRI. With COMPASS, a signal structure space, i.e., the vector space spanned by the columns of K, is explicitly identified from available samples of the spectra and is then applied to reconstruct the full spectra and MR images. Several exemplary embodiments and their variations are presented in this section.
Identification principle: Wherever it is fully supported by available samples of the spectra, the stencil assembles the samples into a vector belonging to the signal structure space, adding to the knowledge of the latter. This can be appreciated by organizing multiple such instances as follows:
[Exemplary equation 6] where, in the identification process, DID (whose columns are made of known di's) is known. It is clear that, despite the lack of knowledge of the x's, when there are enough instances to allow assembly of rK independent column vectors for DID, the signal structure space can be identified and a U matrix determined. For the sampling pattern and stencil definition illustrated in Fig.7, the six clusters shown support 30+ instances of stencil placement, and hence a DID matrix with 30+ columns, for identifying the signal structure space.
In practice, U can be determined with, for example, QR factorization or SVD of the matrix DID. Even though diversity of neighborhoods in the magnetization spectrum implies that rK independent column vectors may well result from as few as rK instances, in the presence of measurement noise it is useful to employ more instances, and hence more known di's, to strengthen the robustness of the identification process. For this, SVD of DID with thresholding can be handy, where one ranks the singular values and attributes the basis vectors associated with the more significant singular values as the ones spanning the signal structure space.
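A sketch of the identification step, assuming a simple rectangular stencil, a channels-first k-space array and a relative singular value threshold (all illustrative assumptions; the stencil in Fig.7 is a block arrow):

```python
import numpy as np

def assemble_did(kspace, sampled_mask, stencil_hw=(3, 3)):
    """Collect one column per fully supported stencil placement, stacking the
    ns samples of every channel into a single d_i vector (Eqn.6)."""
    n_ch, ny, nx = kspace.shape
    sy, sx = stencil_hw
    cols = []
    for iy in range(ny - sy + 1):
        for ix in range(nx - sx + 1):
            if sampled_mask[iy:iy + sy, ix:ix + sx].all():
                cols.append(kspace[:, iy:iy + sy, ix:ix + sx].reshape(-1))
    return np.array(cols).T                          # (ns x N) rows, one column per placement

def identify_structure_space(D_id, rel_threshold=0.05):
    """Orthonormal basis U for the signal structure space, keeping singular
    vectors whose singular values exceed a fraction of the largest one."""
    U_full, s, _ = np.linalg.svd(D_id, full_matrices=False)
    return U_full[:, s > rel_threshold * s[0]]
```

With U in hand, the projector P = U U^H of Eqn.5 follows directly, and additional known d_i's simply add columns to D_ID and robustness to the estimate.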
Reconstruction principle: Eqn.4, and equivalently Eqn.5, is valid everywhere. Consider a k-space sweep. At each step of the sweep the stencil assembles samples, acquired or unknown, and constrains the resulting vector to be in the identified signal structure space. A full sweep gives a full set of constraints:
[Exemplary equation 7] Fig.8 illustrates example reconstruction formulations. (A) Reconstruction as a problem of finding the least squares solution to a set of equations. This set of equations is formed by pooling together subsets of equations, including one that expresses constraints due to the signal structure,
one that expresses constraints due to the measured data samples, and, if applicable, additional ones that capture physics, statistics or other knowledge. (B) Reconstruction as an optimization problem. The objective function includes terms that capture the outcome's deviation from the signal structure model, from the measurements, and from additional models. The optimization flexibly accommodates regularization and other spatial or temporal models that capture physics, statistics or other knowledge. Mathematical conversions can facilitate the computation involved in imposing the full set of constraints due to the identified signal structure. With the identified U matrix, the projection operator UU^H is known. Eqn.7 ties any sample, in a known, shift-invariant fashion, to weighted sums formed in the sample's neighborhoods in the spectra. This therefore gives rise to a set of N constraints in convolution form:
[Exemplary equation 8] where z(n) represents the spectrum corresponding to the nth channel, ∗ denotes convolution, and the w's are the convolution functions derived from U. Fourier transform further converts the convolution operations into spatial weighting operations in image space, giving rise to a set of N rapidly quantifiable constraints on the individual channel images:
[Exemplary equation 9] In Eqn.9, y(n), the image corresponding to the nth channel, represents the inverse Fourier transform of z(n), and W(m,n), the (m,n)th spatial weighting function, represents the inverse Fourier transform of w(m,n).
Transformed from Eqn.7, the convolution form (Eqn.8) or the weighted superposition form (Eqn.9) expresses the signal structure constraints due to imaging physics. For reconstructing the channel spectra and images, imposing the signal structure constraints by applying Eqn.4, 5 or 7 everywhere is equivalent to imposing the set of N constraints with Eqn.8 or 9. Use of Eqn.9 offers a notable advantage in computational efficiency owing to the N^2 low-cost weighting operations. This advantage is amplified when the constraints need to be repeatedly evaluated, as is often the case when the reconstruction employs an iterative numerical solver.
Let y = W y, (I - Ω) z = b, and z = F y or y = F-1 z be the matrix notations representing, respectively, Eqn.9, k-grid under-sampling, and the Fourier transform relationships between the z(n)'s and y(n)'s. In these expressions, y pools the y(n)'s, z pools the channels' spectra, matrices F and F-1 represent (N-channel) Fourier and inverse Fourier transforms respectively, b pools the channels' spectra as acquired by parallel receive (zero-filled, i.e., with non-zero entries being the acquired measurements and zero entries being the skipped measurements), I is the identity matrix, and the diagonal matrix (I - Ω) denotes the k-grid sampling using 0's and 1's.
The set of constraints due to the governing signal structure, along with the set of constraints due to the measured, under-sampled spectra, form a set of linear equations (Fig.8A). Said set of linear equations can be cast with the parallel receive channels’ spectra as unknowns:
[Exemplary equation 11] A least-squares solution to either set leads to reconstruction of the channel spectra and of individual and combined images. There are a variety of numerical algorithms that support solving such least squares problems. Some of them (e.g., lsqr) are particularly efficient, as they accept W, Ω, F and F-1 implemented as operators and scale gracefully with problem size.
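A toy, dense illustration of the stacked least-squares system of Fig.8A with the channel spectra as unknowns; here S stands for the k-space form of the structure constraint (F W F-1) built as an explicit matrix, whereas a practical implementation would pass W, Ω, F and F-1 as operators to an iterative solver such as lsqr:

```python
import numpy as np

def compass_ls_dense(S, Omega, b, lam=1.0):
    """Stack the structure constraint (I - S) z = 0 with the data constraint
    (I - Omega) z = b and solve for the pooled channel spectra z by least squares."""
    n = S.shape[0]
    I = np.eye(n)
    A = np.vstack([I - S, lam * (I - Omega)])          # two subsets of equations
    rhs = np.concatenate([np.zeros(n, dtype=complex), lam * b])
    z, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return z
```

The scalar lam plays the role of the weight discussed later in this section: it shifts emphasis between the structure model and the measured data.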
Parallel computing technology readily supports speedup of N-channel Fourier and inverse Fourier transforms, which also helps accomplish a high reconstruction speed.
Fig.9 shows a brain MRI example where a 32-channel coil was used for parallel acquisition of in vivo data samples. The result from a full-sampling case was available (g) to serve as a reference. (a-c) summarizes a first acceleration case: (a) 8.7x under-sampling pattern and 8-point stencil, (b) COMPASS result, and (c) |result-reference| x10. (d-f) summarizes a second acceleration case: (d) 15.1x under-sampling pattern and 8-point stencil, (e) COMPASS result, and (f) |result-reference| x10. Notice the satisfactory reconstruction accuracy despite a lack of center k-space full sampling. SNR in the 15.1x case is lower than that in the 8.7x case, as expected.
Alternatively expressing reconstruction as an explicit optimization problem (Fig.8B) can be useful. Eqns.10 and 11, in explicit optimization forms, can additionally incorporate
regularization / other models, including spatial or temporal models that capture physics, statistics or other knowledge. For instance:
[Exemplary equation 12] In incorporating a sparse model for regularization, Eqn.12 may include, for example, a penalty term based on a sparsifying transform applied to the image.
Divide-and-conquer strategies are applicable. For example, one can use Eqn.10 to reconstruct the spectra segment by segment. With such a scheme one divides the spectra into contiguous or overlapped segments, solves smaller reconstruction problems rather than one full-size problem, and stitches the results together to reconstruct the full spectra. The smaller reconstruction problems can be solved in parallel using parallel computing technology, each involving a fraction of all the measured data samples and to-be-determined unknowns.
In a case involving sampling off a Cartesian grid, regional re-gridding applied to sample cluster(s) of sufficient density can generate di’s and enable the determination of U and W. In Eqns.10 and 11, (I - Ω) z = b and (I - Ω) F y = b are then to be replaced by EF-1 z = ba and E y = ba, respectively, where E represents the encoding matrix corresponding to the non-Cartesian sampling pattern and ba collects sample data acquired by parallel receive.
COMPASS enables recognition and best balancing of model and measurement uncertainties. The impact of measurement noise on reconstruction is mediated by the two distinct sets of constraints (or more if additional models are introduced). The scalar weight in Eqns.10-11 can be set to emphasize the more reliable of the two and thereby benefit image SNR. In a case where the first set is very reliable, e.g., due to a robust scheme for signal structure identification or an effective incorporation of other sources of knowledge, choosing a large weight can be beneficial. In such a case one can analytically track noise propagation, predict image noise standard deviation and introduce further optimizations.
Further adapting the first or second set of constraints can be useful. For example, one may design and apply multiple stencils to optimize structure space identification, to optimize the structure space’s anchoring effect on the reconstruction, or to improve the efficiency of k-space sampling. In such a multi-stencil case, the adaptation to Eqns.10 and 11 for example may give
[Exemplary equation 13] and
[Exemplary equation 14] Given the afore-described reconstruction principle, it is clear that as long as the sensitivity profiles stay the same, the samples used in the structure space identification can be acquired in one or more sessions that are separate from the session(s) that produce b. For example, the session(s) involved in the identification can take place at a different time and/or assume a different MR image contrast. This, in conjunction with the numerous possibilities for designing/combining stencils and for assembling/organizing DID's (Eqn.6), allows for MRI enhancements in terms of flexibility and efficiency. As a further note, the reconstruction principle supports a more general k-space sampling scheme in which the acquisition channels each have their own sampling pattern; such a scheme, for example, accommodates more flexible deployment of the receive channel resource when the number of receive channels is less than the number of coil ports.

Diagnostic applications commonly demand a reconstruction result in the form of a combined image rather than a set of individual channel images. A concept of synthesized channel(s) can be introduced into COMPASS to facilitate the creation, regularization and analysis of useful combined image(s). Under a setting where a desired combination captures a spatially weighted version of the transverse magnetization or a weighted superposition of individual channel images, a synthesized channel can be designated, and Eqn.4 remains a valid form for describing k-space grid samples from both actual and synthesized channels. The identification and reconstruction principles apply, and COMPASS can directly create the combination instead of deferring it to a separate combination step.
Consider the √(sum of squares) of individual channel images (√sos), a common combination, as an example. Typically √sos is calculated in a separate step after the individual channel images are reconstructed. It can be shown, however, that √sos is a spatially weighted version of the transverse magnetization as well as a weighted superposition of individual channel images. Illustrated below are two integrated, linear approaches to √sos creation.
A first approach uses a synthesized channel in reconstruction. The channel is appended to the N actual receive channels; hence the pooled spectra represent N+1 channels, the number of rows in K in Eqn.4 is ns x (N+1), and z and y (Eqns.10 and 11) both have N+1 channels' worth of unknowns. COMPASS proceeds as if there are N+1 parallel receive channels, except for two modifications. In identification: the synthesized channel's spectrum (needed as part of the input) is emulated, e.g., by sampling FT{√(sum of squares of FT-1{individual spectra acquired by the actual channels})} with the same k-space sampling pattern that is in effect for the actual channels. In reconstruction: to accommodate the synthesized channel, I, Ω and b are augmented as if there is an additional receive channel with a skip-all sampling pattern (i.e., Ω and b are zero-padded). The matrix W resulting from the identification then has the effect of substantiating the spectra/images of all N+1 channels through Eqn.10 or 11.
A second approach adapts Eqn.10 or 11 with an additional set of equations that explicitly describes the weighted superposition relationship:
[Exemplary equation 16] The additions reflect yc = R y, a matrix notation expressing that the combined image is a weighted superposition of the individual channel images. Alternatively, there are ways to estimate a set of sensitivity profiles consistent with the acquired samples or the signal structure, which supports use of y = S yc instead of yc = R y, where y = S yc is the matrix notation in which S applies the estimated sensitivity profiles to the combined image. Considering the effects of noise, the illustrated integration avoids over-enforcement of yc = R y or y = S yc when either one is integrated into COMPASS. β, a positive scalar, allows optimization of the enforcement.
For solving each of Eqns.10-11 and 13-16 with least squares, a prediction of a solution's noise covariance or standard deviation can be performed by computing (A^H A)^-1, where A denotes the matrix on the left-hand side of the equation being solved. There exist numerical algorithms that support cost-effective computation of (A^H A)^-1.
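A dense toy version of that prediction, assuming unit-variance whitened noise on the right-hand side and a matrix A small enough to invert directly (large problems would use the cost-effective algorithms mentioned above):

```python
import numpy as np

def predict_noise_std(A, noise_var=1.0):
    """Per-unknown noise standard deviation of the least-squares solution,
    taken from the diagonal of noise_var * (A^H A)^-1."""
    cov = noise_var * np.linalg.inv(A.conj().T @ A)
    return np.sqrt(np.real(np.diag(cov)))
```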
From an SNR point of view, maximizing SNR / robustness against measurement noise may entail further optimization of the measurement scheme (e.g., the under-sampling pattern), joint design of stencil(s) and the under-sampling pattern, the identification scheme, or an optional regularization scheme (e.g., L2 or L1 norm-based). Guidance from noise behavior prediction is most valuable. It enables assessment and optimization of SNR in a proactive fashion, once there are enough data samples or knowledge for signal structure identification. For example, the integrated, linear computation of the synthesized channel's spectrum (Eqn.15) facilitates prediction of noise covariance, which, in turn, can guide adaptation of k-space sampling for SNR improvement.
Fig.10 illustrates noise behavior prediction and the scalar weight's impact on COMPASS' SNR behavior with a 1D simulation case. The case was set up with magnetization and sensitivity profiles over a line segment extracted from a 2D 8-channel parallel MRI case, and with 4x even under-sampling of k-space. Under a setting where the noise effect on structure identification was negligible, the analytical predictions suggested that image noise standard deviation would decrease monotonically as the weight increases from 10^0 to 10^8 (Fig.10). Image noise standard deviation independently measured with a Monte Carlo study confirmed this (Fig.10b, overlay of dots), and further indicated that, as the noise effect on the structure identification increased, image noise profiles deviated more and more from the analytical predictions, starting at several discrete locations.
Using first principles and refraining from unnecessary or untimely introduction of assumptions, COMPASS offers a framework for accuracy, speed, flexibility and SNR improvements. There are various other exemplary embodiments, including, for example, ones that alternatively formulate the reconstruction as:
a) solving F W F-1 z = z subject to (I - Ω) z = b, or
b) iterating z(n+1) = (F W F-1 ) ( Ω z (n) + b), with z (0) = b, or
c) solving (I-F W F-1 Ω) z = F W F-1b, or
d) a calculation based on P = Ω F W F-1.
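A sketch of iteration (b) above, with `apply_W` standing in for the image-space weighting of Eqn.9 and `Omega_mask` equal to 1 at skipped k-space locations and 0 at acquired ones (both hypothetical names; b is the zero-filled acquired data):

```python
import numpy as np

def compass_iterate(apply_W, Omega_mask, b, n_iter=50):
    """z_{k+1} = F W F^-1 (Omega z_k + b), starting from z_0 = b.
    Arrays are shaped (channels, ny, nx); F is the 2D Fourier transform."""
    z = b.copy()
    for _ in range(n_iter):
        y = np.fft.ifft2(Omega_mask * z + b, axes=(-2, -1))   # F^-1 applied per channel
        z = np.fft.fft2(apply_W(y), axes=(-2, -1))            # F W
    return z
```

Each pass keeps the acquired samples fixed while the structure constraint fills in the skipped ones.

Parallel MRI: Methods Leveraging Randomness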
As described above, COMPASS accommodates random or incoherent k-space sampling, and there are embodiments of COMPASS that incorporate compressed sensing to take further advantage of such sampling.
For reconstructing the channel spectra from a partially measured version, the methods described below employ low-rank matrix completion and emphasize exploitation of random or incoherent k-space sampling. Similar to COMPASS they eliminate the requirements of a conventional calibration and the determination of profiles and kernels. Unlike COMPASS, however, they only model the signal structure space as being of low rank and avoid further determination of a basis for the space. Reconstruction is accomplished by enforcing the low-rank model with a low-rank matrix completion algorithm, which necessarily relies on an acquisition scheme that samples the k-space grid with an incoherent under-sampling pattern.
The analysis in section Parallel Acquisition Signal Structure is fully applicable here. Eqn.4, and equivalently Eqn.5, is valid everywhere over the grid. Consider sweeping a stencil across a
segment that is of a sufficiently large size. At each step of the sweep the stencil assembles samples, acquired or unknown, and constrains the resulting vector to conform to the linear form di = K xi. Pooling the results together gives a data matrix D:
[Exemplary equation 17] Due to the acquisition scheme, D’s columns have known and unknown entries appearing in a pseudo-random fashion. If the acquisition were an ideal, full sampling of the k-space grid, D would have a rank no greater than rK. In other words, the assembled D is an under-sampled version of a low-rank matrix. Low-rank matrix completion is a standard mathematical tool that can be employed for resolving the unknown entries of D.
In an exemplary embodiment a parallel acquisition scheme is employed to acquire samples over a k-space grid with an incoherent under-sampling pattern. For reconstruction, the full k-space grid is partitioned into multiple segments that may exhibit overlaps or be of different sizes. For each segment, sweeping a stencil across the segment causes a data matrix to be assembled. The assembled matrix represents an incoherently under-sampled version of a low-rank matrix. Entries of the assembled matrix not filled due to coarse sampling are initially set to zero. These entries are then updated and the segment recovered using low-rank matrix completion, with self-validation applied when desired. As a special case, the reconstruction may treat the full k-space grid as one single segment.
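A sketch of one simple way to complete the stencil-assembled data matrix by singular value soft-thresholding; the threshold, iteration count and hard re-imposition of acquired entries are illustrative choices, not the specific algorithm of the embodiment:

```python
import numpy as np

def svt_complete(D, known_mask, threshold, n_iter=100):
    """Fill unknown entries of an incoherently under-sampled low-rank matrix D.
    known_mask is True where entries were acquired; those entries stay fixed."""
    X = np.where(known_mask, D, 0)
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - threshold, 0)               # soft-threshold singular values
        X = np.where(known_mask, D, (U * s) @ Vh)      # low-rank update, keep acquired data
    return X
```

Segment-dependent thresholds, as in the last case of Fig.11 below, would simply pass a different threshold per segment.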
There is the option to process some of the segments together, e.g., using a strategy of anchoring, i.e., concatenating into one matrix the data matrices from both the segment being reconstructed and a selected segment that is sampled with acquisition timing of interest or with high signal / SNR, and completing the composite matrix– see Eqn.18 for an illustration. There is
also the option to process all the segments entirely independent of one another– see Eqn.19 for an illustration.
[Exemplary equation 19] The data matrices corresponding to the segments have the common low-rank structure resulting from location-independent convolution kernels. Each segment is self-contained, and fully reconstructable by enforcing the low-rank structure. No calibration or determination of profiles or kernels is necessary. No explicit identification of structure is necessary either.
Stitching the processed segments together leads to the reconstruction of the full spectra and full-resolution images.
Fig.11 illustrates an example of applying low-rank matrix completion in parallel MRI. In acceleration of 2D phase encoding in 8-channel parallel receive MRI of a brain, a 200x200 k-space grid was sampled with incoherent under-sampling patterns that correspond to 2.5-fold acceleration over full sampling. As a comparison reference, Fig.11 col.1 displays the result from applying standard sum-of-squares reconstruction to fully sampled 8-channel data. For the accelerated imaging cases, the reconstruction partitioned the full grid into up to 25 regions, and employed low-rank matrix completion instead of calibration and profile or kernel determination. Results from the cases were compared to the reference to quantify errors. Fig.11 col.2 shows the result from the case where the entire grid was treated as a single segment. Fig.11 col.3 shows the result from the case where 25 segments that made up the whole grid were individually reconstructed. Each individual reconstruction was anchored (Eqn.18) by data incoherently under-sampled on a 48x48 grid at center k-space. Fig.11 col.4 shows the result from the case where 9 segments that made up the whole grid were individually and independently reconstructed (Eqn.19). Reconstruction of each of the 8 outer k-space segments was self-reliant, without any knowledge of what happened elsewhere. Fig.11 col.5 shows the result from the case that was the same as the previous, except for the use of segment-dependent thresholds when a singular value thresholding algorithm was applied to solve the low-rank matrix completion.
Reconstruction in the segment-by-segment fashion is amenable to parallel computing.
Additional ways to accelerate the reconstruction include incremental SVD update techniques,
where a small, yet-to-be-completed data matrix is appended to an already-completed data matrix, and the composite data matrix is completed through a low-cost SVD update.
Additional ways to accelerate reconstruction also include an iterative scheme involving projection of the incomplete columns of the yet-to-be-completed data matrix onto the column space of the already-completed data matrix, and update of the yet-to-be-filled entries consistent with the structure.

When random or incoherent sampling is employed, and especially when compressed sensing is employed, COMPASS for parallel MRI can benefit from self-validation, in ways similar to those illustrated in earlier sections with examples.
Low-rank matrix completion can benefit from self-validation as well. In an exemplary embodiment of low-rank matrix completion with self-validation, an S-test checks variations in reconstruction results amongst an ensemble of low-rank matrix completions that each uses a leaving-L-out version of the set of base data samples. This can be clearly appreciated with a parallel MRI example. In accelerating MRI with low-rank matrix completion, an incoherently under-sampled version of the full data matrix is acquired, which is followed by reconstruction of the full data matrix and the underlying image. To assess this base result, the S-test checks standard deviation of the solutions to the following ensemble of problems:
[Exemplary equation 20] where the jth problem instance corresponds to leaving out L k-space samples per receive channel.
Except for the leaving-L-out perturbation, each problem instance in the ensemble is identical to the original problem that yields the base result.

Referring to Fig.12, the major components of an example magnetic resonance imaging (MRI) system 10 incorporating the present invention are shown. The operation of the system is controlled from an operator console 12 which includes a keyboard or other input device 13, a control panel 14, and a display screen 16. The console 12 communicates through a link 18 with a separate computer system 20 that enables an operator to control the production and display of images on the display screen 16. The computer system 20 includes a number of modules which communicate with each other through a backplane 20a. These include an image processor module 22, a CPU module 24 and a memory module 26, known in the art as a frame buffer for storing image data arrays. The computer system 20 is linked to disk storage 28 and tape drive 30
for storage of image data and programs, and communicates with a separate system control 32 through a high speed serial link 34.
The system control 32 includes a set of modules connected together by a backplane 32a. These include a CPU module 36 and a pulse generator module 38 which connects to the operator console 12 through a serial link 40. It is through link 40 that the system control 32 receives commands from the operator to indicate the scan sequence that is to be performed. The pulse generator module 38 operates the system components to carry out the desired scan sequence and produces data which indicates, for RF transmit, the timing, strength and shape of the RF pulses produced, and, for RF receive, the timing and length of the data acquisition window. The pulse generator module 38 connects to a set of gradient amplifiers 42, to indicate the timing and shape of the gradient pulses that are produced during the scan. The pulse generator module 38 can also receive patient data from a physiological acquisition controller 44 that receives signals from a number of different sensors connected to the patient, such as ECG signals from electrodes attached to the patient. And finally, the pulse generator module 38 connects to a scan room interface circuit 46 which receives signals from various sensors associated with the condition of the patient and the magnet system. It is also through the scan room interface circuit 46 that a patient positioning system 48 receives commands to move the patient to the desired position for the scan.
The gradient waveforms produced by the pulse generator module 38 are applied to the gradient amplifier system 42 having Gx, Gy, and Gz amplifiers. Each gradient amplifier excites a corresponding physical gradient coil in a gradient coil assembly generally designated 50 to produce the magnetic field gradients used for spatially encoding acquired signals. The gradient coil assembly 50 and a polarizing magnet 54 form a magnet assembly 52. An RF coil assembly 56 is placed between the gradient coil assembly 50 and the imaged patient. A transceiver module 58 in the system control 32 produces pulses which are amplified by an RF amplifier 60 and coupled to the RF coil assembly 56 by a transmit/receive switch 62. The resulting signals emitted by the excited nuclei in the patient may be sensed by the same RF coil assembly 56 and coupled through the transmit/receive switch 62 to a preamplifier module 64. The amplified MR signals are demodulated, filtered, and digitized in the receiver section of the transceiver 58. The transmit/receive switch 62 is controlled by a signal from the pulse generator module 38 to electrically connect the RF amplifier 60 to the coil assembly 56 during the transmit mode and to connect the preamplifier module 64 to the coil assembly 56 during the receive mode. The transmit/receive switch 62 can also enable a separate RF coil (for example, a surface coil) to be
used in either the transmit or receive mode. The transceiver module 58, the separate RF coil and/or the coil assembly 56 are commonly configured to support parallel acquisition operation.
The MR signals picked up by the separate RF coil and/or the RF coil assembly 56 are digitized by the transceiver module 58 and transferred to a memory module 66 in the system control 32. A scan is complete when an array of raw k-space data has been acquired in the memory module 66. This raw k-space data is rearranged into separate k-space data arrays for each image to be reconstructed, and each of these is input to an array processor 68 which operates to Fourier transform the data to combine MR signal data into an array of image data. This image data is conveyed through the serial link 34 to the computer system 20 where it is stored in memory, such as disk storage 28. In response to commands received from the operator console 12, this image data may be archived in long term storage, such as on the tape drive 30, or it may be further processed by the image processor 22 and conveyed to the operator console 12 and presented on the display 16.

While the above descriptions of methods and systems contain many specificities, these should not be construed as limitations on the scope of any embodiment, but as exemplifications of the presently preferred embodiments thereof. Many other ramifications and variations are possible within the teachings of the various embodiments.
Claims
What is claimed is:
1. A computer implemented method of imaging, in an imaging system, comprising: a. acquiring a set of base data samples, b. operating on said set of base data samples with a predetermined reconstruction and obtaining a base result, c. performing perturbation at least one time to said set of base data samples and obtaining at least one set of perturbed data samples, d. operating individually on said at least one set of perturbed data samples with said predetermined reconstruction and obtaining at least one perturbed result, e. deriving an outcome based on said at least one perturbed result and said base result, whereby said outcome facilitates assessment and improvement of image quality.
2. The method of claim 1 wherein each time of said performing perturbation comprises excluding at least one data sample from said set of base data samples, said at least one data sample being selected in accordance with a predetermined strategy.
3. The method of claim 2 wherein said outcome comprises a measure of variation in said at least one perturbed result.
4. The method of claim 1 wherein each time of said performing perturbation comprises modifying said set of base data samples, said modifying effects adding a predetermined image feature.
5. The method of claim 4 wherein said outcome comprises a measure of differences between said base result and said at least one perturbed result.
6. The method of claim 1 wherein said predetermined reconstruction is selected from the group comprising compressed sensing-incorporated reconstruction, low-rank matrix completion- incorporated reconstruction, and incoherent sampling-incorporated reconstruction.
7. The method of claim 1 wherein said imaging system is a magnetic resonance imaging (MRI) system.
8. The method of claim 1 wherein said imaging system is a diagnostic imaging system.
9. A computer implemented method for providing an image, in a magnetic resonance imaging (MRI) system, comprising: a. acquiring data samples in accordance with a k-space sampling pattern, b. forming a data assembly by applying a stencil to said data samples, c. identifying a signal structure in said data assembly, said signal structure being selected from the group comprising a vector space, a basis, a matrix, an operator, a vector space’s rank, a matrix’s rank, a spatial distribution and a temporal variation, d. finding a result consistent with both said data samples and said signal structure, said result being selected from the group comprising a spectrum, a set of spectra, a matrix, an image and a set of images, whereby said method facilitates improvements in acquisition speed, image signal-to-noise ratio and image quality.
10. The computer implemented method of claim 9 wherein said finding a result comprises solving a set of equations, said set of equations being selected from the group comprising expressions of constraints due to said signal structure, expressions of constraints due to said data samples, and expressions of constraints derived from physics, statistics and other knowledge.
11. The computer implemented method of claim 10 wherein said finding a result comprises predicting noise covariance of said result.
12. The computer implemented method of claim 9 wherein said finding a result comprises solving an optimization problem, said optimization problem having terms selected from the group comprising deviation from said signal structure model, deviation from said data samples, and deviation from models that capture physics, statistics or other knowledge.
13. The computer implemented method of claim 9 wherein said acquiring data samples is conducted by simultaneously acquiring data samples with a set of parallel RF receive channels.
14. The computer implemented method of claim 9 wherein said finding a result comprises low-rank matrix completion, said low-rank matrix completion being performed for a spectra in a segment-by-segment fashion.
15. The computer implemented method of claim 9 wherein said finding a result employs predetermined calculations, said predetermined calculations being selected from the group comprising convolution computation, weighted superposition computation, fast Fourier transform (FFT), singular value decomposition (SVD) and convex optimization.
16. The computer implemented method of claim 9 wherein said finding a result comprises balancing uncertainties amongst models.
17. The computer implemented method of claim 9 wherein said stencil is designed in conjunction with said k-space sampling pattern.
18. The computer implemented method of claim 9 wherein said signal structure arises in part from modifying said data samples, said modifying effects adding a predetermined feature.
19. The computer implemented method of claim 9 wherein said k-space sampling pattern is selected from the group comprising a pseudo-random sampling pattern, a Cartesian sampling pattern, and a sampling pattern resulting from an arbitrary k-space traversing.
20. A magnetic resonance imaging (MRI) apparatus, comprising: a. a hardware system for performing MR signal excitation and detection, b. a computer system electrically connected to said hardware system, comprising: at least one display; at least one processor; and computer readable media, comprising: computer readable code for applying said MR signal excitation and detection, computer readable code for acquiring data samples in accordance with a k-space sampling pattern,
computer readable code for forming a data assembly by applying a stencil to said data samples, computer readable code for identifying a signal structure in said data assembly, computer readable code for finding a result consistent with both said data samples and said signal structure, and computer readable code for displaying said result on said at least one display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580077362.3A CN107615089B (en) | 2014-01-03 | 2015-04-30 | Modeling and verification method for compressed sensing and MRI |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US2014691923296P | 2014-01-03 | 2014-01-03 | |
US14/588,938 | 2015-01-03 | ||
US14/588,938 US10126398B2 (en) | 2014-01-03 | 2015-01-03 | Modeling and validation for compressed sensing and MRI |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016108947A1 true WO2016108947A1 (en) | 2016-07-07 |
Family
ID=56287637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2015/028445 WO2016108947A1 (en) | 2014-01-03 | 2015-04-30 | Modeling and validation methods for compressed sensing and mri |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016108947A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110084693A1 (en) * | 2005-04-25 | 2011-04-14 | University Of Utah Research Foundation | Systems and methods for image reconstruction of sensitivity encoded mri data |
US20120099774A1 (en) * | 2010-10-21 | 2012-04-26 | Mehmet Akcakaya | Method For Image Reconstruction Using Low-Dimensional-Structure Self-Learning and Thresholding |
US20140086469A1 (en) * | 2012-09-26 | 2014-03-27 | Siemens Aktiengesellschaft | Mri reconstruction with incoherent sampling and redundant haar wavelets |
US20140286560A1 (en) * | 2011-11-06 | 2014-09-25 | Mayo Foundation For Medical Education And Research | Method for calibration-free locally low-rank encouraging reconstruction of magnetic resonance images |
US20140376794A1 (en) * | 2012-01-06 | 2014-12-25 | Children's Hospital Medical Center | Correlation Imaging For Multi-Scan MRI With Multi-Channel Data Acquisition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15875832; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 201580077362.3; Country of ref document: CN |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15875832; Country of ref document: EP; Kind code of ref document: A1 |