CN104574456B - A magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding - Google Patents


Publication number
CN104574456B
CN104574456B (application CN201410707447.2A)
Authority
CN
China
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410707447.2A
Other languages
Chinese (zh)
Other versions
CN104574456A (en)
Inventor
Qiegen Liu (刘且根)
Hongyang Lu (卢红阳)
Minghui Zhang (张明辉)
Yuhao Wang (王玉皞)
Xiaohua Deng (邓晓华)
Current Assignee
Nanchang University
Original Assignee
Nanchang University
Priority date
Filing date
Publication date
Application filed by Nanchang University
Priority to CN201410707447.2A
Publication of CN104574456A
Application granted
Publication of CN104574456B

Classifications

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding, comprising the following steps: (a) formulating the graph-regularized sparse coding representation within a two-level Bregman iteration framework to obtain an image sparsity model; (b) introducing auxiliary variables and an alternating-solution technique, and updating the learned dictionary and the sparse coefficients in the inner iteration of the two-level Bregman iteration; (c) using the partially, highly undersampled k-space data as a constraint, updating the image in the outer iteration of the two-level Bregman iteration to obtain the imaging result. By introducing graph-regularized sparse coding into adaptive dictionary learning, the invention builds a neighborhood graph to encode local structural data and exploit its constraints on the data geometry, so that the image data admit a better sparse representation. The invention can further handle images with more complex local geometric features, effectively capture local image structure, and recover more image detail, so that the resulting image has better fidelity.

Description

A magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding
Technical field
The invention belongs to the field of medical imaging, and in particular to magnetic resonance imaging.
Background art
Magnetic resonance imaging (MRI) is an important means of medical diagnosis; in particular, it provides clinicians with important anatomical structure in the absence of ionizing radiation. Although MRI renders soft tissue well in high-resolution images, its imaging speed is limited physically and physiologically. Slow imaging speed is the major drawback of MRI systems: it greatly reduces the range of indications for MRI examinations, makes the modality unsuitable for examining moving organs or urgent patients, and the prolonged scan introduces physiological motion artifacts. Since the advent of MRI, researchers have therefore been devoted to improving imaging speed and image quality.
The slow imaging speed of MRI is related to scan time. Scan time is in turn proportional to the sampling rate; reducing scan time lowers the sampling rate accordingly, and the resolution of the reconstructed image decreases as well. To guarantee image quality, prior knowledge about the image must be added; such methods are known as regularization methods. Compressed sensing theory, developed in recent years and characterized by sparse representation, is an effective way to reduce the amount of scanning while still ensuring efficient reconstruction.
The sparse representation of images has a solid biological background, originating from the "efficient coding hypothesis". The sparser a signal is in some basis, the fewer samples are required; the choice of the sparsifying basis is therefore one of the important problems in compressed sensing theory. The benefit of sparse representation is that the nonzero coefficients reveal the intrinsic structure and essential attributes of signals and images, and at the same time have an explicit physical meaning.
Traditional compressed-sensing MRI typically uses predefined dictionaries, which may not represent the image to be reconstructed sparsely enough. Total variation favors cartoon-like, piecewise-constant images. Fields such as image processing, information transmission, and computer vision have therefore long sought sparse and concise representations of signals and images. In image processing applications, the wavelet transform, Fourier transform, discrete cosine transform, and total variation are all typical fixed sparsifying operators, but these operators cannot fully exploit the characteristics of the object being processed; they can be regarded as fixed dictionaries.
Since the total-variation (TV) model preserves image edges well, it has also been widely applied in the MRI field. However, it may produce blocking artifacts under heavy undersampling, and applying the model directly to MRI degrades reconstruction quality. Yang et al. therefore proposed adding a further sparsity constraint to the TV model to improve the reconstructed image quality. Their model is:

min_u TV(u) + μ₁ ||ψu||₁ + (μ₂/2) ||F_p u − f||₂²

where ψ denotes the sparsity constraint, and μ₁, μ₂ > 0 weigh the first two regularization terms against the third, data-fidelity term.
Besides regularization based on known fixed sparsifying transforms, image sparsification methods based on patch-based dictionary learning have also been studied extensively. Many studies show that sparse coding based on dictionary learning outperforms sparse coding on fixed dictionaries: under an adaptive, dynamically learned dictionary, signals admit sparser representations, and very good results have been obtained in many applications. Given the good performance of sparse coding models in many areas of image processing, particularly image restoration, designing robust numerical algorithms is one of the most important problems in the dictionary-learning field. The idea of this theory is to design an optimal adaptive dictionary under which overlapping image patches have a superior sparse representation.
A typical current dictionary-learning method is K-SVD, an algorithm based on the singular value decomposition that iterates many times during training, computing one SVD per iteration. K-SVD training is divided into two parts, sparse coding and dictionary update. During sparse coding the dictionary D is fixed, and the sparse coefficients of the signal on the dictionary are computed iteratively with algorithms such as matching pursuit (MP) or orthogonal matching pursuit (OMP); then, based on the obtained sparse coefficients, each column (atom) of the dictionary is updated via an SVD decomposition. Repeating these steps yields an optimized dictionary. The defect of K-SVD is that it is easily trapped in local solutions: the effectiveness of the algorithm depends heavily on the training set or the initial dictionary.
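As an illustration of the sparse-coding stage for which K-SVD fixes the dictionary, the following is a minimal orthogonal matching pursuit (OMP) sketch in NumPy; the function name and the greedy least-squares structure are a generic textbook form, not the patent's notation.

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal matching pursuit: greedily select atoms of D (columns,
    assumed l2-normalized) and least-squares refit x on the selected set."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # refit x on all selected atoms (the "orthogonal" step)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coeffs
        residual = x - D @ alpha
    return alpha
```

With an identity dictionary, two OMP passes recover a 2-sparse signal exactly.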
Ravishankar et al. proposed a two-step alternating method that applies a dictionary-learning model to MR image reconstruction from undersampled k-space, the dictionary-learning MRI (DLMRI) model:

min_{u,D,Γ} Σ_l ||R_l u − Dα_l||₂² + ν ||F_p u − f||₂²,  s.t. ||α_l||₀ ≤ T₀ for all l

Here Γ = [α₁, α₂, ..., α_L] is the sparse coefficient matrix of all image patches. The first term enforces the sparse representation of image patches on the adaptively learned dictionary; the latter is the data-fidelity term on the measured data. The model is solved by a two-step alternating method: the first step is adaptive dictionary learning; the second reconstructs the image from the highly undersampled k-space data. Although these data-driven learning methods are a great improvement over earlier methods built on predefined basis dictionaries, most of them do not consider the geometric information of the patch data, which can cause loss of image detail.
Recently, Liu et al. proposed dictionary-learning models with a two-level Bregman iteration as the main architecture, such as the two-level Bregman method with dictionary updating (TBMDU), in which the outer iteration is tied to data fidelity and the inner iteration to patch-based sparsity with respect to the dictionary. Improved sparse coding and dictionary updating are applied in the inner Bregman iteration, which lets the whole algorithm converge after fewer iterations.
In the TBMDU model, D = [d₁, d₂, ..., d_J] ∈ C^{M×J} and Γ = [α₁, α₂, ..., α_L] ∈ C^{J×L}. λ denotes the sparsity level of image patches under the optimal dictionary; for medical images, the value of λ can be chosen empirically. J = KM, where K measures the overcompleteness of the dictionary. The two-level Bregman dictionary learning overcomes the two problems of sensitivity to the initial value and heavy computation. On this basis it is desirable to further improve scan speed substantially and achieve high-quality imaging from undersampled k-space data.
The defect of the prior art is twofold: fixed-basis sparsity constraints, such as total-variation regularization and wavelet transforms, cannot represent all images sparsely enough; and the non-fixed-basis dictionary-learning methods proposed by Ravishankar et al. and Liu et al. consider only the sparse representation of the image, not the geometric structure information contained in the patch data, which may result in loss of detail. The field therefore needs an algorithm that reconstructs the geometric structural features and details of magnetic resonance images quickly and faithfully.
Summary of the invention
The purpose of the present invention is to propose a magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding (GSCMRI).
The technical problem to be solved by the present invention is, by means of a graph-regularized sparse coding method, exploiting its advantages in constraining the data geometry, to build a neighborhood graph that encodes local structural data, better describes structural information, and enables reconstruction and identification of magnetic resonance image patches, thereby overcoming the defect that existing MRI methods cannot ideally represent image structure sparsely, and reconstructing magnetic resonance images quickly and accurately.
The present invention is realized by the following technical steps:
Step (a): Formulate the graph-regularized sparse coding representation within the two-level Bregman iteration framework and establish the image sparsity model.
Step (b): Introduce auxiliary variables and an alternating-solution technique, and update the learned dictionary and the sparse coefficients in the inner iteration of the two-level Bregman iteration.
Step (c): Using the partially, highly undersampled k-space data as a constraint, update the image in the outer iteration of the two-level Bregman iteration to obtain the imaging result.
Further, step (a) of the present invention is: formulate the graph-regularized sparse coding representation within the two-level Bregman iteration framework; that is, on the basis of a sparse representation of the image coefficients, impose the local geometric structure constraint of the image data through graph regularization of the coefficients to obtain a better sparse representation, yielding the image sparsity model.
Further, step (b) of the present invention is: introduce auxiliary variables to convert the unconstrained problem in step (a) into a constrained problem for solution; complete the dictionary update by the Bregman iterative algorithm, and update the sparse coefficients by soft-threshold iteration.
Further, step (c) of the present invention is: after step (b) completes the dictionary learning and sparse coefficients, solve the least-squares analytic problem through the outer Bregman iteration, update the image, and obtain the image reconstruction result.
Further, the step (a) is: incorporate the graph-regularized sparse coding process into the sparsity model of the two-level Bregman iteration framework. The established image sparsity model is:

min_{u,D,Γ} (λ/2) Σ_l ||Dα_l − R_l u||₂² + Σ_l ||α_l||₁ + η Tr(Γ L Γ^T) + (μ/2) ||F_p u − f||₂²

where the first three terms of the first subproblem, (λ/2) Σ_l ||Dα_l − R_l u||₂² + Σ_l ||α_l||₁ + η Tr(Γ L Γ^T), give the sparse and graph-regularized representation of the image on the dictionary; u denotes the magnetic resonance image, D the learned dictionary, α_l the sparse coefficients of image patch l, R_l the patch-extraction matrix, Γ the graph-regularized sparse coefficient matrix, F_p the partial Fourier transform, and f the k-space data. The fourth term keeps the reconstruction consistent with the undersampled k-space data; λ, η and μ weigh the learned-dictionary sparse coefficients, the graph-regularized coding coefficients, and the k-space data fit, respectively.
Further, step (b) of the present invention introduces auxiliary variables, converting the first subproblem of the image model into a constrained form that can be solved by alternating minimization.
The graph-regularization constraint exploits the data geometry effectively under the manifold assumption: if two data points x_i, x_j are close on the intrinsic geometry of the data distribution, their representation coefficients α_i and α_j under the new dictionary are also close. Given data X, a nearest-neighbor graph G with M vertices is constructed, each vertex representing one data point of X, and W denotes the weight matrix of G. If x_i is among the k nearest neighbors of x_j, or x_j is among the k nearest neighbors of x_i, then W_ij = 1; otherwise W_ij = 0. In addition, the degree of x_i is defined as c_i = Σ_j W_ij; with C = diag(c₁, c₂, ..., c_M), the graph Laplacian is L = C − W. Mapping the weight graph G onto the sparse coefficients Γ, a reasonable mapping is chosen by the criterion of minimizing the objective

(1/2) Σ_{i,j} ||α_i − α_j||² W_ij = Tr(Γ L Γ^T).
After introducing the auxiliary variables, using the two-level Bregman technique, the model is converted into a sequence of unconstrained subproblems.
Further, the alternating solution described in step (b) of the present invention updates one variable at a time while fixing the others: with the auxiliary variables z_l and the coefficients α_l fixed, the dictionary D^{k+1} is updated by steepest descent, each column of the dictionary being updated in turn via singular value decomposition to minimize the approximation error; with the dictionary D and the coefficients α_l fixed, the auxiliary variables are updated by minimizing a quadratic polynomial; and the sparse coefficients are updated by the soft-threshold iterative algorithm.
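The neighborhood-graph construction described above (0/1 k-NN weights W, degree matrix C, Laplacian L = C − W, and the penalty Tr(ΓLΓ^T)) can be sketched as follows; the helper names and the brute-force distance computation are illustrative choices, not from the patent.

```python
import numpy as np

def knn_graph_laplacian(X, k):
    """Build the k-nearest-neighbor graph Laplacian L = C - W.
    X holds one data point per column; W_ij = 1 when x_i is among the k
    nearest neighbors of x_j or vice versa (symmetrized 0/1 weights)."""
    M = X.shape[1]
    # pairwise squared Euclidean distances between columns
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    W = np.zeros((M, M))
    for j in range(M):
        nn = np.argsort(d2[:, j])[:k]     # k nearest neighbors of x_j
        W[nn, j] = 1.0
    W = np.maximum(W, W.T)                # symmetrize (the "or" rule)
    C = np.diag(W.sum(axis=1))            # degree matrix
    return C - W

def graph_penalty(Gamma, L):
    """Graph-regularization term Tr(Gamma L Gamma^T): small when the
    coefficient vectors of neighboring data points are close."""
    return np.trace(Gamma @ L @ Gamma.T)
```

For three 1-D points 0, 0.1, 5 with k = 1, the two close points and the middle point link up, giving the familiar path-graph Laplacian whose rows sum to zero.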
Further, the image update described in step (c) of the present invention is: the subproblem for the magnetic resonance image u is updated after each inner iteration of the two-level Bregman iterative process; that is, in the outer Bregman iteration, the magnetic resonance image u is obtained by solving the corresponding minimization model.
Further, the image update described in step (c) of the present invention is: use the described graph-regularized sparse coding representation to reconstruct the unsampled k-space data, obtaining the final imaging result.
The technical scheme of the present invention has the following advantages or beneficial effects: by introducing the graph-regularized sparse coding method into adaptive dictionary learning, the present invention builds a neighborhood graph to encode local structural data and exploit its constraints on the data geometry, so that the image data admit a better sparse representation. The present invention can moreover handle images with more complex local geometric features, effectively capture local image structure, and recover more image detail; the resulting image has better fidelity.
Because the technical scheme effectively captures local image structure, it reconstructs magnetic resonance images quickly and accurately. The magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding of the present invention therefore yields reconstructed images with a satisfactory effect.
Brief description of the drawings
Fig. 1 is a flowchart of the steps of the algorithm of the invention.
Fig. 2 shows, for a simulated radial undersampling trajectory, the peak signal-to-noise ratio (PSNR) of the three algorithms DLMRI, TBMDU and GSCMRI as a function of the downsampling factor.
Fig. 3 shows, for a simulated radial undersampling trajectory, the high-frequency error norm (HFEN) of the three algorithms DLMRI, TBMDU and GSCMRI as a function of the downsampling factor.
Fig. 4 compares the reconstruction results of the three algorithms DLMRI, TBMDU and GSCMRI under a simulated radial undersampling trajectory. (a) is the original image; (b)(c)(d) and (e)(f)(g) are, respectively, the reconstructed images and reconstruction-error maps of DLMRI, TBMDU and GSCMRI at an 8-fold undersampling rate on the pseudo-radial trajectory.
Fig. 5 shows, at a 7.11-fold undersampling rate, the PSNR of the three algorithms DLMRI, TBMDU and GSCMRI as a function of iteration number under three undersampling trajectories: a two-dimensional random trajectory, a Cartesian sampling trajectory, and random undersampling of the central k-space region along the phase-encoding direction. The nine curves in Fig. 5 represent, in order:
the graph-regularized sparse coding imaging algorithm under the two-dimensional random undersampling trajectory (Random GSCMRI);
the two-level Bregman dictionary-learning algorithm under the two-dimensional random undersampling trajectory (Random TBMDU);
the dictionary-learning MRI reconstruction algorithm under the two-dimensional random undersampling trajectory (Random DLMRI);
the graph-regularized sparse coding imaging algorithm under the Cartesian sampling trajectory (Cartesian GSCMRI);
the two-level Bregman dictionary-learning algorithm under the Cartesian sampling trajectory (Cartesian TBMDU);
the dictionary-learning MRI reconstruction algorithm under the Cartesian sampling trajectory (Cartesian DLMRI);
the graph-regularized sparse coding imaging algorithm with random undersampling of k-space along the phase-encoding direction (LowResolution GSCMRI);
the two-level Bregman dictionary-learning algorithm with random undersampling of k-space along the phase-encoding direction (LowResolution TBMDU);
the dictionary-learning MRI reconstruction algorithm with random undersampling of k-space along the phase-encoding direction (LowResolution DLMRI).
Fig. 6 shows, at a 7.11-fold undersampling rate, the HFEN of the three algorithms DLMRI, TBMDU and GSCMRI as a function of iteration number under the same three trajectories: two-dimensional random, Cartesian, and random undersampling of the central k-space region along the phase-encoding direction. The nine curves in Fig. 6 have the same meaning as in Fig. 5.
Fig. 7 (a) is the original image; (b)(c)(d) and (e)(f)(g) are, respectively, the reconstructed images and error maps of the three methods DLMRI, TBMDU and GSCMRI at a 7.11-fold undersampling rate on the two-dimensional random undersampling trajectory.
Fig. 8 shows the reconstruction PSNR values of the three algorithms DLMRI, TBMDU and GSCMRI under white Gaussian noise of various standard deviations.
Fig. 9 (a) is the original image; (b) is the two-dimensional random undersampling template; (c)(d)(e) and (f)(g)(h) are, respectively, the reconstructed images and error maps of the three algorithms DLMRI, TBMDU and GSCMRI under white Gaussian noise of standard deviation σ = 5.
Fig. 10 is a test of the three algorithms DLMRI, TBMDU and GSCMRI on water-phantom data under a 4-fold simulated radial undersampling trajectory, where (a) is the original image, (b) is the simulated radial undersampling template at 4-fold undersampling, (c)(d)(e) are the reconstructed images of DLMRI, TBMDU and GSCMRI under this condition, and (f)(g) are magnified views of the corresponding regions.
Fig. 11 shows the reconstructions of the three algorithms DLMRI, TBMDU and GSCMRI under a Cartesian sampling trajectory at a 5-fold undersampling rate, where (a) is the original image, (b) is the Cartesian sampling template along the phase-encoding direction at 5-fold undersampling, and (c)(d)(e) and (f)(g)(h) are, respectively, the reconstructed images and residual maps of DLMRI, TBMDU and GSCMRI at the 5-fold undersampling rate.
Fig. 12 shows how the PSNR of the GSCMRI algorithm varies with the graph-regularization Laplacian weight η.
Fig. 13 shows how the HFEN of the GSCMRI algorithm varies with the graph-regularization Laplacian weight η.
Fig. 14 (a) is the original image; (b)(c)(d) are the reconstructions of the GSCMRI algorithm at η = 10⁻¹, 10⁻³, 10⁻⁵, respectively; (e)(f)(g) are the corresponding residual maps.
Embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and implementation cases. The specific embodiments described here serve only to explain the technical scheme of the present invention and do not limit it.
The present invention is now described in more detail with reference to the accompanying drawings showing embodiments of the invention.
According to the method of the invention, the technical scheme obtains the final imaging result by incorporating a graph-regularized sparse coding model into the two-level Bregman iteration framework. Specifically, by constraining the local geometric structure through graph-regularized sparse coding, the embodiment of the invention can represent images sparsely and more accurately, handle images with more complex geometric features, and recover more image detail. The magnetic resonance imaging method for highly undersampled k-space data based on graph-regularized sparse coding of the present invention is now described with reference to Fig. 1.
Step (a): Formulate the graph-regularized sparse coding representation within the two-level Bregman iteration framework and establish the image sparsity model.
The magnetic resonance imaging model constrained by prior information is an objective function of the following kind. Here u ∈ C^N represents the image to be reconstructed, and f ∈ C^Q represents the undersampled Fourier measurements. The partially sampled Fourier encoding matrix F_p ∈ C^{Q×N} maps the image u to the data space f, so that F_p u ≈ f.
The technical scheme incorporates the graph-regularized sparse coding model into the two-level Bregman iteration framework to establish the image sparsity model: the image is represented sparsely on the learned dictionary with graph regularization, yielding the new image sparsity model

min_{u,D,Γ} (λ/2) Σ_l ||Dα_l − R_l u||₂² + Σ_l ||α_l||₁ + η Tr(Γ L Γ^T) + (μ/2) ||F_p u − f||₂²

where the term (λ/2) Σ_l ||Dα_l − R_l u||₂² balances, via λ, the sparsity of the image patches against the approximation error of the updated dictionary. For many natural and medical images, the value of λ can be chosen empirically. The first three terms of the first subproblem update the variables D and Γ in this part, performing the learned-dictionary update and the sparse-coefficient coding. The fourth term keeps the reconstruction consistent with the undersampled k-space data; λ, η and μ weigh the dictionary-learning sparse coefficients, the graph-regularized coding coefficients, and the k-space data fit, respectively.
The objective function of graph-regularized sparse coding is a linear combination of three parts: the data-fitting empirical term (λ/2) Σ_l ||Dα_l − R_l u||₂²; the Laplacian regularization term η Tr(Γ L Γ^T); and the compensation term based on the L1 norm, Σ_l ||α_l||₁.
The parameter λ weighs the dictionary-learning sparse coefficients, and η determines the weight of the graph-regularization Laplacian.
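Under the same notation, the three-part objective can be evaluated with a few lines of NumPy; the function and argument names here are illustrative assumptions, not the patent's notation.

```python
import numpy as np

def gsc_objective(D, Gamma, patches, L, lam, eta):
    """Value of the graph-regularized sparse coding objective:
    data-fitting term + Laplacian regularization + l1 compensation term.
    `patches` holds one vectorized image patch R_l u per column."""
    fit = 0.5 * lam * np.sum((D @ Gamma - patches) ** 2)  # (lam/2) sum_l ||D a_l - R_l u||^2
    graph = eta * np.trace(Gamma @ L @ Gamma.T)           # eta * Tr(Gamma L Gamma^T)
    l1 = np.abs(Gamma).sum()                              # sum_l ||a_l||_1
    return fit + graph + l1
```

A perfectly fit code (fit term zero) still pays the graph and l1 penalties, which is what drives neighboring patches toward similar coefficients.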
The graph-regularization constraint exploits the data geometry effectively under the manifold assumption: if two data points x_i, x_j are close on the intrinsic geometry of the data distribution, their representation coefficients α_i and α_j under the new dictionary are also close. Given data X, a nearest-neighbor graph G with M vertices is constructed, each vertex representing one data point of X, and W denotes the weight matrix of G. If x_i is among the k nearest neighbors of x_j, or x_j is among the k nearest neighbors of x_i, then W_ij = 1; otherwise W_ij = 0. In addition, the degree of x_i is defined as c_i = Σ_j W_ij; with C = diag(c₁, c₂, ..., c_M), the graph Laplacian is L = C − W. Mapping the weight graph G onto the sparse coefficients Γ, a reasonable mapping is chosen by the criterion of minimizing the objective

(1/2) Σ_{i,j} ||α_i − α_j||² W_ij = Tr(Γ L Γ^T).
Step (b): Introduce auxiliary variables and an alternating-solution technique, and update the learned dictionary and the sparse coefficients in the inner iteration of the two-level Bregman iteration.
Defining x_l = R_l u and introducing auxiliary variables, the unconstrained problem is converted into a constrained problem.
In the technical scheme, the constrained problem of Equation (5) is solved by the two-level Bregman iteration, which converts it into a series of unconstrained problems whose objective consists of the original objective plus penalty terms for the constraints.
Solving for the derivative of the quadratic polynomial objective with respect to D yields a steepest-descent update rule.
After the steepest-descent iteration, each column vector of the dictionary is normalized so that every column of D^{k+1} has unit norm. An outstanding advantage of this dictionary update is that each iteration can be regarded as a refinement of the dictionary: if each step is regarded as one scale, the dictionary update is a refinement process from coarse scales to fine scales. Adding the "small-scale" prototypes (Y^{k+1}(Γ^{k+1})^T) into the dictionary effectively avoids being trapped in local solutions.
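A hedged sketch of a steepest-descent dictionary refinement with the column normalization described above; since the patent's exact update rule (and its step size ξ) is not reproduced in the text, a generic gradient step on the quadratic fitting term stands in for it.

```python
import numpy as np

def dictionary_step(D, Gamma, X, step):
    """One steepest-descent refinement of the dictionary for the quadratic
    fitting term (1/2)||D Gamma - X||_F^2, followed by normalizing each
    atom (column) of the dictionary to unit l2 norm."""
    grad = (D @ Gamma - X) @ Gamma.T          # gradient of (1/2)||D Gamma - X||_F^2 in D
    D_new = D - step * grad
    norms = np.linalg.norm(D_new, axis=0)     # per-atom l2 norms
    norms[norms == 0] = 1.0                   # guard against a zero atom
    return D_new / norms
```

After every step the atoms lie on the unit sphere, which is the invariant the normalization in the text maintains.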
The two-level Bregman iterative algorithm used in the technical scheme makes D^k Γ^k converge to X until the constraints of the optimization problem are satisfied. Correspondingly, the sparsity of D^k Γ^k is very high in the initial iterations and gradually decreases as the iteration proceeds.
As shown in Fig. 1, in step S102 the graph-regularized sparse coding is solved. The auxiliary variable Z is represented by minimizing its quadratic polynomial, which yields the optimal solution in closed form. Substituting the optimal Z of Equation (9) into Equation (6), the sparse coefficients are then solved with the soft-threshold iterative algorithm.
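The soft-threshold operator used for the coefficient update has a standard elementwise definition, written out here as a one-line NumPy sketch:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding soft(x, t) = sign(x) * max(|x| - t, 0),
    the proximal operator of the l1 norm used for the sparse-coefficient
    update in the inner iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Entries smaller than the threshold are zeroed, which is what makes the coefficient matrix sparse.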
Step (c): Using the partially, highly undersampled k-space data as a constraint, update the image in the outer iteration of the two-level Bregman iteration framework to obtain the imaging result.
The outer Bregman iteration of the graph-regularized sparse coding model proposed by the technical scheme is:

u^{k+1} = argmin_u (λ/2) Σ_l ||D^{k+1} α_l^{k+1} − R_l u||₂² + (μ/2) ||F_p u − f^k||₂²,   f^{k+1} = f^k + f − F_p u^{k+1}    (12)

Equation (12) is the outer Bregman iteration: it solves a least-squares analytic problem, updates u^{k+1}, and yields the final imaging result.
As shown in Fig. 1, the subproblem for the image u is updated after each inner iteration of the two-level Bregman process; that is, in the outer Bregman iteration, the image u is obtained by solving the least-squares analytic problem.
With F ∈ C^{N×N} defined as the full Fourier encoding matrix, normalized so that F^T F = 1_N, Fu represents the full k-space data and Ω the subset of sampled locations. The least-squares solution for u can then be computed in the frequency domain: outside the sampled set Ω, the spectrum is taken from the Fourier transform of the patch-averaged image Σ_l R_l^T Dα_l; on Ω, it is a weighted combination of that spectrum and the measured data.
Specifically, the adaptive dictionary update introduced by the technical scheme represents images more sparsely than dictionaries constructed in advance, and the graph-regularized sparse coding method effectively captures the local geometric structure of the image. Through the introduction of the graph-regularized local structure constraint, images are represented sparsely and more accurately and more image detail is recovered, so that the reconstructed image has better fidelity.
In summary, the complete GSCMRI algorithm proposed by the embodiment of the invention can be summarized as follows:
(1): Initialization: Γ^0 = 0, D^0, f^0 = f
(2): If ||F_p u^k − f||_2 > σ, perform (3)–(12); otherwise stop iterating
(3): If j ≤ P, perform (4)–(11); otherwise stop iterating
(4): While the inner iteration stopping condition is not satisfied, perform (5)–(6)
(5):
(6):
(7): C^{j+1} = Y^{m+1}, Γ^{j+1,0} = Γ^{j,m+1}, D^{j+1} = D^j + ξ C^{j+1} (Γ^{j+1,0})^T
(8):
(9): Update u^{k+1} by frequency interpolation
(10):
(11):
(12): f^{k+1} = f^k + f − F_p u^{k+1}
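The control flow of steps (1)–(12) can be sketched as follows; the inner sparsification here is a trivial hard-shrink stand-in for the dictionary-learning updates of steps (4)–(8), so this illustrates the two-level loop structure only, not the patent's algorithm.

```python
import numpy as np

def gscmri_skeleton(f, mask, n_outer=3, n_inner=2, mu=1.0):
    """Structural sketch of the two-level Bregman loop: an inner loop that
    refines the current estimate, an image update by frequency interpolation
    (step 9), and the outer Bregman feedback f^{k+1} = f^k + f - F_p u^{k+1}
    (step 12)."""
    fk = f.copy()
    u = np.fft.ifft2(np.where(mask, fk, 0.0), norm="ortho")   # zero-filled start
    for _ in range(n_outer):
        x = u.copy()
        for _ in range(n_inner):
            # stand-in for steps (4)-(8): hard-shrink small entries
            x = np.where(np.abs(x) > 0.01, x, 0.0)
        # step (9): frequency interpolation -- blend data on the sampled set
        S = np.fft.fft2(x, norm="ortho")
        S = np.where(mask, (S + mu * fk) / (1.0 + mu), S)
        u = np.fft.ifft2(S, norm="ortho")
        # step (12): outer Bregman feedback on the k-space data
        fk = fk + f - np.where(mask, np.fft.fft2(u, norm="ortho"), 0.0)
    return u
```

With fully sampled, noiseless data of a constant image, the loop is a fixed point and returns the image unchanged.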
The embodiment of the invention evaluates the performance of the algorithm through experiments on real-valued image data and complex-valued image data. The sampling schemes of the technical scheme include two-dimensional random sampling, Cartesian sampling with one-dimensional random phase encoding, and simulated radial undersampling. The real-valued experiments use in-house MR scan images of size 512 × 512; the complex-valued image data of Figs. 5 and 6 are of size 256 × 256 and 512 × 512. The standard parameter values are set as in the TBMDU method: the patch size is 6 × 6, the overcompleteness of the dictionary is K = 1 (corresponding to J = 36), and the patch overlap stride is r = 1 (corresponding, at image size 512 × 512, to L = 267289 sampled patches); for the other specific parameters, the defaults of the DLMRI and TBMDU methods are used. In the experimental implementation of the embodiment, η = 10⁻³ is chosen. In addition, peak signal-to-noise ratio (PSNR) and high-frequency error norm (HFEN) are introduced to quantify the quality of the reconstructed images.
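The two quality metrics can be sketched as follows; the PSNR peak value and the HFEN Laplacian-of-Gaussian filter width are assumptions (the text does not give the filter parameters; σ = 1.5 follows common DLMRI practice).

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 20.0 * np.log10(peak / np.sqrt(mse))

def hfen(ref, rec, sigma=1.5):
    """High-frequency error norm: l2 norm of the Laplacian-of-Gaussian
    filtered difference image, normalized by the filtered reference."""
    diff = gaussian_laplace(ref.astype(float) - rec.astype(float), sigma)
    return np.linalg.norm(diff) / np.linalg.norm(gaussian_laplace(ref.astype(float), sigma))
```

Higher PSNR and lower HFEN indicate a better reconstruction, which is the sense in which the figures below compare the three algorithms.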
Fig. 2 shows the peak signal-to-noise ratio (PSNR) of the three algorithms DLMRI, TBMDU and GSCMRI as a function of the downsampling factor under a simulated radial undersampling trajectory. It can be seen that at every downsampling factor the reconstruction of GSCMRI is better than that of the other two reconstruction algorithms.
Fig. 3 shows the high-frequency error (HFEN) of the three algorithms DLMRI, TBMDU and GSCMRI as a function of the downsampling factor under a simulated radial undersampling trajectory. Again, at every downsampling factor the reconstruction of GSCMRI is better than that of the other two reconstruction algorithms.
Fig. 4 compares the reconstruction results of the three algorithms DLMRI, TBMDU and GSCMRI under a simulated radial undersampling trajectory. (a) is the original image; (b)(c)(d) and (e)(f)(g) are, respectively, the reconstruction results and reconstruction error maps of DLMRI, TBMDU and GSCMRI at an 8-fold undersampling rate along a simulated pseudo-radial trajectory. It can be seen that at the same sampling rate the reconstruction of GSCMRI is better than the other two reconstruction algorithms.
Fig. 5 plots, at a 7.11-fold undersampling rate, the peak signal-to-noise ratio (PSNR) of the three algorithms DLMRI, TBMDU and GSCMRI versus iteration number under three trajectories: a two-dimensional random undersampling trajectory, a Cartesian sampling trajectory, and random undersampling of the central portion of the k-space data along the phase-encoding direction. It can be seen that at the same sampling rate the reconstruction of GSCMRI is better than the other two reconstruction algorithms under every trajectory. The nine curves in Fig. 5 denote, in order:
the magnetic resonance highly undersampled k-space imaging algorithm with graph-regularized sparse coding under the two-dimensional random undersampling trajectory (Random GSCMRI);
the two-level Bregman dictionary-learning algorithm under the two-dimensional random undersampling trajectory (Random TBMDU);
the dictionary-learning MRI reconstruction algorithm under the two-dimensional random undersampling trajectory (Random DLMRI);
the magnetic resonance highly undersampled k-space imaging algorithm with graph-regularized sparse coding under the Cartesian sampling trajectory (Cartesian GSCMRI);
the two-level Bregman dictionary-learning algorithm under the Cartesian sampling trajectory (Cartesian TBMDU);
the dictionary-learning MRI reconstruction algorithm under the Cartesian sampling trajectory (Cartesian DLMRI);
the magnetic resonance highly undersampled k-space imaging algorithm with graph-regularized sparse coding with random undersampling of the k-space data along the phase-encoding direction (LowResolution GSCMRI);
the two-level Bregman dictionary-learning algorithm with random undersampling of the k-space data along the phase-encoding direction (LowResolution TBMDU);
the dictionary-learning MRI reconstruction algorithm with random undersampling of the k-space data along the phase-encoding direction (LowResolution DLMRI).
Fig. 6 plots, at a 7.11-fold undersampling rate, the high-frequency error (HFEN) of the three algorithms DLMRI, TBMDU and GSCMRI versus iteration number under the same three trajectories: the two-dimensional random undersampling trajectory, the Cartesian sampling trajectory, and random undersampling of the central portion of the k-space data along the phase-encoding direction. It can be seen that at the same sampling rate the reconstruction of GSCMRI is better than the other two reconstruction algorithms under every trajectory. The nine curves in Fig. 6 have the same meaning as in Fig. 5.
Fig. 7: (a) is the original image; (b)(c)(d) and (e)(f)(g) are, respectively, the reconstruction results and error maps of the three methods DLMRI, TBMDU and GSCMRI at a 7.11-fold undersampling rate under the two-dimensional random undersampling trajectory.
Fig. 8 shows the reconstruction peak signal-to-noise ratio (PSNR) of the three reconstruction algorithms DLMRI, TBMDU and GSCMRI under white Gaussian noise of various standard deviations.
Fig. 9: (a) is the original image, (b) is the two-dimensional random undersampling template, and (c)(d)(e) and (f)(g)(h) are, respectively, the reconstruction results and error maps of the three algorithms DLMRI, TBMDU and GSCMRI under white Gaussian noise of standard deviation σ = 5. It can be seen that the GSCMRI reconstruction has the smallest error with respect to the reference image, with the TBMDU algorithm second.
Fig. 10 shows a test case on water-phantom data for the three algorithms DLMRI, TBMDU and GSCMRI under a 4-fold simulated radial undersampling trajectory. (a) is the original image, (b) is the 4-fold simulated radial undersampling template, (c)(d)(e) are, respectively, the reconstruction results of DLMRI, TBMDU and GSCMRI under this condition, and (f)(g) are magnified views of the corresponding regions. Judging from the reconstruction results, the GSCMRI method achieves a higher-resolution reconstruction.
Fig. 11 shows the reconstructions of the three algorithms DLMRI, TBMDU and GSCMRI under a Cartesian sampling trajectory at a 5-fold undersampling rate, where (a) is the original image, (b) is the Cartesian sampling template along the phase-encoding direction at 5-fold undersampling, and (c)(d)(e) and (f)(g)(h) are, respectively, the reconstruction results and residual maps of DLMRI, TBMDU and GSCMRI at 5-fold undersampling. The error maps show that the reconstruction of the GSCMRI method is very close to the reference image.
Fig. 12 shows how the peak signal-to-noise ratio (PSNR) of the GSCMRI algorithm varies with the graph-regularization Laplacian weight η. It can be seen that the algorithm of the invention reconstructs relatively well at η = 10^-3.
Fig. 13 shows how the high-frequency error (HFEN) of the GSCMRI algorithm varies with the graph-regularization Laplacian weight η. Again, the algorithm of the invention reconstructs relatively well at η = 10^-3.
Fig. 14: (a) is the original image, (b)(c)(d) are the reconstruction results of the GSCMRI algorithm at η = 10^-1, 10^-3 and 10^-5, and (e)(f)(g) are the corresponding residual maps.
By introducing graph-regularized sparse coding, the embodiment of the present invention constrains the local geometric structure of the image and exploits the advantages of graph regularization on geometric data: it can handle images with more complex geometric features and recover more image detail, thereby obtaining a more accurate reconstruction with better fidelity.

Claims (4)

1. A magnetic resonance highly undersampled k-space data imaging method based on graph-regularized sparse coding, characterized by comprising the following steps:
    Step (a): performing the graph-regularized sparse-coding representation within the two-level Bregman iteration framework to obtain a sparse image model;
    Step (b): updating the learned dictionary and the sparse coefficients in the inner iterations of the two-level Bregman iteration, using the technique of introducing auxiliary variables and solving in alternation;
    Step (c): applying the constraint of the partially highly undersampled k-space data and updating the image in the outer iterations of the two-level Bregman iteration to obtain the imaging result;
    wherein performing the graph-regularized sparse-coding representation within the two-level Bregman iteration framework in step (a) means that, on the basis of a sparse representation of the image coefficients, the local geometric structure of the image data is constrained through graph regularization of the coefficients to obtain a better sparse representation, yielding the sparse image model;
    the graph-regularized sparse-coding process is incorporated into the sparse model of the two-level Bregman iteration framework, and the established sparse image model is:
    u^{k+1} = arg min_u { min_{D,Γ} Σ_i ( ||α_i||_1 + (λ/2)||Dα_i - R_i u||_2^2 + η Tr(ΓLΓ^T) ) + (μ/2)||F_p u - f^k||_2^2 }
    f^{k+1} = f^k + f - F_p u^{k+1}
    wherein the first three terms of the first subproblem in the model, ||α_i||_1 + (λ/2)||Dα_i - R_i u||_2^2 + η Tr(ΓLΓ^T), represent the image sparsely on the dictionary with graph regularization; u denotes the MR image, D the learned dictionary, α_i the sparse coefficients of an image block, R_i the matrix extracting image block i, Γ the graph-regularized sparse-representation coefficients, F_p the partial Fourier transform, and f the k-space data; the fourth term ensures that the reconstruction remains consistent with the undersampled k-space data, and λ, η and μ are, respectively, the weights of the learned-dictionary sparse coefficients, the graph-regularized sparse coefficients, and the k-space data fitting;
    introducing auxiliary variables, the first subproblem of the image model is converted into:
    min_{D,Γ} (λ/2)||Z||_2^2 + η Tr(ΓLΓ^T) + Σ_{i=1}^{M} ||α_i||_1
    s.t. Z_i = Dα_i - R_i u, ||d_j||_2 ≤ 1, j = 1, 2, ..., J;
    after introducing the auxiliary variables and applying the two-level Bregman technique, the image model becomes:
    (D^{k+1}, α_i^{k+1}, z_i^{k+1}) = arg min_{D,α_i,z_i} ||α_i||_1 + (λ/2)||z_i||_2^2 + η Tr(ΓLΓ^T) + (β/2)||z_i + Dα_i - R_i u - y_i^k/β||_2^2
    Y^{k+1} = Y^k + β(Ru - Z^{k+1} - D^{k+1}Γ^{k+1});
    in step (b), the introduction of the auxiliary variables converts the unconstrained problem of step (a) into a constrained problem, which is solved by completing the dictionary update in the inner iterations of the two-level Bregman algorithm and updating the sparse coefficients by soft-thresholding iterations;
    the formula for updating the sparse coefficients is:
    α_i^{k+1} = arg min_{α_i} (λβ/(2(λ+β)))||D^k α_i - x_i - y_i^k/β||_F^2 + η L_{ii} α_i^T α_i + α_i^T h_i + ||α_i||_1;
    in the formula, h_i = 2η Σ_{j≠i} L_{ij} α_j;
    α_i is solved for using the soft-thresholding iterative algorithm, i.e.:
    α_i^{m+1} = arg min_{α_i} { γ||α_i - [α_i^m + (-2η L_{ii} α_i^m - h_i + (D^k)^T y_i^m)/(2γ)]||_2^2 + ||α_i||_1 }
    = shrink( α_i^m + (-2η L_{ii} α_i^m - h_i^m + (D^k)^T y_i^m)/(2γ), 1/(2γ) )
    wherein shrink(·, 1/(2γ)) denotes the element-wise soft-thresholding operator shrink(x, t) = sign(x)·max(|x| - t, 0).
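The shrink(·, 1/(2γ)) step above is ordinary element-wise soft-thresholding; a minimal sketch:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the proximal operator of t*||.||_1:
    moves every entry toward zero by t and zeroes the small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
s = shrink(v, 0.5)
print(s)   # entries with |v| <= 0.5 become exactly 0
```

This is what makes each inner coefficient update cheap: the l1 term never needs an explicit solver, only this closed-form map.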
2. The imaging method according to claim 1, characterized in that, after the dictionary learning and sparse coding of step (b) are completed, step (c) solves the closed-form least-squares problem in the outer iterations of the two-level Bregman algorithm and updates the image to obtain the image reconstruction result.
3. The imaging method according to claim 1, characterized in that the image model is solved by updating one variable at a time while the other variables are held fixed: with the auxiliary variable z_i and the coefficients α_i fixed, the dictionary D^{k+1} is updated by steepest descent; with the dictionary D and the coefficients α_i fixed, z_i is updated by minimizing a quadratic polynomial; with the auxiliary variable z_i and the dictionary D^k fixed, α_i is updated by the soft-thresholding iterative algorithm.
4. The imaging method according to claim 1 or 3, characterized in that the subproblem for the MR image u is updated after every inner iteration of the two-level Bregman process; in the outer Bregman iteration, the MR image u is obtained by minimizing the model:
    u^{k+1} = arg min_u Σ_i (μ/2)||F_p u - f^k||_2^2 + (β/2)||D^{k+1} α_i^{k+1} - R_i u + z_i^{k+1} - y_i^{k+1}/β||_2^2.
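A sketch of this least-squares image update solved in k-space ("frequency interpolation"), under the assumption of fully overlapping wrap-around patches so that Σ_i R_i^T R_i = rI (the normal equations then become diagonal in the Fourier domain); all names are illustrative, not from the patent.

```python
import numpy as np

def update_u(rhs_image, fk, mask, mu, beta, r):
    """u-update: with sum_i R_i^T R_i = r*I, the normal equations
    (beta*r*I + mu*Fp^H Fp) u = rhs + mu*Fp^H f^k decouple per
    frequency, so each k-space sample is solved independently.
    rhs_image : beta * sum_i R_i^T (D a_i + z_i - y_i/beta), image domain
    fk        : current Bregman k-space data (zero off the mask)
    mask      : k-space sampling mask (1 = sampled, 0 = not)."""
    K = np.fft.fft2(rhs_image)
    K = (K + mu * fk * mask) / (beta * r + mu * mask)
    return np.real(np.fft.ifft2(K))

# fully-sampled sanity check: with consistent patch estimates the
# update reproduces the image exactly
rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
mask = np.ones((8, 8))
u = update_u(2.0 * 4 * x, np.fft.fft2(x), mask, mu=1.0, beta=2.0, r=4)
print(np.allclose(u, x))
```

Off the sampling mask the denominator reduces to βr, so unsampled frequencies are filled purely from the patch-consensus term, which is the "interpolation" in the name.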
CN201410707447.2A 2014-12-01 2014-12-01 A kind of super lack sampling K data imaging method of magnetic resonance based on figure regularization sparse coding Expired - Fee Related CN104574456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410707447.2A CN104574456B (en) 2014-12-01 2014-12-01 A kind of super lack sampling K data imaging method of magnetic resonance based on figure regularization sparse coding


Publications (2)

Publication Number Publication Date
CN104574456A CN104574456A (en) 2015-04-29
CN104574456B true CN104574456B (en) 2018-02-23

Family

ID=53090422



Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Miao Zheng et al., "Graph Regularized Sparse Coding for Image Representation," IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1327-1336, May 2011. *
Qiegen Liu et al., "Highly Undersampled Magnetic Resonance Image Reconstruction Using Two-Level Bregman Method With Dictionary Updating," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290-1301, July 2013. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180223

Termination date: 20191201
