CN107194912A - The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation - Google Patents
- Publication number: CN107194912A (Application CN201710259812.1A)
- Authority
- CN
- China
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning (ICDL), relating to the technical field of image processing. The method can fuse three groups of brain medical images: normal brain, brain atrophy, and brain tumor. Extensive experimental results show that, compared with methods based on multi-scale transforms, traditional sparse representation, K-SVD dictionary learning, and multi-scale dictionary learning, the proposed ICDL method not only improves the quality of brain medical image fusion but also effectively reduces dictionary training time, and can provide effective support for clinical diagnosis and treatment.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning.
Background technology
In the medical domain, doctors need a single image that carries both high spatial resolution and rich spectral information in order to diagnose and treat disease accurately. Such information cannot be obtained from a single imaging modality: CT imaging captures the body's bone structure at high resolution, while MR imaging captures the detailed soft tissue of human organs, such as muscle, cartilage, and fat. Fusing the complementary information of CT and MR images therefore yields a more comprehensive and richer image, and can provide effective support for clinical diagnosis and adjuvant therapy.
The classical methods currently applied to brain medical image fusion are based on multi-scale transforms: the discrete wavelet transform (DWT), stationary wavelet transform (SWT), dual-tree complex wavelet transform (DTCWT), Laplacian pyramid (LP), and non-subsampled contourlet transform (NSCT). Methods based on multi-scale transforms extract the salient features of images well, but they are sensitive to misregistration, and their traditional fusion strategies fail to preserve detail such as the edges and texture of the source images. With the rise of compressed sensing, methods based on sparse representation have been widely applied to image fusion and have achieved excellent fusion results. Yang et al. represented the source images sparsely over a redundant DCT dictionary and fused the sparse coefficients with a "choose max" rule. The DCT dictionary is an implicit dictionary generated by the DCT; it is easy to implement quickly, but its representational power is limited. M. Elad et al. proposed the K-SVD algorithm for learning a dictionary from training images. Compared with the DCT dictionary, a learned dictionary is an explicit dictionary adapted to the source images, with stronger representational power. A learned dictionary trained only on samples from natural images is called a generic dictionary; a generic dictionary can represent any natural image of the same class as its training samples, but for structurally complex brain medical images a single generic dictionary must represent both CT images and MR images, and accurate sparse representation coefficients are hard to obtain. B. Ophir et al. proposed multi-scale dictionary learning in the wavelet domain: each wavelet subband is trained separately with the K-SVD algorithm to obtain its own sub-dictionary. Multi-scale dictionaries effectively combine the advantages of analytic and learned dictionaries, and can capture features of images at different scales and in different orientations. However, each sub-dictionary is still a generic dictionary, so sparse coding of the subbands still struggles to produce accurate sparse representation coefficients, and training the sub-dictionaries separately is time-inefficient. Yu et al. proposed an image fusion method based on joint sparse representation that also performs noise removal. This method learns the dictionary from the source images to be fused themselves: following the JSM-1 model, it extracts the common and distinctive features of the images to be fused, then combines and reconstructs them to obtain the fused image. Because the dictionary is trained on the source images themselves, the method is well suited to brain medical images and can obtain accurate sparse representation coefficients, but a dictionary must be trained for every pair of source images to be fused, so the time efficiency is low and the method lacks flexibility.
Summary of the invention

Embodiments of the invention provide a brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning, which can solve the above problems in the prior art.

A brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning, the method comprising:
Pretreatment stage: for registered brain CT/MR source images I_C, I_R ∈ R^{M×N}, where R^{M×N} denotes the space of M×N matrices, a sliding window with step 1 divides I_C and I_R into √m × √m image blocks, giving (M − √m + 1)(N − √m + 1) blocks for each of the CT source image I_C and the MR source image I_R. Each block is then vectorized into an m-dimensional vector; the j-th block of I_C is denoted x_C^j and the j-th block of I_R is denoted x_R^j. The respective mean is subtracted:

$$\hat{x}_C^j = x_C^j - m_C^j \cdot \mathbf{1}, \qquad \hat{x}_R^j = x_R^j - m_R^j \cdot \mathbf{1}$$

where m_C^j and m_R^j denote the means of all elements of x_C^j and x_R^j respectively, and 1 denotes the all-ones m-dimensional vector;
Fusing stage: the sparse coefficients of x̂_C^j and x̂_R^j are solved with the CoefROMP algorithm:

$$\alpha_C^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_C^j - D_F\,\alpha\|_2 < \varepsilon, \qquad \alpha_R^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_R^j - D_F\,\alpha\|_2 < \varepsilon$$

where ‖α‖_0 denotes the number of nonzero elements of the sparse coefficient α, ε denotes the tolerated error, and D_F denotes the fused dictionary obtained by fusing dictionaries D_C and D_R;

The l2 norm of a sparse coefficient is taken as the activity measure of its source image, and the sparse coefficients α_C^j and α_R^j are fused by the rule: α_F^j = α_C^j if ‖α_C^j‖_2 ≥ ‖α_R^j‖_2, and α_R^j otherwise;

The means m_C^j and m_R^j are fused with the "weighted average" rule m_F^j = w · m_C^j + (1 − w) · m_R^j, where w is the fusion weight; the fusion result of x̂_C^j and x̂_R^j is then x_F^j = D_F α_F^j + m_F^j · 1;
Regeneration stage: the pretreatment and fusing stages are applied to all image blocks to obtain the fusion results of all blocks. Each fused block vector x_F^j is reshaped by the inverse sliding-window process into a √m × √m image block and put back at its pixel position; overlapping pixels are then averaged to obtain the final fused image I_F.
Preferably, in the fusing stage the fused dictionary is computed by the following method:

High-quality CT and MR images are used as the training set, and vector pairs {X_C, X_R} are sampled from it; X_C ∈ R^{d×n} is defined as the matrix formed by the n sampled CT image vectors and X_R ∈ R^{d×n} as the matrix formed by the corresponding n sampled MR image vectors, where R^{d×n} denotes the space of d×n matrices;
A support-completeness prior is added to the dictionary-learning cost function, and D_C, D_R, and A are updated alternately; the corresponding training optimization problem is given by formula (1), where A is the joint sparse coefficient matrix of X_C and X_R, τ is the sparsity of A, ⊙ denotes element-wise multiplication, and the mask matrix M consists of the elements 0 and 1 and is defined as M = {|A| = 0}, that is, M(i,j) = 1 if A(i,j) = 0 and M(i,j) = 0 otherwise. After introducing auxiliary variables, formula (1) can be equivalently converted into formula (3);
The solution of formula (3) is divided into a sparse coding step and a dictionary update step:

First, in the sparse coding stage, the dictionaries are initialized with random matrices and the joint sparse coefficient matrix A is updated by solving formula (4). The nonzero elements of each column of A are processed separately while the zero elements are kept intact, so formula (4) can be converted into formula (5), in which the dictionary is restricted to the submatrix over the nonzero support of the corresponding column of A and α_i is the nonzero part of the i-th column of A; formula (5) is solved with the coefficient-reuse orthogonal matching pursuit algorithm CoefROMP, yielding the updated joint sparse coefficient matrix A;
Second, in the dictionary update stage, the optimization problem of formula (3) is converted into formula (6), whose penalty term is rewritten in terms of the error matrix E_k. Here d_k denotes the k-th column (the atom to be updated) of the dictionary, α_k denotes the k-th row of the joint sparse coefficient matrix A, and M_j denotes the j-th row of the mask matrix M, which guarantees that the zero elements stay in the correct positions; the mask matrix is obtained by replicating the row vector d times into a rank-1 matrix of size d × n, and it effectively removes the columns corresponding to samples that do not use the k-th atom. Applying the singular value decomposition (SVD) to the error matrix E_k gives E_k = UΔV^T; the atom d_k is updated with the first column of U, and the k-th row of the sparse coefficient matrix A is simultaneously updated with the product of the first column of V and Δ(1,1);

Finally, the sparse coding and dictionary update stages are executed in a loop until the preset number of iterations is reached, and a pair of coupled dictionaries D_C and D_R is output.
Preferably, the dictionaries D_C and D_R are fused by the following method:

L_C(n) and L_R(n), n = 1, 2, …, N, denote the characteristic indices of the n-th atoms of the CT dictionary and the MR dictionary respectively; the fusion formula follows, with λ = 0.25 set here.
The brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning provided by embodiments of the invention can fuse three groups of brain medical images: normal brain, brain atrophy, and brain tumor. Extensive experimental results show that, compared with methods based on multi-scale transforms, traditional sparse representation, K-SVD dictionary learning, and multi-scale dictionary learning, the proposed ICDL method not only improves the quality of brain medical image fusion but also effectively reduces dictionary training time, and can provide effective support for clinical diagnosis and treatment.
Brief description of the drawings
To describe the technical solutions in the embodiments of the invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning provided by an embodiment of the invention;

Fig. 2 shows the high-quality CT and MR images used as the training set;

Fig. 3 shows the CT/MR fusion results for a normal brain, where a is the CT image, b is the MR image, c is the DWT (discrete wavelet transform) result, d is the SWT (stationary wavelet transform) result, e is the NSCT (non-subsampled contourlet transform) result, f is the SRM (traditional sparse representation) result, g is the SRK (K-SVD dictionary learning) result, h is the MDL (multi-scale dictionary learning) result, and i is the result of the ICDL (Improved Coupled Dictionary Learning) method used by the invention;

Fig. 4 shows the CT/MR fusion results for brain atrophy;

Fig. 5 shows the CT/MR fusion results for a brain tumor.
Embodiment
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.

Referring to Fig. 1, the brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning provided by an embodiment of the invention comprises the following steps:
Step 100, pretreatment stage: for registered brain CT/MR source images I_C, I_R ∈ R^{M×N}, where R^{M×N} denotes the space of M×N matrices, a sliding window with step 1 divides I_C and I_R into √m × √m image blocks, giving (M − √m + 1)(N − √m + 1) blocks for each of the CT source image I_C and the MR source image I_R. Each block is then vectorized into an m-dimensional vector; the j-th block of I_C is denoted x_C^j and the j-th block of I_R is denoted x_R^j. The respective mean is subtracted:

$$\hat{x}_C^j = x_C^j - m_C^j \cdot \mathbf{1}, \qquad \hat{x}_R^j = x_R^j - m_R^j \cdot \mathbf{1}$$

where m_C^j and m_R^j denote the means of all elements of x_C^j and x_R^j respectively, and 1 denotes the all-ones m-dimensional vector;
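The pretreatment stage above can be sketched as follows; this is a minimal NumPy illustration, and the function name and array layout are my own, not from the patent:

```python
import numpy as np

def extract_patches(img, patch=8, step=1):
    """Slide a patch x patch window with the given step over the image,
    vectorize each block to a column, and subtract each block's mean
    (the method's pretreatment stage). Returns the zero-mean patch
    matrix (patch*patch, n_blocks) and the per-block means."""
    M, N = img.shape
    cols, means = [], []
    for i in range(0, M - patch + 1, step):
        for j in range(0, N - patch + 1, step):
            v = img[i:i + patch, j:j + patch].reshape(-1).astype(float)
            m = v.mean()
            cols.append(v - m)   # hat{x}^j = x^j - m^j * 1
            means.append(m)
    return np.array(cols).T, np.array(means)
```

With the patent's settings (patch = 8, step = 1), a 256 × 256 source image yields (256 − 8 + 1)² = 62001 blocks of dimension 64.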
Step 200, fusing stage: the sparse coefficients of x̂_C^j and x̂_R^j are solved with the CoefROMP algorithm:

$$\alpha_C^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_C^j - D_F\,\alpha\|_2 < \varepsilon, \qquad \alpha_R^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_R^j - D_F\,\alpha\|_2 < \varepsilon$$

where ‖α‖_0 denotes the number of nonzero elements of the sparse coefficient α, ε denotes the tolerated error, and D_F denotes the fused dictionary obtained by fusing dictionaries D_C and D_R. D_F is computed as follows:

Using the high-quality CT and MR images shown in Fig. 2 as the training set, vector pairs {X_C, X_R} are sampled from it; X_C ∈ R^{d×n} is defined as the matrix formed by the n sampled CT image vectors and X_R ∈ R^{d×n} as the matrix formed by the corresponding n sampled MR image vectors, where R^{d×n} denotes the space of d×n matrices;
The coupled dictionary training of the invention uses an improved K-SVD algorithm, which adds a support-completeness prior to the traditional dictionary-learning cost function and alternately updates D_C, D_R, and A; the corresponding training optimization problem is given by formula (3), where A is the joint sparse coefficient matrix of X_C and X_R, τ is the sparsity of A, ⊙ denotes element-wise multiplication, and the mask matrix M consists of the elements 0 and 1 and is defined as M = {|A| = 0}, that is, M(i,j) = 1 if A(i,j) = 0 and M(i,j) = 0 otherwise. Hence A ⊙ M = 0 keeps all zero elements of A intact. Auxiliary variables are then introduced.
Formula (3) can then be equivalently converted into formula (5), whose solution is divided into a sparse coding step and a dictionary update step.

First, in the sparse coding stage, the dictionaries are initialized with random matrices and the joint sparse coefficient matrix A is updated by solving formula (6). The nonzero elements of each column of A are processed separately while the zero elements are kept intact, so formula (6) can be converted into formula (7), in which the dictionary is restricted to the submatrix over the nonzero support of the corresponding column of A and α_i is the nonzero part of the i-th column of A. Formula (7) is solved with the coefficient-reuse orthogonal matching pursuit algorithm CoefROMP, which yields the updated joint sparse coefficient matrix A.
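Since CoefROMP is not available in standard libraries, the sparse coding step can be sketched with plain orthogonal matching pursuit, which greedily solves the same ℓ0-constrained problem; CoefROMP additionally reuses the previous iteration's residual, a refinement this stand-in omits:

```python
import numpy as np

def omp(D, x, tol=0.01, max_atoms=6):
    """Greedy orthogonal matching pursuit: approximately solve
    min ||a||_0  s.t.  ||x - D a||_2 < tol
    by adding the atom most correlated with the residual, then
    re-fitting the coefficients on the chosen support by least squares."""
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(0)
    a = np.zeros(D.shape[1])
    while np.linalg.norm(residual) >= tol and len(support) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:                      # no new atom improves the fit
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a[support] = coef
    return a
```

The stopping values tol = 0.01 and max_atoms = 6 mirror the patent's ε = 0.01 and τ = 6.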
Second, in the dictionary update stage, the optimization problem of formula (5) is converted into formula (8), whose penalty term is rewritten in terms of the error matrix E_k. Here d_k denotes the k-th column (the atom to be updated) of the dictionary, α_k denotes the k-th row of the joint sparse coefficient matrix A, and M_j denotes the j-th row of the mask matrix M, which guarantees that the zero elements stay in the correct positions. The mask matrix is obtained by replicating the row vector d times into a rank-1 matrix of size d × n; it effectively removes the columns corresponding to samples that do not use the k-th atom. Applying the singular value decomposition (SVD) to the error matrix E_k gives E_k = UΔV^T; the atom d_k is updated with the first column of U, and the k-th row of the sparse coefficient matrix A is simultaneously updated with the product of the first column of V and Δ(1,1).
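The rank-1 SVD atom update can be sketched as below for the classical single-dictionary K-SVD case; the patent's variant additionally couples D_C and D_R and applies the mask matrix M, which this sketch omits:

```python
import numpy as np

def ksvd_atom_update(D, A, X, k):
    """One K-SVD atom update: restrict to the signals that use atom k,
    form the error matrix E_k without atom k's contribution, and replace
    atom k and the nonzero part of row k of A with the leading singular
    pair of E_k (so the atom and its coefficients are updated jointly)."""
    omega = np.nonzero(A[k, :])[0]            # signals that use atom k
    if omega.size == 0:
        return D, A
    E = X[:, omega] - D @ A[:, omega] + np.outer(D[:, k], A[k, omega])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                         # first left singular vector
    A[k, omega] = s[0] * Vt[0, :]             # Delta(1,1) times first row of V^T
    return D, A
```

Updating the coefficients together with the atom is what the description credits for the smaller representation error and faster convergence.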
Finally, the sparse coding and dictionary update stages are executed in a loop until the preset number of iterations is reached, and a pair of coupled dictionaries D_C and D_R is output. The dictionaries D_C and D_R are then fused by the following method:

L_C(n) and L_R(n), n = 1, 2, …, N, denote the characteristic indices of the n-th atoms of the CT dictionary and the MR dictionary respectively. Since brain CT and MR images of the same anatomical location are acquired by different imaging devices, the two necessarily share common features as well as distinctive ones. The invention treats atoms whose characteristic indices differ greatly as distinctive features and fuses them with the "choose max" rule, and treats atoms whose characteristic indices differ little as common features and fuses them with the "average" rule. λ = 0.25 is set here, and, in view of the physical characteristics of medical images, information entropy is used as the characteristic index. This approach combines the sparse-domain and spatial-domain methods: computing the characteristic index of the dictionary atoms from the physical characteristics of medical images gives it a clearer physical meaning than purely sparse-domain methods.
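The entropy-based dictionary fusion can be sketched as follows. The exact fusion formula is not reproduced in the text, so the absolute-difference threshold against λ = 0.25 and the histogram-based entropy estimate are assumptions:

```python
import numpy as np

def atom_entropy(atom, bins=16):
    """Information entropy of one dictionary atom, used as the
    characteristic index L(n); the histogram estimate is an assumption."""
    hist, _ = np.histogram(atom, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_dictionaries(DC, DR, lam=0.25):
    """Fuse coupled dictionaries atom by atom: atoms whose entropy
    indices differ by more than lam are treated as modality-specific
    ("choose max"); similar atoms are treated as common and averaged."""
    DF = np.empty_like(DC)
    for n in range(DC.shape[1]):
        lc, lr = atom_entropy(DC[:, n]), atom_entropy(DR[:, n])
        if abs(lc - lr) > lam:
            DF[:, n] = DC[:, n] if lc >= lr else DR[:, n]
        else:
            DF[:, n] = 0.5 * (DC[:, n] + DR[:, n])
    return DF
```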
Because the dictionary update stage updates the dictionary and the nonzero elements of the sparse representation coefficients simultaneously, the representation error of the dictionary is smaller and its convergence is faster. In the sparse coding stage, instead of discarding the previous iteration's representation at every iteration, the CoefROMP algorithm reuses the sparse representation residual of the previous iteration to update the coefficients, so the solution of the problem is reached quickly.
After the fused dictionary D_F has been computed, the l2 norm of a sparse coefficient is taken as the activity measure of its source image, and the sparse coefficients α_C^j and α_R^j are fused by the rule: α_F^j = α_C^j if ‖α_C^j‖_2 ≥ ‖α_R^j‖_2, and α_R^j otherwise. The means m_C^j and m_R^j are fused with the "weighted average" rule m_F^j = w · m_C^j + (1 − w) · m_R^j, where w is the fusion weight; the fusion result of x̂_C^j and x̂_R^j is then x_F^j = D_F α_F^j + m_F^j · 1.
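The per-patch fusion can be sketched as follows. The definition of the weight w is elided in the extracted text, so the activity-proportional form used here is an assumption:

```python
import numpy as np

def fuse_patch(DF, aC, aR, mC, mR, w=None):
    """Fuse one CT/MR patch pair: keep the sparse code with the larger
    l2 norm (activity measure), fuse the means by weighted average, and
    reconstruct x_F = D_F a_F + m_F * 1. The default weight
    w = ||aC||_2 / (||aC||_2 + ||aR||_2) is an assumed form."""
    nC, nR = np.linalg.norm(aC), np.linalg.norm(aR)
    aF = aC if nC >= nR else aR               # "choose max" on activity
    if w is None:
        w = nC / (nC + nR) if (nC + nR) > 0 else 0.5
    mF = w * mC + (1 - w) * mR                # weighted-average mean fusion
    return DF @ aF + mF                       # mF * all-ones vector
```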
Step 300, regeneration stage: the above two steps are applied to all image blocks to obtain the fusion results of all blocks. Each fused block vector x_F^j is reshaped by the inverse sliding-window process into a √m × √m image block and put back at its pixel position; overlapping pixels are then averaged to obtain the final fused image I_F.
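The regeneration stage (inverse sliding window with averaging of overlaps) can be sketched as below; the function name is my own:

```python
import numpy as np

def assemble_image(patches, shape, patch=8, step=1):
    """Inverse sliding window: place each fused patch (one column of
    `patches`) back at its pixel position, accumulate overlapping
    contributions, and divide by the overlap count to average them,
    giving the final fused image."""
    M, N = shape
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    idx = 0
    for i in range(0, M - patch + 1, step):
        for j in range(0, N - patch + 1, step):
            acc[i:i + patch, j:j + patch] += patches[:, idx].reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1
            idx += 1
    return acc / cnt
```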
To verify the effectiveness of the method, three groups of registered brain CT/MR images were fused: a normal brain (a and b in Fig. 3), brain atrophy (a and b in Fig. 4), and a brain tumor (a and b in Fig. 5); the image size is 256 × 256. The comparison algorithms are: the discrete wavelet transform (DWT), stationary wavelet transform (SWT), non-subsampled contourlet transform (NSCT), traditional sparse representation (SRM), K-SVD dictionary learning (SRK), and multi-scale dictionary learning (MDL). Their fusion results are shown in c, d, e, f, g, h of Fig. 3, Fig. 4, and Fig. 5 respectively.
Among the methods based on multi-scale transforms, the decomposition level of the DWT and SWT methods is set to 3, with the wavelet bases set to "db6" and "bior1.1" respectively. The NSCT method uses the "9-7" pyramid filter and the "c-d" directional filter, with decomposition levels set to {2^2, 2^2, 2^3, 2^4}. In the methods based on sparse representation, the sliding step is 1, the block size is 8 × 8, the dictionary size is 64 × 256, the error is ε = 0.01, and the sparsity is τ = 6; the ICDL method uses the improved K-SVD algorithm and performs 6 dictionary update cycles (DUC) and 30 iterations.
As can be seen from Figs. 3-5, the fused images of the DWT method have blurred edges and texture, distorted image information, and block artifacts. Compared with DWT, the fusion quality of the SWT and NSCT methods is better, with great improvements in brightness, contrast, and sharpness, but edge brightness distortion remains and artifacts appear in the soft tissue and focal areas. The SRM and SRK methods show bone tissue and soft tissue more clearly than the multi-scale transform methods, reduce the artifacts, and allow the focal area to be recognized well. The MDL method preserves more detail than SRM and SRK and further improves image quality, but some artifacts remain. The proposed ICDL method outperforms the other methods in brightness, contrast, sharpness, and detail preservation; its fused images are free of artifacts, and the bone tissue, soft tissue, and focal area are displayed clearly, which aids diagnosis.
It should be understood by those skilled in the art that embodiments of the invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.

The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they know the basic inventive concept. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the invention.

Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to cover them as well.
Claims (3)
1. A brain CT/MR image fusion method based on sparse representation with improved coupled dictionary learning, characterized in that the method comprises:

Pretreatment stage: for registered brain CT/MR source images I_C, I_R ∈ R^{M×N}, where R^{M×N} denotes the space of M×N matrices, a sliding window with step 1 divides I_C and I_R into √m × √m image blocks, giving (M − √m + 1)(N − √m + 1) blocks for each of the CT source image I_C and the MR source image I_R; each block is then vectorized into an m-dimensional vector, the j-th block of I_C is denoted x_C^j and the j-th block of I_R is denoted x_R^j, and the respective mean is subtracted:
$$\hat{x}_C^j = x_C^j - m_C^j \cdot \mathbf{1}$$

$$\hat{x}_R^j = x_R^j - m_R^j \cdot \mathbf{1}$$
wherein m_C^j and m_R^j denote the means of all elements of x_C^j and x_R^j respectively, and 1 denotes the all-ones m-dimensional vector;

Fusing stage: the sparse coefficients of x̂_C^j and x̂_R^j are solved with the CoefROMP algorithm, formulated as follows:
$$\alpha_C^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_C^j - D_F\,\alpha\|_2 < \varepsilon$$

$$\alpha_R^j = \underset{\alpha}{\arg\min}\,\|\alpha\|_0 \quad \text{s.t.} \quad \|\hat{x}_R^j - D_F\,\alpha\|_2 < \varepsilon$$
wherein ‖α‖_0 denotes the number of nonzero elements of the sparse coefficient α, ε denotes the tolerated error, and D_F denotes the fused dictionary obtained by fusing dictionaries D_C and D_R;

the l2 norm of a sparse coefficient is taken as the activity measure of its source image, and the sparse coefficients α_C^j and α_R^j are fused by the following fusion rule:
$$\alpha_F^j=\begin{cases}\alpha_C^j, & \text{if } \|\alpha_C^j\|_2\ge\|\alpha_R^j\|_2,\\ \alpha_R^j, & \text{otherwise}\end{cases}$$
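The choose-max rule is straightforward to implement; a minimal per-patch sketch:

```python
import numpy as np

def fuse_alpha(alpha_c, alpha_r):
    """Claimed fusion rule: keep the coefficient vector whose l2 norm
    (the activity-level measure) is larger; ties go to the CT vector."""
    if np.linalg.norm(alpha_c) >= np.linalg.norm(alpha_r):
        return alpha_c
    return alpha_r
```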
The means $m_C^j$ and $m_R^j$ are merged by the weighted-average rule:
$$m_F^j=w\cdot m_C^j+(1-w)\cdot m_R^j$$
where $w$ is the weighting coefficient; the fusion result of $\hat{x}_C^j$ and $\hat{x}_R^j$ is then:
$$x_F^j=D_F\,\alpha_F^j+m_F^j\cdot\mathbf{1}$$
Reconstruction stage: the pretreatment and fusion stages are applied to every image block to obtain all block-level fusion results. Each fused vector $x_F^j$ is reshaped back into an image block by inverting the sliding-window process and placed at its corresponding pixel position; pixels covered by multiple blocks are then averaged to obtain the final fused image $I_F$.
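The inverse sliding-window step (put every fused block back and average overlapping pixels) can be sketched as follows; the patch size and positions are generic parameters, not values fixed by the claim:

```python
import numpy as np

def aggregate_patches(patches, positions, image_shape, p):
    """Rebuild the fused image I_F from fused patch vectors.
    patches: (n, p*p) array; positions: top-left (row, col) per patch."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for vec, (r, c) in zip(patches, positions):
        acc[r:r + p, c:c + p] += vec.reshape(p, p)
        cnt[r:r + p, c:c + p] += 1.0
    cnt[cnt == 0] = 1.0   # leave uncovered pixels at zero
    return acc / cnt      # average where patches overlap
```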
2. The method of claim 1, characterised in that in the fusion stage the fused dictionary is computed as follows:
Using high-quality CT and MR images as the training set, vector pairs $\{X_C, X_R\}$ are sampled from the training set. Define $X^C \in R^{d\times n}$ as the matrix formed by the $n$ sampled CT image vectors and $X^R \in R^{d\times n}$ as the matrix formed by the corresponding $n$ sampled MR image vectors, where $R^{d\times n}$ denotes the space of matrices with $d$ rows and $n$ columns;
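Sampling the aligned vector pairs might look like the sketch below; the patch size and sample count are illustrative choices, not values from the claim, and the two images are assumed to be registered:

```python
import numpy as np

def sample_patch_vectors(img_c, img_r, patch=8, n=1000, seed=0):
    """Draw n aligned patch pairs from registered CT/MR images and stack
    them as columns of X_C, X_R in R^{d x n} with d = patch*patch."""
    rng = np.random.default_rng(seed)
    H, W = img_c.shape
    rows = rng.integers(0, H - patch + 1, size=n)
    cols = rng.integers(0, W - patch + 1, size=n)
    X_c = np.stack([img_c[r:r + patch, c:c + patch].ravel()
                    for r, c in zip(rows, cols)], axis=1)
    X_r = np.stack([img_r[r:r + patch, c:c + patch].ravel()
                    for r, c in zip(rows, cols)], axis=1)
    return X_c, X_r
```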
A complete-support prior is added to the dictionary-learning cost function, and $D^C$, $D^R$ and $A$ are updated alternately, yielding the training optimization problem denoted formula (1). Here $A$ is the joint sparse coefficient matrix of $X^C$ and $X^R$, $\tau$ is the sparsity of $A$, $\odot$ denotes the element-wise (Hadamard) product, and the mask matrix $M$, composed of 0s and 1s, is defined as $M=\{A\,|\,A=0\}$, i.e. $M(i,j)=1$ if $A(i,j)=0$ and $M(i,j)=0$ otherwise. Introduce the auxiliary variables:
$$\bar{X}=\begin{bmatrix}X^C\\ X^R\end{bmatrix},\qquad \bar{D}=\begin{bmatrix}D^C\\ D^R\end{bmatrix}\tag{2}$$
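As a quick check of the mask definition above ($M$ marks exactly the zero entries of $A$), a two-line numpy sketch:

```python
import numpy as np

A = np.array([[0.0, 1.2],
              [3.4, 0.0]])
M = (A == 0).astype(float)   # M(i, j) = 1 exactly where A(i, j) = 0
```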
Formula (1) can then be converted into the equivalent formula (3), whose solution is divided into two steps, sparse coding and dictionary update:
First, in the sparse coding stage, the dictionaries $D^C$ and $D^R$ are initialized with random matrices, and the joint sparse coefficient matrix $A$ is updated by solving formula (4). The nonzero elements of each column of $A$ are processed individually while the zero elements are kept intact, so formula (4) can be converted to:
$$\alpha_i=\arg\min_{\alpha_i}\|\bar{x}_i-\bar{D}_i\,\alpha_i\|_2^2\tag{5}$$
where $\bar{D}_i$ is the submatrix of $\bar{D}$ restricted to the nonzero support of the corresponding column of $A$, and $\alpha_i$ contains the nonzero entries of the $i$-th column of $A$. Formula (5) is solved with the Coefficient-Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm to obtain the updated joint sparse coefficient matrix $A$;
Second, in the dictionary update stage, the optimization problem of formula (3) is converted into formula (6), whose compensation term can be written as formula (7). Here $\bar{d}_k$ denotes the $k$-th column of $\bar{D}$, the atom to be updated; $a_k$ denotes the $k$-th row of the joint sparse coefficient matrix $A$; and $m_k$ denotes the corresponding row of the mask matrix $M$, which guarantees that the zero elements of $A$ remain in the correct positions. The mask matrix $\bar{M}_k$, a rank-one matrix of size $d\times n$ obtained by replicating the row vector $m_k$ $d$ times, effectively removes those columns whose samples do not use the $k$-th atom. Applying singular value decomposition (SVD) to the error matrix $E_k$ gives $E_k=U\Delta V^T$; the atom $\bar{d}_k$ of dictionary $\bar{D}$ is updated with the first column of $U$, and the $k$-th row of the sparse coefficient matrix $A$ is simultaneously updated with the product of the first column of $V$ and $\Delta(1,1)$;
Finally, the sparse coding and dictionary update stages are executed in a loop until the preset number of iterations is reached, and a pair of coupled dictionaries $D^C$ and $D^R$ is output.
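One atom update of the dictionary-update stage can be sketched as a K-SVD-style rank-one update; restricting the error to the columns that actually use atom k plays the role of the mask described above. The function name and interface are illustrative:

```python
import numpy as np

def update_atom(D, A, X, k):
    """K-SVD-style update of atom k: restrict the residual to samples
    using atom k, take the SVD E_k = U @ diag(s) @ Vt, replace the atom
    with U[:, 0] and the coefficient row with s[0] * Vt[0]."""
    D, A = D.copy(), A.copy()
    users = np.nonzero(A[k, :])[0]      # columns whose samples use atom k
    if users.size == 0:
        return D, A
    # error without atom k's contribution, restricted to those columns
    E_k = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
    U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
    D[:, k] = U[:, 0]
    A[k, users] = s[0] * Vt[0, :]
    return D, A
```

Because the update is the best rank-one fit to the restricted error, an exactly representable dataset stays exactly represented after the update.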
3. The method of claim 1, characterised in that the dictionaries $D^C$ and $D^R$ are fused as follows:
Let $L^C(n)$ and $L^R(n)$, $n=1,2,\ldots,N$, denote the characteristic indices of the $n$-th atoms of the CT dictionary and the MR dictionary respectively; the fusion formula is expressed as follows:
$$D^F(n)=\begin{cases}D^C(n), & \text{if } L^C(n)>L^R(n)\ \text{and}\ \dfrac{|L^C(n)-L^R(n)|}{|L^C(n)+L^R(n)|}\ge\lambda,\\[1.5ex] D^R(n), & \text{if } L^C(n)\le L^R(n)\ \text{and}\ \dfrac{|L^C(n)-L^R(n)|}{|L^C(n)+L^R(n)|}\ge\lambda,\\[1.5ex] \bigl(D^C(n)+D^R(n)\bigr)/2, & \text{otherwise}\end{cases}$$
Here $\lambda$ is set to 0.25.
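The atom-wise fusion rule can be implemented directly; this sketch assumes the characteristic indices are positive (so the denominator is nonzero), and the array interface for `L_c`/`L_r` is illustrative:

```python
import numpy as np

def fuse_dictionaries(D_c, D_r, L_c, L_r, lam=0.25):
    """Atom-wise fusion per the claimed rule: when the characteristic
    indices of the n-th atoms differ by a relative margin >= lam, keep
    the atom with the larger index; otherwise average the two atoms."""
    D_f = np.empty_like(D_c)
    for n in range(D_c.shape[1]):
        rel = abs(L_c[n] - L_r[n]) / abs(L_c[n] + L_r[n])
        if rel >= lam:
            D_f[:, n] = D_c[:, n] if L_c[n] > L_r[n] else D_r[:, n]
        else:
            D_f[:, n] = (D_c[:, n] + D_r[:, n]) / 2.0
    return D_f
```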
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710259812.1A CN107194912B (en) | 2017-04-20 | 2017-04-20 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107194912A true CN107194912A (en) | 2017-09-22 |
CN107194912B CN107194912B (en) | 2020-12-29 |
Family
ID=59871779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710259812.1A Active CN107194912B (en) | 2017-04-20 | 2017-04-20 | Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107194912B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104182954A (en) * | 2014-08-27 | 2014-12-03 | 中国科学技术大学 | Real-time multi-modal medical image fusion method |
CN104376565A (en) * | 2014-11-26 | 2015-02-25 | 西安电子科技大学 | Non-reference image quality evaluation method based on discrete cosine transform and sparse representation |
Non-Patent Citations (2)
Title |
---|
Zong Jingjing et al.: "Medical image fusion and simultaneous denoising with joint sparse representation", 《中国生物》 *
Li Chao et al.: "Medical image fusion based on nonsubsampled Contourlet transform and regional features", 《计算机应用》 (Journal of Computer Applications) *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680072A (en) * | 2017-11-01 | 2018-02-09 | 淮海工学院 | It is a kind of based on the positron emission fault image of depth rarefaction representation and the fusion method of MRI |
CN108428225A (en) * | 2018-01-30 | 2018-08-21 | 李家菊 | Image department brain image fusion identification method based on multiple dimensioned multiple features |
CN108846430B (en) * | 2018-05-31 | 2022-02-22 | 兰州理工大学 | Image signal sparse representation method based on multi-atom dictionary |
CN108846430A (en) * | 2018-05-31 | 2018-11-20 | 兰州理工大学 | A kind of sparse representation method of the picture signal based on polyatom dictionary |
CN109461140A (en) * | 2018-09-29 | 2019-03-12 | 沈阳东软医疗系统有限公司 | Image processing method and device, equipment and storage medium |
CN109946076A (en) * | 2019-01-25 | 2019-06-28 | 西安交通大学 | A kind of planet wheel bearing fault identification method of weighted multiscale dictionary learning frame |
CN109946076B (en) * | 2019-01-25 | 2020-04-28 | 西安交通大学 | Planetary wheel bearing fault identification method of weighted multi-scale dictionary learning framework |
CN109998599A (en) * | 2019-03-07 | 2019-07-12 | 华中科技大学 | A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system |
WO2020223865A1 (en) * | 2019-05-06 | 2020-11-12 | 深圳先进技术研究院 | Ct image reconstruction method, device, storage medium, and computer apparatus |
CN110443248A (en) * | 2019-06-26 | 2019-11-12 | 武汉大学 | Substantially remote sensing image semantic segmentation block effect removing method and system |
CN110443248B (en) * | 2019-06-26 | 2021-12-03 | 武汉大学 | Method and system for eliminating semantic segmentation blocking effect of large-amplitude remote sensing image |
CN114428873A (en) * | 2022-04-07 | 2022-05-03 | 源利腾达(西安)科技有限公司 | Thoracic surgery examination data sorting method |
CN117877686A (en) * | 2024-03-13 | 2024-04-12 | 自贡市第一人民医院 | Intelligent management method and system for traditional Chinese medicine nursing data |
CN117877686B (en) * | 2024-03-13 | 2024-05-07 | 自贡市第一人民医院 | Intelligent management method and system for traditional Chinese medicine nursing data |
Also Published As
Publication number | Publication date |
---|---|
CN107194912B (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107194912A (en) | The brain CT/MR image interfusion methods of improvement coupling dictionary learning based on rarefaction representation | |
CN104156994B (en) | Compressed sensing magnetic resonance imaging reconstruction method | |
CN105046672B (en) | A kind of image super-resolution rebuilding method | |
CN110021037A (en) | A kind of image non-rigid registration method and system based on generation confrontation network | |
CN103218791B (en) | Based on the image de-noising method of sparse self-adapting dictionary | |
CN107133919A (en) | Time dimension video super-resolution method based on deep learning | |
CN112465827A (en) | Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation | |
CN106997581A (en) | A kind of method that utilization deep learning rebuilds high spectrum image | |
CN104091337A (en) | Deformation medical image registration method based on PCA and diffeomorphism Demons | |
CN103049923B (en) | The method of magnetic resonance fast imaging | |
CN107274462A (en) | The many dictionary learning MR image reconstruction methods of classification based on entropy and geometric direction | |
Du et al. | Accelerated super-resolution MR image reconstruction via a 3D densely connected deep convolutional neural network | |
CN107301630B (en) | CS-MRI image reconstruction method based on ordering structure group non-convex constraint | |
CN107993194A (en) | A kind of super resolution ratio reconstruction method based on Stationary Wavelet Transform | |
CN105957029B (en) | MR image reconstruction method based on tensor dictionary learning | |
CN111487573B (en) | Enhanced residual error cascade network model for magnetic resonance undersampling imaging | |
CN109887050A (en) | A kind of code aperture spectrum imaging method based on self-adapting dictionary study | |
CN101799919A (en) | Front face image super-resolution rebuilding method based on PCA alignment | |
CN109741439B (en) | Three-dimensional reconstruction method of two-dimensional MRI fetal image | |
CN107240131B (en) | Mammary gland image registration method based on iterative texture deformation field | |
CN108510465A (en) | The multi-focus image fusing method indicated based on consistency constraint non-negative sparse | |
CN112750131A (en) | Pelvis nuclear magnetic resonance image musculoskeletal segmentation method based on scale and sequence relation | |
CN104376198B (en) | Self adaptation MRI parallel imaging method utilizing and device | |
Wang et al. | An improved coupled dictionary and multi-norm constraint fusion method for CT/MR medical images | |
CN103985111A (en) | 4D-MRI super-resolution reconstruction method based on double-dictionary learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||