CN103985109B - Feature-level medical image fusion method based on 3D (three dimension) shearlet transform


Info

Publication number
CN103985109B
CN103985109B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410246721.0A
Other languages
Chinese (zh)
Other versions
CN103985109A (en)
Inventor
王帅
段昶
刘想
程建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201410246721.0A priority Critical patent/CN103985109B/en
Publication of CN103985109A publication Critical patent/CN103985109A/en
Application granted granted Critical
Publication of CN103985109B publication Critical patent/CN103985109B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a feature-level medical image fusion method based on the 3D (three-dimensional) shearlet transform, belonging to the technical field of medical image processing and its applications. The method mainly comprises the following steps: 1, performing the 3D-D-CSST (three-dimensional discrete compact shearlet transform) or 3D-DT-CSST (three-dimensional dual-tree compact shearlet transform) on two images to obtain transform coefficient images Ca and Cb; 2, performing image fusion on the transform coefficients to obtain fused coefficients Cf; and 3, performing the inverse DWT or DTCWT and applying the backward shear transform to the transformed images to obtain the fused image Vf. The method addresses the problems that fused image quality is relatively low and that locally important but inconspicuous information is easily ignored.

Description

A feature-level medical image fusion method based on the 3D shearlet transform
Technical field
The invention belongs to the technical field of medical image processing and its applications, and in particular relates to a feature-level medical image fusion method based on the 3D shearlet transform. It addresses the problems that fused image quality is relatively low and that locally important but inconspicuous information is easily overlooked.
Background technology
Medical image fusion is a branch of image fusion, and many fusion methods have been widely used in clinical diagnosis. Fusion refers to the process of extracting the important, target-related information from source images acquired by different devices such as CT and MRI and merging it into a single image. The information contained in images generated by different devices, or by different configurations of the same device, differs; some of the information is similar, but most of it is complementary. For example, CT images mainly provide information about the dense, hard tissues of the human body, whereas MRI images mainly provide information about soft tissues. Post-processed images derived from a single acquisition of the same MRI scanner also differ: T2* images, for instance, provide contrast related to tissue relaxation times, while quantitative susceptibility mapping (QSM) provides susceptibility contrast caused by various magnetic biomarkers (such as iron, calcium, and gadolinium contrast agents). In general, image fusion first requires the source images to be registered; since T2* and QSM images are generated by post-processing the data of a single scan, the two are already fully registered.
Current research on medical image fusion is mainly concerned with two-dimensional images, but many types of medical devices now generate three-dimensional images. In a 3D image, the grey value of each point is correlated not only with neighbouring points in the same slice but also with neighbouring points in adjacent slices. Traditional two-dimensional fusion methods lose this third-dimension information, so it is necessary to study fusion methods that can process 3D images directly.
Fusion algorithms can operate in the spatial domain or in a transform domain. In the spatial domain, the fused image is typically a weighted average of the source data; such methods are simple to implement, but the fused image quality is not high. Transform-domain methods follow these steps: 1) transform the source images into the transform domain; 2) process the image coefficients with a fusion rule to obtain the fused coefficients; 3) transform the coefficients back to the spatial domain, the output being the fused image. Research on this class of algorithms focuses mainly on two points: the choice of transform and the design of the fusion rule. Many multi-scale transforms can be used in fusion algorithms, such as the DWT, the DTCWT, the curvelet transform, and the shearlet transform.
The shearlet transform is an efficient representation of multi-dimensional data that has been proposed and has gradually matured in recent years. To address the wavelet transform's inability to sparsely represent anisotropic features such as edges, scholars have also proposed many other multi-scale transforms. Among all these methods, however, the shearlet transform is the only one that simultaneously possesses the following advantages: it has only one, or a finite, set of generating functions; it can represent high-dimensional data nearly optimally; it handles continuous and discrete data in a unified way; and it admits a compactly supported implementation. The shearlet transform is widely used in image processing, for example in denoising, edge detection, and enhancement.
Shearlets are equally applicable to image fusion, but existing image fusion techniques have the following shortcomings. 1. Traditional fusion methods based on the wavelet transform and pyramid transforms lack the ability to sparsely represent the directionality of image structures, so the quality of the fused image is relatively low. 2. Pixel-level image fusion does not take the structural information of the image into account, so locally important yet inconspicuous information may be ignored during fusion. These defects can negatively affect the final medical diagnosis.
Content of the invention
In view of the above prior art, the present invention aims to provide a feature-level medical image fusion method based on the 3D shearlet transform. It addresses the defects that fusion methods based on the wavelet transform and pyramid transforms yield relatively low fused image quality, because multi-scale transforms lack the ability to sparsely represent the directionality of image structures, and that locally important but inconspicuous information is easily ignored during fusion; these defects ultimately have a negative impact on medical diagnosis.
In order to solve the above technical problems, the present invention adopts the following technical scheme:
Here, 3D shearlets specifically refer to 3D compactly supported discrete shearlets (3D-D-shearlets) or 3D dual-tree compactly supported shearlets (3D-DT-shearlets). The D-shearlet transform comprises two steps: a forward shear transform and a DWT; the DT-shearlet transform comprises two steps: a forward shear transform and a DTCWT.
A feature-level medical image fusion method based on the 3D shearlet transform, characterised in that it comprises the following steps:
Step 1: prepare the two 3D medical images to be fused, Va and Vb; apply the forward shear transform along the three directions of each image, then apply the discrete wavelet transform DWT or the dual-tree complex wavelet transform DTCWT to the sheared images to obtain the corresponding groups of transform coefficient images Ca and Cb.
Step 2: fuse the coefficients obtained by the 3D shearlet transform to obtain the fused image coefficients Cf.
Step 3: apply the inverse DWT or DTCWT to the fused image coefficients Cf obtained in Step 2, apply the backward shear transform to the transformed images to obtain multiple fused images, and average these images to obtain the final fused image Vf.
In the present invention, Step 2 comprises the following two sub-steps:
2.1. Fuse the low-frequency parts CaL and CbL of the coefficient images Ca and Cb obtained by the 3D shearlet transform using the averaging rule to obtain the low-frequency part CfL of the fused image.
2.2. Fuse the high-frequency parts CaH and CbH at feature level: determine the feature type of the images to be fused at the same position and fuse according to the maximum-information-retention rule to obtain CfH.
In the present invention, in Step 1 the forward shear transform is first applied to the images, and the discrete wavelet transform DWT or the dual-tree complex wavelet transform DTCWT is then applied to the sheared images. The forward shear transform is as follows. For a volume of size l × m × n, set up a coordinate system with origin (0, 0, 0) and diagonally opposite corner (l−1, m−1, n−1), and apply shear transforms along its three directions, where the shear transform in the z direction applies the following coordinate transform to each point in the data:
The shear transform formula for the x direction is:
The shear transform formula for the y direction is:
Here (x, y, z) are the coordinates before the transform and (x′, y′, z′) the coordinates after it. ktr, tr ∈ {a1, b1, a2, b2, a3, b3}, is the maximum shift distance. Taking different values of ktr retains the information of different directions, so the shearlet transform produces a number of 3D images determined by the numbers of direction values taken for kai and kbi.
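For illustration, the forward shear can be sketched in Python as follows. The shear formulas themselves are reproduced only as figures in the original publication, so the linear, wrap-around mapping below is an assumption consistent with the description of ktr as the maximum shift distance; the function name shear_z is likewise only illustrative.

```python
import numpy as np

def shear_z(vol, ka, kb, inverse=False):
    """Shear a volume along the z axis: slice z is shifted in (x, y) by an amount
    that grows linearly with z and reaches (ka, kb) voxels at the last slice.
    The linear, wrap-around mapping is an assumption made for illustration only.
    Set inverse=True to apply the backward shear of Step 3."""
    n = vol.shape[2]
    out = np.empty_like(vol)
    sign = -1 if inverse else 1
    for z in range(n):
        dx = sign * int(round(ka * z / max(n - 1, 1)))   # shift in x for this slice
        dy = sign * int(round(kb * z / max(n - 1, 1)))   # shift in y for this slice
        out[:, :, z] = np.roll(np.roll(vol[:, :, z], dx, axis=0), dy, axis=1)
    return out
```

Shears along the x and y directions follow the same pattern with the roles of the axes exchanged.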
In Step 2.1, the low-frequency part of the transform coefficient images is fused with the averaging rule:
CfL = (CaL + CbL)/2    (2)
In Step 2.2, feature-level fusion is applied to the high-frequency part of the transform coefficient images; the specific operation steps are as follows:
2.2.1. First compute the structure tensor of the high-frequency parts CaH and CbH of the transform coefficient images Ca and Cb, and then analyse the rank of the structure tensor (a code sketch of this analysis is given after sub-step 2.2.3 below):
For each point of the high-frequency parts CaH and CbH of the coefficient images Ca and Cb, the structure tensor is a 3 × 3 matrix whose rank can take the values 0, 1, 2, 3, corresponding respectively to flat, planar, linear, and point-like region features in the image. Ω is a local region of size l1 × m1 × n1, and the structure tensor of a point p can be written compactly as the Gaussian-weighted sum of outer products of the image gradient over Ω: s(p) = Σr∈Ω w(r) ∇V(p − r) ∇V(p − r)ᵀ, with ∇V = (Vx, Vy, Vz)ᵀ;
w(r) is a Gaussian template of size l1 × m1 × n1; Vx(p), Vy(p), Vz(p) are the partial derivatives of the image along the three axes x, y, z;
Compute the eigenvalues Ex, Ey, Ez of this 3 × 3 tensor matrix. Set a threshold whose control parameter K is set to 0.01; the number of nonzero eigenvalues of point p is M = Σt∈{x,y,z} [Et > Tt]. For the same position in the two images, let Ma denote the number of nonzero eigenvalues of Ca and Mb the number of nonzero eigenvalues of Cb; Ma and Mb serve as approximations of the rank of the tensor matrix;
2.2.2. If Ma = Mb, the two images have the same type of feature at this position; compute the similarity of this position,
γab = 2 Σp,q∈Ω |CaH(p)·CbH*(q)| / ( Σp,q∈Ω |CaH(p)·CaH*(q)| + Σp,q∈Ω |CbH(p)·CbH*(q)| );
compute the threshold α; the fusion rule is:
when γab ≤ α, this position is redundant information, and the weighted rule is selected:
CfH = ωaCaH + ωbCbH    (5)
with ωa = δa/(δa + δb) and ωb = δb/(δa + δb), where δa = σa × max(|Ea(x, y, z)|) and δb = σb × max(|Eb(x, y, z)|);
when γab > α, this position is complementary information, and the MRE rule is used: the high-frequency coefficient with the larger regional energy StH is retained, where StH = σt = (1/NΩ) Σp∈Ω (CtH(p) − C̄tH)², t ∈ {a, b};
2.2.3. If Ma ≠ Mb, the fusion rule retains the high-frequency coefficient whose rank approximation is larger: CfH = CaH if Ma ≥ Mb, otherwise CfH = CbH.
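As an illustration of sub-step 2.2.1, the following sketch computes a per-voxel 3 × 3 structure tensor from Gaussian-weighted products of the image gradients and counts the eigenvalues above a threshold as an approximate rank. The patent specifies only the control parameter K = 0.01; the exact form of the threshold Tt is not reproduced in the text, so taking Tt as K times the largest local eigenvalue magnitude is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_rank(vol, sigma=1.0, K=0.01):
    """Approximate rank (0..3) of the structure tensor at every voxel:
    0/1/2/3 correspond to flat / planar / linear / point-like region features."""
    Vx, Vy, Vz = np.gradient(vol.astype(float))        # partial derivatives along the three axes
    grads = (Vx, Vy, Vz)
    S = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # Gaussian template w(r) applied to the gradient products over the local region
            S[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    evals = np.abs(np.linalg.eigvalsh(S))               # eigenvalue magnitudes Ex, Ey, Ez per voxel
    T = K * evals.max(axis=-1, keepdims=True)           # assumed threshold form; only K = 0.01 is given
    return (evals > T).sum(axis=-1)                     # number of "nonzero" eigenvalues = rank approximation
```

The rank counts Ma and Mb obtained this way for the two coefficient volumes drive the choice of fusion rule in sub-steps 2.2.2 and 2.2.3.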
In Step 3, the backward shear transform is applied to the images after the inverse DWT or DTCWT, specifically as follows:
The backward shear transform is the inverse operation of the forward shear transform, where the backward shear transform in the z direction applies the following coordinate transform to each point in the data:
The shear transform formula for the x direction is:
The shear transform formula for the y direction is:
Here (x, y, z) are the coordinates before the transform and (x′, y′, z′) the coordinates after it, and ktr, tr ∈ {a1, b1, a2, b2, a3, b3}, takes the same values as in the forward shear transform.
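Since the backward shear simply undoes the forward shear, it can reuse the shear_z sketch above with the direction reversed; a quick round-trip check, under the same wrap-around assumption, looks like this:

```python
vol = np.random.rand(16, 16, 16)
restored = shear_z(shear_z(vol, ka=4, kb=2), ka=4, kb=2, inverse=True)
assert np.allclose(restored, vol)   # the backward shear undoes the forward shear
```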
Compared with the prior art, the invention has the following advantages:
1. Compared with multi-scale transforms such as the traditional wavelet and pyramid transforms, which lack the ability to sparsely represent directional structural characteristics, the compactly supported shearlet transform can represent the anisotropic features of high-dimensional signals nearly optimally; the fused image retains more accurate directional information, so the fusion quality is higher;
2. The DT-shearlet transform introduces a dual-tree structure, which reduces the fused-image distortion caused by shift variance;
3. DT-shearlets and D-shearlets are compactly supported in the spatial domain, so the fusion quality is higher than with frequency-domain shearlets;
4. The invention uses a feature-level fusion method that takes the internal structural characteristics of the scanned organ into account (including flat, planar, linear, and point-like regions) and preserves the structural information and physical features of the object to the greatest extent; compared with pixel-level fusion rules that consider only the statistical features of the high-frequency coefficients, the fused image quality is higher;
5. The quality of the fused images of the invention, measured by objective indicators (MI and QAB/F), is higher.
Description of the drawings
Fig. 1 is a schematic diagram of the image fusion method of the invention;
Fig. 2 is a schematic diagram of the two-dimensional shear transform;
Fig. 3 is a schematic diagram of the three-dimensional shear transform (shear transform in the z-axis direction).
Specific embodiment
The invention is further described below in conjunction with the drawings and specific embodiments.
Taking T2* magnitude images and QSM images as an example, this experiment processes three-dimensional T2* magnitude images and QSM images with the method of the invention and finally obtains the fused image; in this example the image size is 128 × 128 × 128.
The image fusion method of this embodiment first considers how to represent the anisotropy of a 3D image: the shear transform can represent the anisotropic features of an image well. Secondly, it considers the shift variance introduced by the DWT, and therefore uses the dual-tree complex wavelet transform DTCWT. Finally, it considers how to merge as much as possible of the information carried by the low-frequency and high-frequency coefficients into the fused coefficient image: the averaging rule is used for the low-frequency coefficients, and a feature-level fusion rule is used for the high-frequency coefficients.
The flow is shown in Fig. 1 and comprises the following steps:
Step 1: apply the forward 3D-DT-shearlet transform to the two images Va and Vb to obtain the transform coefficients Ca and Cb. The procedure comprises the three-dimensional forward shear transform and the three-dimensional DTCWT.
Forward shear transform: for a volume of size l × m × n, set up a coordinate system with origin (0, 0, 0) and diagonally opposite corner (l−1, m−1, n−1). The shear transform in the z-axis direction applies a transform to the x and y coordinates of each point in the data;
the shear transform formulas for the other two directions are analogous.
Fig. 2 is a schematic diagram of the shear transform. For this example, l = m = n = 128 and ktr, tr ∈ {a1, b1, a2, b2, a3, b3}, is chosen from {−64, 0, 64}, which produces 27 groups of transformed image data;
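How the six parameters ka1 to kb3 and the three values {−64, 0, 64} combine to give exactly 27 sheared copies is not spelled out in the text; one reading consistent with the stated count is that each of the three shear directions picks a single offset from the three values, which the following hypothetical snippet enumerates:

```python
import itertools

k_values = (-64, 0, 64)
# One offset per shear direction (z, x, y): 3 ** 3 = 27 combinations.
# This pairing is an assumption made only to match the 27 groups stated above.
shear_params = list(itertools.product(k_values, repeat=3))
print(len(shear_params))   # 27
```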
Step 2: fuse the 27 groups of coefficient image data Ca and Cb obtained by the 3D-DT-shearlet transform to obtain Cf. The procedure comprises fusing the low-frequency coefficients and fusing the high-frequency coefficients.
1) The low-frequency parts CaL and CbL of the coefficients Ca and Cb obtained by the 3D-DT-shearlet transform are fused with the averaging rule:
CfL=(CaL+CbL)/2
2) For the high-frequency parts CaH and CbH, the local region Ω of a point p has size l1 × m1 × n1, chosen here as 3 × 3 × 3; compute the structure tensor of point p:
w(r) is a Gaussian template of size l1 × m1 × n1; Vx(p), Vy(p), Vz(p) are the partial derivatives of the image along the three axes x, y, z.
Because neighbouring voxels are correlated, strictly flat, planar, linear, or point-like regions hardly ever occur, so the nonzero-eigenvalue condition is relaxed appropriately when extracting spatial image features: when an eigenvalue of the structure tensor is below the corresponding threshold it is treated as zero, and the number of eigenvalues above the threshold is taken as the rank of the matrix. Compute the eigenvalues Ex, Ey, Ez of this 3 × 3 matrix and set the threshold with control parameter K, which can be set to 0.01; the number of nonzero eigenvalues of point p, M, approximates the rank of the tensor matrix. For the same position in the two images, Ma records the number of eigenvalues of CaH above the threshold and Mb records the number of eigenvalues of CbH above the threshold;
If Ma = Mb, the two images have the same type of feature at this position; then compute the similarity γab of this position.
Compute the threshold α. The fusion rule is:
When γab ≤ α, this position is redundant information, and the weighted rule is selected
CfH = ωaCaH + ωbCbH
When γab > α, this position is complementary information, and the MRE rule is used: the high-frequency coefficient with the larger regional energy StH is retained.
If Ma ≠ Mb, the fusion rule retains the high-frequency coefficient whose rank approximation (Ma or Mb) is larger, as sketched below.
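As a sketch of this high-frequency rule, the block-wise decision can be written as below. It exploits the fact that the double sum in the similarity measure factorises into a product of single sums, because |CaH(p)·CbH*(q)| = |CaH(p)|·|CbH(q)|. The threshold alpha is passed in as a parameter (its formula is not reproduced in the text), and taking sigma as the local standard deviation of each coefficient block is an assumption; the function name and signature are illustrative only.

```python
import numpy as np

def fuse_high_freq_block(CaH, CbH, Ma, Mb, Ea_max, Eb_max, alpha):
    """Fuse one local block of high-frequency coefficients.
    Ma, Mb: approximate structure-tensor ranks for the block;
    Ea_max, Eb_max: largest local eigenvalue magnitudes;
    alpha: similarity threshold (its formula is not given in the text)."""
    if Ma != Mb:                                     # different feature types: keep the richer one
        return CaH if Ma >= Mb else CbH
    # Same feature type: local similarity gamma_ab, using the factorised double sums.
    sa, sb = np.abs(CaH).sum(), np.abs(CbH).sum()
    gamma = 2.0 * sa * sb / (sa * sa + sb * sb + 1e-12)
    if gamma <= alpha:                               # redundant information: weighted average
        da = CaH.std() * abs(Ea_max)                 # delta = sigma * max|E|; sigma as block std (assumption)
        db = CbH.std() * abs(Eb_max)
        wa = da / (da + db + 1e-12)
        return wa * CaH + (1.0 - wa) * CbH
    # Complementary information: keep the block with the larger regional energy.
    Sa = np.mean((CaH - CaH.mean()) ** 2)
    Sb = np.mean((CbH - CbH.mean()) ** 2)
    return CaH if Sa >= Sb else CbH
```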
Step 3: apply the backward 3D-DT-shearlet transform to the fused coefficient image Cf to obtain the final fused image. The procedure comprises the three-dimensional inverse DTCWT and the three-dimensional backward shear transform.
The backward shear transform is the inverse operation of the forward shear transform, where the backward shear transform in the z direction applies the following coordinate transform to each point in the data:
The shear transform formula for the x direction is:
The shear transform formula for the y direction is:
Choose l = m = n = 128 and ktr, tr ∈ {a1, b1, a2, b2, a3, b3}, from {−64, 0, 64}; finally, average the 27 3D images obtained after the inverse transform to obtain the fused image Vf.
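Putting the three steps together, a minimal end-to-end sketch for the DWT (3D-D-shearlet) variant is given below. It uses PyWavelets (pywt.dwtn / pywt.idwtn) for the separable 3D DWT, reuses the shear_z sketch above for the shear steps, and uses plain averaging as a placeholder where the Step-2 feature-level rules sketched earlier would be plugged in. A DTCWT-based variant would substitute a 3D dual-tree complex wavelet transform, which PyWavelets itself does not provide.

```python
import numpy as np
import pywt  # PyWavelets: pywt.dwtn / pywt.idwtn give the separable 3D DWT and its inverse

def fuse_volumes(Va, Vb, k_values=(-64, 0, 64), wavelet="db2"):
    """End-to-end sketch: forward shear + 3D DWT, coefficient fusion, inverse DWT +
    backward shear, then averaging over all sheared copies. Reuses the illustrative
    shear_z routine above; the fusion below is plain averaging as a stand-in for
    the Step-2 rules."""
    fused = []
    for k in k_values:                                   # one sheared copy per offset (simplified: z shear only)
        Ca = pywt.dwtn(shear_z(Va, k, k), wavelet)       # Step 1
        Cb = pywt.dwtn(shear_z(Vb, k, k), wavelet)
        Cf = {key: 0.5 * (Ca[key] + Cb[key]) for key in Ca}            # Step 2 (placeholder rule)
        fused.append(shear_z(pywt.idwtn(Cf, wavelet), k, k, inverse=True))  # Step 3
    return np.mean(fused, axis=0)                        # final fused volume Vf

Vf = fuse_volumes(np.random.rand(128, 128, 128), np.random.rand(128, 128, 128))
```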
The above is only a preferred embodiment of the invention, but the protection scope of the invention is not limited thereto; any equivalent replacement or modification made, within the scope disclosed by the invention, by a person skilled in the art according to the technical scheme of the invention and its inventive concept falls within the protection scope of the invention.

Claims (5)

1. A feature-level medical image fusion method based on the 3D shearlet transform, characterised in that it comprises the following steps:
Step 1: prepare the two 3D medical images to be fused, Va and Vb; apply the forward shear transform along the three directions of each image, then apply the discrete wavelet transform DWT or the dual-tree complex wavelet transform DTCWT to the sheared images to obtain the corresponding groups of transform coefficient images Ca and Cb;
Step 2: fuse the coefficients obtained by the 3D shearlet transform to obtain the fused image coefficients Cf; the specific steps are:
2.1. fuse the low-frequency parts CaL and CbL of the coefficient images Ca and Cb obtained by the 3D shearlet transform using the averaging rule to obtain the low-frequency part CfL of the fused image;
2.2. fuse the high-frequency parts CaH and CbH of the coefficient images Ca and Cb obtained by the 3D shearlet transform to obtain the high-frequency part CfH of the fused image;
2.3. obtain the fused image coefficients Cf from the low-frequency part CfL and the high-frequency part CfH;
Step 3: apply the inverse DWT or DTCWT to the fused image coefficients Cf obtained in Step 2, apply the backward shear transform to the transformed images to obtain the fused images, and average these images to obtain the final fused image Vf.
2. The feature-level medical image fusion method based on the 3D shearlet transform according to claim 1, characterised in that in Step 1 the forward shear transform is first applied to the images, and the discrete wavelet transform DWT or the dual-tree complex wavelet transform DTCWT is then applied to the sheared images; the forward shear transform is as follows: for a volume of size l × m × n, set up a coordinate system with origin (0, 0, 0) and diagonally opposite corner (l−1, m−1, n−1), and apply shear transforms along its three directions as shown below, where the shear transform in the z direction applies the following coordinate transform to each point in the data:
The shear transform formula for the x direction is:
The shear transform formula for the y direction is:
wherein (x, y, z) are the coordinates before the transform and (x′, y′, z′) the coordinates after it; ktr, tr ∈ {a1, b1, a2, b2, a3, b3}, is the maximum shift distance; taking different values of ktr retains the information of different directions, so the shearlet transform can produce a number of 3D images determined by the numbers of direction values taken for kai and kbi.
3. The feature-level medical image fusion method based on the 3D shearlet transform according to claim 1, characterised in that in Step 2.2 the high-frequency parts CaH and CbH of the transform coefficient images Ca and Cb corresponding to the two 3D images Va and Vb are fused as follows: feature-level fusion is used, the feature type of the images to be fused at the same position is determined, and fusion is performed according to the maximum-information-retention rule to obtain CfH.
4. The feature-level medical image fusion method based on the 3D shearlet transform according to claim 3, characterised in that feature-level fusion is applied to the high-frequency part of the transform images, with the following specific operation steps:
2.2.1, first compute the structure tensor of the high-frequency parts CaH and CbH of the transform coefficient images Ca and Cb, and then analyse the rank of the structure tensor:
for each point of the high-frequency parts CaH and CbH of the coefficient images Ca and Cb, the structure tensor is a 3 × 3 matrix whose rank can take the values 0, 1, 2, 3, corresponding respectively to flat, planar, linear, and point-like region features in the image; Ω is a local region of size l1 × m1 × n1, and the structure tensor of a point p is expressed as
$$
s(x,y,z)=\begin{bmatrix}
\sum_{r\in\Omega} w(r)\,V_x^2(p-r) & \sum_{r\in\Omega} w(r)\,V_x(p-r)V_y(p-r) & \sum_{r\in\Omega} w(r)\,V_x(p-r)V_z(p-r)\\
\sum_{r\in\Omega} w(r)\,V_x(p-r)V_y(p-r) & \sum_{r\in\Omega} w(r)\,V_y^2(p-r) & \sum_{r\in\Omega} w(r)\,V_y(p-r)V_z(p-r)\\
\sum_{r\in\Omega} w(r)\,V_x(p-r)V_z(p-r) & \sum_{r\in\Omega} w(r)\,V_y(p-r)V_z(p-r) & \sum_{r\in\Omega} w(r)\,V_z^2(p-r)
\end{bmatrix}\qquad(2)
$$
w(r) is a Gaussian template of size l1 × m1 × n1; Vx(p), Vy(p), Vz(p) are the partial derivatives of the image along the three axes x, y, z;
compute the eigenvalues Ex, Ey, Ez of this 3 × 3 tensor matrix; set a threshold whose control parameter K is set to 0.01; the number of nonzero eigenvalues of point p is M = Σt∈{x,y,z} [Et > Tt]; for the same position in the two images, let Ma denote the number of nonzero eigenvalues of Ca and Mb the number of nonzero eigenvalues of Cb; Ma and Mb serve as approximations of the rank of the tensor matrix;
2.2.2, if Ma = Mb, the two images have the same type of feature at this position; compute the similarity of this position
$$
\gamma_{ab}(x,y,z)=\frac{2\sum_{p,q\in\Omega}\left|C_{aH}(p)\,C_{bH}^{*}(q)\right|}{\sum_{p,q\in\Omega}\left|C_{aH}(p)\,C_{aH}^{*}(q)\right|+\sum_{p,q\in\Omega}\left|C_{bH}(p)\,C_{bH}^{*}(q)\right|}\qquad(3)
$$
compute the threshold α; the fusion rule is:
when γab ≤ α, this position is redundant information, and the weighted rule is selected:
CfH = ωaCaH + ωbCbH    (4)
$$
\omega_a=\frac{\delta_a}{\delta_a+\delta_b},\qquad \omega_b=\frac{\delta_b}{\delta_a+\delta_b}
$$
$$
\delta_a=\sigma_a\times\max_{x,y}\bigl(\operatorname{abs}(E_a(x,y,z))\bigr),\qquad \delta_b=\sigma_b\times\max_{x,y}\bigl(\operatorname{abs}(E_b(x,y,z))\bigr).
$$
when γab > α, this position is complementary information, and the MRE rule is used:
$$
C_{fH}=\begin{cases}C_{aH}, & S_{aH}\ge S_{bH}\\ C_{bH}, & S_{aH}< S_{bH}\end{cases}\qquad(5)
$$
$$
S_{tH}=\sigma_t=\frac{1}{N_\Omega}\sum_{p\in\Omega}\left(C_{tH}(p)-\bar{C}_{tH}\right)^2,\qquad t\in\{a,b\};
$$
2.2.3, if Ma ≠ Mb, the fusion rule is:
$$
C_{fH}=\begin{cases}C_{aH}, & M_a\ge M_b\\ C_{bH}, & M_a< M_b\end{cases}\qquad(6)
$$
5. The feature-level medical image fusion method based on the 3D shearlet transform according to claim 1, characterised in that in Step 3 the backward shear transform is applied to the images after the inverse DWT or DTCWT, specifically as follows:
the backward shear transform is the inverse operation of the forward shear transform, where the backward shear transform in the z direction applies the following coordinate transform to each point in the data:
The shear transform formula for the x direction is:
The shear transform formula for the y direction is:
wherein (x, y, z) are the coordinates before the transform and (x′, y′, z′) the coordinates after it.
CN201410246721.0A 2014-06-05 2014-06-05 Feature-level medical image fusion method based on 3D (three dimension) shearlet transform Active CN103985109B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410246721.0A CN103985109B (en) 2014-06-05 2014-06-05 Feature-level medical image fusion method based on 3D (three dimension) shearlet transform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410246721.0A CN103985109B (en) 2014-06-05 2014-06-05 Feature-level medical image fusion method based on 3D (three dimension) shearlet transform

Publications (2)

Publication Number Publication Date
CN103985109A CN103985109A (en) 2014-08-13
CN103985109B true CN103985109B (en) 2017-05-10

Family

ID=51277067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410246721.0A Active CN103985109B (en) 2014-06-05 2014-06-05 Feature-level medical image fusion method based on 3D (three dimension) shearlet transform

Country Status (1)

Country Link
CN (1) CN103985109B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268833B (en) * 2014-09-15 2018-06-22 江南大学 Image interfusion method based on translation invariant shearing wave conversion
CN107845079A (en) * 2017-11-15 2018-03-27 浙江工业大学之江学院 3D shearlet medicine CT video denoising methods based on compact schemes
CN110084772B (en) * 2019-03-20 2020-12-29 浙江医院 MRI/CT fusion method based on bending wave
CN110223371B (en) * 2019-06-14 2020-12-01 北京理工大学 Shear wave transformation and volume rendering opacity weighted three-dimensional image fusion method
CN111583330B (en) * 2020-04-13 2023-07-04 中国地质大学(武汉) Multi-scale space-time Markov remote sensing image sub-pixel positioning method and system
CN111481827B (en) * 2020-04-17 2023-10-20 上海深透科技有限公司 Quantitative susceptibility imaging and method for locating target area of potential stimulation of DBS

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049895B (en) * 2012-12-17 2016-01-20 华南理工大学 Based on the multimode medical image fusion method of translation invariant shearing wave conversion

Also Published As

Publication number Publication date
CN103985109A (en) 2014-08-13

Similar Documents

Publication Publication Date Title
CN103985109B (en) Feature-level medical image fusion method based on 3D (three dimension) shearlet transform
Du et al. Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network
Xu et al. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography
Fu et al. Three dimensional fluorescence microscopy image synthesis and segmentation
CN101551863B (en) Method for extracting roads from remote sensing image based on non-sub-sampled contourlet transform
CN109166130A (en) A kind of image processing method and image processing apparatus
WO2022227407A1 (en) Semantic segmentation method based on attention and uses joint image and feature adaptation
CN106204449A (en) A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network
CN103279933B (en) A kind of single image super resolution ratio reconstruction method based on bilayer model
CN106127684A (en) Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks
CN104408700A (en) Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN110459301A (en) Brain neuroblastoma surgical navigation method for registering based on thermodynamic chart and facial key point
CN102903103B (en) Migratory active contour model based stomach CT (computerized tomography) sequence image segmentation method
CN104268833B (en) Image interfusion method based on translation invariant shearing wave conversion
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN105335929A (en) Depth map super-resolution method
CN103985104B (en) Multi-focusing image fusion method based on higher-order singular value decomposition and fuzzy inference
CN105427269A (en) Medical image fusion method based on WEMD and PCNN
CN112488971A (en) Medical image fusion method for generating countermeasure network based on spatial attention mechanism and depth convolution
CN106504221A (en) Based on the Medical image fusion new method that quaternion wavelet converts context mechanism
CN104361571B (en) Infrared and low-light image fusion method based on marginal information and support degree transformation
CN104331864A (en) Breast imaging processing based on non-subsampled contourlet and visual salient model
CN103985111A (en) 4D-MRI super-resolution reconstruction method based on double-dictionary learning
CN107067387A (en) Method of Medical Image Fusion based on 3D complex shear wavelet domain broad sense statistical correlation models
Shaohai et al. Block-matching based multimodal medical image fusion via PCNN with SML

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant