CN110211081A - Multi-modality medical image fusion method based on image attributes and guided filtering - Google Patents
Multi-modality medical image fusion method based on image attributes and guided filtering
- Publication number
- CN110211081A (application number CN201910442219.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- component
- texture
- approximate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All classifications fall under G06T (G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T7/44 — Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
- G06T2207/10081 — Tomographic images: computed x-ray tomography [CT]
- G06T2207/10088 — Tomographic images: magnetic resonance imaging [MRI]
- G06T2207/10104 — Tomographic images: positron emission tomography [PET]
- G06T2207/10108 — Tomographic images: single photon emission computed tomography [SPECT]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30016 — Brain
Abstract
The invention discloses a multi-modality medical image fusion method based on image attributes and guided filtering. First, the source images are decomposed into texture components and approximate components using a moving frame decomposition framework. The approximate components are then fused with a designed fusion rule based on image attributes and guided filtering, while the texture components are fused with the maximum-absolute-value rule; finally, the fused image is reconstructed. The method retains more image edge and texture information, and the fused image has higher contrast and better accords with human vision. Since the method achieves good fusion quality with high computational efficiency, it has potential application in multi-modality medical image fusion systems.
Description
Technical Field
The invention relates to the technical field of image fusion, and in particular to a multi-modality medical image fusion method based on image attributes and guided filtering.
Background
As is well known, medical imaging plays an increasingly important role in clinical applications such as diagnosis, therapy planning, and surgical navigation. Owing to the diversity of imaging mechanisms, multi-modal medical images in different modes focus on different organ and tissue information: computed tomography (CT) can accurately detect dense structures, magnetic resonance imaging (MRI) provides high-resolution anatomical information of soft tissues, and positron emission tomography (PET) and single photon emission computed tomography (SPECT) reflect metabolic information of the body. To obtain sufficient information for accurate diagnosis, multi-modality medical image fusion is an effective technique: it aims to fuse the complementary information of multiple medical images of different modalities into a single composite image.
Generally, image fusion is divided into three levels, from low to high: pixel-level fusion, feature-level fusion, and decision-level fusion. The research of this patent concerns pixel-level image fusion.
Current multi-modal medical image fusion methods can be divided into two main categories: transform-domain fusion algorithms and spatial-domain fusion algorithms. A transform-domain fusion algorithm mainly comprises the following steps: first, the image is transformed into a specific representation domain; then the representation coefficients are fused using a fusion rule; finally, the fused image is obtained by the inverse transform. Transform-domain methods can generally achieve good results in image fusion, and the choice of transform and of fusion rule are the two fundamental problems of such approaches. In addition, it is difficult to determine the number of decomposition layers, and the forward and inverse transforms usually take a long time. Spatial-domain fusion methods differ in that they fuse the source images directly in the spatial domain, generally with lower computational complexity. To address the problems of transform-domain fusion algorithms, this patent provides a solution within the spatial-domain category.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a multi-modality medical image fusion method based on image attributes and guided filtering.
The invention adopts the following technical scheme: a multi-modality medical image fusion method based on image attributes and guided filtering, comprising the following steps:
first, moving frame decomposition framework (MFDF) decomposition:
decompose the source images {A, B} into their respective texture components {A_T, B_T} and approximate components {A_A, B_A} using the MFDF;
secondly, texture component fusion:
fuse the texture components {A_T, B_T} of the source images with the maximum-absolute-value rule to obtain the fused texture component F_T; in addition, to ensure that neighboring pixels draw their information from the same source image as far as possible, consistency detection is applied to F_T;
thirdly, approximate component fusion:
1. first, obtain the initial weight maps {P_A, P_B} of the approximate components {A_A, B_A} using Gaussian filtering and Laplacian filtering;
2. then refine the initial weight maps {P_A, P_B} using a thresholding method and guided filtering to obtain the final weight maps {T_A, T_B};
3. the final fused approximate component F_A is the weighted sum of the approximate components {A_A, B_A} with the final weight maps {T_A, T_B};
fourthly, image reconstruction:
the pixel value at each position of the fused image F is the arithmetic square root of the sum of squares of the corresponding pixels of the fused texture component F_T and the fused approximate component F_A.
Compared with the prior art, the invention has the following advantages: the method retains more image edge and texture information, and the fused image has higher contrast and better accords with human vision. Since the method achieves good fusion quality with high computational efficiency, it has potential application value in multi-modality medical image fusion systems.
Drawings
Fig. 1 is a basic block diagram of the multimodal medical image fusion method of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, so that the advantages and features of the present invention can be more easily understood by those skilled in the art and the scope of the present invention can be more clearly defined.
Examples
Step 1 Moving Frame Decomposition Framework (MFDF) decomposition
The moving frame decomposition framework decomposes an image I into a texture component I_T and an approximate component I_A. The basic steps are as follows:
1. Construct a matrix P that can be used for image decomposition, as given by equation (1), where ∇I denotes the gradient of the image I, and I_x and I_y denote the horizontal and vertical partial derivatives of I, respectively; μ is a preset constant whose value has some influence on the decomposition result.
2. Obtain the texture component I_T and the approximate component I_A of the image I from the matrix P according to equation (2).
In this patent, the source images {A, B} are decomposed into their respective texture components {A_T, B_T} and approximate components {A_A, B_A} using equations (1)-(2).
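The patent defines the MFDF through equations (1)-(2). Purely to illustrate the two-component decomposition interface, the sketch below substitutes a Gaussian low-pass image for the approximate component and derives a texture component consistent with the quadrature reconstruction of step 4; the function name, the Gaussian stand-in, and the sigma default are assumptions, not the patent's construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=2.0):
    """Stand-in two-component decomposition (NOT the patent's MFDF).

    The approximate component is a Gaussian low-pass version of the
    image; the texture component is chosen so that
    sqrt(texture**2 + approx**2) reproduces the image wherever the
    image is at least as bright as its low-pass version, matching the
    quadrature reconstruction used in step 4 of the patent.
    """
    approx = gaussian_filter(image, sigma)
    texture = np.sqrt(np.maximum(image ** 2 - approx ** 2, 0.0))
    return texture, approx
```

Because the texture term is clipped at zero, the quadrature identity holds only where the image is at least as bright as its low-pass version; the real MFDF, built from the gradient-based matrix P, does not have this limitation.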
Step 2 texture component fusion
The texture components {A_T, B_T} of the source images are fused with the maximum-absolute-value rule to obtain the fused texture component F_T. Furthermore, to ensure that neighboring pixels draw their information from the same source image as far as possible, consistency detection is applied to F_T. The fusion process is expressed by equations (3)-(5).
MAX_X = MAJORITY(abs(X_T), W_r) (3)
mm = ((MAX_A > MAX_B) * W_r) > floor(r × r / 2) (4)
F_T = mm .× A_T + (~mm) .× B_T (5)
In equations (3)-(5), X ∈ {A, B}; MAJORITY denotes the majority filter function; abs is the absolute-value function; W_r is an r × r filter template. In all formulas of this patent, * denotes the convolution operation, mm is the texture-component weight map, and .× denotes the element-wise (dot) product.
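A minimal sketch of equations (3)-(5), assuming the MAJORITY filter over the r × r window W_r can be approximated with a box (uniform) filter; the function name and the r = 3 default are illustrative choices, not fixed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_texture(at, bt, r=3):
    """Maximum-absolute-value texture fusion with consistency detection."""
    # eq. (3): smoothed absolute texture responses (box filter used
    # here as a stand-in for the MAJORITY filter over an r x r window)
    max_a = uniform_filter(np.abs(at), size=r)
    max_b = uniform_filter(np.abs(bt), size=r)
    # eq. (4): consistency detection -- a pixel takes A's texture only
    # if the majority of its r x r neighbourhood prefers A
    votes = uniform_filter((max_a > max_b).astype(float), size=r) * r * r
    mm = votes > np.floor(r * r / 2)
    # eq. (5): select per pixel according to the weight map mm
    return np.where(mm, at, bt)
```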
Step 3 approximate component fusion
In guided filtering theory, within a local window w_k centered on pixel k, there is a linear relationship between the filtered output O and the guide image I:
O_i = a_k × I_i + b_k, ∀i ∈ w_k (6)
where the parameters a_k and b_k are defined as follows:
a_k = ((1/|w|) Σ_{i∈w_k} I_i P_i − μ_k P̄_k) / (δ_k + ε) (7)
b_k = P̄_k − a_k μ_k (8)
where μ_k and δ_k denote the mean and variance of the guide image I within the local window w_k, |w| is the total number of pixels in w_k, and P̄_k is the mean of the input image P within w_k. For convenience, guided filtering is expressed mathematically as equation (9).
O = GF_{r,ε}(P, I) (9)
where GF_{r,ε} denotes the guided filter function, and the two subscripts r and ε denote the window size and blur degree of the guided filter, respectively.
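Equations (6)-(9) match the standard guided filter, which can be sketched compactly as below; box means over the (2r+1) × (2r+1) window stand in for the per-window averages, and the function name is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(P, I, r=4, eps=0.3):
    """O = GF_{r,eps}(P, I): input image P filtered under guide image I."""
    def mean(x):
        # box mean over the (2r+1) x (2r+1) window w_k
        return uniform_filter(x, size=2 * r + 1)
    mu, p_bar = mean(I), mean(P)           # mu_k and P-bar_k
    var = mean(I * I) - mu * mu            # delta_k, variance of I in w_k
    a = (mean(I * P) - mu * p_bar) / (var + eps)   # eq. (7)
    b = p_bar - a * mu                             # eq. (8)
    # average a_k, b_k over all windows covering each pixel, then apply
    # the linear model of eq. (6)
    return mean(a) * I + mean(b)
```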
In this patent, the approximate component fusion is performed in three steps:
1. First, the initial weight maps {P_A, P_B} of the approximate components {A_A, B_A} are obtained from equations (10)-(12) using Gaussian filtering and Laplacian filtering.
S_X = Gau(abs(Lap(X_A, W_l)), W_g) (10)
P_A = S_A > S_B (11)
P_B = S_A < S_B (12)
In equations (10)-(12), X ∈ {A, B}; Lap denotes the Laplacian filter function and W_l is an l × l Laplacian filter template; Gau denotes the Gaussian filter function and W_g is its Gaussian filter template.
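The initial weight maps of equations (10)-(12) can be sketched as follows; the Laplacian and Gaussian templates W_l and W_g are folded into scipy's defaults and a sigma parameter, which are illustrative choices.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def initial_weights(aa, ba, sigma=2.0):
    """Initial weight maps {P_A, P_B} from eqs. (10)-(12)."""
    # eq. (10): saliency = Gaussian-smoothed absolute Laplacian response
    sa = gaussian_filter(np.abs(laplace(aa)), sigma)
    sb = gaussian_filter(np.abs(laplace(ba)), sigma)
    # eqs. (11)-(12): the locally sharper source wins each pixel
    return sa > sb, sa < sb
```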
2. Then the initial weight maps {P_A, P_B} are refined using a thresholding method and guided filtering to obtain the final weight maps {T_A, T_B}, as expressed by equations (13)-(15).
w_X = Thsegment(X_A, th_X) (13)
T_A = GF_{r,ε}(((P_A | w_A) & (~w_B)), A_A) (14)
T_B = GF_{r,ε}(((P_B | w_B) & (~w_A)), B_A) (15)
In equations (13)-(15), X ∈ {A, B}; Thsegment is the threshold segmentation function; the threshold th_X is the mean of the brightest 5% of pixels (by pixel value) of the image X_A; w_X is the binary image obtained by threshold segmentation of X_A; GF_{r,ε} is the guided filter function, with the two subscripts r and ε set to 4 and 0.3, respectively.
3. The final fused approximate component F_A is obtained according to equation (16):
F_A = T_A .× A_A + T_B .× B_A (16)
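Steps 2-3 of the approximate-component fusion (equations (13)-(16)) can be sketched as below; for brevity a Gaussian filter stands in for the guided filter GF_{r,ε} when smoothing the combined binary maps, and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_approx(pa, pb, aa, ba, top=0.05, sigma=2.0):
    """Refine weight maps (eqs. (13)-(15)) and fuse (eq. (16))."""
    def seg(x):
        # eq. (13): th_X is the mean of the brightest 5% of pixels of X_A
        k = max(1, int(top * x.size))
        th = np.sort(x.ravel())[-k:].mean()
        return x > th
    wa, wb = seg(aa), seg(ba)
    # eqs. (14)-(15): combine initial maps with segmentation masks, then
    # smooth (Gaussian here, guided filter GF_{r,eps} in the patent)
    ta = gaussian_filter(((pa | wa) & ~wb).astype(float), sigma)
    tb = gaussian_filter(((pb | wb) & ~wa).astype(float), sigma)
    # eq. (16): weighted sum of the approximate components
    return ta * aa + tb * ba
```

Note that T_A and T_B are not explicitly normalized to sum to one; the smoothing of near-complementary binary maps keeps the combined weight close to one, as in the patent's formulation.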
Step 4: image reconstruction
The fused image F is obtained according to equation (17):
F = sqrt(F_T .× F_T + F_A .× F_A) (17)
where the square and square root are taken element-wise, i.e., the pixel value at each position of F is the arithmetic square root of the sum of squares of the corresponding pixels of F_T and F_A.
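The reconstruction of equation (17) reduces to a per-pixel operation; the function name is an assumption.

```python
import numpy as np

def reconstruct(ft, fa):
    # eq. (17): per-pixel arithmetic square root of the sum of squares
    # of the fused texture and fused approximate components
    return np.sqrt(ft ** 2 + fa ** 2)
```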
Without being limited thereto, any changes or substitutions that are not thought of through the inventive work should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope defined by the claims.
Claims (1)
1. A multi-modality medical image fusion method based on image attributes and guided filtering, characterized by comprising the following steps:
first, moving frame decomposition framework (MFDF) decomposition:
decompose the source images {A, B} into their respective texture components {A_T, B_T} and approximate components {A_A, B_A} using the MFDF;
secondly, texture component fusion:
fuse the texture components {A_T, B_T} of the source images with the maximum-absolute-value rule to obtain the fused texture component F_T; in addition, to ensure that neighboring pixels draw their information from the same source image as far as possible, consistency detection is applied to F_T;
thirdly, approximate component fusion:
1. first, obtain the initial weight maps {P_A, P_B} of the approximate components {A_A, B_A} using Gaussian filtering and Laplacian filtering;
2. then refine the initial weight maps {P_A, P_B} using a thresholding method and guided filtering to obtain the final weight maps {T_A, T_B};
3. the final fused approximate component F_A is the weighted sum of the approximate components {A_A, B_A} with the final weight maps {T_A, T_B};
fourthly, image reconstruction:
the pixel value at each position of the fused image F is the arithmetic square root of the sum of squares of the corresponding pixels of the fused texture component F_T and the fused approximate component F_A.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910442219.XA CN110211081B (en) | 2019-05-24 | 2019-05-24 | Multimode medical image fusion method based on image attribute and guided filtering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110211081A true CN110211081A (en) | 2019-09-06 |
CN110211081B CN110211081B (en) | 2023-05-16 |
Family
ID=67788676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910442219.XA Active CN110211081B (en) | 2019-05-24 | 2019-05-24 | Multimode medical image fusion method based on image attribute and guided filtering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211081B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016083417A (en) * | 2015-12-25 | 2016-05-19 | キヤノン株式会社 | Image processing device, image processing method, program, and computer recording medium |
CN107248150A (en) * | 2017-07-31 | 2017-10-13 | 杭州电子科技大学 | A kind of Multiscale image fusion methods extracted based on Steerable filter marking area |
US20170301095A1 (en) * | 2015-12-31 | 2017-10-19 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for image processing |
CN107316285A (en) * | 2017-07-05 | 2017-11-03 | 江南大学 | The image interfusion method detected towards apple quality |
US20180061088A1 (en) * | 2016-08-26 | 2018-03-01 | General Electric Company | Guided filter for multiple level energy computed tomography (ct) |
CN108052988A (en) * | 2018-01-04 | 2018-05-18 | 常州工学院 | Guiding conspicuousness image interfusion method based on wavelet transformation |
CN109493306A (en) * | 2018-10-11 | 2019-03-19 | 南昌航空大学 | A kind of multi-modality medical image fusion method |
CN109509163A (en) * | 2018-09-28 | 2019-03-22 | 洛阳师范学院 | A kind of multi-focus image fusing method and system based on FGF |
CN109544494A (en) * | 2018-11-12 | 2019-03-29 | 北京航空航天大学 | The fusion method of passive millimeter wave image and visible images in a kind of human body safety check |
Non-Patent Citations (2)
Title |
---|
LIU XINGBIN: "Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform", Biomedical Signal Processing and Control *
TENG JICAI: "Application of edge-preserving smoothing in image fusion", China Masters' Theses Full-text Database *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||