CN111899206A - Medical brain image fusion method based on convolutional dictionary learning - Google Patents

Medical brain image fusion method based on convolutional dictionary learning Download PDF

Info

Publication number
CN111899206A
Authority
CN
China
Prior art keywords
fused
image
frequency
low
frequency component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010800233.5A
Other languages
Chinese (zh)
Inventor
张铖方 (Zhang Chengfang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Police College
Original Assignee
Sichuan Police College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Police College filed Critical Sichuan Police College
Priority to CN202010800233.5A
Publication of CN111899206A
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10072 Tomographic images
                            • G06T 2207/10081 Computed x-ray tomography [CT]
                            • G06T 2207/10088 Magnetic resonance imaging [MRI]
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20048 Transform domain processing
                            • G06T 2207/20056 Discrete and fast Fourier transform [DFT, FFT]
                        • G06T 2207/20081 Training; Learning
                        • G06T 2207/20084 Artificial neural networks [ANN]
                        • G06T 2207/20212 Image combination
                            • G06T 2207/20221 Image fusion; Image merging
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30004 Biomedical image processing
                            • G06T 2207/30016 Brain

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a medical brain image fusion method based on convolutional dictionary learning, which comprises the following steps: step 1, decomposing the source medical brain images into a low-frequency component and a high-frequency component; step 2, fusing the low-frequency components; step 3, fusing the high-frequency components; step 4, obtaining the fused high-frequency component through the inverse fast Fourier transform; and step 5, reconstructing the fused low-frequency component and the fused high-frequency component into the fused image. The invention has the advantages that image information is better preserved, the results are superior in both visual quality and objective indexes, and the boundaries of brain tissue, the frontal sinus and other structures are clearer in the obtained image, which is more helpful to medical staff in diagnosing and analyzing a patient's condition.

Description

Medical brain image fusion method based on convolutional dictionary learning
Technical Field
The invention relates to the technical field of image fusion, in particular to a medical brain image fusion method based on convolutional dictionary learning.
Background
Multi-modal medical brain images play a very important role in medical diagnosis. With the rapid development of medical imaging technology, it has become possible to obtain anatomical and functional descriptions of the human body with higher resolution and greater information content. This development has prompted research in the field of medical image analysis. At present, different medical brain imaging modalities have their own characteristics; fusing these brain images helps accurate diagnosis and can also be used in electronic medical applications. Therefore, medical brain image fusion techniques that obtain high information quality and a compact information representation have attracted attention in the field of information synthesis and enhancement.
Imaging methods such as Computed Tomography (CT) have high spatial resolution and capture geometric features, and can clearly show bone structures. Magnetic Resonance Imaging (MRI) reveals soft tissues and organs. Therefore, combining CT and MRI images provides more information on the pathological state of the relevant organ and improves diagnostic ability for clinical applications. In recent years, researchers in China and abroad have proposed various fusion methods and strategies: 1) transform-domain-based methods, such as DWT, NSCT, etc.; 2) sparse-domain-based fusion methods, such as sparse representation, joint sparse representation, adaptive sparse representation and their variants; 3) neural-network-based fusion strategies, such as convolutional neural networks, generative adversarial networks and other classical network models. These fusion algorithms achieve good fusion results under certain specific conditions, but they ignore the semantic conflict between medical brain images, namely that the brightness of a CT image represents tissue density, whereas the brightness of an MR-T2 image represents the mobility and magnetic properties of the tissue.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a medical brain image fusion method based on convolutional dictionary learning that overcomes these defects.
In order to achieve this purpose, the technical solution adopted by the invention is as follows:
A medical brain image fusion method based on convolutional dictionary learning comprises the following steps:
Step 1, the source medical brain images $\{s_A, s_B\}$ are decomposed by the fast Fourier transform into low-frequency components $\{L_A, L_B\}$ and high-frequency components $\{H_A, H_B\}$.
Step 2, the low-frequency components are fused using the average rule:
$$L_{SF} = \tfrac{1}{2}\,(L_A + L_B)$$
wherein $L_{SF}$ is the fused low-frequency component.
Step 3, the high-frequency components are fused using the maximum rule:
$$X_{SF}(x,y)=\begin{cases}X_A(x,y), & \text{if } \|X_A(x,y)\|_1 \ge \|X_B(x,y)\|_1\\ X_B(x,y), & \text{otherwise}\end{cases}$$
wherein $X_A$ and $X_B$ respectively denote the high-frequency sparse coefficients obtained from the high-frequency components $\{H_A, H_B\}$ by Convolutional Basis Pursuit Denoising (CBPDN), $X_{SF}$ is the sparse coefficient of the fused high-frequency component, and $\|\cdot\|_1$ denotes the 1-norm.
Step 4, the fused high-frequency component $H_{SF}$ is obtained from $X_{SF}$ by the inverse fast Fourier transform.
Step 5, $L_{SF}$ and $H_{SF}$ are combined by image reconstruction as follows:
$$S_F = L_{SF} + H_{SF}$$
obtaining the fused image $S_F$.
Compared with the prior art, the invention has the advantages that:
Image information is better preserved, the results are superior in both visual quality and objective indexes, and the boundaries of brain tissue, the frontal sinus and other structures are clearer in the obtained image, which is more helpful to medical staff in diagnosing and analyzing a patient's condition.
Drawings
FIG. 1 is a block diagram of a medical brain image fusion method according to an embodiment of the present invention;
FIG. 2 shows the source medical brain images tested in an embodiment of the present invention, divided into four groups (a), (b), (c) and (d); the upper row contains the CT images and the lower row the MRI images;
FIG. 3 is a comparison of the fusion results for image group (a) in the embodiment of the present invention;
FIG. 4 is a comparison of the fusion results for image group (b) in the embodiment of the present invention;
FIG. 5 is a comparison of the fusion results for image group (c) in the embodiment of the present invention;
FIG. 6 is a comparison of the fusion results for image group (d) in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings by way of examples.
As shown in fig. 1, the medical brain image fusion method based on the convolutional dictionary learning includes the following steps:
Step 1, the source medical brain images $\{s_A, s_B\}$ are decomposed by the fast Fourier transform into low-frequency components $\{L_A, L_B\}$ and high-frequency components $\{H_A, H_B\}$; the two sets of components are then fused with the following rules.
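The patent text does not specify how the FFT-based split into low- and high-frequency components is carried out. Purely as an illustration, the following minimal sketch applies a Gaussian low-pass filter in the 2-D Fourier domain and takes the residual as the high-frequency component; the function name fft_decompose, the sigma parameter and the NumPy implementation are assumptions made for this sketch, not the patented procedure.

```python
import numpy as np

def fft_decompose(img, sigma=20.0):
    """Split a grayscale image into low- and high-frequency components via the FFT:
    a Gaussian low-pass filter in the frequency domain gives the low-frequency part,
    and the residual is taken as the high-frequency part."""
    img = img.astype(np.float64)
    h, w = img.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)   # vertical frequencies (cycles/pixel)
    fx = np.fft.fftfreq(w).reshape(1, -1)   # horizontal frequencies
    # Transfer function of a spatial Gaussian with std `sigma` pixels: exp(-2*pi^2*sigma^2*f^2)
    lp = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    spectrum = np.fft.fft2(img)
    low = np.real(np.fft.ifft2(spectrum * lp))
    high = img - low                         # residual = high-frequency component
    return low, high

# Usage (s_A, s_B are registered CT/MRI slices as 2-D float arrays):
# L_A, H_A = fft_decompose(s_A)
# L_B, H_B = fft_decompose(s_B)
```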
and 2, fusing the low-frequency components by using an average rule:
Figure BDA0002627102190000033
wherein,
Figure BDA0002627102190000034
is a low frequency fused component.
Step 3, the high-frequency components are fused using the maximum rule:
$$X_{SF}(x,y)=\begin{cases}X_A(x,y), & \text{if } \|X_A(x,y)\|_1 \ge \|X_B(x,y)\|_1\\ X_B(x,y), & \text{otherwise}\end{cases}$$
wherein $X_A$ and $X_B$ respectively denote the high-frequency sparse coefficients obtained from the high-frequency components $\{H_A, H_B\}$ by Convolutional Basis Pursuit Denoising (CBPDN), $X_{SF}$ is the sparse coefficient of the fused high-frequency component, and $\|\cdot\|_1$ denotes the 1-norm.
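The coefficient-level fusion of step 3 can be prototyped as below. This sketch assumes the open-source SPORCO library's sporco.admm.cbpdn.ConvBPDN solver and a convolutional dictionary D learned offline; the regularization weight lmbda, the solver options and the helper names are assumptions, while the pixel-wise choose-max rule on the l1 norm of the coefficient vectors follows the maximum rule described above.

```python
import numpy as np
from sporco.admm import cbpdn

def cbpdn_coefficients(highpass, D, lmbda=0.01):
    """Sparse-code one high-frequency band with convolutional basis pursuit
    denoising (CBPDN). Returns coefficient maps of shape (H, W, M)."""
    opt = cbpdn.ConvBPDN.Options({'MaxMainIter': 200, 'RelStopTol': 1e-4})
    solver = cbpdn.ConvBPDN(D, highpass, lmbda, opt)
    X = solver.solve()              # SPORCO returns (H, W, 1, 1, M)
    return np.squeeze(X)            # -> (H, W, M)

def fuse_max_l1(X_A, X_B):
    """Choose-max fusion of two coefficient-map stacks: at each pixel keep the
    coefficient vector with the larger l1 norm across the M dictionary filters."""
    e_A = np.sum(np.abs(X_A), axis=-1)        # (H, W) pixel-wise l1 activity, image A
    e_B = np.sum(np.abs(X_B), axis=-1)
    mask = (e_A >= e_B)[..., None]            # broadcast over the filter axis
    return np.where(mask, X_A, X_B)

# Usage (H_A, H_B from the decomposition sketch, D a (kh, kw, M) filter bank):
# X_A = cbpdn_coefficients(H_A, D)
# X_B = cbpdn_coefficients(H_B, D)
# X_SF = fuse_max_l1(X_A, X_B)
```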
Step 4, the fused high-frequency component $H_{SF}$ is obtained from $X_{SF}$ by the inverse fast Fourier transform.
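Step 4 can be realized with the standard convolutional-sparse-representation reconstruction: the dictionary filters are convolved with the fused coefficient maps and summed, with the convolutions evaluated in the Fourier domain and mapped back by an inverse FFT. The sketch below only illustrates that computation; the zero-padding of the filters and the function name reconstruct_highpass are assumptions.

```python
import numpy as np

def reconstruct_highpass(D, X_SF):
    """Fused high-frequency component H_SF = sum_m d_m * x_m, computed by multiplying
    filter spectra with coefficient spectra and inverting the FFT.
    D: (kh, kw, M) dictionary filters; X_SF: (H, W, M) fused coefficient maps."""
    H, W, M = X_SF.shape
    D_hat = np.fft.fft2(D, s=(H, W), axes=(0, 1))   # zero-padded filter spectra
    X_hat = np.fft.fft2(X_SF, axes=(0, 1))
    return np.real(np.fft.ifft2(np.sum(D_hat * X_hat, axis=2)))

# Usage: H_SF = reconstruct_highpass(D, X_SF)
```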
Step 5, $L_{SF}$ and $H_{SF}$ are combined by the image reconstruction formula (1) to obtain the fused image $S_F$:
$$S_F = L_{SF} + H_{SF} \tag{1}$$
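For orientation, the five steps can be chained into one compact pipeline sketch that reuses the hypothetical helpers from the previous sketches (fft_decompose, cbpdn_coefficients, fuse_max_l1, reconstruct_highpass) and assumes a pre-learned convolutional dictionary D:

```python
def fuse_brain_images(s_A, s_B, D):
    """Fuse a registered CT/MRI pair following steps 1-5 of the method:
    FFT decomposition, average rule for the low band, CBPDN + choose-max
    for the high band, Fourier-domain reconstruction, and summation."""
    L_A, H_A = fft_decompose(s_A)            # step 1: low/high split of image A
    L_B, H_B = fft_decompose(s_B)            #         and image B
    L_SF = 0.5 * (L_A + L_B)                 # step 2: average rule
    X_A = cbpdn_coefficients(H_A, D)         # step 3: CBPDN sparse coding
    X_B = cbpdn_coefficients(H_B, D)
    X_SF = fuse_max_l1(X_A, X_B)             #         choose-max on l1 norm
    H_SF = reconstruct_highpass(D, X_SF)     # step 4: inverse-FFT reconstruction
    return L_SF + H_SF                       # step 5: S_F = L_SF + H_SF
```

In practice s_A and s_B are assumed to be registered, same-size grayscale slices, and the dictionary D would be learned from representative high-frequency training patches.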
Results and analysis of the experiments
To verify the superiority of the proposed algorithm, all algorithms were applied to 4 groups of classical medical brain images (as shown in fig. 2). Taking CT and MRI as examples: a CT image is formed by an X-ray beam that passes through the brain and is somewhat attenuated by the time it exits the other side, because the beam strikes dense tissue along the way; this "attenuation" of the X-rays depends on the tissue density encountered along the path. Very dense tissue (e.g., bone) blocks a large amount of X-rays, gray matter blocks some, and fluid blocks even less. An MRI image is formed by placing protons (here, protons in the brain) in a magnetic field, where they can absorb and then re-emit electromagnetic energy; the intensity of the emitted signal is proportional to the number of protons in the tissue and varies with the properties of each proton's microenvironment (e.g., its mobility and the local homogeneity of the magnetic field), and the MRI signal may be "weighted" to emphasize certain properties and not others.
(1) Fused image result graph
The fusion results of the 4 groups of medical brain images under the various methods are shown in figs. 3-6. (a) shows the fusion result based on adaptive sparse representation (ASR); (b) shows the fusion result based on discrete wavelet transform and sparse representation (DWT-SR-4); (c) shows the fusion result of the method presented herein. Across the 4 groups of fusion results, the fused images obtained by DWT-SR-4 have the worst quality: they not only contain blocky artifact structures (see FIG. 4(b)) but also seriously damage the fused image information (see FIG. 5(b)). The ASR fusion results are better than the former, but compared with the proposed method their overall brightness is lower, which is not conducive to later diagnosis by doctors.
(2) Objective evaluation
To better verify the performance of the proposed algorithm, three Q-series objective evaluation indexes ($Q_e$, $Q_{TE}$ and $Q_p$) were applied to the comparison methods and the method of the invention (as shown in tables 1-4). Combining the analysis of tables 1-4, the fusion algorithm designed by the invention is slightly lower than the ASR algorithm on some objective evaluation indexes, but is clearly higher than DWT-SR-4. The average values of $Q_e$, $Q_{TE}$ and $Q_p$ for the proposed algorithm are 0.4112, 0.4712 and 0.4982; compared with the DWT-SR-4 fusion algorithm, the method disclosed by the invention improves $Q_e$, $Q_{TE}$ and $Q_p$ by 9.62%, 7.42% and 43.73%, respectively.
Table 1 Objective evaluation results of group (a) images on all fusion methods
[table data not reproduced in this text extraction]
Table 2 Objective evaluation results of group (b) images on all fusion methods
[table data not reproduced in this text extraction]
Table 3 Objective evaluation results of group (c) images on all fusion methods
[table data not reproduced in this text extraction]
Table 4 Objective evaluation results of group (d) images on all fusion methods
[table data not reproduced in this text extraction]
It will be appreciated by those of ordinary skill in the art that the examples described herein are intended to assist the reader in understanding the manner in which the invention is practiced, and it is to be understood that the scope of the invention is not limited to such specifically recited statements and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (1)

1. A medical brain image fusion method based on convolutional dictionary learning is characterized by comprising the following steps:
step 1, decomposing the source medical brain images $\{s_A, s_B\}$ by the fast Fourier transform into low-frequency components $\{L_A, L_B\}$ and high-frequency components $\{H_A, H_B\}$;
step 2, fusing the low-frequency components using the average rule:
$$L_{SF} = \tfrac{1}{2}\,(L_A + L_B)$$
wherein $L_{SF}$ is the fused low-frequency component;
step 3, fusing the high-frequency components using the maximum rule:
$$X_{SF}(x,y)=\begin{cases}X_A(x,y), & \text{if } \|X_A(x,y)\|_1 \ge \|X_B(x,y)\|_1\\ X_B(x,y), & \text{otherwise}\end{cases}$$
wherein $X_A$ and $X_B$ respectively denote the high-frequency sparse coefficients obtained from the high-frequency components $\{H_A, H_B\}$ by convolutional basis pursuit denoising, $X_{SF}$ is the sparse coefficient of the fused high-frequency component, and $\|\cdot\|_1$ denotes the 1-norm;
step 4, obtaining the fused high-frequency component $H_{SF}$ from $X_{SF}$ by the inverse fast Fourier transform;
step 5, reconstructing $L_{SF}$ and $H_{SF}$ into an image as follows:
$$S_F = L_{SF} + H_{SF}$$
obtaining the fused image $S_F$.
CN202010800233.5A 2020-08-11 2020-08-11 Medical brain image fusion method based on convolutional dictionary learning Pending CN111899206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010800233.5A CN111899206A (en) 2020-08-11 2020-08-11 Medical brain image fusion method based on convolutional dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010800233.5A CN111899206A (en) 2020-08-11 2020-08-11 Medical brain image fusion method based on convolutional dictionary learning

Publications (1)

Publication Number Publication Date
CN111899206A true CN111899206A (en) 2020-11-06

Family

ID=73246443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010800233.5A Pending CN111899206A (en) 2020-08-11 2020-08-11 Medical brain image fusion method based on convolutional dictionary learning

Country Status (1)

Country Link
CN (1) CN111899206A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058507A (en) * 2023-08-17 2023-11-14 浙江航天润博测控技术有限公司 Fourier convolution-based visible light and infrared image multi-scale feature fusion method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559292A (en) * 2018-11-22 2019-04-02 西北工业大学 Multi-modality images fusion method based on convolution rarefaction representation
CN111429392A (en) * 2020-04-13 2020-07-17 四川警察学院 Multi-focus image fusion method based on multi-scale transformation and convolution sparse representation
CN111429393A (en) * 2020-04-15 2020-07-17 四川警察学院 Multi-focus image fusion method based on convolution elastic network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559292A (en) * 2018-11-22 2019-04-02 西北工业大学 Multi-modality images fusion method based on convolution rarefaction representation
CN111429392A (en) * 2020-04-13 2020-07-17 四川警察学院 Multi-focus image fusion method based on multi-scale transformation and convolution sparse representation
CN111429393A (en) * 2020-04-15 2020-07-17 四川警察学院 Multi-focus image fusion method based on convolution elastic network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENGFANG ZHANG: "Medical Brain Image Fusion Via Convolution Dictionary Learning" *
曹义亲 et al.: "Image fusion method based on convolutional sparse representation" (基于卷积稀疏表示的图像融合方法) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058507A (en) * 2023-08-17 2023-11-14 浙江航天润博测控技术有限公司 Fourier convolution-based visible light and infrared image multi-scale feature fusion method
CN117058507B (en) * 2023-08-17 2024-03-19 浙江航天润博测控技术有限公司 Fourier convolution-based visible light and infrared image multi-scale feature fusion method

Similar Documents

Publication Publication Date Title
JP4919408B2 (en) Radiation image processing method, apparatus, and program
CN106530236B (en) Medical image processing method and system
CN111915538B (en) Image enhancement method and system for digital blood vessel subtraction
CN107292858B (en) Multi-modal medical image fusion method based on low-rank decomposition and sparse representation
CN113808234B (en) Under-sampling-based rapid magnetic particle imaging reconstruction method
Pezzotti et al. An adaptive intelligence algorithm for undersampled knee mri reconstruction: Application to the 2019 fastmri challenge
CN111899206A (en) Medical brain image fusion method based on convolutional dictionary learning
Zhang et al. Photoacoustic digital brain and deep-learning-assisted image reconstruction
Xu et al. A novel multi-modal fundus image fusion method for guiding the laser surgery of central serous chorioretinopathy
Kim et al. Wavelet subband-specific learning for low-dose computed tomography denoising
CN112258438B (en) LDCT image recovery method based on unpaired data
CN112489150B (en) Multi-scale sequential training method of deep neural network for rapid MRI
CN117974468A (en) Multi-mode medical image fusion method for global and local feature interaction parallelism
CN112819740A (en) Medical image fusion method based on multi-component low-rank dictionary learning
Li et al. Medical Image Enhancement Algorithm Based on Biorthogonal Wavelet.
Zhang et al. DARU‐Net: A dual attention residual U‐Net for uterine fibroids segmentation on MRI
Zhang et al. Medical image fusion based a densely connected convolutional networks
Gu et al. AIDS brain MRIs synthesis via generative adversarial networks based on attention-encoder
CN110084772B (en) MRI/CT fusion method based on bending wave
Singh et al. An advanced technique of de-noising medical images using ANFIS
Huang et al. The Effective 3D MRI Reconstruction Method Driven by the Fusion Strategy in NSST Domain
Hai Wavelet-based image fusion for enhancement of ROI in CT image
Tripathi et al. Comparison of different denoising networks on motion artifacted MRI scans
CN114445311B (en) Multi-source medical image fusion method and system based on domain transformation edge-preserving filter
CN116630195A (en) Noise removal method for low-dose X-ray CT image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201106