CN116342444B - Dual-channel multi-mode image fusion method and electronic equipment - Google Patents

Dual-channel multimodal image fusion method and electronic device

Info

Publication number
CN116342444B
Authority
CN
China
Prior art keywords
image
channel
energy
fusion
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310123425.0A
Other languages
Chinese (zh)
Other versions
CN116342444A (en)
Inventor
刘慧
朱积成
王欣雨
郭强
张永霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Finance and Economics
Original Assignee
Shandong University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Finance and Economics filed Critical Shandong University of Finance and Economics
Priority to CN202310123425.0A
Publication of CN116342444A
Application granted
Publication of CN116342444B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 7/0012: Biomedical image inspection
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Algebra (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a dual-channel multimodal image fusion method and a fusion imaging terminal, relating to the field of medical imaging technology. The source images are decomposed into a structure channel and an energy channel by a joint bilateral filter (JBF) transform. A local gradient energy operator fuses the structure channel's small-edge, small-scale detail information, such as tissue fibers, while a local-entropy detail-enhancement operator, a pulse coupled neural network (PCNN) and phase-congruency-based NSCT fuse the energy channel's organ edge intensity, texture features and grayscale variation. The fused image is obtained by the inverse JBF transform. The present invention enhances the detail information of the fused image while preserving edges and smoothing noise, and improves the similarity between the fused image and the multimodal medical source images. An improved local gradient energy operator is applied to the structure channel, further improving the expression of detail information in the fused image.

Description

Dual-channel multimodal image fusion method and electronic device

Technical Field

The present invention relates to the field of medical imaging technology, and in particular to a dual-channel multimodal image fusion method and a fusion imaging terminal.

Background

With the application and development of sensor technology and computer technology, medical imaging plays an increasingly important role in modern diagnosis and treatment. Owing to imaging mechanisms and technical limitations, the images acquired by a single sensor reflect only some features of a lesion. To observe all features of the region in one image, the useful information of each target-modality medical image must be extracted and the complementary information of multiple original medical images fused, so that the fused image provides a more comprehensive and reliable description of the lesion and helps doctors make a more accurate and complete diagnosis.

In the prior art, image fusion has been studied extensively in the medical field, and many fusion algorithms have been proposed. They fall roughly into spatial-domain and frequency-domain techniques. Spatial-domain techniques fuse the source images directly at the pixel level or in a color space; common examples include the pixel-maximum method, the pixel weighted-average method, principal component analysis (PCA) and the Brovey transform. Spatial-domain techniques preserve the spatial information of medical images effectively, but fusion often suffers from loss of image detail, reduced contrast, partial loss of spectral information and spectral degradation.

The introduction of frequency-domain techniques has clearly alleviated these problems. Common frequency-domain techniques include the pyramid transform, the wavelet transform and the multi-scale transform (MST). MST-related work has made breakthrough progress in recent years and comprises three steps: multi-scale decomposition (MSD), selection of high- and low-frequency coefficients under a specific rule, and inverse MSD reconstruction. As a representative of multi-scale geometric analysis, the non-subsampled contourlet transform (NSCT) introduces the idea of non-subsampling into the traditional contourlet transform (CT), overcoming the directional aliasing and pseudo-Gibbs phenomena of the traditional transform. However, as a frequency-domain technique, NSCT cannot express spatial neighborhood information such as the similarity and depth distance between pixels, which limits its ability to preserve edges and to smooth noise.

Meanwhile, with the development of bilateral filtering theory, the joint bilateral filter (JBF) is being widely used in medical image fusion as a new signal-processing tool. Unlike traditional linear filters, the JBF is a nonlinear filter that introduces the Euclidean distance between pixels as a weight and computes a combination of spatial and similarity weights. It effectively extracts structural features between pixels and avoids the global blurring and poor edge-structure features that arise when traditional averaging or low-pass filters are used to separate base and detail layers. However, its limited number of decomposition levels and directions leaves the fused image insufficiently decomposed in terms of structural information and detail, restricting further application. Improving the multi-feature representation and texture quality of each modality image therefore remains a major challenge.

Summary of the Invention

The dual-channel multimodal image fusion method provided by the present invention not only resolves global blurring and unsatisfactory edge-structure features, but also guarantees the required degree of decomposition of structural information and detail in the fused image, improving the multi-feature representation and texture quality of each modality image and meeting practical requirements.

The method comprises: Step 1, decomposing the source images into a structure channel and an energy channel by the JBF transform;

Step 2, fusing the structure channel's small-edge, small-scale detail information such as tissue fibers with a local gradient energy operator, and fusing the energy channel's organ edge intensity, texture features and grayscale variation with a local-entropy detail-enhancement operator, a PCNN and phase-congruency-based NSCT;

Step 3, obtaining the fused image by the inverse JBF transform.

It should be further noted that Step 1 also includes: performing global blurring on the input image I, i.e.

R_m = G_m * I (11)

where R_m denotes the smoothing result under standard deviation σ, and G_m denotes a Gaussian filter with variance σ²; the Gaussian filter G_m at (x, y) is defined as:

A weighted-average Gaussian filter is then used to generate the globally blurred image G, i.e.

where I denotes the input image; N(j) denotes the set of pixels adjacent to pixel i; the variance of the pixel values is also used as a parameter; and Z_j denotes the normalization operation, i.e.

The JBF is used to recover the large-scale structure of the energy channel, i.e.

where g_s denotes the intensity-range function based on the intensity difference between pixels; g_d denotes the spatial-distance function based on pixel distance; and Z_j denotes the normalization operation, i.e.

σ_s and σ_r denote the spatial weight and range weight controlling the bilateral filter, respectively;

The energy channels E_I(x, y) of the source images A and B are obtained, and the structure channels S_I(x, y) are obtained through equation (19):

S_I(x, y) = I(x, y) - E_I(x, y) (19).
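
To make the decomposition of equations (11)-(19) concrete, the following Python sketch illustrates the two-channel split: a Gaussian pre-blur, a small joint bilateral filter guided by the original image to recover the large-scale energy structure, and the structure channel taken as the residual of equation (19). It is a minimal sketch under stated assumptions, not the patent's reference implementation; the window radius, the σ values and the helper names (joint_bilateral_filter, jbf_decompose) are illustrative.

# Sketch of the JBF two-channel decomposition (illustrative, not the patented reference code).
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral_filter(guide, src, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Naive joint bilateral filter: spatial weights from pixel distance,
    range weights from intensity differences in the guide image."""
    h, w = src.shape
    pad_g = np.pad(guide, radius, mode='reflect')
    pad_s = np.pad(src, radius, mode='reflect')
    out = np.zeros_like(src, dtype=np.float64)
    norm = np.zeros_like(src, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for dy, dx, g_d in zip(ys.ravel(), xs.ravel(), g_spatial.ravel()):
        shifted_g = pad_g[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        shifted_s = pad_s[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        g_s = np.exp(-(shifted_g - guide)**2 / (2 * sigma_r**2))   # range weight
        w_total = g_d * g_s
        out += w_total * shifted_s
        norm += w_total
    return out / (norm + 1e-12)                                    # Z_j normalization

def jbf_decompose(img, sigma_blur=2.0):
    """Decompose a grayscale image (float, roughly [0, 1]) into structure and energy channels."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma_blur)        # global blur, R_m = G_m * I
    energy = joint_bilateral_filter(img, blurred)     # recover large-scale structure
    structure = img - energy                          # S_I = I - E_I  (eq. 19)
    return structure, energy

For example, structure, energy = jbf_decompose(img) yields {S_I, E_I} for one source image; applying it to both sources gives {S_A, S_B} and {E_A, E_B}.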

It should be further noted that Step 1 also includes: constructing a local gradient energy operator, i.e.

LGE(x, y) = NE_1(x, y) · ST(x, y) (20)

where ST(x, y) denotes the structure-tensor saliency image generated by the STS;

NE_1(x, y) denotes the local energy of the image at (x, y), i.e.

the neighborhood at (x, y) has size (2N+1) × (2N+1), with N set to 4;

By comparing the local gradient energy between the source images, the decision matrix S_map(x, y) is obtained, defined as

The decision matrix for structure-channel fusion is then updated to S_mapi(x, y), i.e.

where Ω_1 denotes a local region of size T × T centered at (x, y), with T set to 21;

The fused structure channel S_F(x, y) is obtained according to the following rule:

where S_A(x, y) and S_B(x, y) are the structure channels of the source images A and B, respectively.
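
The structure-channel fusion built around equation (20) can be sketched as follows. Since the patent's exact STS formula and the rules following equation (20) are not fully reproduced above, the structure-tensor saliency is approximated here by the larger eigenvalue of a smoothed structure tensor, and the consistency step is a majority vote over the T × T region Ω_1; the function names, the eigenvalue approximation and the vote threshold are assumptions.

# Sketch of structure-channel fusion with a local gradient energy (LGE) operator.
# ST(x, y) is approximated by the larger eigenvalue of the local structure tensor;
# the patent's exact STS definition is not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, uniform_filter

def structure_tensor_saliency(img, sigma=1.0):
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    trace, det = jxx + jyy, jxx * jyy - jxy**2
    disc = np.sqrt(np.maximum(trace**2 / 4 - det, 0))
    return trace / 2 + disc                       # larger eigenvalue as a saliency proxy

def local_energy(img, n=4):
    # Sum of squared values in a (2N+1) x (2N+1) window, N = 4 as in the text.
    return uniform_filter(img**2, size=2 * n + 1) * (2 * n + 1)**2

def fuse_structure_channels(s_a, s_b, t=21):
    lge_a = local_energy(s_a) * structure_tensor_saliency(s_a)    # eq. (20)
    lge_b = local_energy(s_b) * structure_tensor_saliency(s_b)
    s_map = (lge_a >= lge_b).astype(np.float64)                   # initial decision map
    # Consistency check: majority vote over the T x T region (assumed analogue of S_mapi).
    s_map = (uniform_filter(s_map, size=t) > 0.5).astype(np.float64)
    return s_map * s_a + (1 - s_map) * s_b                        # fused structure channel S_F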

It should be further noted that Step 1 also includes: configuring the fusion rule for the high-frequency sub-bands of the energy channel.

The steps include: characterizing the detail information of the energy-channel high-frequency sub-bands, where the local entropy of the image centered at (x, y) is defined as:

where S denotes a window of size (2N+1) × (2N+1) centered at (x, y);

The grayscale change rate at (x, y) is computed from the spatial frequency to reflect its detail characteristics, i.e.

where h and w denote the height and width of the source image, and CF and RF denote the first-order differences in the x and y directions at (i, j), given by

CF(x, y) = f(x, y) - f(x-1, y) (27)

RF(x, y) = f(x, y) - f(x, y-1) (28)

The gradient magnitude of the edge pixels at (x, y) is computed from the edge density, specifically defined as:

where s_x and s_y denote the results of convolution with the Sobel operators in the x and y directions, respectively, i.e.

s_x = T * h_x (30)

s_y = T * h_y (31)

T denotes each pixel (x, y); h_x and h_y denote the Sobel operators in the x and y directions, respectively, i.e.

The high-frequency sub-bands of the energy channel are fused by the high-frequency comprehensive measurement operator HM;

where the parameters α_1, β_1 and γ_1 adjust the weights of the image local entropy, spatial frequency and edge density in HM, respectively;

By comparing the HM values of the energy-channel high-frequency sub-bands, the decision matrix E_Hmap(x, y) for high-frequency sub-band fusion is obtained, defined as

The fused images of the layer 1-4 high-frequency sub-bands are then obtained according to the following rule:

where the two operands denote the layer 1-4 high-frequency sub-bands of the energy channels of the source images A and B, respectively.
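
The layer 1-4 fusion rule based on the high-frequency comprehensive measurement operator HM of equation (34) can be sketched as follows. The window sizes, the histogram bin count for the local entropy, the equal weights α_1 = β_1 = γ_1 = 1, the local (windowed) form of SF and the use of a weighted sum for HM are assumptions; the patent's exact combination and parameter values are not reproduced above.

# Sketch of the layer 1-4 high-frequency fusion rule: a composite measure from
# local entropy (LE), spatial frequency (SF) and edge density (ED).
import numpy as np
from scipy.ndimage import sobel, uniform_filter, generic_filter

def local_entropy(band, n=4):
    def ent(window):
        hist, _ = np.histogram(window, bins=16)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return generic_filter(band, ent, size=2 * n + 1)       # slow but simple LE estimate

def spatial_frequency(band, n=4):
    cf = np.zeros_like(band); rf = np.zeros_like(band)
    cf[1:, :] = band[1:, :] - band[:-1, :]                  # CF, first difference (eq. 27)
    rf[:, 1:] = band[:, 1:] - band[:, :-1]                  # RF, first difference (eq. 28)
    return np.sqrt(uniform_filter(cf**2 + rf**2, size=2 * n + 1))

def edge_density(band, n=4):
    sx, sy = sobel(band, axis=1), sobel(band, axis=0)       # Sobel responses (eqs. 30, 31)
    return uniform_filter(np.sqrt(sx**2 + sy**2), size=2 * n + 1)

def fuse_high_freq(band_a, band_b, alpha=1.0, beta=1.0, gamma=1.0):
    hm_a = alpha * local_entropy(band_a) + beta * spatial_frequency(band_a) + gamma * edge_density(band_a)
    hm_b = alpha * local_entropy(band_b) + beta * spatial_frequency(band_b) + gamma * edge_density(band_b)
    e_hmap = hm_a >= hm_b                                   # decision matrix E_Hmap
    return np.where(e_hmap, band_a, band_b)                 # layer 1-4 fused sub-band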

It should be further noted that, in the method, the layer-5 high-frequency sub-band is fused using a PCNN; the fused energy-channel high-frequency sub-band is obtained by counting the PCNN firing times.

where the operands denote the layer-5 high-frequency sub-bands of the energy channels of the source images A and B, and the corresponding terms denote the PCNN firing counts of those layer-5 high-frequency sub-bands; T_ij(n) is given by

T_ij(n) = T_ij(n-1) + P_ij(n) (38)

P_ij(n) denotes the output of the PCNN.

It should be further noted that, in the method, in order to obtain the PCNN output, the feeding input and linking input of the neuron at (x, y) are defined as

D_ij(n) = I_ij (39)

where the parameter V_L denotes the amplitude of the linking input;

W_ijop denotes the previous firing state of the eight-neighborhood neurons, i.e.

Next, the exponential decay coefficient η_f is used to attenuate the previous value of the internal activity term U_ij(n), and D_ij(n) and C_ij(n) are nonlinearly modulated by the linking strength β to obtain the current internal activity term, defined as

At the same time, the current dynamic threshold is updated iteratively, i.e.

where η_e and V_E denote the exponential decay coefficient and the amplitude of E_ij(n), respectively;

The current internal activity term U_ij(n) is compared with the dynamic threshold E_ij(n-1) of the (n-1)-th iteration to determine the state of the PCNN output P_ij(n), defined as

The fusion result of the layer-5 high-frequency sub-band is obtained from equations (37) and (44);

The fused energy-channel high-frequency sub-band is obtained according to the following rule:
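
A simplified PCNN for the layer-5 rule of equations (37)-(44) is sketched below. The iteration count, linking weights, decay factors and amplitudes are illustrative assumptions, the exponential decays are folded into the multiplicative factors eta_f and eta_e, and the firing-count comparison follows equation (38) and the rule above.

# Simplified PCNN sketch for the layer-5 high-frequency fusion rule.
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(stimulus, n_iter=110, beta=0.2, v_l=1.0, v_e=20.0,
                     eta_f=0.7, eta_e=0.8):
    """Run a simplified PCNN on a stimulus map and return per-pixel firing counts T_ij."""
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # assumed 8-neighborhood linking weights W_ijop
    d = stimulus                                # feeding input D_ij(n) = I_ij  (eq. 39)
    u = np.zeros_like(stimulus)                 # internal activity U_ij
    e = np.ones_like(stimulus)                  # dynamic threshold E_ij
    p = np.zeros_like(stimulus)                 # output P_ij
    t = np.zeros_like(stimulus)                 # firing counts T_ij
    for _ in range(n_iter):
        c = v_l * convolve(p, w, mode='constant')     # linking input C_ij(n) from neighbors
        u = eta_f * u + d * (1.0 + beta * c)          # decay plus nonlinear modulation of D and C
        p = (u > e).astype(np.float64)                # fire when activity exceeds the old threshold
        e = eta_e * e + v_e * p                       # iterative dynamic-threshold update
        t += p                                        # T_ij(n) = T_ij(n-1) + P_ij(n)  (eq. 38)
    return t

def fuse_layer5(band_a, band_b):
    t_a = pcnn_fire_counts(np.abs(band_a))
    t_b = pcnn_fire_counts(np.abs(band_b))
    return np.where(t_a >= t_b, band_a, band_b)        # keep the coefficient that fires more often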

It should be further noted that the method further includes configuring the fusion rule for the low-frequency sub-band of the energy channel.

The specific steps include: the PC value at (x, y) is defined as

where θ_k denotes the orientation angle at index k; the corresponding term denotes the amplitude of the n-th Fourier component at angle θ_k; ω denotes a parameter used to remove the phase component of the image signal;

and is given by

the corresponding term denotes the convolution result at the image pixel located at (x, y), i.e.

I_L(x, y) denotes the pixel value of the energy-channel low-frequency sub-band at (x, y); the remaining terms denote the even- and odd-symmetric two-dimensional Log-Gabor filter bank at scale n.

It should be further noted that the method reflects the local contrast variation of the image by computing the sharpness change in the neighborhood of (x, y), specifically defined as:

where M and N are set to 3; the SCM is given by

Ω_2 denotes a 3 × 3 local region;

The local energy NE_2 is configured;

where M and N are set to 3;

The low-frequency sub-bands of the energy channel are fused by the low-frequency comprehensive measurement operator LM:

where the parameters α_2, β_2 and γ_2 adjust the weights of the phase congruency value, the local sharpness change and the local energy in LM, respectively;

The fused energy-channel low-frequency sub-band is obtained according to the following rule:

where the operands denote the low-frequency sub-bands of the energy channels of the source images, and E_Lmap(x, y) denotes the decision matrix for low-frequency sub-band fusion, defined as

R_i(x, y) is defined as

N denotes the number of source images; Ω_3 denotes a sliding window centered at (x, y) whose size is set to 7;

A dual-coordinate operator is used to linearly reconstruct the high-frequency and low-frequency sub-bands, implementing the inverse NSCT and obtaining the energy-channel fused image E_F.
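
The low-frequency rule built on the comprehensive measurement operator LM of equation (56) can be sketched as follows. The phase-congruency maps are taken as inputs (any PC implementation may supply them, e.g. the sketch given later in the description); the SCM proxy, the equal weights α_2 = β_2 = γ_2 = 1 and the majority-style consistency check over the size-7 window Ω_3 are assumptions rather than the patent's exact definitions.

# Sketch of the low-frequency fusion rule: a composite measure from phase congruency (PC),
# a local sharpness change measure (LSCM) and local energy (NE_2).
import numpy as np
from scipy.ndimage import uniform_filter

def lscm(band, m=3, n=3):
    """Local sharpness change measure: a 3x3 sharpness proxy accumulated over a larger window."""
    scm = uniform_filter((band - uniform_filter(band, size=3))**2, size=3)
    return uniform_filter(scm, size=(2 * m + 1, 2 * n + 1))

def local_energy_ne2(band, m=3, n=3):
    return uniform_filter(band**2, size=(2 * m + 1, 2 * n + 1))

def fuse_low_freq(low_a, low_b, pc_a, pc_b, alpha=1.0, beta=1.0, gamma=1.0, win=7):
    lm_a = alpha * pc_a + beta * lscm(low_a) + gamma * local_energy_ne2(low_a)
    lm_b = alpha * pc_b + beta * lscm(low_b) + gamma * local_energy_ne2(low_b)
    # Decision matrix E_Lmap with a sliding-window consistency check (window size 7).
    r_a = uniform_filter((lm_a >= lm_b).astype(np.float64), size=win)
    e_lmap = r_a >= 0.5
    return np.where(e_lmap, low_a, low_b)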

It should be further noted that the method also includes: generating the structure-channel fused image S_F(x, y) and the energy-channel fused image E_F(x, y), and obtaining the final fused image by superposition:

F(x, y) = S_F(x, y) + E_F(x, y) (60)

The input is set to the source images A and B;

The output is set to the fused image F;

The specific steps include:

Step 1: Read in the source images A and B and apply JBF decomposition to produce the structure channels {S_A, S_B} and the energy channels {E_A, E_B};

Step 2: Fuse the structure channels {S_A, S_B} with the local gradient energy operator of equation (20) to generate the structure-channel fused image S_F;

Step 3: Fuse the energy channels {E_A, E_B} to generate the energy-channel fused image E_F;

Step 3.1: Decompose the energy channels {E_A, E_B} with the NSCT to produce the energy-channel high-frequency sub-bands and low-frequency sub-bands;

Step 3.2: Fuse the layer 1-4 high-frequency sub-bands using the high-frequency comprehensive measurement operator HM rule of equation (34), based on LE, SF and ED;

Step 3.3: Fuse the layer-5 high-frequency sub-band using the PCNN rule of equation (37);

Step 3.4: Fuse the low-frequency sub-band using the low-frequency comprehensive measurement operator LM rule of equation (56), based on PC, LSCM and NE_2;

Step 3.5: Apply the inverse NSCT to the fused high- and low-frequency sub-bands to generate the energy channel E_F;

Step 4: Apply the inverse JBF transform of equation (60) to the fused structure channel S_F and energy channel E_F to generate the final fused image F.
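
A minimal sketch wiring Steps 1-4 together with the helper functions sketched above is given below. The NSCT decomposition and reconstruction are assumed to be supplied as callables, since no standard Python implementation is presumed here; a multi-scale stand-in is sketched later in the description.

# End-to-end sketch of the Step 1-4 pipeline, reusing the helper sketches above.
def fuse_images(img_a, img_b, nsct_decompose, nsct_reconstruct, pc_fn):
    # Step 1: JBF decomposition into structure and energy channels.
    s_a, e_a = jbf_decompose(img_a)
    s_b, e_b = jbf_decompose(img_b)
    # Step 2: structure-channel fusion with the local gradient energy operator.
    s_f = fuse_structure_channels(s_a, s_b)
    # Step 3: energy-channel fusion in the (assumed) NSCT domain with five high-frequency layers.
    highs_a, low_a = nsct_decompose(e_a)
    highs_b, low_b = nsct_decompose(e_b)
    fused_highs = [fuse_high_freq(ha, hb) for ha, hb in zip(highs_a[:4], highs_b[:4])]
    fused_highs.append(fuse_layer5(highs_a[4], highs_b[4]))       # PCNN rule for layer 5
    fused_low = fuse_low_freq(low_a, low_b, pc_fn(low_a), pc_fn(low_b))
    e_f = nsct_reconstruct(fused_highs, fused_low)
    # Step 4: inverse JBF transform, here the superposition F = S_F + E_F of equation (60).
    return s_f + e_f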

The present invention also provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the dual-channel multimodal image fusion method.

It can be seen from the above technical solution that the present invention has the following advantages:

The dual-channel multimodal image fusion method of the present invention enhances the detail information of the fused image while preserving edges and smoothing noise, and improves its similarity to the multimodal medical source images. An improved local gradient energy operator is applied to the structure channel, and the low-frequency sub-band of the energy channel is computed with a low-frequency comprehensive measurement operator composed of phase congruency, local sharpness change and local energy, further improving the expression of detail information in the fused image. The energy channel produced by the JBF transform is decomposed again by the NSCT and fused, improving the multi-directional and multi-scale characteristics of the decomposition framework. A local-entropy-based detail-enhancement operator is proposed: by computing the local entropy, spatial frequency and edge density of the image, it processes the layer 1-4 high-frequency sub-bands of the NSCT decomposition of the energy channel, while the layer-5 high-frequency sub-band is processed with a pulse coupled neural network (PCNN). Combining deep learning with traditional methods in this way improves the extraction and use of edge contour structures and texture features in the energy channel.

Brief Description of the Drawings

In order to more clearly illustrate the technical solution of the present invention, the drawings required in the description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a framework diagram of the dual-channel multimodal image fusion method;

FIG. 2 is a flow chart of the dual-channel multimodal image fusion method;

FIG. 3 shows the fusion results of MR-T1/MR-T2 images under different values of σ_s;

FIG. 4 is a line graph of MR-T1/MR-T2 fusion quality under different values of σ_s;

FIG. 5 is a line graph of MR-T1/MR-T2 fusion quality under different values of S.

Detailed Description of the Embodiments

The dual-channel multimodal image fusion method provided by the present invention combines JBF, NSCT and structure-tensor theory with local entropy and gradient energy in a dual-channel medical image fusion scheme. The JBF effectively exploits the spatial structure of the image, so that the fused medical images of various types are smooth while showing good edge preservation. The scheme comprises three steps: first, the source images A and B are transformed by the JBF to obtain the structure channels {S_A, S_B} and the energy channels {E_A, E_B}; second, the structure-channel and energy-channel information is extracted and fused with specific fusion rules to obtain {S_F, E_F}; finally, the fused image F is obtained by the inverse JBF transform. This process reflects both the spatial proximity between pixels and their grayscale similarity, achieving edge-preserving denoising, and it is simple, non-iterative and local. However, as a dual-channel fusion technique, the JBF has a limited decomposition: the incomplete image decomposition leaves the energy channel containing some detail texture information that belongs to the structure channel, so the subsequent fusion rules cannot effectively identify and extract the corresponding information, which degrades fusion quality. The NSCT, in contrast, is multi-scale and anisotropic; it can characterize the singularity information of an image and describe its features across different frequency bands and directions. With this in mind, the NSCT is embedded in the energy channel, and the structure and detail texture of the energy channel are decomposed and fused once more, improving the multi-directional and multi-scale capability of the model.

The NSCT used in the present invention builds on the CT by replacing the down-sampling in the decomposition with up-sampled filters. It consists of two parts, a non-subsampled pyramid (NSP) and a non-subsampled directional filter bank (NSDFB), which perform scale decomposition and directional decomposition of the image, respectively. This avoids the directional aliasing and pseudo-Gibbs phenomena caused by sampling, guarantees shift invariance during decomposition and improves the extraction of image edge information. It comprises three steps: first, the NSCT decomposes the energy channels {E_A, E_B} of the source images to obtain the energy-channel high-frequency and low-frequency sub-bands; second, the high- and low-frequency sub-band information of the energy channel is extracted and fused by specific rules; finally, the energy-channel image E_F is obtained by the inverse NSCT.
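
Since the NSCT itself is usually provided by dedicated toolboxes rather than standard Python libraries, the following shift-invariant difference-of-Gaussians pyramid is offered only as a rough stand-in for experimenting with the pipeline sketch above. It is multi-scale but, unlike the NSCT, not multi-directional, so it does not reproduce the NSP/NSDFB behavior described here; it merely matches the assumed decompose/reconstruct interface.

# Shift-invariant difference-of-Gaussians pyramid used as a rough stand-in for the NSCT.
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid_decompose(img, levels=5, sigma0=1.0):
    highs, current = [], img.astype(np.float64)
    for k in range(levels):
        smoothed = gaussian_filter(current, sigma0 * 2**k)
        highs.append(current - smoothed)           # band-pass detail layer k
        current = smoothed
    return highs, current                           # (high-frequency layers, low-frequency base)

def pyramid_reconstruct(highs, low):
    return low + np.sum(highs, axis=0)              # exact inverse of the decomposition

Usage: highs, low = pyramid_decompose(energy_channel); pyramid_reconstruct(highs, low) recovers the input up to floating-point error.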

For the structure-tensor theory used in the present invention, over a local window Ω_0 and taking ε → 0+ in the direction of angle α, the variation of the image f(x, y) at (x, y) is defined as

In general, the local rate of change C(α) is used to characterize the local geometric features of an image f(x, y) at (x, y), defined as

where S denotes the structure tensor, i.e.

it is a positive semi-definite second-moment matrix built from

the local gradient vector of the image f(x, y), which is given by

λ_1 and λ_2 denote the eigenvalues of the structure tensor, given by

In summary, the structure tensor significance detection operator (STS) is defined as
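
The structure-tensor formulas themselves did not survive extraction. The standard forms that the surrounding definitions appear to follow are reproduced below as an assumption, not as a verbatim copy of the patent's equations; in practice the tensor entries are smoothed over the local window Ω_0 before the eigenvalues are taken, and the STS combines λ_1 and λ_2 in a way not recoverable here.

S = \begin{bmatrix} f_x^2 & f_x f_y \\ f_x f_y & f_y^2 \end{bmatrix}, \qquad
C(\alpha) = (\cos\alpha,\ \sin\alpha)\, S\, (\cos\alpha,\ \sin\alpha)^{\mathsf T}, \qquad
\lambda_{1,2} = \frac{(f_x^2 + f_y^2) \pm \sqrt{(f_x^2 - f_y^2)^2 + 4 f_x^2 f_y^2}}{2}

where f_x and f_y are the components of the local gradient vector of f(x, y).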

Based on the above techniques, the source images are decomposed into a structure channel and an energy channel by the JBF transform; the local gradient energy operator fuses the structure channel's small-edge, small-scale detail information such as tissue fibers, while the local-entropy detail-enhancement operator, the PCNN and phase-congruency-based NSCT fuse the energy channel's organ edge intensity, texture features and grayscale variation; the fused image is obtained by the inverse JBF transform. In this way, the medical image fusion method enhances the detail information of the fused image while preserving edges and smoothing noise, and improves its similarity to the multimodal medical source images.

The dual-channel multimodal image fusion method of the present invention can also acquire and process the associated data based on artificial intelligence technology. The method uses theories, methods, techniques and application devices that employ a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. The method involves both hardware-level and software-level technology. The hardware technology generally includes sensors, dedicated artificial-intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems and mechatronics. The software technology mainly includes computer vision, machine learning/deep learning and programming languages. The programming languages include, but are not limited to, object-oriented languages such as Java, Smalltalk and C++, as well as conventional procedural languages such as the "C" language or similar programming languages.

FIGS. 1 and 2 show flow charts of a preferred embodiment of the dual-channel multimodal image fusion method of the present invention. The method is applied to one or more fusion imaging terminals. A fusion imaging terminal is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP) and embedded devices.

The fusion imaging terminal may be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), an interactive network television (IPTV) or a smart wearable device.

The fusion imaging terminal may also include a network device and/or user equipment. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing.

The network in which the fusion imaging terminal is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network and a virtual private network (VPN).

The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

In order to obtain a fused image with rich detail and clear texture, the invention comprises three steps: joint bilateral filter decomposition, structure-channel and energy-channel fusion, and image reconstruction, as shown in FIGS. 1 and 2. The source images are decomposed into a structure channel and an energy channel by the JBF transform; next, the local gradient energy operator fuses the structure channel's small-edge, small-scale detail information such as tissue fibers, while the local-entropy detail-enhancement operator, the PCNN and phase-congruency-based NSCT fuse the energy channel's organ edge intensity, texture features and grayscale variation; finally, the fused image is obtained by the inverse JBF transform.

In an exemplary embodiment, in order to transfer the detail texture of the source images as completely as possible, the input image I is first globally blurred, i.e.

R_m = G_m * I (11)

where R_m denotes the smoothing result under standard deviation σ, and G_m denotes a Gaussian filter with variance σ²; the Gaussian filter G_m at (x, y) is defined as

Subsequently, a weighted-average Gaussian filter is used to generate the globally blurred image G, i.e.

where I denotes the input image; N(j) denotes the set of pixels adjacent to pixel i; the variance of the pixel values is also used as a parameter; and Z_j denotes the normalization operation, i.e.

However, after global blurring the image intensity information is relatively scattered. If it were used directly as the energy channel, the subsequent fusion rules could not extract the intensity information, and the fused image would suffer from blurred boundaries and artifacts. In order to produce relatively concentrated edge intensity information, the JBF is used to recover the large-scale structure of the energy channel, i.e.

where g_s denotes the intensity-range function based on the intensity difference between pixels; g_d denotes the spatial-distance function based on pixel distance; and Z_j denotes the normalization operation, i.e.

σ_s and σ_r denote the spatial weight and range weight controlling the bilateral filter, respectively.

In summary, the energy channels E_I(x, y) of the source images A and B are obtained, and the structure channels S_I(x, y) are obtained through equation (19).

S_I(x, y) = I(x, y) - E_I(x, y) (19)

In an embodiment of the present invention, a structure-channel fusion rule needs to be configured. Specifically, in medical imaging, how well detail information is expressed plays a decisive role in the quality of organ-lesion diagnosis. To accurately reflect detail information such as small edge structures and fibers in organs and tissues, a local gradient energy operator based on the structure tensor and neighborhood energy is used to extract and fuse the structure-channel information. To overcome the inability of the STS to detect tiny detail features in parts of the image lacking an intensity function, a local gradient energy (LGE) operator is constructed, i.e.

LGE(x, y) = NE_1(x, y) · ST(x, y) (20)

where ST(x, y) denotes the structure-tensor saliency image generated by the STS, and NE_1(x, y) denotes the local energy of the image at (x, y), i.e.

The neighborhood at (x, y) has size (2N+1) × (2N+1), with N set to 4.

By comparing the local gradient energy between the source images, the decision matrix S_map(x, y) is obtained, defined as

To ensure regional integrity in the target image, the decision matrix for structure-channel fusion is updated to S_mapi(x, y), i.e.

where Ω_1 denotes a local region of size T × T centered at (x, y), with T set to 21.

In summary, the fused structure channel S_F(x, y) is obtained according to the following rule:

where S_A(x, y) and S_B(x, y) are the structure channels of the source images A and B, respectively.

The present invention also configures energy-channel fusion rules. After JBF decomposition, the energy channel contains the organ contour structure and edge intensity information. At the same time, the limitations of the decomposition mean that the energy channel also contains a small amount of texture such as fibers. Because of the complexity of the energy-channel information, it is necessary to decompose it again with the NSCT and to use the local-entropy detail-enhancement operator, the PCNN and phase congruency to extract and fuse the texture features and contour structures such as organs and bones, further improving the utilization of energy-channel information and the fusion result.

The present invention also configures the fusion rule for the high-frequency sub-bands of the energy channel. Specifically, after NSCT decomposition, each decomposition level of the energy-channel high-frequency sub-bands contains organ contour structures and fiber texture features at different scales, and how well the information of each level is extracted directly affects the fusion result. At the same time, as the number of levels increases, the scale information of the image decreases, making it difficult for general fusion rules to extract the image information of the highest decomposition level effectively.

The PCNN, as a neural network model, has pulse-synchronization and global-coupling characteristics, can extract effective information from complex backgrounds, outperforms most traditional methods, and has notable advantages in the edge detection, refinement and recognition aspects of image fusion. With this in mind, the PCNN is embedded in the processing of the high-frequency sub-bands to improve the extraction of the layer-5 high-frequency information and thereby improve the structure and texture features of the fused image. At the same time, the local-entropy detail-enhancement operator fuses the layer 1-4 high-frequency sub-bands, further improving the restoration of organ contours and fiber texture in the fused image. Extensive experiments confirm that applying the local-entropy detail-enhancement operator to layers 1-4 of the energy-channel high-frequency sub-bands and the PCNN to layer 5 offers significant advantages for the extraction and fusion of structure and texture information.

In the present invention, the fusion rule for the layer 1-4 high-frequency sub-bands is introduced first. Image entropy, as a statistical estimate of the amount of information in an image, reflects how much detail information the image contains: generally, the larger the entropy, the more detail information the image contains. However, the entropy of the entire image often fails to reflect local detail information. To solve this problem, the image local entropy is introduced to further characterize the detail information of the energy-channel high-frequency sub-bands. The local entropy (LE) of the image centered at (x, y) is defined as

where S denotes a window of size (2N+1) × (2N+1) centered at (x, y).

To further highlight its texture information, the present invention introduces the spatial frequency (SF), which reflects the detail characteristics by computing the grayscale change rate at (x, y), i.e.

where h and w denote the height and width of the source image, and CF and RF denote the first-order differences in the x and y directions at (i, j), given by

CF(x, y) = f(x, y) - f(x-1, y) (27)

RF(x, y) = f(x, y) - f(x, y-1) (28)

However, LE and SF, as estimators describing image detail information, lack the extraction and expression of large-scale structural information such as contours. Therefore, the edge density (ED) is introduced; by computing the gradient magnitude of the edge pixels at (x, y), it highlights the layering of the structure and contour edges. It is defined as

where s_x and s_y denote the results of convolution with the Sobel operators in the x and y directions, respectively, i.e.

s_x = T * h_x (30)

s_y = T * h_y (31)

T denotes each pixel (x, y); h_x and h_y denote the Sobel operators in the x and y directions, respectively, i.e.

The high-frequency sub-bands of the energy channel are thus fused by the high-frequency comprehensive measurement operator HM.

where the parameters α_1, β_1 and γ_1 adjust the weights of the image local entropy, spatial frequency and edge density in HM, respectively.

By comparing the HM values of the energy-channel high-frequency sub-bands, the decision matrix E_Hmap(x, y) for high-frequency sub-band fusion is obtained, defined as

The fused images of the layer 1-4 high-frequency sub-bands are then obtained according to the following rule:

where the two operands denote the layer 1-4 high-frequency sub-bands of the energy channels of the source images A and B, respectively.

Next, the present invention fuses the layer-5 high-frequency sub-band with a PCNN; the fused energy-channel high-frequency sub-band is obtained by counting the PCNN firing times.

where the operands denote the layer-5 high-frequency sub-bands of the energy channels of the source images A and B, and the corresponding terms denote the PCNN firing counts of those layer-5 high-frequency sub-bands; T_ij(n) is given by

T_ij(n) = T_ij(n-1) + P_ij(n) (38)

P_ij(n) denotes the output of the PCNN.

In the PCNN, D_ij(n) and C_ij(n) denote the feeding input and linking input of the neuron at (x, y) after n iterations, respectively. D_ij(n) is related to the intensity of the input image I_ij throughout the iteration process, and the synaptic weights of C_ij(n) are related to the previous firing states of the eight-neighborhood neurons. To obtain the PCNN output, first, the feeding input and linking input of the neuron at (x, y) are defined as

D_ij(n) = I_ij (39)

where the parameter V_L denotes the amplitude of the linking input, and W_ijop denotes the previous firing state of the eight-neighborhood neurons, i.e.

Next, the exponential decay coefficient η_f is used to attenuate the previous value of the internal activity term U_ij(n), and D_ij(n) and C_ij(n) are nonlinearly modulated by the linking strength β to obtain the current internal activity term, defined as

At the same time, the current dynamic threshold is updated iteratively, i.e.

where η_e and V_E denote the exponential decay coefficient and the amplitude of E_ij(n), respectively.

Finally, the current internal activity term U_ij(n) is compared with the dynamic threshold E_ij(n-1) of the (n-1)-th iteration to determine the state of the PCNN output P_ij(n), defined as

In summary, the fusion result of the layer-5 high-frequency sub-band is obtained from equations (37) and (44). Meanwhile, the fused energy-channel high-frequency sub-band is obtained according to the following rule:

As an embodiment of the present invention, a fusion rule for the low-frequency sub-band of the energy channel is also configured.

The low-frequency sub-band contains the pixel brightness and grayscale variation of the energy channel. To further increase the amount of information in the low-frequency sub-band, phase congruency is used to enhance the information of the low-frequency sub-band image. Phase congruency (PC), a dimensionless measure, is commonly used to reflect the sharpness of an image and the importance of image features. The PC value at (x, y) is defined as

where θ_k denotes the orientation angle at index k; the corresponding term denotes the amplitude of the n-th Fourier component at angle θ_k; ω denotes a parameter used to remove the phase component of the image signal; and the formula is

the corresponding term denotes the convolution result at the image pixel located at (x, y), i.e.

I_L(x, y) denotes the pixel value of the energy-channel low-frequency sub-band at (x, y); the remaining terms denote the even- and odd-symmetric two-dimensional Log-Gabor filter bank at scale n.
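
A simplified phase-congruency sketch in the spirit of the Log-Gabor construction above is given below; it omits the noise compensation and frequency-spread weighting of full phase-congruency implementations and uses assumed scale, orientation and bandwidth parameters. The even and odd responses are taken as the real and imaginary parts of the inverse FFT of the image spectrum multiplied by a one-sided oriented Log-Gabor filter.

# Simplified phase congruency: PC = (coherent energy) / (total amplitude), summed over orientations.
import numpy as np

def log_gabor_phase_congruency(img, n_scales=4, n_orients=4, min_wavelength=6.0,
                               mult=2.0, sigma_on_f=0.55, eps=1e-4):
    img = img.astype(np.float64)
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]              # vertical frequency grid
    fx = np.fft.fftfreq(cols)[None, :]              # horizontal frequency grid
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                              # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)                     # frequency-plane orientation
    img_fft = np.fft.fft2(img)
    pc_num = np.zeros((rows, cols))
    pc_den = np.zeros((rows, cols))
    theta_sigma = np.pi / n_orients * 0.65
    for o in range(n_orients):
        angl = o * np.pi / n_orients
        ds = np.sin(theta) * np.cos(angl) - np.cos(theta) * np.sin(angl)
        dc = np.cos(theta) * np.cos(angl) + np.sin(theta) * np.sin(angl)
        dtheta = np.abs(np.arctan2(ds, dc))         # angular distance to the filter orientation
        spread = np.exp(-dtheta**2 / (2 * theta_sigma**2))
        energy_sum = np.zeros((rows, cols), dtype=np.complex128)
        amp_sum = np.zeros((rows, cols))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult**s)
            log_gabor = np.exp(-np.log(radius / f0)**2 / (2 * np.log(sigma_on_f)**2))
            log_gabor[0, 0] = 0.0                   # zero response at DC
            eo = np.fft.ifft2(img_fft * (log_gabor * spread))   # even = real, odd = imag
            energy_sum += eo
            amp_sum += np.abs(eo)
        pc_num += np.abs(energy_sum)                # phase-aligned (coherent) energy
        pc_den += amp_sum                           # total amplitude over scales
    return pc_num / (pc_den + eps)                  # PC map in [0, 1], no noise compensation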

However, PC, as a contrast invariant, cannot reflect local contrast variation. Therefore, the local sharpness change measure (LSCM) is introduced; it reflects the local contrast variation of the image by computing the sharpness change measure (SCM) over the neighborhood of (x, y), and it is defined as

where M and N are set to 3, and the SCM is given by

Ω_2 denotes a 3 × 3 local region.

Since PC and LSCM cannot fully reflect the local signal strength, the local energy NE_2 is introduced.

where M and N are set to 3.

The low-frequency sub-bands of the energy channel are thus fused by the low-frequency comprehensive measurement operator LM.

where the parameters α_2, β_2 and γ_2 adjust the weights of the phase congruency value, the local sharpness change and the local energy in LM, respectively.

In summary, the fused energy-channel low-frequency sub-band is obtained according to the following rule:

where the operands denote the low-frequency sub-bands of the energy channels of the source images, and E_Lmap(x, y) denotes the decision matrix for low-frequency sub-band fusion, defined as

R_i(x, y) is defined as

N denotes the number of source images; Ω_3 denotes a sliding window centered at (x, y) whose size is set to 7.

Finally, a dual-coordinate operator is used to linearly reconstruct the high-frequency and low-frequency sub-bands, implementing the inverse NSCT and obtaining the energy-channel fused image E_F.

In an embodiment of the present invention, the fused image is also reconstructed. Specifically, the structure-channel fused image S_F(x, y) and the energy-channel fused image E_F(x, y) are generated by the above steps and then superimposed to obtain the final fused image:

F(x, y) = S_F(x, y) + E_F(x, y) (60)

The input is set to the source images A and B;

The output is set to the fused image F;

The specific steps include:

Step 1: Read in the source images A and B and apply JBF decomposition to produce the structure channels {S_A, S_B} and the energy channels {E_A, E_B};

Step 2: Fuse the structure channels {S_A, S_B} with the local gradient energy operator of equation (20) to generate the structure-channel fused image S_F;

Step 3: Fuse the energy channels {E_A, E_B} to generate the energy-channel fused image E_F;

Step 3.1: Decompose the energy channels {E_A, E_B} with the NSCT to produce the energy-channel high-frequency sub-bands and low-frequency sub-bands;

Step 3.2: Fuse the layer 1-4 high-frequency sub-bands using the high-frequency comprehensive measurement operator HM rule of equation (34), based on LE, SF and ED;

Step 3.3: Fuse the layer-5 high-frequency sub-band using the PCNN rule of equation (37);

Step 3.4: Fuse the low-frequency sub-band using the low-frequency comprehensive measurement operator LM rule of equation (56), based on PC, LSCM and NE_2;

Step 3.5: Apply the inverse NSCT to the fused high- and low-frequency sub-bands to generate the energy channel E_F;

Step 4: Apply the inverse JBF transform of equation (60) to the fused structure channel S_F and energy channel E_F to generate the final fused image F.

In this way, the dual-channel multimodal image fusion method of the present invention enhances detail information in the fused image while preserving edges and providing noise-reducing smoothing, and improves the similarity to the multimodal medical source images. The present invention also applies an improved local gradient energy operator to the structure channel and computes the energy-channel low-frequency subbands with a low-frequency comprehensive measurement operator composed of phase congruency, local sharpness change, and local energy, further improving the expression of detail information in the fused image. The energy channel produced by the JBF transform is decomposed again with the NSCT and then fused, which improves the multi-directional and multi-scale characteristics of the decomposition framework. A detail-enhancement operator based on local entropy is proposed: the layer 1–4 high-frequency subbands of the energy-channel NSCT decomposition are processed by computing the local entropy, spatial frequency, and edge density of the image, while the layer-5 high-frequency subband is processed with a pulse coupled neural network (PCNN). This combination of deep learning with traditional methods improves the extraction and utilization of edge contour structures and texture features in the energy channel.

Further, as the experiments and result analysis for the specific implementation of the above embodiment, and in order to verify the technical effect of the method of the present invention, the implementation effect is further described below. Experimental data setup: test images. To fully verify the advantages of the method, a comprehensive and extensive experimental analysis is carried out. Experiments are conducted on four groups of human-brain image data sets captured by different imaging mechanisms, taken from the Harvard Medical School website. The resolution of each test image is set to 256×256, and 118 pairs of multimodal medical images are used to fully verify the effectiveness of the method. Four pairs of magnetic resonance images (MR-T1/MR-T2), four pairs of computed tomography and magnetic resonance images (CT/MR), four pairs of magnetic resonance and single-photon emission computed tomography images (MR/SPECT), and four pairs of magnetic resonance and positron emission tomography images (MR/PET) are randomly selected, and their results are analyzed in terms of both visual quality and objective metrics.

All experiments were implemented in Matlab 2018 and run on an AMD Ryzen 7 5800 with Radeon Graphics at 3.20 GHz and 16.0 GB of RAM.

The present invention uses six commonly used metrics to comprehensively and quantitatively evaluate the performance of the different fusion methods. First, three metrics, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and mutual information (MI), are used to measure the similarity between the fused image and the source images. The higher these metrics, the smaller the distortion introduced by the fusion process and the more similar the fused image is to the source images.

Among them, PSNR measures similarity through the mean squared error between the source image and the fused image; SSIM measures the structural similarity between the source image and the fused image; and MI measures their correlation through the information entropy of the fused image and the joint information entropy of the fused image and the source image. Second, three metrics, spatial frequency (SF), standard deviation (SD), and edge information retention (Qabf), are used to measure how well the edge information and detailed textures of the source images are retained and the contrast of the fused image. The higher these metrics, the more detail and texture information the fused image contains and the better the quality of the visual information obtained from the source images. In addition, to further evaluate the fusion performance, information entropy (EN) and visual information fidelity for fusion (VIFF) are introduced to measure the amount of information contained in the fused image and how faithfully it restores the source images. The higher these metrics, the better the fusion performance and the smaller the distortion of the fused image.
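As a reference point for the similarity-oriented metrics, PSNR and SSIM are available directly in scikit-image, and MI can be estimated from a joint gray-level histogram. The sketch below assumes 8-bit-range images and a 64-bin histogram; neither choice is specified by the patent.

# Sketch of the similarity metrics (PSNR, SSIM, MI) for a fused image against
# one source image. The MI histogram bin count is an arbitrary assumption.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mutual_information(x, f, bins=64):
    hist, _, _ = np.histogram2d(x.ravel(), f.ravel(), bins=bins)
    pxy = hist / hist.sum()                               # joint distribution
    px = pxy.sum(axis=1, keepdims=True)                   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)                   # marginal of f
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def similarity_report(src, fused, data_range=255):
    return {
        "PSNR": peak_signal_noise_ratio(src, fused, data_range=data_range),
        "SSIM": structural_similarity(src, fused, data_range=data_range),
        "MI": mutual_information(src, fused),
    }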

The verification approach of the present invention is to adjust one parameter while fixing the others, generate a series of fusion results on the 118 pairs of multimodal medical images, and evaluate them with similarity metrics and visual inspection to determine the optimal parameter value. The optimal parameters are analyzed below, taking MR-T1/MR-T2 image fusion as an example.

1) Gaussian standard deviation σs:

As the spatial weight of the bilateral filter, the Gaussian standard deviation σs determines the quality of spatial-information recognition in the source images and affects the texture structure of the fused image and its similarity to the source images. Setting a suitable σs is therefore particularly important.

To determine the optimal value of σs, the other parameters are fixed and σs is varied from 1 to 6; the experimental results are shown in Fig. 3. In the close-up region of Fig. 3c, the detail information from the source images is severely weakened, obvious artifacts appear in the cerebral sulci and gyri, and a serious loss of detail even appears at the suprasellar cistern. Fig. 3c also shows distortion of the gray-level information of the fused image, which no longer matches the source images; this is unacceptable in medical diagnosis. The close-up region of Fig. 3d shows an improvement: the contour structure of the fused image is sharper than in Fig. 3c, but texture features are still clearly missing in the sulci and gyri and the similarity to the source images is unbalanced, so the gray-level variation cannot accurately reflect lesion information and seriously affects diagnostic accuracy. The close-up regions of Figs. 3f, 3g, and 3h show that, as σs increases, the fusion energy loss grows, the contrast decreases, and the fiber texture features are clearly weakened; at the same time, the imbalance between the similarity of the fused image to the MR-T1 and MR-T2 images becomes increasingly pronounced. As σs increases, the MR-T2 information contained in the anterior horn of the lateral ventricle in the fused image gradually decreases, and when σs reaches 6 the fused image no longer reflects the MR-T2 information at all. In contrast, when σs is 3 the fused image has a clear advantage over the other values in expressing texture details, restoring source-image information, and balancing the MR-T1 and MR-T2 content. As Fig. 3e shows, the texture of the sulci and gyri is clearer, the fiber details vary markedly, and the anterior horn of the lateral ventricle has balanced gray levels and sharp edges, with no artifacts or distortion. Moreover, the objective metrics in Table 1 show that when σs is 3, the pixel gray-level and structural similarity between the fused image and the source images is highest and the fusion performance is best.

Presenting the data of Table 1 as a line chart, as in Fig. 4, makes the performance trend under varying σs easier to analyze. Fig. 4 shows that the objective metrics increase with σs and peak at σs = 3; beyond 3, the similarity between the fused image and the source images gradually decreases, the adverse effects grow with σs, and the fusion performance declines. Therefore, at σs = 3 the fused image is most similar to the source images, its texture details are most pronounced, and the fusion performance is best in both the subjective analysis and the objective metrics, so the optimal value of σs is set to 3. In addition, extensive experiments show that the results are not affected by the value of the parameter σr in equation (16), so σr is set to 0.05.
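The parameter selection described here reduces to a one-dimensional sweep with the other parameters held fixed. A minimal version of that loop, reusing the hypothetical fuse and similarity_report helpers sketched earlier and scoring with SSIM only, might look as follows; the patent weighs several metrics plus visual inspection, so this is a simplification.

# Hedged sketch of the sigma_s sweep over the test image pairs. Images are
# assumed normalized to [0, 1], hence data_range=1.0.
def sweep_sigma_s(image_pairs, candidates=(1, 2, 3, 4, 5, 6)):
    scores = {}
    for sigma_s in candidates:
        vals = []
        for a, b in image_pairs:
            fused = fuse(a, b, sigma_s=sigma_s)
            ssim_a = similarity_report(a, fused, data_range=1.0)["SSIM"]
            ssim_b = similarity_report(b, fused, data_range=1.0)["SSIM"]
            vals.append(0.5 * (ssim_a + ssim_b))
        scores[sigma_s] = sum(vals) / len(vals)   # average score for this sigma_s
    best = max(scores, key=scores.get)
    return best, scores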

Table 1. Objective evaluation of MR-T1/MR-T2 fusion results under different values of σs

Note: bold indicates the optimal values.

2) Window size S:

In the detail-enhancement operator for the energy-channel high-frequency subbands, the window size S, which is the window size of the local image entropy, determines how the source image is partitioned into blocks, so the information of the source image is characterized by computing the entropy of each block. Setting a suitable window size S therefore plays a crucial role in how well the source-image information is extracted.
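Local image entropy over an S×S window can be computed with scikit-image's rank entropy filter, as sketched below. Rescaling the subband to 8 bits is an assumption required by the rank filter, not something taken from the patent.

# Sketch: local entropy over an S x S window (S = 3 here), as used by the
# high-frequency detail-enhancement measure.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import square
from skimage.util import img_as_ubyte

def local_entropy(subband, window_size=3):
    lo, hi = subband.min(), subband.max()
    scaled = (subband - lo) / (hi - lo + 1e-12)        # rescale to [0, 1]
    return entropy(img_as_ubyte(scaled), square(window_size))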

With the other parameters fixed, the window size S is varied from 1 to 6; the experimental results are listed in Table 2. Table 2 shows that the objective metrics rise as S increases, and PSNR and MI reach their best values at S = 3, where the fusion performance is best and the restoration of source-image information is highest; once S exceeds 3, however, the similarity index between the fused image and the source images gradually decreases. Meanwhile, SSIM also increases with S, reaching its best value at S = 2 and holding it, and begins to drop once S exceeds 5, after which the fusion performance and image quality deteriorate. In summary, at S = 3 the pixel gray-level and structural similarity between the fused image and the source images is highest and the fusion performance is best.

Similarly, presenting the data of Table 2 as a line chart, as in Fig. 5, shows that when the window size S is 3 all the objective metrics peak compared with the other values. The fused image is then at its best in terms of restoration of and similarity to the source images as well as detail-texture expression, which helps medical practitioners capture and analyze lesion information and improves the reliability and authenticity of medical diagnosis; the optimal value of S is therefore set to 3.

Table 2. Objective evaluation of MR-T1/MR-T2 fusion results under different values of S

Note: bold indicates the optimal values.

The units and algorithm steps of the examples described in the embodiments disclosed for the dual-channel multimodal image fusion method provided by the present invention can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are executed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.

The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (2)

1. A dual-channel multimodal image fusion method, characterized in that the method comprises:
setting the input to the source images A and B;
setting the output to the fused image F;
the method specifically comprising:
Step 1. reading the source images A and B and applying the JBF decomposition to produce the structure channels {SA, SB} and the energy channels {EA, EB};
Step 1 further comprises: performing global blurring of the input image as in equation (11), where the smoothing result is obtained under a given standard deviation and the Gaussian filter of that variance at (x, y) is defined by equation (12);
generating the globally blurred image with a weighted-average Gaussian filter as in equation (13), where I denotes the input image, Ω denotes the set of pixels adjacent to (x, y), σ denotes the variance of the pixel values, and the normalization operation is given by equation (14);
using the JBF to recover the large-scale structure of the energy channel as in equation (15), where the intensity range function is based on the intensity difference between pixels, the spatial distance function is based on the pixel distance, the normalization operation is given by equations (16)–(18), and σs and σr respectively control the spatial weight and the range weight of the bilateral filter;
obtaining the energy channel E of the input image and obtaining the structure channel S through equation (19);
thereby obtaining the structure channels SA, SB and the energy channels EA, EB of the input source images A and B;
Step 2. fusing the structure channels {SA, SB} with the local gradient energy operator of equation (20) to generate the structure-channel fused image SF;
Step 2 further comprises: constructing the local gradient energy operator of equation (20), where the structure tensor saliency image is produced by STS, the local energy of the image at (x, y) is given by equation (21), and the neighborhood size at (x, y) is set to 4;
obtaining the decision matrix of equation (22) by comparing the local gradient energy between the source images;
updating the decision matrix of the structure-channel fusion as in equation (23), where the local region centered at (x, y) has size 21×21;
obtaining the fused structure-channel image according to the rule of equation (24), where SA and SB are the structure channels of the source images A and B;
Step 3. fusing the energy channels {EA, EB} to generate the energy-channel fused image EF;
Step 3.1. decomposing the energy channels {EA, EB} with the NSCT to produce the energy-channel high-frequency subbands and the energy-channel low-frequency subbands;
Step 3.2. fusing the layer 1–4 high-frequency subbands with the high-frequency comprehensive measurement operator HM rule of equation (34), based on LE, SF and ED;
Step 3.2 further comprises: configuring the energy-channel high-frequency subband fusion rule, which includes characterizing the detail information of the energy-channel high-frequency subbands, the local image entropy centered at (x, y) being defined by equation (25) over a window of the given size;
computing the gray-level change rate at (x, y) from the spatial frequency to reflect the detail features, as in equation (26), where M and N denote the length and width of the source image and CF and RF denote the first-order differences at (x, y) in the x and y directions, given by equations (27) and (28);
computing the gradient magnitude of the edge pixels at (x, y) from the edge density, as defined by equation (29), where the two terms are the results of convolution with the Sobel operators in the x and y directions, given by equations (30) and (31), p denotes each pixel, and the Sobel operators in the x and y directions are given by equations (32) and (33);
fusing the energy-channel high-frequency subbands through the high-frequency comprehensive measurement operator HM of equation (34), where the parameters α1, β1 and γ1 respectively adjust the weights of the local image entropy, the spatial frequency and the edge density in HM;
obtaining the decision matrix of the energy-channel high-frequency subband fusion, equation (35), by comparing the HM values of the energy-channel high-frequency subbands, and obtaining the fused image of the layer 1–4 high-frequency subbands according to the rule of equation (36), whose terms denote the layer 1–4 energy-channel high-frequency subbands of the source images A and B;
Step 3.3. fusing the layer-5 high-frequency subband with the PCNN rule of equation (37);
in the method the layer-5 high-frequency subband is fused with the PCNN, and the fused energy-channel high-frequency subband is obtained by counting the PCNN firing times, equation (37), whose terms denote the layer-5 energy-channel high-frequency subbands of the source images A and B and their PCNN firing times, given by equation (38), together with the PCNN output model;
in the method, to obtain the PCNN output model, the feeding input and the linking input of the neuron at (x, y) are defined by equations (39) and (40), where the parameter denotes the amplitude of the linking input and the previous excitation state of the eight-neighborhood neurons is given by equation (41);
next, the decay of the previous value of the internal activity term is computed with the exponential decay coefficient and nonlinearly modulated by the linking strength, giving the current internal activity term defined by equation (42); the current dynamic threshold is iteratively updated as in equation (43), whose two parameters are the exponential decay coefficient and the amplitude of the threshold;
the current internal activity term is compared with the dynamic threshold of the n-th iteration to determine the state of the PCNN output model, as defined by equation (44), and the fusion result of the layer-5 high-frequency subband is obtained from equations (37) and (44);
Step 3.4. fusing the low-frequency subbands with the low-frequency comprehensive measurement operator LM rule of equation (56), based on PC, LSCM and NE2;
the method reflects the local contrast change of the image by computing the neighborhood sharpness change at (x, y), defined by equation (53) with a window size of 3, the SCM being given by equation (54) over a local region of the given size; the local energy is configured by equation (55), likewise with a window size of 3;
the energy-channel low-frequency subbands are fused through the low-frequency comprehensive measurement operator LM of equation (56), where the parameters α2, β2 and γ2 respectively adjust the weights of the phase congruency value, the local sharpness change and the local energy in LM;
the method further comprises configuring the energy-channel low-frequency subband fusion rule, specifically including: defining the PC value at (x, y) by equation (46), where the terms denote the direction angle at (x, y), the amplitude of the n-th Fourier component at angle θ, and a parameter used to remove the phase component of the image signal, the remaining quantities being given by equations (47)–(49); the convolution results of the image pixel at (x, y) are given by equations (50)–(52), whose terms denote the pixel value of the energy-channel low-frequency subband at (x, y) and the two-dimensional Log-Gabor even- and odd-symmetric filter bank of the given scale;
the fused energy-channel low-frequency subband is obtained according to the rule of equation (57), whose terms denote the energy-channel low-frequency subbands of the source images; the decision matrix of the energy-channel low-frequency subband fusion, ELmap, is defined by equation (58), Ri is defined by equation (59), N denotes the number of source images, and Ω3 denotes a sliding window centered at (x, y) whose size is set to 7;
Step 3.5. obtaining the fused energy-channel high-frequency subbands according to the rule of equation (45);
Step 3.6. applying the inverse NSCT to the fused high- and low-frequency subbands to generate the energy-channel fused image EF, the high-frequency and low-frequency subbands being linearly reconstructed with the dual-coordinate-system operator to realize the inverse NSCT and obtain the energy-channel fused image EF;
Step 4. for the fused structure-channel image SF and the fused energy-channel image EF, applying the JBF inverse transform of equation (60), F(x, y) = SF(x, y) + EF(x, y), to generate the final fused image F.

2. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the program, the steps of the dual-channel multimodal image fusion method according to claim 1 are implemented.
CN202310123425.0A 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment Active CN116342444B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310123425.0A CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310123425.0A CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116342444A CN116342444A (en) 2023-06-27
CN116342444B true CN116342444B (en) 2024-07-26

Family

ID=86878173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310123425.0A Active CN116342444B (en) 2023-02-14 2023-02-14 Dual-channel multi-mode image fusion method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116342444B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883803B (en) * 2023-09-07 2023-12-05 南京诺源医疗器械有限公司 Image fusion method and system for glioma edge acquisition
CN118097581B (en) * 2024-04-28 2024-06-25 山东领军智能交通科技有限公司 Road edge recognition control method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494093A (en) * 2022-01-17 2022-05-13 广东工业大学 Multi-modal image fusion method
CN115222725A (en) * 2022-08-05 2022-10-21 兰州交通大学 NSCT domain-based PCRGF and two-channel PCNN medical image fusion method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722877B (en) * 2012-06-07 2014-09-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN107403416B (en) * 2017-07-26 2020-07-28 温州大学 NSCT-based medical ultrasonic image denoising method with improved filtering and threshold function
CN113496473A (en) * 2020-04-07 2021-10-12 无锡盛高计算机科技有限公司 Image fusion method based on dynamic target detection
CN113436128B (en) * 2021-07-23 2022-12-06 山东财经大学 Dual-discriminator multi-mode MR image fusion method, system and terminal
CN115018728A (en) * 2022-06-15 2022-09-06 济南大学 Image fusion method and system based on multi-scale transformation and convolution sparse representation
CN115100172A (en) * 2022-07-11 2022-09-23 西安邮电大学 A fusion method of multimodal medical images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494093A (en) * 2022-01-17 2022-05-13 广东工业大学 Multi-modal image fusion method
CN115222725A (en) * 2022-08-05 2022-10-21 兰州交通大学 NSCT domain-based PCRGF and two-channel PCNN medical image fusion method

Also Published As

Publication number Publication date
CN116342444A (en) 2023-06-27


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant