CN115797244A - Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission - Google Patents
Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission Download PDFInfo
- Publication number
- CN115797244A CN115797244A CN202310069737.8A CN202310069737A CN115797244A CN 115797244 A CN115797244 A CN 115797244A CN 202310069737 A CN202310069737 A CN 202310069737A CN 115797244 A CN115797244 A CN 115797244A
- Authority
- CN
- China
- Prior art keywords
- image
- scale
- layer sub
- base layer
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000007500 overflow downdraw method Methods 0.000 title claims abstract description 31
- 230000005540 biological transmission Effects 0.000 title claims description 3
- 230000004927 fusion Effects 0.000 claims abstract description 78
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 63
- 238000012546 transfer Methods 0.000 claims abstract description 37
- 230000009466 transformation Effects 0.000 claims abstract description 28
- 238000007781 pre-processing Methods 0.000 claims abstract description 11
- 238000004364 calculation method Methods 0.000 claims description 20
- 238000000034 method Methods 0.000 claims description 18
- 238000005259 measurement Methods 0.000 claims description 16
- 230000003044 adaptive effect Effects 0.000 claims description 10
- 238000013519 translation Methods 0.000 claims description 9
- 238000001914 filtration Methods 0.000 claims description 5
- 230000014509 gene expression Effects 0.000 claims description 5
- NAWXUBYGYWOOIX-SFHVURJKSA-N (2s)-2-[[4-[2-(2,4-diaminoquinazolin-6-yl)ethyl]benzoyl]amino]-4-methylidenepentanedioic acid Chemical compound C1=CC2=NC(N)=NC(N)=C2C=C1CCC1=CC=C(C(=O)N[C@@H](CC(=C)C(O)=O)C(O)=O)C=C1 NAWXUBYGYWOOIX-SFHVURJKSA-N 0.000 claims description 4
- 238000010606 normalization Methods 0.000 claims description 3
- 150000001875 compounds Chemical class 0.000 claims 6
- 238000000844 transformation Methods 0.000 claims 1
- 230000000694 effects Effects 0.000 abstract description 5
- 230000005855 radiation Effects 0.000 abstract description 5
- 230000006870 function Effects 0.000 description 8
- 238000013461 design Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012421 spiking Methods 0.000 description 1
- 239000013589 supplement Substances 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Images
Landscapes
- Image Processing (AREA)
Abstract
Description
技术领域technical field
本发明涉及图像融合技术领域,尤其涉及一种基于多尺度方向共现滤波器与强度传递的图像融合方法。The invention relates to the technical field of image fusion, in particular to an image fusion method based on multi-scale direction co-occurrence filters and intensity transfer.
背景技术Background technique
近年来各领域对多传感器图像融合技术,尤其是红外与可见光图像融合的需求量激增,如侦察、辅助驾驶以及地质监测等领域。图像融合旨在组合多模态传感器的多组源图像成单一图像,达到全面展示图像信息、加强场景理解的目的。在红外与可见光图像融合技术中,红外传感器能在弱光或遮挡的环境中有效的反映出热辐射目标信息,这是可见光传感器所不具备的;同时,可见光传感器的高分辨率也是红外传感器的有效补充。红外与可见光图像融合方法能有效的利用两者的互补信息,并能充分地去除冗余信息。根据相关学者的研究,目前的红外与可见光图像融合方法可分为传统的方法和基于深度学习的方法。其中,基于耦合脉冲神经网络的融合算法是深度学习类的代表,相关学者进行了迁移应用,取得了不错的效果,但因为其对于训练原始数据的大量需求以及难以解释的网络结构等特点,目前只适用于特定的应用。而传统算法能解决上述问题。在传统算法中,以基于小波族以及多尺度几何分析为代表的多尺度变换的融合方法效果最为出众,是目前普遍采取的融合方法,原因在于其融合机制符合人眼视觉,能充分表现图像的内在特点。但是这类传统算法对于高频细节信息不加以区分,主要是在区分大尺度轮廓和小梯度变化内部区域内纹理信息方面仍然很困难。例如文献《Aishwarya, N., and C. BennilaThangammal. "Visibleand infrared image fusion using DTCWT and adaptive combined clustereddictionary." Infrared Physics & Technology 93 (2018): 300-309》公开了采用改进的双树复小波变换(DTCWT)对红外与可见光图像进行融合,DTCWT是多尺度几何分析的典型代表算法,通过改进并利用DTCWT对源初始图像进行分解,并对分解出的高低频子带采用针对性的融合策略,解决了融合结果目标不突出的问题。但是该方法由于是将图像的细节信息全部分解到高频子带,加重了后续方向滤波器的方向表示与传递的负担,所以不仅最终的融合结果细节信息损失、边缘模糊,而且整体融合速度也较慢。又如公开号为CN114549379A的中国专利《非下采样剪切波变换域下的红外与可见光图像融合方法》通过设计提取残留在低频子带中的细节信息的低频融合策略,以及引导滤波优化加权的高频策略,解决了部分伪影问题和细节传递问题,但是仍因为采用的非下采样剪切波变换未能有效区分不同类型的细节信息而导致出现细节信息损失、边缘模糊的情况。通过使用基于共现滤波器的多尺度融合方法有效解决融合结果细节信息损失、边缘模糊、融合速度慢的问题,但是该方法缺乏方向性描述能力且不能调节最终融合图像的对比度,导致整体融合效果不佳,同时细节方向特征丢失和难以在复杂场景中突出目标的问题并没有很好的改善,仍需要进一步解决。In recent years, the demand for multi-sensor image fusion technology in various fields, especially the fusion of infrared and visible light images, has surged, such as reconnaissance, assisted driving, and geological monitoring. Image fusion aims to combine multiple sets of source images from multi-modal sensors into a single image to achieve the purpose of comprehensively displaying image information and enhancing scene understanding. In the infrared and visible light image fusion technology, the infrared sensor can effectively reflect the thermal radiation target information in the low light or occluded environment, which is not available in the visible light sensor; at the same time, the high resolution of the visible light sensor is also the advantage of the infrared sensor. effective supplement. The infrared and visible light image fusion method can effectively use the complementary information of the two, and can fully remove redundant information. According to the research of relevant scholars, the current infrared and visible light image fusion methods can be divided into traditional methods and methods based on deep learning. Among them, the fusion algorithm based on coupled spiking neural network is a representative of deep learning. Relevant scholars have carried out migration applications and achieved good results. Applies only to certain applications. The traditional algorithm can solve the above problems. Among the traditional algorithms, the multi-scale transformation fusion method based on wavelet family and multi-scale geometric analysis is the most outstanding, and it is the fusion method commonly adopted at present, because its fusion mechanism conforms to human vision and can fully express the image quality. Intrinsic features. However, such traditional algorithms do not distinguish high-frequency detail information, mainly because it is still difficult to distinguish large-scale contours and texture information in internal regions with small gradient changes. For example, the document "Aishwarya, N., and C. BennilaThangammal. "Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary." Infrared Physics & Technology 93 (2018): 300-309" discloses the use of improved dual-tree complex wavelet transform ( DTCWT) fuses infrared and visible light images. DTCWT is a typical representative algorithm for multi-scale geometric analysis. By improving and using DTCWT to decompose the original source image, and adopting a targeted fusion strategy for the decomposed high and low frequency subbands, the solution It solves the problem that the target of the fusion result is not outstanding. However, because this method decomposes all the detailed information of the image into high-frequency subbands, it increases the burden of direction representation and transmission of the subsequent direction filter, so not only the final fusion result details are lost and the edges are blurred, but the overall fusion speed is also slow. slower. Another example is the Chinese patent "Infrared and Visible Light Image Fusion Method under Non-subsampled Shearlet Transform Domain" with the publication number CN114549379A, by designing a low-frequency fusion strategy to extract the detail information remaining in the low-frequency sub-band, and guiding the filter to optimize weighting. The high-frequency strategy solves part of the artifact problem and the detail transfer problem, but still causes loss of detail information and blurred edges because the non-subsampling shearlet transform used cannot effectively distinguish different types of detail information. By using a multi-scale fusion method based on co-occurrence filters, the problems of loss of detail information, blurred edges, and slow fusion speed of fusion results can be effectively solved. However, this method lacks directional description ability and cannot adjust the contrast of the final fusion image, resulting in an overall fusion effect. Poor, at the same time, the problems of loss of detail direction features and difficulty in highlighting targets in complex scenes have not been improved very well, and further solutions are still needed.
发明内容Contents of the invention
为了解决现有基于共现滤波器的多尺度融合方法存在的缺乏方向性描述能力、无法调节最终融合图像的对比度、整体融合效果不佳、细节方向特征丢失、融合后显著目标不突出的问题,本发明提供一种基于多尺度方向共现滤波器与强度传递的图像融合方法。In order to solve the existing multi-scale fusion methods based on co-occurrence filters, the lack of directional description ability, the inability to adjust the contrast of the final fused image, the poor overall fusion effect, the loss of detailed directional features, and the inconspicuous objects after fusion. The invention provides an image fusion method based on multi-scale direction co-occurrence filter and intensity transfer.
本发明为解决技术问题所采用的技术方案如下:The technical scheme that the present invention adopts for solving technical problems is as follows:
本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法,包括以下步骤:The image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention comprises the following steps:
步骤S1、源红外与可见光图像数据获取及预处理;Step S1, source infrared and visible light image data acquisition and preprocessing;
步骤S2、采用共现滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解;迭代使用共现滤波器对初步分解后的图像进行多尺度分解,得到多尺度红外细节层子图像、多尺度红外基础层子图像、多尺度可见光细节层子图像和多尺度可见光基础层子图像;Step S2, use the co-occurrence filter to filter the preprocessed source infrared and visible light images to achieve preliminary decomposition; iteratively use the co-occurrence filter to perform multi-scale decomposition on the pre-decomposed image to obtain the multi-scale infrared detail layer image, multi-scale infrared base layer sub-image, multi-scale visible light detail layer sub-image, and multi-scale visible light base layer sub-image;
步骤S3、采用离散紧支撑剪切波变换对多尺度红外基础层子图像和多尺度可见光基础层子图像进行多方向分解,得到多尺度多方向红外基础层子图像和多尺度多方向可见光基础层子图像;Step S3, using discrete compact support shearlet transform to decompose the multi-scale infrared base layer sub-image and the multi-scale visible light base layer sub-image in multiple directions to obtain the multi-scale and multi-directional infrared base layer sub-image and the multi-scale and multi-directional visible light base layer sub-image sub image;
步骤S4、采用基于相位一致和强度测量的融合策略构建融合权重策略进行细节层子图像融合,得到融合后的多尺度细节层子图像;Step S4, using a fusion strategy based on phase consistency and intensity measurement to construct a fusion weight strategy to fuse the sub-images of the detail layer to obtain a fused multi-scale sub-image of the detail layer;
步骤S5、设计自适应强度传递函数作为融合准则进行基础层子图像融合,得到融合后的不同方向的多尺度多方向基础层子图像;Step S5, designing an adaptive intensity transfer function as a fusion criterion to fuse the base layer sub-images to obtain fused multi-scale and multi-directional base layer sub-images in different directions;
步骤S6、对融合后的多尺度细节层子图像和不同方向的多尺度多方向基础层子图像进行重构得到最终的融合图像。Step S6: Reconstruct the fused multi-scale detail layer sub-image and the multi-scale and multi-directional base layer sub-image in different directions to obtain a final fused image.
进一步的,步骤S1的具体操作步骤如下:Further, the specific operation steps of step S1 are as follows:
S1.1分别从红外相机和可见光相机中获取源红外图像和源可见光图像;S1.1 Obtain the source infrared image and the source visible light image from the infrared camera and the visible light camera respectively;
S1.2对源红外图像和源可见光图像分别进行图像去噪操作和图像增强操作。S1.2 Perform image denoising operation and image enhancement operation on the source infrared image and the source visible light image respectively.
进一步的,所述图像去噪操作采用基于滤波的去噪算法;所述图像增强操作采用基于场景拟合的图像增强算法。Further, the image denoising operation adopts a filter-based denoising algorithm; the image enhancement operation adopts an image enhancement algorithm based on scene fitting.
进一步的,步骤S2的具体操作步骤如下:Further, the specific operation steps of step S2 are as follows:
S2.1初步分解;S2.1 Preliminary decomposition;
利用预处理后的源红外与可见光图像中的共现信息分配像素权重,分别采用共现滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解,得到0级红外基础层子图像和0级可见光基础层子图像;将预处理后的源红外图像与0级红外基础层子图像相减得到0级红外细节层子图像,同时将预处理后的源可见光图像与0级可见光基础层子图像相减得到0级可见光细节层子图像;Use the co-occurrence information in the preprocessed source infrared and visible light images to assign pixel weights, and use co-occurrence filters to filter the preprocessed source infrared and visible light images respectively to achieve preliminary decomposition and obtain the 0-level infrared basic layer image and 0-level visible light base layer sub-image; subtract the pre-processed source infrared image from the 0-level infrared base layer sub-image to obtain the 0-level infrared detail layer sub-image, and simultaneously combine the pre-processed source visible light image with the 0-level visible light The base layer sub-image is subtracted to obtain the 0-level visible light detail layer sub-image;
S2.2多尺度分解;S2.2 Multi-scale decomposition;
对0级红外基础层子图像和0级可见光基础层子图像分别迭代使用共现滤波器滤 波级相减操作算子进行多尺度分解,得到多尺度红外基础层子图像 、多尺度可见光基 础层子图像 、多尺度红外细节层子图像 和多尺度可见光细节层子图像 ,i= 1…k,k表示分解级数。The 0-level infrared base layer sub-image and the 0-level visible light base layer sub-image are respectively iteratively decomposed using the co-occurrence filter filter level subtraction operator to perform multi-scale decomposition to obtain the multi-scale infrared base layer sub-image , multi-scale visible light base layer sub-image , multi-scale infrared detail layer sub-image and multi-scale visible detail layer sub-images , i= 1...k, k represents the decomposition series.
进一步的,步骤S3的具体操作步骤如下:Further, the specific operation steps of step S3 are as follows:
分别利用构造好的水平离散剪切波变换和垂直离散剪切波变换对多尺度红外基 础层子图像和多尺度可见光基础层子图像 进行水平和垂直各K个方向的分解,得 到水平方向的K个多尺度多方向红外基础层子图像 、垂直方向的K个多尺度 多方向红外基础层子图像 、水平方向的K个多尺度多方向可见光基础层子 图像 、垂直方向的K个多尺度多方向可见光基础层子图像 、K个多尺度红外细节层子图像 和K个多尺度可见光细节层子图像 。 Using the constructed horizontal discrete shearlet transform and vertical discrete shearlet transform respectively, the multi-scale infrared base layer sub-image and multiscale visible base layer subimages Carry out decomposition in each K direction horizontally and vertically, and obtain K multi-scale and multi-directional infrared base layer sub-images in the horizontal direction , K multi-scale and multi-directional infrared base layer sub-images in the vertical direction , K multi-scale and multi-directional visible light base layer sub-images in the horizontal direction , K multi-scale and multi-directional visible light base layer sub-images in the vertical direction , K multi-scale infrared detail layer sub-images and K multi-scale visible light detail layer sub-images .
进一步的,所述水平离散剪切波变换操作的计算公式如式(6)所示,所述垂直离散剪切波变换操作的计算公式如式(7)所示;Further, the calculation formula of the horizontal discrete shearlet transform operation is shown in formula (6), and the calculation formula of the vertical discrete shearlet transform operation is shown in formula (7);
(6) (6)
, (7) , (7)
式中,k表示分解级数;n1和n2表示两个平移因子,,表示整数域的 平方;s表示方向因子,取s=1,0,-1,s=1时表示45°方向,s=0时表示90°方向,s=-1时表示 180°方向;表示向下取整操作;和分别表示水平离散剪切波变换和垂直离散剪 切波变换;表示第k级分解时红外基础层子图像采用平移变换,表示第k级分解时红外基础层子图像经水平离散剪切波变换的等效 变换, 表示第k级分解时可见光基础层子图像采用平移变换,表示第k级分 解时可见光基础层子图像经水平离散剪切波变换的等效变换,表示 第k级分解时可见光基础层子图像经垂直离散剪切波变换的等效变换。 In the formula, k represents the decomposition level; n 1 and n 2 represent two translation factors, , Represents the square of the integer field; s represents the direction factor, take s=1,0,-1, s=1 represents the 45° direction, s=0 represents the 90° direction, and s=-1 represents the 180° direction; Indicates the rounding down operation; and represent the horizontal discrete shearlet transform and the vertical discrete shearlet transform respectively; Indicates that the infrared base layer sub-image adopts translation transformation when the k-th level decomposition is performed, Indicates the equivalent transformation of the infrared base layer sub-image through the horizontal discrete shearlet transform when decomposing at the kth level, Indicates that the sub-image of the visible light base layer adopts translation transformation when decomposing at the kth level, Represents the equivalent transformation of the sub-image of the visible light base layer through the horizontal discrete shearlet transform when decomposing at the kth level, Represents the equivalent transformation of the sub-image of the visible light base layer after the vertical discrete shearlet transform at the k-th level of decomposition.
进一步的,步骤S4的具体操作步骤如下:Further, the specific operation steps of step S4 are as follows:
S4.1对步骤S2中得到的K个多尺度红外细节层子图像 和K个多尺度可 见光细节层子图像 采用相位一致操作得到相位一致测度算子;所述相位一致 测度算子的计算公式如下所示; S4.1 For the K multi-scale infrared detail layer sub-images obtained in step S2 and K multi-scale visible light detail layer sub-images The phase consistency measure operator is obtained by adopting the phase consistency operation; the phase consistency measure operator The calculation formula of is as follows;
(8) (8)
(9) (9)
式中,θk表示在分解级数k下的方向角,为第n项方向角为θk的傅里叶级 数的幅值,[en,θk,on,θk]为图像与log-Gabor滤波器在位置(x,y)处的卷积结果;∑k表示对变 量k作求和操作,∑n表示对变量n作求和操作;为一个小值正常数; In the formula, θ k represents the direction angle under the decomposition level k, is the magnitude of the Fourier series whose nth direction angle is θ k , [e n, θk , o n, θk ] is the convolution result of the image and the log-Gabor filter at position (x, y) ; ∑ k represents a summation operation on variable k, and ∑ n represents a summation operation on variable n; is a small-valued normal number;
S4.2通过开窗法设计强度测量策略,如式(10)所示;S4.2 Design the strength measurement strategy through the window method, as shown in formula (10);
(10) (10)
式中, (·)表示在分解级数l下的强度测量算子,Il(x,y)表示在分解级数l下位置(x,y)处的多尺度细节层子图像像素值,Il(x0,y0)表示在分解级数l下位置(x0,y0)处的多尺度细节层子图像像素值,Ω表示以位置为中心的局部开窗区域,表示开窗区域内的中心点像素;In the formula, ( ) represents the intensity measurement operator under the decomposition level l, I l (x, y) represents the multi-scale detail layer sub-image pixel value at the position (x, y) under the decomposition level l, I l ( x 0 , y 0 ) represents the pixel value of the multi-scale detail layer sub-image at position (x 0 , y 0 ) under the decomposition level l, and Ω represents as the central partial window area, Indicates the center point pixel in the window area;
S4.3结合相位一致法和强度测量策略构建融合权重策略,实现多尺度细节层子图像融合,得到一系列融合后的多尺度细节层子图像;所述融合权重策略如式(11)所示;S4.3 Combining the phase consistency method and the intensity measurement strategy to construct a fusion weight strategy, realize the fusion of multi-scale detail layer sub-images, and obtain a series of fused multi-scale detail layer sub-images; the fusion weight strategy is shown in formula (11) ;
(11) (11)
式中,表示融合后的多尺度细节层子图像,DIR(x,y)表示红外细节层子图像,DVI(x,y)表示可见光细节层子图像, (·)表示在分解级数l下的相位一致测度算子。In the formula, Represents the fused multi-scale detail layer sub-image, D IR (x,y) represents the infrared detail layer sub-image, D VI (x,y) represents the visible light detail layer sub-image, (·) represents the phase consistency measure operator under the decomposition level l.
进一步的,步骤S5的具体操作步骤如下:Further, the specific operation steps of step S5 are as follows:
S5.1对步骤S3中得到的水平方向的K个多尺度多方向红外基础层子图像、垂直方向的K个多尺度多方向红外基础层子图像,逐像素计算图像的局部能量;S5.1 For the K multi-scale and multi-directional infrared base layer sub-images in the horizontal direction and the K multi-scale and multi-directional infrared base layer sub-images in the vertical direction obtained in step S3, calculate the local energy of the image pixel by pixel;
(12) (12)
式中,E(x,y)表示图像的局部能量,Wle(i,j)表示大小为i×j的局部能量窗口,1≤i≤3,1≤j≤3,BIR(x+i,y+j)表示对图像进行逐像素遍历操作;In the formula, E(x,y) represents the local energy of the image, W le (i,j) represents the local energy window of size i×j, 1≤i≤3, 1≤j≤3, B IR (x+ i, y+j) means to perform a pixel-by-pixel traversal operation on the image;
S5.2对图像的局部能量进行归一化操作,得到基础层子图像特征分布表示算子P;S5.2 Perform a normalization operation on the local energy of the image to obtain the sub-image feature distribution representation operator P of the base layer;
(13) (13)
式中,,Emax(x,y)表示根据局部能量窗口计算出的局部能量最大值,E(x+i,y+j)表示对局部能量窗口中的局部能量值进行遍历操作;In the formula, , E max (x, y) represents the local energy maximum value calculated according to the local energy window, E(x+i, y+j) represents the traversal operation on the local energy values in the local energy window;
S5.3对基础层子图像特征分布表示算子P引入比例参数,构建自适应强度传递函数分别对水平方向和垂直方向的多尺度多方向基础层子图像进行融合,得到融合后的水平方向的多尺度多方向红外基础层子图像、水平方向的多尺度多方向可见光基础层子图像、垂直方向的多尺度多方向红外基础层子图像、垂直方向的多尺度多方向可见光基础层子图像;融合操作的计算公式如式(14)所示;S5.3 Introduce a scale parameter to the characteristic distribution operator P of the base layer sub-image, and construct an adaptive intensity transfer function to fuse the multi-scale and multi-directional base layer sub-images in the horizontal direction and vertical direction respectively, and obtain the fused horizontal direction Multi-scale and multi-directional infrared base layer sub-image, horizontal multi-scale and multi-directional visible light base layer sub-image, vertical multi-scale and multi-directional infrared base layer sub-image, vertical multi-scale and multi-directional visible light base layer sub-image; fusion The calculation formula of operation is shown in formula (14);
(14) (14)
式中,表示多尺度多方向基础层子图像融合权重;γ表示引入的比例 参数;arctan(·)表示反正切操作符;P表示基础层子图像特征分布表示算子; In the formula, Represents the multi-scale and multi-directional base layer sub-image fusion weight; γ represents the introduced scale parameter; arctan( ) represents the arc tangent operator; P represents the base layer sub-image feature distribution representation operator;
S5.4最终融合后的不同方向的多尺度多方向基础层子图像的表达式如下:S5.4 The expressions of the multi-scale and multi-directional base layer sub-images in different directions after final fusion are as follows:
(15) (15)
式中,表示融合后的不同方向的多尺度多方向基础层子图像,当a=0时, 表示融合后的水平方向的多尺度多方向基础层子图像,表示融合后的水平方向的多 尺度多方向红外基础层子图像,表示融合后的水平方向的多尺度多方向可见光基础 层子图像;当a=1时,表示融合后的垂直方向的多尺度多方向基础层子图像,表示 融合后的垂直方向的多尺度多方向红外基础层子图像,表示融合后的垂直方向的多 尺度多方向可见光基础层子图像;K表示方向因子。 In the formula, Represents the fused multi-scale and multi-directional base layer sub-images in different directions. When a=0, Represents the fused horizontal multi-scale and multi-orientation base layer sub-image, represents the fused horizontal multi-scale and multi-directional infrared base layer sub-image, Represents the fused horizontal multi-scale and multi-directional visible light base layer sub-image; when a=1, represents the fused multi-scale and multi-orientation base layer sub-image in the vertical direction, represents the fused vertical multi-scale and multi-orientation infrared base layer sub-image, Represents the fused multi-scale and multi-directional visible light base layer sub-image in the vertical direction; K represents the orientation factor.
进一步的,步骤S6的具体操作步骤如下:Further, the specific operation steps of step S6 are as follows:
利用离散紧支撑剪切波变换反操作对融合后的不同方向的多尺度多方向基础层 子图像重构得到融合基础层图像,将融合基础层图像与融合后的多尺度细节层子图 像进行相加操作得到最终的融合图像F。 The fused base layer image is obtained by reconstructing the fused multi-scale and multi-directional base layer sub-images in different directions by using the discrete compact support shearlet transform inverse operation , which will fuse the base layer image The final fused image F is obtained by adding to the fused multi-scale detail layer sub-image.
进一步的,所述融合基础层图像的计算公式如式(16)所示;所述融合图像F的 计算公式如式(17)所示; Further, the fused base layer image The calculation formula of is as shown in formula (16); The calculation formula of described fusion image F is as shown in formula (17);
(16) (16)
(17) (17)
式中,表示离散紧支撑剪切波变换反操作符,表示融合后的 第i级多尺度细节层子图像;表示融合后的水平方向的多尺度多方向基础层子图像; 表示融合后的垂直方向的多尺度多方向基础层子图像。 In the formula, represents the discrete compactly supported shearlet transform inverse operator, Represents the i-th level multi-scale detail layer sub-image after fusion; Representing the fused horizontal multi-scale and multi-directional base layer sub-image; Represents the fused vertically oriented multi-scale and multi-orientation base layer sub-images.
本发明的有益效果是:The beneficial effects of the present invention are:
1)本发明可有效保留热辐射目标显著信息与更多的有效细节信息,并显著减少边缘模糊的情况;本发明有效平衡了不同尺度细节信息区分、强度调整与热辐射目标显著信息突出这三者之间的最优关系;1) The present invention can effectively retain the salient information of the thermal radiation target and more effective detail information, and significantly reduce the edge blurring; the present invention effectively balances the distinction between different scale detail information, intensity adjustment and the prominent information of the thermal radiation target. the optimal relationship between
2)本发明在分解阶段通过联合共现滤波器与离散紧支撑剪切波变换设计了多尺度方向共现滤波器,充分结合了共现滤波器和方向剪切波变换的优势,有效区分不同尺度的细节信息同时具备方向表示特性,在传递更多有效细节信息的同时减少了边缘模糊;2) In the decomposition stage, the present invention designs a multi-scale directional co-occurrence filter by combining the co-occurrence filter and the discrete compact support shearlet transform, fully combining the advantages of the co-occurrence filter and the directional shearlet transform, and effectively distinguishing between different The detailed information of the scale also has the characteristic of directional representation, which reduces the edge blur while delivering more effective detailed information;
3)本发明通过设计自适应强度传递函数作为基础层的融合策略,可显著提高融合图像的整体对比度与清晰度,实现对红外与可见光图像的有效融合,增强了图像质量,提高了整体融合效果;3) By designing the adaptive intensity transfer function as the fusion strategy of the base layer, the present invention can significantly improve the overall contrast and clarity of the fusion image, realize the effective fusion of infrared and visible light images, enhance the image quality, and improve the overall fusion effect ;
4)本发明的融合结果在细节信息的保留以及整体对比度调整上具有突出的效果,为后续的高级视觉图像处理系统提供高质量的增强融合图像;4) The fusion result of the present invention has outstanding effects on the retention of detail information and overall contrast adjustment, and provides high-quality enhanced fusion images for subsequent advanced visual image processing systems;
5)本发明还能够加快整个高级视觉图像处理系统的处理效率。5) The present invention can also speed up the processing efficiency of the entire advanced visual image processing system.
附图说明Description of drawings
图1为本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法的流程图;Fig. 1 is the flowchart of the image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention;
图2为本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法的具体实施过程示意图;2 is a schematic diagram of the specific implementation process of the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention;
图 3 为利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对RoadScene公开数据集中描述真实城市公路场景的源红外与可见光图像序列对进行融合的结果;Fig. 3 is the result of fusing the source infrared and visible light image sequence pairs describing the real urban road scene in the RoadScene public data set by using the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention;
图4为利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对RoadScene公开数据集中描述真实雨中道路场景的源红外与可见光图像序列对进行融合的结果;Fig. 4 is the result of using the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention to fuse the source infrared and visible light image sequence pairs describing the real road scene in the rain in the RoadScene public data set;
图5为利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对TNO公开数据集中描述真实野外布防场景的源红外与可见光图像序列对进行融合的结果。Fig. 5 is the result of using the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention to fuse the source infrared and visible light image sequence pairs describing the real field deployment scene in the TNO public data set.
具体实施方式Detailed ways
下面将结合附图对本发明作进一步详细说明。The present invention will be described in further detail below in conjunction with the accompanying drawings.
本发明的一种基于多尺度方向共现滤波器与强度传递的图像融合方法,参见图1,其包括以下步骤:源红外与可见光图像数据获取及预处理→采用共现滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解;迭代使用共现滤波器对初步分解后的图像进行多尺度分解,得到多尺度红外细节层子图像、多尺度红外基础层子图像、多尺度可见光细节层子图像和多尺度可见光基础层子图像→采用离散紧支撑剪切波变换对多尺度红外基础层子图像和多尺度可见光基础层子图像进行多方向分解,得到多尺度多方向红外基础层子图像和多尺度多方向可见光基础层子图像→采用基于相位一致和强度测量的融合策略构建融合权重策略进行细节层子图像融合,得到融合后的多尺度细节层子图像→设计自适应强度传递函数作为融合准则进行基础层子图像融合,得到融合后的不同方向的多尺度多方向基础层子图像→对融合后的多尺度细节层子图像和不同方向的多尺度多方向基础层子图像进行重构得到最终的融合图像。An image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention, see Fig. 1, it includes the following steps: source infrared and visible light image data acquisition and preprocessing → using co-occurrence filter for preprocessing The source infrared and visible light images are filtered to achieve preliminary decomposition; the co-occurrence filter is used iteratively to perform multi-scale decomposition on the primary decomposed image, and the multi-scale infrared detail layer sub-image, the multi-scale infrared base layer sub-image, and the multi-scale infrared sub-image are obtained. Visible light detail layer sub-image and multi-scale visible light base layer sub-image → use discrete compact support shearlet transform to decompose multi-scale infrared base layer sub-image and multi-scale visible light base layer sub-image in multiple directions to obtain multi-scale and multi-directional infrared base Layer sub-image and multi-scale and multi-directional visible light base layer sub-image → use a fusion strategy based on phase consistency and intensity measurement to construct a fusion weight strategy for detail layer sub-image fusion, and obtain a fused multi-scale detail layer sub-image → design adaptive intensity The transfer function is used as the fusion criterion to fuse the sub-images of the base layer, and the fused multi-scale and multi-directional base layer sub-images in different directions are obtained → the fused multi-scale detail layer sub-images and the multi-scale and multi-directional base layer sub-images in different directions Perform reconstruction to obtain the final fused image.
本发明的一种基于多尺度方向共现滤波器与强度传递的图像融合方法,具体操作流程如下:An image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention, the specific operation process is as follows:
步骤S1、源红外与可见光图像数据获取及预处理;具体操作步骤如下:Step S1, source infrared and visible light image data acquisition and preprocessing; the specific operation steps are as follows:
S1.1获取源初始图像,具体地,可从红外相机中获取源红外图像,同时从可见光相机中获取源可见光图像;S1.1 Acquire the initial image of the source, specifically, the infrared image of the source can be obtained from the infrared camera, and the visible light image of the source can be obtained from the visible light camera at the same time;
S1.2对上述所获取的源红外图像和源可见光图像分别进行预处理操作;预处理操作包括:图像去噪操作和图像增强操作;进行图像去噪操作时,具体可采用基于滤波的去噪算法进行图像去噪,进行图像增强操作时,具体可采用基于场景拟合的图像增强算法进行图像增强;通过对源初始图像进行图像去噪和图像增强操作,可消除源初始图像的部分噪声,初步提高源初始图像的对比度和清晰度;具体地,可通过式(1)实现对源初始图像的预处理操作,包括初步的图像去噪及图像增强操作;S1.2 Perform preprocessing operations on the source infrared images and source visible light images obtained above; the preprocessing operations include: image denoising operations and image enhancement operations; when performing image denoising operations, filter-based denoising can be used specifically The algorithm performs image denoising, and when performing image enhancement operations, the image enhancement algorithm based on scene fitting can be used for image enhancement; by performing image denoising and image enhancement operations on the source initial image, part of the noise of the source initial image can be eliminated. Preliminarily improve the contrast and clarity of the source original image; specifically, the preprocessing operation on the source original image can be realized through formula (1), including preliminary image denoising and image enhancement operations;
(1) (1)
式(1)中,I表示经过预处理操作后的源红外与可见光图像数据矩阵,T(.)表示预处理操作,I’表示源红外与可见光图像。In formula (1), I represents the source infrared and visible light image data matrix after the preprocessing operation, T(.) represents the preprocessing operation, and I ' represents the source infrared and visible light image.
步骤S2、采用共现滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解;迭代使用共现滤波器对初步分解后的图像进行多尺度分解,得到多尺度红外细节层子图像、多尺度红外基础层子图像、多尺度可见光细节层子图像和多尺度可见光基础层子图像;具体操作步骤如下:Step S2, use the co-occurrence filter to filter the preprocessed source infrared and visible light images to achieve preliminary decomposition; iteratively use the co-occurrence filter to perform multi-scale decomposition on the pre-decomposed image to obtain the multi-scale infrared detail layer Image, multi-scale infrared base layer sub-image, multi-scale visible light detail layer sub-image and multi-scale visible light base layer sub-image; the specific operation steps are as follows:
S2.1初步分解;S2.1 Preliminary decomposition;
利用预处理后的源红外与可见光图像中的共现信息分配像素权重,分别采用共现 滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解,从而得到0级红外 基础层子图像 和0级可见光基础层子图像 ,如式(2)和式(3)所示;将预处理后 的源红外图像 IIR 与0级红外基础层子图像 相减得到0级红外细节层子图像 , 如式(2)所示,同时将预处理后的源可见光图像IVI与0级可见光基础层子图像 相减得 到0级可见光细节层子图像 ,如式(3)所示; Use the co-occurrence information in the preprocessed source infrared and visible light images to assign pixel weights, and use co-occurrence filters to filter the preprocessed source infrared and visible light images respectively to achieve preliminary decomposition, so as to obtain the 0-level infrared base layer sub image and level 0 visible light base layer subimage , as shown in formula (2) and formula (3); the preprocessed source infrared image I IR and the 0-level infrared base layer sub-image Subtraction to obtain 0-level infrared detail layer sub-image , as shown in formula (2), at the same time, the preprocessed source visible light image I VI and the 0-level visible light base layer sub-image Subtraction to obtain 0-level visible light detail layer sub-image , as shown in formula (3);
=CoF(IIR) ,=CoF(IVI) (2) =CoF(I IR ), =CoF(I VI ) (2)
=IIR-=IVI- (3) =I IR - =I VI - (3)
式(2)和式(3)中,CoF(.)表示对图像采用共现滤波器进行滤波操作,表示0级 红外基础层子图像,表示0级可见光基础层子图像,表示0级红外细节层子图像, 表示0级可见光细节层子图像,IIR表示预处理后的源红外图像,IVI表示预处理后的源可见 光图像; In formula (2) and formula (3), CoF(.) means that the co-occurrence filter is used to filter the image, Indicates the 0-level infrared base layer sub-image, Represents the 0-level visible light base layer sub-image, Indicates the level 0 infrared detail layer sub-image, Indicates the 0-level visible light detail layer sub-image, I IR indicates the preprocessed source infrared image, and I VI indicates the preprocessed source visible light image;
S2.2多尺度分解;S2.2 Multi-scale decomposition;
随后对得到的0级红外基础层子图像和0级可见光基础层子图像分别迭代 使用共现滤波器滤波级相减操作算子进行多尺度分解,最终得到多尺度红外基础层子图像、多尺度可见光基础层子图像 、多尺度红外细节层子图像 (i=1…k)和多尺度 可见光细节层子图像 (i=1…k),k表示分解级数; Then the obtained 0-level infrared base layer sub-image and level 0 visible light base layer subimage Respectively iteratively use the co-occurrence filter filter level subtraction operator to perform multi-scale decomposition, and finally obtain the multi-scale infrared base layer sub-image , multi-scale visible light base layer sub-image , multi-scale infrared detail layer sub-image (i=1...k) and multi-scale visible detail layer sub-images (i=1...k), k represents the decomposition series;
以进行k级分解为例,对第i-1级红外基础层子图像 进行共现滤波操作得到 第i级红外基础层子图像 ,同时对第i-1级可见光基础层子图像 进行共现滤波 操作得到第i级可见光基础层子图像 ;然后对第i-1级红外基础层子图像 和第i 级红外基础层子图像 进行相减操作,得到第i级红外细节层子图像 ,同时对第 i-1级可见光基础层子图像 和第i级可见光基础层子图像 进行相减操作,得到 第i级可见光细节层子图像 ;重复上述步骤k次,最终得到多尺度红外基础层子图像 、多尺度可见光基础层子图像 、多尺度红外细节层子图像 (i=1…k)和多尺度 可见光细节层子图像(i=1…k);其中,第i-1级红外基础层子图像经过k级分解后 得到第i级红外基础层子图像和第i级可见光基础层子图像的计算公式如式(4)所 示,多尺度红外细节层子图像的计算公式和多尺度可见光细节层子图像的计算公 式如式(5)所示; Taking k-level decomposition as an example, for the i-1th level infrared base layer sub-image Perform co-occurrence filtering operation to obtain the i-th level infrared base layer sub-image , and at the same time, for the sub-image of the i-1th level visible light base layer Perform co-occurrence filtering operation to obtain the i-th level visible light base layer sub-image ; Then for the i-1th level infrared base layer sub-image and the i-th level infrared base layer sub-image Perform a subtraction operation to obtain the i-th level infrared detail layer sub-image , and at the same time, for the sub-image of the i-1th level visible light base layer and the i-th level visible light base layer sub-image Perform a subtraction operation to obtain the i-th level visible light detail layer sub-image ; Repeat the above steps k times, and finally get the sub-image of the multi-scale infrared base layer , multi-scale visible light base layer sub-image , multi-scale infrared detail layer sub-image (i=1...k) and multi-scale visible detail layer sub-images (i=1...k); among them, the i-1th level infrared base layer sub-image After k-level decomposition, the i-th level infrared base layer sub-image is obtained and the i-th level visible light base layer sub-image The calculation formula of is shown in formula (4), the multi-scale infrared detail layer sub-image The calculation formula of and multi-scale visible light detail layer sub-image The calculation formula of is shown in formula (5);
(4) (4)
(5) (5)
式(4)和式(5)中,表示第i级红外基础层子图像,表示第i-1级红外基础 层子图像,表示第i级可见光基础层子图像,表示第i-1级可见光基础层子图像,表示第i级红外细节层子图像,表示。 In formula (4) and formula (5), Denotes the i-th level infrared base layer sub-image, Denotes the i-1th level infrared base layer sub-image, Represents the i-th visible light base layer sub-image, Represents the i-1th level visible light base layer sub-image, Represents the i-th level infrared detail layer sub-image, express .
步骤S3、进一步采用离散紧支撑剪切波变换对多尺度红外基础层子图像和多 尺度可见光基础层子图像进行多方向分解,得到多尺度多方向红外基础层子图像和多 尺度多方向可见光基础层子图像;针对步骤S2的多尺度分解缺乏方向性的缺点,采用离散 紧支撑剪切波变换对多尺度基础层子图像进行多方向分解,可提取出图像大尺度轮廓边 缘;具体操作步骤如下: Step S3, further adopting the discrete compact support shearlet transform to the sub-image of the multi-scale infrared base layer and multiscale visible base layer subimages Perform multi-directional decomposition to obtain multi-scale and multi-directional infrared base layer sub-images and multi-scale and multi-directional visible light base layer sub-images; in view of the lack of directionality in the multi-scale decomposition of step S2, the discrete compact support shearlet transform is used to analyze the multi-scale The sub-image of the base layer is decomposed in multiple directions, and the large-scale contour edge of the image can be extracted; the specific operation steps are as follows:
S3.1分别利用构造好的水平离散剪切波变换和垂直离散剪切波变换对步骤S2得 到的多尺度红外基础层子图像 和多尺度可见光基础层子图像 进行水平和垂直各 K(K表示方向因子)个方向的分解;其中,水平离散剪切波变换操作的计算公式如式(6)所 示,垂直离散剪切波变换操作的计算公式如式(7)所示; S3.1 Use the constructed horizontal discrete shearlet transform and vertical discrete shearlet transform respectively to process the multi-scale infrared base layer sub-image obtained in step S2 and multiscale visible base layer subimages Carry out the decomposition of each K direction (K represents the direction factor) horizontally and vertically; wherein, the calculation formula of the horizontal discrete shearlet transform operation is shown in formula (6), and the calculation formula of the vertical discrete shearlet transform operation is shown in the formula ( 7) as shown;
(6) (6)
, (7) , (7)
式(6)和式(7)中,k表示分解级数;n1和n2表示两个平移因子,,表 示整数域的平方;s表示方向因子,取s=1,0,-1,s=1时表示45°方向,s=0时表示90°方向,s=- 1时表示180°方向;表示向下取整操作; 和 分别表示水平离散剪切波变换和垂 直离散剪切波变换;表示第k级分解时红外基础层子图像采用平移变换, 表示第k级分解时红外基础层子图像经水平离散剪切波变换的等效 变换, 表示第k级分解时可见光基础层子图像采用平移变换, In formula (6) and formula (7), k represents the decomposition series; n 1 and n 2 represent two translation factors, , Represents the square of the integer field; s represents the direction factor, take s=1,0,-1, when s=1, it represents the 45° direction, when s=0, it represents the 90° direction, and when s=-1, it represents the 180° direction; Indicates the rounding down operation; and represent the horizontal discrete shearlet transform and the vertical discrete shearlet transform respectively; Indicates that the infrared base layer sub-image adopts translation transformation when the k-th level decomposition is performed, Indicates the equivalent transformation of the infrared base layer sub-image through the horizontal discrete shearlet transform when decomposing at the kth level, Indicates that the sub-image of the visible light base layer adopts translation transformation when decomposing at the kth level,
表示第k级分解时可见光基础层子图像经水平离散剪切波 变换的等效变换,表示第k级分解时可见光基础层子图像经垂直离散 剪切波变换的等效变换; Represents the equivalent transformation of the sub-image of the visible light base layer through the horizontal discrete shearlet transform when decomposing at the kth level, Represents the equivalent transformation of the sub-image of the visible light base layer through the vertical discrete shearlet transform when decomposing at the kth level;
S3.2经过水平离散剪切波变换和垂直离散剪切波变换后,多尺度红外基础层子图 像 和多尺度可见光基础层子图像 被分解为2K个方向,也即水平和垂直各K个方 向;其中,多方向分解后获得的多尺度多方向基础层子图像数据集合可表示为 ,其中, 分别表示经水平离散剪切波变换 和垂直离散剪切波变换后得到的多尺度多方向红外基础层子图像, 分别表 示经水平离散剪切波变换和垂直离散剪切波变换后得到的多尺度多方向可见光基础层子 图像; S3.2 After the horizontal discrete shearlet transform and the vertical discrete shearlet transform, the sub-image of the multi-scale infrared base layer and multiscale visible base layer subimages is decomposed into 2K directions, that is, horizontal and vertical K directions; among them, the multi-scale and multi-directional base layer sub-image data set obtained after multi-directional decomposition can be expressed as ,in, represent the multi-scale and multi-directional infrared base layer sub-images obtained by horizontal discrete shearlet transform and vertical discrete shearlet transform, respectively, Respectively represent the multi-scale and multi-directional visible light base layer sub-images obtained after horizontal discrete shearlet transform and vertical discrete shearlet transform;
S3.3取水平和垂直各三个方向,也即方向因子K=3,s=1,0,-1;经过多方向分解后, 得到水平方向的K个多尺度多方向红外基础层子图像 、垂直方向的K个多尺 度多方向红外基础层子图像 、水平方向的K个多尺度多方向可见光基础层 子图像 和垂直方向的K个多尺度多方向可见光基础层子图像 ,以及K个多尺度红外细节层子图像 。 S3.3 Take the horizontal and vertical three directions respectively, that is, the direction factor K=3, s=1, 0, -1; after multi-directional decomposition, K multi-scale and multi-directional infrared base layer sub-images in the horizontal direction are obtained , K multi-scale and multi-directional infrared base layer sub-images in the vertical direction , K multi-scale and multi-directional visible light base layer sub-images in the horizontal direction and K multi-scale and multi-directional visible light base layer sub-images in the vertical direction , and K multi-scale infrared detail layer sub-images .
步骤S4、细节层子图像融合;Step S4, detail layer sub-image fusion;
采用基于相位一致和强度测量的融合策略对上述获得的K个多尺度红外细节层子 图像和K个多尺度可见光细节层子图像进行融合操作,得到一系列融 合后的多尺度细节层子图像,以传递更多纹理细节;具体操作步骤如下: The K multi-scale infrared detail layer sub-images obtained above are processed using a fusion strategy based on phase coherence and intensity measurements and K multi-scale visible light detail layer sub-images Perform a fusion operation to obtain a series of fused multi-scale detail layer sub-images to transfer more texture details; the specific operation steps are as follows:
S4.1首先对上述获得的K个多尺度红外细节层子图像和K个多尺度可见 光细节层子图像采用工程应用的相位一致操作,得到相位一致测度算子,用于设 计后续的融合策略;其中,相位一致测度算子的计算公式如下所示; S4.1 First, the K multi-scale infrared detail layer sub-images obtained above and K multi-scale visible light detail layer sub-images Using the phase consistency operation of engineering applications, the phase consistency measure operator is obtained, which is used to design the subsequent fusion strategy; among them, the phase consistency measure operator The calculation formula of is as follows;
(8) (8)
(9) (9)
式(8)和(9)中,θk表示在分解级数k下的方向角,为第n项方向角为θk的 傅里叶级数的幅值,[en,θ k,on,θ k]为图像(多尺度红外细节层子图像或)与log-Gabor滤波器在位置(x,y)处的卷积结果;∑k表示 对变量k作求和操作,∑n表示对变量n作求和操作;为一个小值正常数,用于防止分母为0; In formulas (8) and (9), θ k represents the direction angle under the decomposition level k, is the magnitude of the Fourier series whose nth direction angle is θ k , [e n, θ k , o n, θ k ] is the image (multi-scale infrared detail layer sub-image or ) and the convolution result of the log-Gabor filter at position (x, y); ∑ k represents the summation operation on the variable k, and ∑ n represents the summation operation on the variable n; It is a small value normal number, used to prevent the denominator from being 0;
S4.2随后通过开窗法设计强度测量策略,如式(10)所示;S4.2 Then design the strength measurement strategy through the window method, as shown in formula (10);
(10) (10)
式(10)中,Nl (·)表示在分解级数l下的强度测量算子,Il (x,y)表示在分解级数l下位置(x,y)处的多尺度细节层子图像像素值,Il (x0,y0)表示在分解级数l下位置(x0,y0)处的多尺度细节层子图像像素值,Ω表示以位置(x0,y0)为中心的局部开窗区域,(x0,y0)表示开窗区域内的中心点像素;In formula (10), N l ( ) represents the intensity measurement operator under the decomposition level l, and I l (x, y) represents the multi-scale detail layer at position (x, y) under the decomposition level l Sub-image pixel value, I l (x 0 , y 0 ) represents the multi-scale detail sub-image pixel value at position (x 0 , y 0 ) under decomposition level l, Ω represents the sub-image value at position (x 0 , y 0 ) ) as the center of the local window area, (x 0 , y 0 ) represents the center point pixel in the window area;
S4.3最后结合相位一致法和强度测量策略构建融合权重策略,从而实现多尺度细节层子图像融合,得到一系列融合后的多尺度细节层子图像,以传递更多纹理细节;其中,多尺度细节层子图像的融合权重策略如式(11)所示;S4.3 Finally, combine the phase consistency method and the intensity measurement strategy to build a fusion weight strategy, so as to realize the fusion of multi-scale detail layer sub-images, and obtain a series of fused multi-scale detail layer sub-images to convey more texture details; among them, more The fusion weight strategy of the scale-detail layer sub-image is shown in formula (11);
(11) (11)
式(11)中,DF(x,y)表示融合后的多尺度细节层子图像,DIR(x,y)表示红外细节层子图像,DVI(x,y)表示可见光细节层子图像,Pl(·)表示在分解级数l下的相位一致测度算子。In formula (11), D F (x, y) represents the fused multi-scale detail layer sub-image, D IR (x, y) represents the infrared detail layer sub-image, and D VI (x, y) represents the visible light detail layer sub-image In the image, P l (·) represents the phase consistency measure operator under the decomposition level l.
步骤S5、基础层子图像融合;Step S5, base layer sub-image fusion;
通过设计自适应强度传递函数作为多尺度多方向基础层子图像的融合准则,采用红外图像基础层像素强度分布信息调节多尺度多方向基础层子图像强度分布,突出显示热辐射目标显著信息,完成不同方向上的多尺度多方向基础层子图像的融合,利用红外基础层图像的强度分布引导最终的融合结果,确保融合后的基础层图像具有高对比度特征;具体操作步骤如下:By designing the adaptive intensity transfer function as the fusion criterion of the multi-scale and multi-directional base layer sub-image, the intensity distribution of the multi-scale and multi-directional base layer sub-image is adjusted by using the pixel intensity distribution information of the infrared image base layer, and the significant information of the thermal radiation target is highlighted. The fusion of multi-scale and multi-directional base layer sub-images in different directions uses the intensity distribution of the infrared base layer image to guide the final fusion result to ensure that the fused base layer image has high contrast features; the specific operation steps are as follows:
S5.1首先,针对步骤S3中得到的水平方向的K个多尺度多方向红外基础层子图像和垂直方向的K个多尺度多方向红外基础层子图像,逐像素 计算图像的局部能量; S5.1 First, for the K multi-scale and multi-directional infrared base layer sub-images in the horizontal direction obtained in step S3 and K multi-scale and multi-orientation infrared base layer sub-images in the vertical direction , calculate the local energy of the image pixel by pixel;
(12) (12)
式(12)中,E(x,y)表示图像的局部能量,Wle(i,j)表示大小为i×j的局部能量窗口,1≤i≤3,1≤j≤3,BIR(x+i,y+j)表示对图像进行逐像素遍历操作;In formula (12), E(x,y) represents the local energy of the image, W le (i,j) represents the local energy window of size i×j, 1≤i≤3, 1≤j≤3, B IR (x+i,y+j) means to perform a pixel-by-pixel traversal operation on the image;
S5.2对上述计算得到的局部能量进行归一化操作,得到基础层子图像特征分布表示算子P;S5.2 Perform a normalization operation on the local energy obtained by the above calculation, and obtain the characteristic distribution operator P of the sub-image of the base layer;
(13) (13)
式(13)中,,Emax(x,y)表示根据局部能量窗口计算出的局部能量最大值,E(x+i,y+j)表示对局部能量窗口中的局部能量值进行遍历操作;In formula (13), , E max (x, y) represents the local energy maximum value calculated according to the local energy window, E(x+i, y+j) represents the traversal operation on the local energy values in the local energy window;
S5.3之后对基础层子图像特征分布表示算子P引入比例参数,构建自适应强度传递函数对多尺度多方向基础层子图像进行融合,调节多尺度多方向基础层子图像强度分布,即分别对水平方向和垂直方向的多尺度多方向基础层子图像进行融合,得到融合后的水平方向的多尺度多方向红外基础层子图像、融合后的水平方向的多尺度多方向可见光基础层子图像、融合后的垂直方向的多尺度多方向红外基础层子图像和融合后的垂直方向的多尺度多方向可见光基础层子图像;融合操作的具体计算公式如式(14)所示;After S5.3, the proportional parameter is introduced into the characteristic distribution operator P of the base layer sub-image, and an adaptive intensity transfer function is constructed to fuse the multi-scale and multi-directional base layer sub-images, and adjust the multi-scale and multi-directional base layer sub-image intensity distribution, namely The multi-scale and multi-direction base layer sub-images in the horizontal direction and the vertical direction are respectively fused to obtain the fused horizontal multi-scale and multi-direction infrared base layer sub-image and the fused horizontal multi-scale and multi-direction visible light base layer sub-image Image, the fused vertical multi-scale and multi-directional infrared base layer sub-image and the fused vertical multi-scale and multi-directional visible light base layer sub-image; the specific calculation formula of the fusion operation is shown in formula (14);
(14) (14)
式(14)中,表示多尺度多方向基础层子图像融合权重;γ表示引入的 比例参数,用于更好的调控多尺度多方向基础层子图像融合权重;arctan(·)表示反正切 操作符;P表示基础层子图像特征分布表示算子;In formula (14), Represents the multi-scale and multi-directional base layer sub-image fusion weight; γ represents the introduced scale parameter, which is used to better control the multi-scale and multi-directional base layer sub-image fusion weight; arctan(·) represents the arctangent operator; P represents the base layer Sub-image feature distribution representation operator;
S5.4最终融合后的不同方向的多尺度多方向基础层子图像的表达式如下:S5.4 The expressions of the multi-scale and multi-directional base layer sub-images in different directions after final fusion are as follows:
(15) (15)
式(15)中, 表示融合后的不同方向的多尺度多方向基础层子图像,当a=0时, 表示融合后的水平方向的多尺度多方向基础层子图像, 表示融合后的水平方向 的多尺度多方向红外基础层子图像, 表示融合后的水平方向的多尺度多方向可见光 基础层子图像;当a=1时, 表示融合后的垂直方向的多尺度多方向基础层子图像, 表示融合后的垂直方向的多尺度多方向红外基础层子图像, 表示融合后的垂直方向 的多尺度多方向可见光基础层子图像;K表示方向因子。 In formula (15), Represents the fused multi-scale and multi-directional base layer sub-images in different directions. When a=0, Represents the fused horizontal multi-scale and multi-orientation base layer sub-image, represents the fused horizontal multi-scale and multi-directional infrared base layer sub-image, Represents the fused horizontal multi-scale and multi-directional visible light base layer sub-image; when a=1, represents the fused multi-scale and multi-orientation base layer sub-image in the vertical direction, represents the fused vertical multi-scale and multi-orientation infrared base layer sub-image, Represents the fused multi-scale and multi-directional visible light base layer sub-image in the vertical direction; K represents the orientation factor.
步骤S6、重构融合后的多尺度细节层子图像(步骤四所得结果)与融合后的不同方向的多尺度多方向基础层子图像(步骤五所得结果)得到最终的融合图像F;具体操作步骤如下:Step S6. Reconstruct the fused multi-scale detail layer sub-image (the result obtained in step 4) and the fused multi-scale and multi-directional base layer sub-image in different directions (the result obtained in step 5) to obtain the final fused image F; specific operations Proceed as follows:
利用离散紧支撑剪切波变换反操作对融合后的不同方向的多尺度多方向基础层子图像重构得到融合基础层图像BF,最后将融合基础层图像BF与融合后的多尺度细节层子图像进行简单的相加操作即可得到最终的融合图像F,其具体计算公式如下所示;Using discrete compact support shearlet transform inverse operation to reconstruct the fused multi-scale and multi-directional base layer sub-images in different directions to obtain the fused base layer image B F , and finally combine the fused base layer image B F with the fused multi-scale details The final fused image F can be obtained by simply adding the layer sub-images, and the specific calculation formula is as follows;
(16) (16)
(17) (17)
式(16)和式(17)中,表示离散紧支撑剪切波变换反操作符, 表示融合后的第i级多尺度细节层子图像; 表示融合后的水平方向的多尺度多方向基 础层子图像; 表示融合后的垂直方向的多尺度多方向基础层子图像。 In formula (16) and formula (17), represents the discrete compactly supported shearlet transform inverse operator, Represents the i-th level multi-scale detail layer sub-image after fusion; Representing the fused horizontal multi-scale and multi-directional base layer sub-image; Represents the fused vertically oriented multi-scale and multi-orientation base layer sub-images.
为了验证本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法的有效性,特进行以下验证性试验。In order to verify the effectiveness of the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention, the following verification experiments are carried out.
如图2所示,按照本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法,(1)首先从红外相机中获取源红外图像(图2中a),同时从可见光相机中获取源可见光图像(图2中b);(2)采用共现滤波器对预处理后的源红外与可见光图像进行滤波操作,实现初步分解,然后通过迭代使用共现滤波器对初步分解后的图像进行多尺度分解,得到多尺度红外基础层子图像(图2中c)、多尺度可见光基础层子图像(图2中d)、多尺度红外细节层子图像(图2中e)、多尺度可见光细节层子图像(图2中f);(3)采用离散紧支撑剪切波变换对多尺度红外基础层子图像和多尺度可见光基础层子图像进行多方向分解,得到多尺度多方向红外基础层子图像和多尺度多方向可见光基础层子图像;(4)采用基于相位一致和强度测量的融合策略构建融合权重策略对多尺度红外细节层子图像(图2中e)和多尺度可见光细节层子图像(图2中f)进行细节层子图像融合,得到融合后的多尺度细节层子图像(图2中h);(5)设计自适应强度传递函数作为融合准则对多尺度红外基础层子图像(图2中c)和多尺度可见光基础层子图像(图2中d)进行基础层子图像融合,得到融合后的不同方向的多尺度多方向基础层子图像(图2中g);(6)对融合后的多尺度细节层子图像和不同方向的多尺度多方向基础层子图像进行重构得到最终的融合图像(图2中F)。As shown in Figure 2, according to the image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention, (1) firstly acquire the source infrared image (a in Figure 2) from the infrared camera, and at the same time obtain the source infrared image from the visible light camera Obtain the source visible light image (b in Figure 2); (2) Use the co-occurrence filter to filter the preprocessed source infrared and visible light images to achieve preliminary decomposition, and then iteratively use the co-occurrence filter to decompose the primary decomposed The image is decomposed at multiple scales to obtain the sub-image of the multi-scale infrared base layer (c in Fig. 2), the sub-image of the multi-scale visible light base layer (d in Fig. 2), the sub-image of the multi-scale infrared detail layer (e in Fig. The sub-image of the scale visible light detail layer (f in Fig. 2); (3) using the discrete compact support shearlet transform to decompose the sub-image of the multi-scale infrared base layer and the sub-image of the multi-scale visible light base layer in multiple directions to obtain the multi-scale and multi-direction Infrared base layer sub-image and multi-scale and multi-directional visible light base layer sub-image; (4) Using a fusion strategy based on phase consistency and intensity measurement to construct a fusion weight strategy for multi-scale infrared detail layer sub-image (e in Figure 2) and multi-scale The visible light detail layer sub-image (f in Figure 2) is fused with the detail layer sub-image to obtain the fused multi-scale detail layer sub-image (h in Figure 2); (5) Design an adaptive intensity transfer function as a fusion criterion for multi-scale The sub-image of the infrared base layer (c in Figure 2) and the sub-image of the multi-scale visible light base layer (d in Figure 2) are fused with the sub-image of the base layer, and the fused multi-scale and multi-directional base layer sub-images in different directions are obtained (Figure 2 Middle g); (6) Reconstruct the fused multi-scale detail layer sub-image and the multi-scale and multi-orientation base layer sub-image in different directions to obtain the final fused image (F in Figure 2).
利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对RoadScene公开数据集中描述真实城市公路场景的源红外与可见光图像序列对进行融合,结果如图3所示,第一排为描述真实城市公路场景的源红外图像,第二排为描述真实城市公路场景的源可见光图像,第三排为最终融合后的图像。Using the image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention to fuse the source infrared and visible light image sequence pairs describing the real urban road scene in the RoadScene public data set, the result is shown in Figure 3, the first row To describe the source infrared image of the real urban road scene, the second row is the source visible light image describing the real urban road scene, and the third row is the final fused image.
利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对RoadScene公开数据集中描述真实雨中道路场景的源红外与可见光图像序列对进行融合,结果如图4所示,第一排为描述真实雨中道路场景的源红外图像,第二排为描述真实雨中道路场景的源可见光图像,第三排为最终融合后的图像。Using the image fusion method based on multi-scale direction co-occurrence filter and intensity transfer of the present invention to fuse the source infrared and visible light image sequence pairs describing the real road scene in the rain in the RoadScene public data set, the result is shown in Figure 4, the first row To describe the source infrared image of the real road scene in the rain, the second row is the source visible light image describing the real road scene in the rain, and the third row is the final fused image.
利用本发明的基于多尺度方向共现滤波器与强度传递的图像融合方法对TNO公开数据集中描述真实野外布防场景的源红外与可见光图像序列对进行融合,结果如图5所示,第一排为描述真实野外布防场景的源红外图像,第二排为描述真实野外布防场景的源可见光图像,第三排为最终融合后的图像。Using the image fusion method based on the multi-scale direction co-occurrence filter and intensity transfer of the present invention to fuse the source infrared and visible light image sequence pairs describing the real field deployment scene in the TNO public data set, the result is shown in Figure 5, the first row In order to describe the source infrared image of the real field deployment scene, the second row is the source visible light image describing the real field deployment scene, and the third row is the final fused image.
综上,本发明通过设计多方向共现滤波器对源初始图像进行多尺度多方向分解,能对图像区域内的大尺度轮廓信息和小梯度纹理信息加以区分,并且具备良好的方向表示能力;通过设计自适应强度传递函数融合多方向基础层子图像,可有效调节图像的整体对比度和清晰度;在细节层子图像融合方面,采用基于相位一致和强度测量的策略构建融合权重,给最终的融合结果传递更多纹理细节;此外,本发明的方法简单易操作,仅仅调整几个参数就可以平衡算法的质量和效率,可为后续高级视觉处理过程提供必要的技术支持。In summary, the present invention performs multi-scale and multi-directional decomposition on the source initial image by designing a multi-directional co-occurrence filter, which can distinguish large-scale contour information and small gradient texture information in the image area, and has good direction representation capabilities; By designing an adaptive intensity transfer function to fuse multi-directional base layer sub-images, the overall contrast and sharpness of the image can be effectively adjusted; in terms of detail layer sub-image fusion, a strategy based on phase consistency and intensity measurement is used to construct fusion weights to give the final The fusion result conveys more texture details; in addition, the method of the present invention is simple and easy to operate, and only a few parameters can be adjusted to balance the quality and efficiency of the algorithm, which can provide necessary technical support for subsequent advanced visual processing.
上述仅为本发明的优选实施例而已,并不对本发明起到任何限制作用;任何所属技术领域的技术人员,在不脱离本发明的技术方案的范围内,对本发明揭露的技术方案和技术内容做任何形式的等同替换或修改等变动,均属未脱离本发明的技术方案的内容,仍属于本发明的保护范围之内。The above is only a preferred embodiment of the present invention, and does not play any limiting role to the present invention; any person skilled in the art, within the scope of the technical solution of the present invention, the technical solution and technical content disclosed in the present invention Any form of equivalent replacement or modification and other changes is within the content of the technical solution of the present invention and still falls within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310069737.8A CN115797244A (en) | 2023-02-07 | 2023-02-07 | Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310069737.8A CN115797244A (en) | 2023-02-07 | 2023-02-07 | Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115797244A true CN115797244A (en) | 2023-03-14 |
Family
ID=85430098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310069737.8A Pending CN115797244A (en) | 2023-02-07 | 2023-02-07 | Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797244A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117173115A (en) * | 2023-08-29 | 2023-12-05 | 哈尔滨工业大学 | Wafer golden image generation method based on statistic iteration |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809734A (en) * | 2015-05-11 | 2015-07-29 | 中国人民解放军总装备部军械技术研究所 | Infrared image and visible image fusion method based on guide filtering |
CN112017139A (en) * | 2020-09-14 | 2020-12-01 | 南昌航空大学 | Infrared and visible light image perception fusion method |
CN113222877A (en) * | 2021-06-03 | 2021-08-06 | 北京理工大学 | Infrared and visible light image fusion method and application thereof in airborne photoelectric video |
CN114897751A (en) * | 2022-04-12 | 2022-08-12 | 北京理工大学 | Infrared and visible light image perception fusion method based on multi-scale structural decomposition |
-
2023
- 2023-02-07 CN CN202310069737.8A patent/CN115797244A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809734A (en) * | 2015-05-11 | 2015-07-29 | 中国人民解放军总装备部军械技术研究所 | Infrared image and visible image fusion method based on guide filtering |
CN112017139A (en) * | 2020-09-14 | 2020-12-01 | 南昌航空大学 | Infrared and visible light image perception fusion method |
CN113222877A (en) * | 2021-06-03 | 2021-08-06 | 北京理工大学 | Infrared and visible light image fusion method and application thereof in airborne photoelectric video |
CN114897751A (en) * | 2022-04-12 | 2022-08-12 | 北京理工大学 | Infrared and visible light image perception fusion method based on multi-scale structural decomposition |
Non-Patent Citations (1)
Title |
---|
韩玺钰: "基于多尺度及显著区域分析的红外与可见光图像融合算法研究" * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117173115A (en) * | 2023-08-29 | 2023-12-05 | 哈尔滨工业大学 | Wafer golden image generation method based on statistic iteration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112767251A (en) | Image super-resolution method based on multi-scale detail feature fusion neural network | |
CN110232653A (en) | The quick light-duty intensive residual error network of super-resolution rebuilding | |
CN108830818A (en) | A kind of quick multi-focus image fusing method | |
CN108399611A (en) | Multi-focus image fusing method based on gradient regularisation | |
CN109636766A (en) | Polarization differential and intensity image Multiscale Fusion method based on marginal information enhancing | |
CN110097617B (en) | Image fusion method based on convolutional neural network and significance weight | |
CN102243711A (en) | Neighbor embedding-based image super-resolution reconstruction method | |
CN111815550B (en) | A method of infrared and visible light image fusion based on gray level co-occurrence matrix | |
Luo et al. | Infrared and visible image fusion based on visibility enhancement and hybrid multiscale decomposition | |
CN117274059A (en) | Low-resolution image reconstruction method and system based on image coding-decoding | |
CN115797244A (en) | Image fusion method based on multi-scale direction co-occurrence filter and intensity transmission | |
Zhong et al. | A fusion approach to infrared and visible images with Gabor filter and sigmoid function | |
Li et al. | RDMA: Low-light image enhancement based on retinex decomposition and multi-scale adjustment | |
CN103310414A (en) | Image enhancement method based on directionlet transform and fuzzy theory | |
Zhao et al. | Color channel fusion network for low-light image enhancement | |
CN110796609A (en) | Low-light image enhancement method based on scale perception and detail enhancement model | |
CN111626944B (en) | Video deblurring method based on space-time pyramid network and against natural priori | |
CN118823175A (en) | Zero-shot infrared image colorization method and system based on multi-level representation fusion | |
CN117745555A (en) | Fusion method of multi-scale infrared and visible light images based on double partial differential equations | |
Tun et al. | Joint training of noisy image patch and impulse response of low-pass filter in CNN for image denoising | |
Zeng | Low-light image enhancement algorithm based on lime with pre-processing and post-processing | |
CN110084770B (en) | Brain image fusion method based on two-dimensional Littlewood-Paley empirical wavelet transform | |
Han et al. | Dual discriminators generative adversarial networks for unsupervised infrared super-resolution | |
Zhang et al. | A dual channel decomposition and remapping fusion model for low illumination images with a wide field of view | |
CN114638770B (en) | Image fusion method and system based on high-low frequency information supplement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20230314 |