CN109934794B - A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information - Google Patents

A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information Download PDF

Info

Publication number
CN109934794B
CN109934794B CN201910126869.3A CN201910126869A CN109934794B CN 109934794 B CN109934794 B CN 109934794B CN 201910126869 A CN201910126869 A CN 201910126869A CN 109934794 B CN109934794 B CN 109934794B
Authority
CN
China
Prior art keywords
image
column
significant
error
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910126869.3A
Other languages
Chinese (zh)
Other versions
CN109934794A (en
Inventor
谢从华
张冰
高蕴梅
刘在德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changshu Institute of Technology filed Critical Changshu Institute of Technology
Priority to CN201910126869.3A priority Critical patent/CN109934794B/en
Publication of CN109934794A publication Critical patent/CN109934794A/en
Application granted granted Critical
Publication of CN109934794B publication Critical patent/CN109934794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

本发明公开了一种基于显著稀疏表示和邻域信息的多聚焦图像融合方法,具体步骤包括:步骤1,基于均匀网格划分图像块和向量化构建图像字典;步骤2,基于公共稀疏特征、显著稀疏特征和误差表示建立图像显著稀疏模型;步骤3,基于动态惩罚因子线性交替框架求解图像显著稀疏分解模型的公共稀疏特征、显著稀疏特征和误差等参数;步骤4,基于最大平衡化聚焦参数准则的标签融合;步骤5,基于源图像细节信息和邻域图像块统计信息优化标签融合;步骤6,基于优化标签融合重构图像。

Figure 201910126869

The invention discloses a multi-focus image fusion method based on significant sparse representation and neighborhood information. The specific steps include: step 1, dividing image blocks based on uniform grids and vectorization to construct an image dictionary; step 2, based on common sparse features, Significant sparse features and error representations to establish a significant image sparse model; Step 3, solve the common sparse features, significant sparse features and errors of the image significant sparse decomposition model based on the dynamic penalty factor linear alternation framework; Step 4, based on the maximum balanced focus parameter Label fusion of criteria; Step 5, optimize label fusion based on source image detail information and neighborhood image block statistical information; Step 6, reconstruct image based on optimized label fusion.

Figure 201910126869

Description

一种基于显著稀疏表示和邻域信息的多聚焦图像融合方法A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information

技术领域technical field

本发明属于计算机图像处理领域,特别涉及一种基于显著稀疏表示和邻域信息的多聚焦图像融合方法。The invention belongs to the field of computer image processing, in particular to a multi-focus image fusion method based on significant sparse representation and neighborhood information.

背景技术Background technique

由于光学镜头缺乏景深信息的局限性,所获得的单幅图像信息量有限,不能保证图像内不同景深的物体都是清晰的。通过传感器对同一场景设置不同焦点,可以得到多幅具有不同清晰区域的图像。多聚焦图像融合技术就是利用多聚焦图像间的互补和冗余信息,将不同焦点的图像合成为一幅均匀清晰的图像,从而保留重要的视觉信息,对获得更为全面、准确的场景具有重要的意义。Due to the limitation of the lack of depth-of-field information of optical lenses, the amount of information obtained in a single image is limited, and objects with different depths of field in the image cannot be guaranteed to be clear. By setting different focal points for the same scene by the sensor, multiple images with different clear areas can be obtained. Multi-focus image fusion technology is to use the complementary and redundant information between multi-focus images to synthesize images of different focal points into a uniform and clear image, so as to retain important visual information, which is important for obtaining a more comprehensive and accurate scene. meaning.

近年来,国内外学者提出了多种方法,如基于引导滤波的融合算法,基于稠密sift的多聚焦图像融合,基于双子代差分演化和自适应分块机制的多聚焦图像融合算法。由于稀疏表示可以高效地提取图像潜在信息,该模型广泛应用于多聚焦图像融合,具有更好的前景。In recent years, scholars at home and abroad have proposed a variety of methods, such as fusion algorithms based on guided filtering, multi-focus image fusion based on dense sift, and multi-focus image fusion algorithms based on double-sub-generation differential evolution and adaptive block mechanism. Since sparse representation can efficiently extract image latent information, this model is widely used in multi-focus image fusion with better prospects.

Yang首次将稀疏表示引入多聚焦图像融合,利用字典分解图像,通过最大的稀疏系数绝对值之和作为融合规则。Liu Y等提出了结合多尺度变换和稀疏表示的图像融合方法,给出了一个通用的图像融合框架。Liu Y等将自适应稀疏模型应用于图像融合中,根据原始图像中的结构特征将不同子块分类,并自适应选择字典。目前,国内外学者提出了多种稀疏表示模型和字典的改进方法,并利用稀疏系数重构融合图像。现有的方法还在以下几个问题:Yang introduced the sparse representation into multi-focus image fusion for the first time, decomposed the image with a dictionary, and used the maximum sum of absolute values of sparse coefficients as the fusion rule. Liu Y et al. proposed an image fusion method combining multi-scale transformation and sparse representation, and presented a general image fusion framework. Liu Y et al. applied an adaptive sparse model to image fusion, classified different sub-blocks according to the structural features in the original image, and adaptively selected a dictionary. At present, scholars at home and abroad have proposed a variety of improved methods for sparse representation models and dictionaries, and use sparse coefficients to reconstruct fused images. The existing methods still have the following problems:

(1)直接利用稀疏系数,存在因字典对图像细节(例如纹理,边缘等)表示能力不足而造成的图像细节信息丢失的问题。(1) Using the sparse coefficient directly, there is a problem of loss of image detail information due to the insufficient ability of the dictionary to represent image details (such as texture, edge, etc.).

(2)利用稀疏系数重构融合图像会产生方块效应。(2) Reconstructing the fused image with sparse coefficients will produce a block effect.

(3)基于鲁棒稀疏模型的图像多聚焦融合方法利用每个图像块的局部细节信息检测聚焦区域,而有某些细节信息可能同时散焦,因此会产生错误的检测结果。(3) The image multi-focus fusion method based on the robust sparse model uses the local detail information of each image block to detect the focal area, and some detail information may be defocused at the same time, so it will produce erroneous detection results.

发明内容SUMMARY OF THE INVENTION

为解决现有技术问题,本发明提出了一种基于显著稀疏表示模型的多聚焦图像融合方法。其特征在于该方法利用利用稀疏表示,最优理论和图像处理等技术实现了图像融合方法。该方法可以克服在融合图像过程中存在细节信息过度平滑的问题,更精确的区分聚焦区域和散焦区域。In order to solve the problems of the prior art, the present invention proposes a multi-focus image fusion method based on a salient sparse representation model. It is characterized in that the method realizes the image fusion method by utilizing the techniques of sparse representation, optimal theory and image processing. This method can overcome the problem of over-smoothing of detail information in the process of fusing images, and more accurately distinguish the focus area and the defocus area.

具体步骤包括:Specific steps include:

步骤1,基于均匀网格将待融合的图像划分为图像块,并构建向量化的图像融合字典;Step 1: Divide the image to be fused into image blocks based on a uniform grid, and construct a vectorized image fusion dictionary;

步骤2,进行图像显著稀疏建模,得到解图像显著稀疏模型;Step 2: Perform image significant sparse modeling to obtain a solution image significant sparse model;

步骤3,求解图像显著稀疏分解模型的参数;Step 3, solve the parameters of the image significant sparse decomposition model;

步骤4,进行图像标签初始融合;Step 4, perform initial fusion of image labels;

步骤5,对图像标签融合优化;Step 5, optimize the image label fusion;

步骤6,基于优化的图像标签重构融合的图像;Step 6, reconstruct the fused image based on the optimized image label;

步骤1包括:Step 1 includes:

将待融和的两幅图像A和图像B按照大小为Px×Py的网格单元均匀划分为无重叠的图像块,其中Px和Py分别表示横坐标方向(即图像的横向)的像素点个数和纵坐标方向(即图像的纵向)的像素点个数,图像A中第i个图像块灰度值矩阵拉长为d=Px×Py行1列的列向量

Figure GDA0002661961580000021
图像B中第i个图像块灰度值矩阵拉长为d=Px×Py行1列的列向量
Figure GDA0002661961580000022
N表示图像块的总数,图像A和图像B分别转换为矩阵
Figure GDA0002661961580000023
Figure GDA0002661961580000024
其中
Figure GDA0002661961580000025
Figure GDA0002661961580000026
分别表示图像A的第N个图像块的列向量和图像B的第N个图像块的列向量;通过图像A和B的图像块列向量构造图像融和字典D为:The two images A and B to be merged are evenly divided into non-overlapping image blocks according to the grid unit of size P x ×P y , where P x and P y respectively represent the horizontal direction of the abscissa (that is, the horizontal direction of the image). The number of pixels and the number of pixels in the ordinate direction (that is, the longitudinal direction of the image), the gray value matrix of the ith image block in image A is elongated into a column vector with d=P x ×P y row and 1 column
Figure GDA0002661961580000021
The gray value matrix of the ith image block in image B is elongated into a column vector with d=P x ×P y row and 1 column
Figure GDA0002661961580000022
N represents the total number of image blocks, and image A and image B are converted to matrices respectively
Figure GDA0002661961580000023
and
Figure GDA0002661961580000024
in
Figure GDA0002661961580000025
and
Figure GDA0002661961580000026
respectively represent the column vector of the Nth image block of image A and the column vector of the Nth image block of image B; the image fusion dictionary D constructed by the image block column vectors of images A and B is:

Figure GDA0002661961580000027
Figure GDA0002661961580000027

步骤2包括以下步骤:Step 2 includes the following steps:

待融和图像建模分解为公共稀疏项、图像独有特征的显著稀疏项和含有图像细节信息的误差项,公共稀疏项为数据字典与公共稀疏系数之积,显著稀疏项为数据字典与显著稀疏系数之积。定义公共稀疏系数、显著稀疏系数和误差矩阵的1范数之和为目标函数。当目标函数最小时,输出融和图像的公共稀疏系数、显著稀疏系数和误差。The image modeling to be fused is decomposed into common sparse items, significant sparse items with unique image features, and error items containing image details. The common sparse item is the product of the data dictionary and the common sparse coefficient, and the significant sparse item is the data dictionary and significant Product of coefficients. Define the sum of common sparse coefficients, significant sparse coefficients and the 1-norm of the error matrix as the objective function. When the objective function is the smallest, output the common sparse coefficients, significant sparse coefficients and errors of the fusion image.

图像显著稀疏建模为:Image significant sparseness is modeled as:

在约束条件YA=DXA+ZAD+EA下最小化目标函数min||XA||1+||ZA||1+λ|EA||2,1,其中,λ为常数系数,常数系数λ本发明取值为30,XA、ZA和EA分别表示图像A的显著稀疏模型的公共稀疏系数矩阵、显著稀疏系数矩阵和误差;Minimize the objective function min||X A || 1 +||Z A || 1 +λ|E A || 2,1 under the constraint Y A =DX A +Z A D+E A , where λ is a constant coefficient, the constant coefficient λ is 30 in the present invention, and X A , Z A and EA represent the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of the image A , respectively;

在约束条件YB=DXB+ZBD+EB下最小化目标函数min||XB||1+||ZB||1+λ||EB||2,1,XB、ZB和EB分别表示图像B的显著稀疏模型的公共稀疏系数矩阵、显著稀疏系数矩阵和误差。Minimize the objective function min||X B || 1 +||Z B || 1 +λ||E B || 2,1 , X B under the constraint Y B =DX B +Z B D+E B , Z B and E B represent the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of image B, respectively.

步骤3包括以下步骤:Step 3 includes the following steps:

步骤3-1:初始化图像A和图像B的显著稀疏模型参数:图像A和图像B的初始公共稀疏系数矩阵分别为

Figure GDA0002661961580000031
Figure GDA0002661961580000032
图像A和图像B的初始显著稀疏系数分别为
Figure GDA0002661961580000033
Figure GDA0002661961580000034
Figure GDA0002661961580000035
图像A和图像B的初始误差分别为
Figure GDA0002661961580000036
Figure GDA0002661961580000037
Figure GDA0002661961580000038
图像A和图像B的初始拉格朗日乘子系数分别为
Figure GDA0002661961580000039
Figure GDA00026619615800000310
Figure GDA00026619615800000311
加快收敛速度因子ρ=1.1,收敛因子ε=0.05,惩罚因子μ0=10-6,最大惩罚参数μmax=1010,迭代次数j=0,常系数λ=30;Step 3-1: Initialize the significant sparse model parameters of image A and image B: the initial common sparse coefficient matrices of image A and image B are respectively
Figure GDA0002661961580000031
and
Figure GDA0002661961580000032
The initial significant sparse coefficients of image A and image B are respectively
Figure GDA0002661961580000033
and
Figure GDA0002661961580000034
Figure GDA0002661961580000035
The initial errors of image A and image B are respectively
Figure GDA0002661961580000036
and
Figure GDA0002661961580000037
Figure GDA0002661961580000038
The initial Lagrangian multiplier coefficients of image A and image B are respectively
Figure GDA0002661961580000039
and
Figure GDA00026619615800000310
Figure GDA00026619615800000311
Accelerate the convergence speed factor ρ=1.1, convergence factor ε=0.05, penalty factor μ 0 =10 -6 , maximum penalty parameter μ max =10 10 , iteration number j=0, constant coefficient λ=30;

步骤3-2:计算第j+1次迭代图像A的误差

Figure GDA00026619615800000312
Step 3-2: Calculate the error of image A at the j+1th iteration
Figure GDA00026619615800000312

Figure GDA00026619615800000313
Figure GDA00026619615800000313

其中,

Figure GDA00026619615800000314
Figure GDA00026619615800000315
的第i列,为1≤i≤N,
Figure GDA00026619615800000316
为图像A在约束条件下的误差,GA(:,i)为GA的第i列,μj为第j次迭代的惩罚因子,
Figure GDA00026619615800000317
为第j次迭代图像A的公共稀疏系数矩阵,
Figure GDA00026619615800000318
为第j次迭代图像A的显著稀疏系数矩阵,
Figure GDA00026619615800000319
为第j次迭代图像A的拉格朗日乘子;in,
Figure GDA00026619615800000314
for
Figure GDA00026619615800000315
The i-th column of , is 1≤i≤N,
Figure GDA00026619615800000316
is the error of image A under the constraints, G A (:, i) is the i-th column of G A , μ j is the penalty factor of the j-th iteration,
Figure GDA00026619615800000317
is the public sparse coefficient matrix of image A for the jth iteration,
Figure GDA00026619615800000318
is the significant sparse coefficient matrix of the jth iteration image A,
Figure GDA00026619615800000319
is the Lagrange multiplier of the jth iteration image A;

计算第j+1次迭代图像B的误差

Figure GDA00026619615800000320
Calculate the error of image B at the j+1th iteration
Figure GDA00026619615800000320

Figure GDA00026619615800000321
Figure GDA00026619615800000321

其中,

Figure GDA00026619615800000322
Figure GDA00026619615800000323
的第i列,
Figure GDA00026619615800000324
为图像B在约束条件下的误差,GB(:,i)为GB的第i列,
Figure GDA0002661961580000041
为第j次迭代图像B的公共稀疏系数矩阵,
Figure GDA0002661961580000042
第j次迭代图像B的拉格朗日乘子,
Figure GDA0002661961580000043
为第j次迭代图像B的显著稀疏系数矩阵;in,
Figure GDA00026619615800000322
for
Figure GDA00026619615800000323
the i-th column of ,
Figure GDA00026619615800000324
is the error of image B under constraints, GB (:,i) is the i-th column of GB ,
Figure GDA0002661961580000041
is the common sparse coefficient matrix of image B for the jth iteration,
Figure GDA0002661961580000042
Lagrange multipliers of image B for the jth iteration,
Figure GDA0002661961580000043
is the significant sparse coefficient matrix of the j-th iteration image B;

步骤3-3:计算第j+1次迭代图像A的公共稀疏系数矩阵

Figure GDA0002661961580000044
Step 3-3: Calculate the common sparse coefficient matrix of the j+1-th iteration image A
Figure GDA0002661961580000044

Figure GDA0002661961580000045
Figure GDA0002661961580000045

计算第j+1次迭代图像B的公共稀疏系数矩阵

Figure GDA0002661961580000046
Calculate the common sparse coefficient matrix of image B at the j+1th iteration
Figure GDA0002661961580000046

Figure GDA0002661961580000047
Figure GDA0002661961580000047

其中S为函数,函数S定义为

Figure GDA0002661961580000048
x和τ为函数参数;where S is a function, and the function S is defined as
Figure GDA0002661961580000048
x and τ are function parameters;

步骤3-4:计算第j+1次迭代图像A的显著稀疏系数矩阵

Figure GDA0002661961580000049
Step 3-4: Calculate the significant sparse coefficient matrix of image A at the j+1th iteration
Figure GDA0002661961580000049

Figure GDA00026619615800000410
Figure GDA00026619615800000410

计算第j+1次迭代图像B的显著稀疏系数矩阵

Figure GDA00026619615800000411
Calculate the significant sparse coefficient matrix of image B at the j+1th iteration
Figure GDA00026619615800000411

Figure GDA00026619615800000412
Figure GDA00026619615800000412

步骤3-5:计算第j+1次迭代图像A的拉格朗日乘子

Figure GDA00026619615800000413
Step 3-5: Calculate the Lagrange multiplier of image A at the j+1th iteration
Figure GDA00026619615800000413

Figure GDA00026619615800000414
Figure GDA00026619615800000414

计算第j+1次迭代图像B的拉格朗日乘子

Figure GDA00026619615800000415
Calculate the Lagrangian multiplier of image B at the j+1th iteration
Figure GDA00026619615800000415

Figure GDA00026619615800000416
Figure GDA00026619615800000416

步骤3-6:计算第j+1次迭代惩罚因子μj+1Step 3-6: Calculate the penalty factor μ j+1 for the j+1th iteration:

μj+1=min(μjρ,μmax) (10)μ j+1 = min(μ j ρ, μ max ) (10)

其中ρ为收敛速度因子,μmax为最大惩罚因子;where ρ is the convergence rate factor and μ max is the maximum penalty factor;

步骤3-7:如果收敛条件

Figure GDA0002661961580000051
成立,则输出
Figure GDA0002661961580000052
Figure GDA0002661961580000053
否则,将j更新为j+1,转入步骤3-2;Steps 3-7: If Convergence Condition
Figure GDA0002661961580000051
is established, the output
Figure GDA0002661961580000052
Figure GDA0002661961580000053
Otherwise, update j to j+1, and go to step 3-2;

如果收敛条件

Figure GDA0002661961580000054
成立,则输出
Figure GDA0002661961580000055
Figure GDA0002661961580000056
否则,将j更新为j+1,转入步骤3-2。If the convergence condition
Figure GDA0002661961580000054
is established, the output
Figure GDA0002661961580000055
Figure GDA0002661961580000056
Otherwise, update j to j+1, and go to step 3-2.

步骤4包括:Step 4 includes:

定义待融合图像A的第i个图像块的聚焦参数J(A,i)为误差乘以平衡因子与显著稀疏系数矩阵乘以字典的第i列的2范数,计算公式如下:Define the focus parameter J(A,i) of the ith image block of the image A to be fused as the error multiplied by the balance factor and the sparse coefficient matrix multiplied by the 2 norm of the ith column of the dictionary. The calculation formula is as follows:

J(A,i)=||bEA(:,i)+ZAD(:,i)||2 (11)J(A,i)=||bE A (:,i)+Z A D(:,i)|| 2 (11)

其中EA(:,i)表示误差EA的第i列,D(:,i)表示字典D的第i列;平衡因子b定义为2幅待配准图像A和B的显著稀疏系数矩阵分别与第i列字典矩阵乘积之和除以2幅待配准图像误差矩阵的第i列之和,计算公式为:where E A (:,i) represents the i-th column of the error E A , D(:, i) represents the i-th column of the dictionary D; the balance factor b is defined as the significant sparse coefficient matrix of the two images A and B to be registered Divide the sum of the product of the dictionary matrix of the i-th column by the sum of the i-th column of the error matrix of the two images to be registered. The calculation formula is:

Figure GDA0002661961580000057
Figure GDA0002661961580000057

其中EB(:,i)表示误差EB的第i列;where EB (:,i) represents the i - th column of error EB;

定义待融合图像B第i个图像块的聚焦参数J(B,i)为误差乘以平衡因子与显著稀疏系数矩阵乘以字典的第i列的2范数,计算公式如下:Define the focus parameter J(B,i) of the i-th image block of the image B to be fused as the error multiplied by the balance factor and the sparse coefficient matrix multiplied by the 2-norm of the i-th column of the dictionary. The calculation formula is as follows:

J(B,i)=||bEB(:,i)+ZBD(:,i)||2 (13)J(B,i)=||bE B (:,i)+Z B D(:,i)|| 2 (13)

采用2范数最大化规则融和像素,使用公式(14)构造融合标签

Figure GDA0002661961580000058
其中
Figure GDA0002661961580000059
表示第N个图像块的标签,每个列向量都包含了所选择源图像的对应列向量的标签,1和0分别代表该列来自于源图像A和图像B,标签融合规则为:The 2-norm maximization rule is used to fuse the pixels, and formula (14) is used to construct the fusion label
Figure GDA0002661961580000058
in
Figure GDA0002661961580000059
Represents the label of the Nth image block. Each column vector contains the label of the corresponding column vector of the selected source image. 1 and 0 represent that the column comes from source image A and image B respectively. The label fusion rule is:

Figure GDA00026619615800000510
Figure GDA00026619615800000510

步骤5包括:Step 5 includes:

步骤5-1:为了保留两幅源图像的细节信息,分别求误差EA的聚焦区域焦点细节最大值

Figure GDA00026619615800000511
和误差EB的聚焦区域焦点细节最大值
Figure GDA00026619615800000512
如果图像A的误差大于
Figure GDA00026619615800000513
的90%,则保留源图像A的像素;如果图像B的误差大于
Figure GDA00026619615800000514
的90%,则保留源图像B的像素,基于源图像细节信息的融合标签优化规则为:Step 5-1: In order to retain the detail information of the two source images, find the maximum value of the focal area focus detail of the error EA respectively
Figure GDA00026619615800000511
and error E B the focal area focus detail maximum
Figure GDA00026619615800000512
If the error of image A is greater than
Figure GDA00026619615800000513
90% of , then keep the pixels of source image A; if the error of image B is greater than
Figure GDA00026619615800000514
90%, then keep the pixels of the source image B, and the fusion label optimization rule based on the source image detail information is:

Figure GDA0002661961580000061
Figure GDA0002661961580000061

步骤5-2:按照标签优化规则公式(15)融合图像,计算融合图像中每一个图像块8邻域的图像块来源,ni A表示第i列所对应的图像块的8邻域中来源于图像A中选择的个数,ni B表示第i列所对应的图像块的8邻域中来源于图像B中选择的个数,如果该图像块8个邻域块从图像A中选择的图像块个数大于从源图像B中选择的图像块个数,则代表在源图像A中该图像块所对应的区域是聚焦区域;如果该图像块8个邻域块从图像A中选择的图像块个数小于从源图像B中选择的图像块个数,则代表源图像B在该图像块所对应的区域是聚焦区域;如果数目相等,则是边界区域,即按照式(16)更新融合图像:Step 5-2: fuse the image according to the label optimization rule formula (15), and calculate the source of image blocks in the 8 neighborhoods of each image block in the fused image, n i A represents the source in the 8 neighborhoods of the image block corresponding to the i-th column For the number selected in image A, n i B represents the number selected from image B in the 8 neighborhoods of the image block corresponding to the i-th column, if the 8 neighborhood blocks of the image block are selected from image A The number of image blocks is greater than the number of image blocks selected from source image B, it means that the area corresponding to the image block in source image A is the focus area; if the eight neighborhood blocks of the image block are selected from image A If the number of image blocks is less than the number of image blocks selected from the source image B, it means that the area corresponding to the image block of the source image B is the focus area; if the number is equal, it is the boundary area, that is, according to formula (16) Update the fused image:

Figure GDA0002661961580000062
Figure GDA0002661961580000062

其中-1代表边界,从而得到了最终融合图像标签yi Fwhere -1 represents the boundary, resulting in the final fused image label y i F .

步骤6包括:Step 6 includes:

根据融合图像标签yi F赋值构造融合图像Fi,如果图像标签为1,则选用图像A对应的像素;如果图像标签为0,则选用图像B对应的像素;否则取图像A和图像B的平均值:The fusion image F i is constructed according to the fused image label y i F assignment. If the image label is 1, the pixel corresponding to image A is selected; if the image label is 0, the pixel corresponding to image B is selected; otherwise, the pixel corresponding to image A and image B is selected. average value:

Figure GDA0002661961580000063
Figure GDA0002661961580000063

将第i个图像块融合后的大小为d行1列向量的图像Fi,从第一个元素开始,px个元素为一列,共计转换为py列,得到大小为Px×Py的图像块DiThe image F i with the size of d row and 1 column vector after fusion of the i-th image block, starting from the first element, p x elements as a column, is converted into p y columns in total, and the size is P x ×P y the image block D i ;

设定待融合图像A和图像B的宽为w,则图像块Di在重构融合图像的行ix和列iy分别为:Set the width of image A and image B to be fused as w, then the image block D i in the row i x and column i y of the reconstructed fused image are respectively:

Figure GDA0002661961580000071
Figure GDA0002661961580000071

Figure GDA0002661961580000072
Figure GDA0002661961580000072

其中mod为取余数函数。where mod is the remainder function.

综上,本发明利用待融合图像的客观成像规律,在处理步骤中,采用了一些符合这些规律的计算机图像处理方法进行组合。本发明利用细节信息和独有的特征的不同提取了多聚焦图像的焦点区域,并结合邻域信息和源图像信息优化融合,保留了更多的图像细节信息,融合图像具有较好的平滑性,克服了方块效应,具有较好的鲁棒性,属于专利法的保护范围。To sum up, the present invention utilizes the objective imaging laws of the images to be fused, and in the processing steps, adopts some computer image processing methods that conform to these laws for combination. The invention extracts the focus area of the multi-focus image by using the difference between the detail information and the unique feature, and combines the neighborhood information and the source image information to optimize the fusion, retains more image detail information, and the fusion image has better smoothness , overcomes the block effect, has good robustness, and belongs to the protection scope of the patent law.

有益效果:Beneficial effects:

(1)本发明提出了图像显著稀疏模型。把图像分解为含有所有待融合图像的公共稀疏特征,只包含单幅图像独有的显著稀疏特征和包含图像细节信息的误差矩阵,其中显著稀疏特征可以有效的发现多聚焦图像的焦点区域,克服了传统图像稀疏模型缺乏图像独有的特征信息,具有更高的鲁棒性。(1) The present invention proposes an image significant sparse model. The image is decomposed into common sparse features that contain all the images to be fused, only significant sparse features unique to a single image and an error matrix containing image details. Because the traditional image sparse model lacks the unique feature information of the image, it has higher robustness.

(2)本发明提出了基于最大平衡聚焦参数准则的图像标签初始融合。通过聚误差矩阵、显著稀疏系数矩阵及其平衡因子定义聚焦参数,采用2范数最大化原则确定标签融合规则,有效利用了图像细节信息和图像独有信息,并通过平衡因子调节,具有更高的鲁棒性。(2) The present invention proposes an initial fusion of image labels based on the criterion of maximum balanced focus parameter. The focusing parameters are defined by the aggregation error matrix, the significant sparse coefficient matrix and its balance factor, and the label fusion rule is determined by the 2-norm maximization principle, which effectively utilizes the image detail information and unique image information, and is adjusted by the balance factor. robustness.

(3)本发明提出了基于源图像细节信息和邻域图像块统计信息的标签融合优化。提出了规则:如果源图像像素的误差大于最大误差的90%,则融合图像保留源图像像素,保留了更多的源图信息。统计融合图像块周围8个邻域块的像素来源信息,提出了少数服从多数的原则,优化标签融合。可以有效克服网格划分带来的方块效应,保证图像内容的平滑性和连续性。(3) The present invention proposes label fusion optimization based on source image detail information and neighborhood image block statistics. A rule is proposed: if the error of the source image pixels is greater than 90% of the maximum error, the fusion image retains the source image pixels and retains more source image information. The pixel source information of 8 neighborhood blocks around the image block is statistically fused, and the principle of minority obeying the majority is proposed to optimize label fusion. It can effectively overcome the block effect caused by grid division and ensure the smoothness and continuity of image content.

附图说明Description of drawings

下面结合附图和具体实施方式对本发明做更进一步的具体说明,本发明的上述或其他方面的优点将会变得更加清楚。The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments, and the advantages of the above or other aspects of the present invention will become clearer.

图1为本发明流程图。Fig. 1 is a flow chart of the present invention.

图2是待融合图像A。Figure 2 is an image A to be fused.

图3是待融合图像B。FIG. 3 is an image B to be fused.

图4是采用本发明方法融合图像A和图像B后的结果。FIG. 4 is the result of fusing image A and image B using the method of the present invention.

具体实施方式Detailed ways

下面结合附图及实施例对本发明做进一步说明。The present invention will be further described below with reference to the accompanying drawings and embodiments.

本方法分为图像字典构建、图像建模、图像显著稀疏分解、标签初始融合、标签融合优化和图像重构六个部分,具体的工作流程如图1所示。This method is divided into six parts: image dictionary construction, image modeling, image sparse decomposition, initial label fusion, label fusion optimization and image reconstruction. The specific workflow is shown in Figure 1.

(一)基于均匀网格划分图像块和向量化的图像字典构建。(1) Image block division based on uniform grid and vectorized image dictionary construction.

将待融和的两幅图像A和B按照相同大小为Px×Py(其中Px和Py分别表示横坐标和纵坐标方向的像素点个数)的网格单元划分为无重叠的图像块。图像A和B的每个图像块的灰度值矩阵拉长为d=Px×Py行1列的列向量

Figure GDA0002661961580000081
Figure GDA0002661961580000082
N表示图像块的总数。图像A和B转换为矩阵
Figure GDA0002661961580000083
Figure GDA0002661961580000084
其中
Figure GDA0002661961580000085
Figure GDA0002661961580000086
分别表示图像A和B的第N个图像块的列向量。通过图像A和B的图像块列向量构造图像融和字典D为:Divide the two images A and B to be merged into non-overlapping images according to grid units of the same size P x × P y (where P x and P y represent the number of pixels in the abscissa and ordinate directions, respectively) piece. The gray value matrix of each image block of images A and B is elongated into a column vector with d=P x ×P y row and 1 column
Figure GDA0002661961580000081
and
Figure GDA0002661961580000082
N represents the total number of image blocks. Convert images A and B to matrices
Figure GDA0002661961580000083
and
Figure GDA0002661961580000084
in
Figure GDA0002661961580000085
and
Figure GDA0002661961580000086
Column vector representing the Nth image patch of images A and B, respectively. The image fusion dictionary D is constructed by the image block column vectors of images A and B as:

Figure GDA0002661961580000087
Figure GDA0002661961580000087

(二)基于公共稀疏特征、显著稀疏特征和误差矩阵表示的图像显著稀疏建模。(2) Image salient sparse modeling based on common sparse features, salient sparse features and error matrix representation.

待融和图像建模分解为公共稀疏项、图像独有特征的显著稀疏项和含有图像细节信息的误差项,公共稀疏项为数据字典与公共稀疏系数之积,显著稀疏项为数据字典与显著稀疏系数之积。定义公共稀疏系数、显著稀疏系数和误差矩阵的1范数之和为目标函数。当目标函数最小时,输出融和图像的公共稀疏系数、显著稀疏系数和误差。The image modeling to be fused is decomposed into common sparse items, significant sparse items with unique image features, and error items containing image details. The common sparse item is the product of the data dictionary and the common sparse coefficient, and the significant sparse item is the data dictionary and significant Product of coefficients. Define the sum of common sparse coefficients, significant sparse coefficients and the 1-norm of the error matrix as the objective function. When the objective function is the smallest, output the common sparse coefficients, significant sparse coefficients and errors of the fusion image.

图像显著稀疏建模为在约束条件YA=DXA+ZAD+EA下最小化目标函数min||XA||1+||ZA||1+λ||EA||2,1,其中常数系数λ本发明取值为30,求解公共稀疏系数矩阵XA,显著稀疏系数矩阵ZA和误差EA。在约束条件YB=DXB+ZBD+EB下最小化目标函数min||XB||1+||ZB||1+λ||E||B2,1,求解公共稀疏系数矩阵XB,显著稀疏系数矩阵ZB,误差EBThe image significant sparsity is modeled as minimizing the objective function min||X A || 1 +||Z A || 1 +λ||E A || under the constraint Y A =DX A + Z A D+E A 2,1 , where the constant coefficient λ is 30 in the present invention, and the common sparse coefficient matrix X A , the significant sparse coefficient matrix Z A and the error E A are solved. Minimize the objective function min||X B || 1 +||Z B || 1 +λ||E|| B2,1 under the constraint Y B =DX B +Z B D+E B , and solve the common sparsity Coefficient matrix X B , significant sparse coefficient matrix Z B , error E B .

(三)基于动态惩罚因子的线性交替框架求解图像显著稀疏模型分解的公共稀疏特征、显著稀疏特征和误差等参数。具体步骤如下:(3) The linear alternating framework based on dynamic penalty factor solves the parameters such as common sparse features, significant sparse features and errors of image significant sparse model decomposition. Specific steps are as follows:

步骤(31)初始化图像A和图像B的显著稀疏模型参数。Step (31) initializes the significant sparse model parameters of image A and image B.

图像A和图像B的公共稀疏系数矩阵分别为

Figure GDA0002661961580000091
Figure GDA0002661961580000092
图像A和图像B的显著稀疏系数
Figure GDA0002661961580000093
图像A和图像B的误差
Figure GDA0002661961580000094
图像A和图像B的拉格朗日乘子
Figure GDA0002661961580000095
加快收敛速度因子ρ=1.1,收敛因子ε=0.05,惩罚因子μ0=10-6,最大惩罚参数μmax=1010,迭代次数j=0,常系数λ=30;The common sparse coefficient matrices of image A and image B are respectively
Figure GDA0002661961580000091
and
Figure GDA0002661961580000092
Significant sparse coefficients for image A and image B
Figure GDA0002661961580000093
Error between image A and image B
Figure GDA0002661961580000094
Lagrange Multipliers for Image A and Image B
Figure GDA0002661961580000095
Accelerate the convergence speed factor ρ=1.1, convergence factor ε=0.05, penalty factor μ 0 =10 -6 , maximum penalty parameter μ max =10 10 , iteration number j=0, constant coefficient λ=30;

步骤(32)使用收缩算子法更新误差。Step (32) uses the shrinkage operator method to update the error.

计算第j+1次迭代图像A的误差:Calculate the error of image A at the j+1th iteration:

Figure GDA0002661961580000096
Figure GDA0002661961580000096

其中1≤i≤N,

Figure GDA0002661961580000097
为图像A在约束条件下的误差,GA(:,i)为GA的第i列,μj为第j次迭代的惩罚因子,
Figure GDA0002661961580000098
为第j次迭代图像A的公共稀疏系数矩阵,
Figure GDA0002661961580000099
为第j次迭代图像A的显著稀疏系数矩阵,
Figure GDA00026619615800000910
为第j次迭代图像A的拉格朗日乘子。where 1≤i≤N,
Figure GDA0002661961580000097
is the error of image A under the constraints, G A (:, i) is the i-th column of G A , μ j is the penalty factor of the j-th iteration,
Figure GDA0002661961580000098
is the public sparse coefficient matrix of image A for the jth iteration,
Figure GDA0002661961580000099
is the significant sparse coefficient matrix of the jth iteration image A,
Figure GDA00026619615800000910
is the Lagrange multiplier of image A for the jth iteration.

计算第j+1次迭代图像B的误差:Calculate the error of image B at the j+1th iteration:

Figure GDA00026619615800000911
Figure GDA00026619615800000911

其中1≤i≤N,

Figure GDA00026619615800000912
为图像B在约束条件下的误差,GB(:,i)为GB的第i列,
Figure GDA00026619615800000913
为第j次迭代图像B的公共稀疏系数矩阵,
Figure GDA00026619615800000914
第j次迭代图像B的拉格朗日乘子。where 1≤i≤N,
Figure GDA00026619615800000912
is the error of image B under constraints, GB (:,i) is the i-th column of GB ,
Figure GDA00026619615800000913
is the common sparse coefficient matrix of image B for the jth iteration,
Figure GDA00026619615800000914
Lagrange multipliers of image B for the jth iteration.

步骤(33)使用阈值法更新公共稀疏系数矩阵。Step (33) uses the threshold method to update the common sparse coefficient matrix.

计算第j+1次迭代图像A的公共稀疏系数矩阵:Compute the common sparse coefficient matrix of image A for the j+1th iteration:

Figure GDA0002661961580000101
Figure GDA0002661961580000101

计算第j+1次迭代图像B的公共稀疏系数矩阵:Compute the common sparse coefficient matrix of image B for the j+1th iteration:

Figure GDA0002661961580000102
Figure GDA0002661961580000102

其中S为函数,定义为

Figure GDA0002661961580000103
x和τ为函数参数。where S is a function, defined as
Figure GDA0002661961580000103
x and τ are function parameters.

步骤(34)使用阈值法更新显著稀疏系数矩阵ZA和ZBStep (34) updates the significant sparse coefficient matrices Z A and Z B using a threshold method.

计算第j+1次迭代图像A的显著稀疏系数矩阵:Compute the significant sparse coefficient matrix of image A at iteration j+1:

Figure GDA0002661961580000104
Figure GDA0002661961580000104

计算第j+1次迭代图像B的显著稀疏系数矩阵:Compute the significant sparse coefficient matrix of image B for the j+1th iteration:

Figure GDA0002661961580000105
Figure GDA0002661961580000105

步骤(35)更新拉格朗日乘子LA和LBStep (35) updates the Lagrangian multipliers LA and LB.

计算第j+1次迭代图像B的拉格朗日乘子:Compute the Lagrangian multipliers of image B at the j+1th iteration:

Figure GDA0002661961580000106
Figure GDA0002661961580000106

计算第j+1次迭代图像B的拉格朗日乘子:Compute the Lagrangian multipliers of image B at the j+1th iteration:

Figure GDA0002661961580000107
Figure GDA0002661961580000107

步骤(36)更新惩罚参数μ。Step (36) updates the penalty parameter μ.

计算第j+1次迭代惩罚因子:Calculate the penalty factor for the j+1th iteration:

μj+1=min(μjρ,μmax) (10)μ j+1 = min(μ j ρ, μ max ) (10)

其中ρ为收敛速度因子,μmax为最大惩罚因子。where ρ is the convergence rate factor and μ max is the maximum penalty factor.

步骤(37)迭代收敛判断。Step (37) iterative convergence judgment.

如果判断收敛条件

Figure GDA0002661961580000111
成立,则输出
Figure GDA0002661961580000112
Figure GDA0002661961580000113
否则,j=j+1,转入第二步。If the convergence condition is judged
Figure GDA0002661961580000111
is established, the output
Figure GDA0002661961580000112
Figure GDA0002661961580000113
Otherwise, j=j+1, go to the second step.

如果判断收敛条件

Figure GDA0002661961580000114
成立,则输出
Figure GDA0002661961580000115
Figure GDA0002661961580000116
否则,j=j+1,转入步骤(32)。If the convergence condition is judged
Figure GDA0002661961580000114
is established, the output
Figure GDA0002661961580000115
Figure GDA0002661961580000116
Otherwise, j=j+1, go to step (32).

(四)基于最大平衡聚焦参数准则的图像标签初始融合。(4) Initial fusion of image labels based on the maximum balanced focus parameter criterion.

定义待融合图像A的第i个图像子块的聚焦参数J(A,i)为误差乘以平衡因子与显著稀疏系数矩阵乘以字典的第i列的2范数Define the focus parameter J(A, i) of the i-th image sub-block of the image A to be fused as the error multiplied by the balance factor and the significant sparse coefficient matrix multiplied by the 2-norm of the i-th column of the dictionary

J(A,i)=||bEA(:,i)+ZAD(:,i)||2 (11)J(A,i)=||bE A (:,i)+Z A D(:,i)|| 2 (11)

其中EA(:,i)表示误差EA的第i列,D(:,i)表示字典D的第i列。平衡因子b定义为2幅待配准图像A和B的显著稀疏系数矩阵分别与第i列字典矩阵乘积之和除以2幅待配准图像误差矩阵的第i列之和,计算公式为Where E A (:, i) represents the i-th column of the error E A , and D(:, i) represents the i-th column of the dictionary D. The balance factor b is defined as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered and the i-th column dictionary matrix respectively divided by the i-th column of the error matrix of the two images to be registered. The calculation formula is

Figure GDA0002661961580000117
Figure GDA0002661961580000117

其中EB(:,i)表示误差EB的第i列。where EB (:,i) represents the i - th column of error EB.

定义待融合图像B第i个图像子块的聚焦参数J(B,i)为误差乘以平衡因子与显著稀疏系数矩阵乘以字典的第i列的2范数Define the focus parameter J(B, i) of the ith image sub-block of the image B to be fused as the error multiplied by the balance factor and the significant sparse coefficient matrix multiplied by the 2 norm of the ith column of the dictionary

J(B,i)=||bEB(:,i)+ZBD(:,i)||2 (13)J(B,i)=||bE B (:,i)+Z B D(:,i)|| 2 (13)

采用2范数最大化规则融和像素,使用公式(14)构造融合标签

Figure GDA0002661961580000118
(其中
Figure GDA0002661961580000119
表示第N个图像块的标签),每个列向量都包含了所选择源图像的对应列向量的标签,1和0分别代表该列来自于源图像A和B,融合标签规则为:The 2-norm maximization rule is used to fuse the pixels, and formula (14) is used to construct the fusion label
Figure GDA0002661961580000118
(in
Figure GDA0002661961580000119
represents the label of the Nth image block), each column vector contains the label of the corresponding column vector of the selected source image, 1 and 0 represent that the column comes from source images A and B, respectively, and the fusion label rule is:

Figure GDA00026619615800001110
Figure GDA00026619615800001110

(四)基于源图像细节信息和邻域图像块统计信息的图像标签融合优化。(4) Image label fusion optimization based on source image detail information and neighborhood image block statistics.

在检测聚焦区域的过程中,图像的聚焦区域会聚集在焦点附近,因此结合图像邻域信息更精确的划分聚焦和散焦区域。根据图像邻域信息和稀疏模型分解的细节信息误差EA和EB对初始化融合YF优化,更准确地划分聚焦区域和散焦区域,同时对散焦区和聚焦区域边缘做平滑处理。详细步骤包括:In the process of detecting the focal area, the focal area of the image will be gathered near the focal point, so the focal area and the defocused area are more accurately divided by combining the image neighborhood information. According to the image neighborhood information and the detailed information errors EA and EB decomposed by the sparse model, the initialization fusion Y F is optimized to divide the focus area and the defocus area more accurately, and at the same time, the edges of the defocus area and the focus area are smoothed. Detailed steps include:

步骤(41)基于源图像细节信息的融合稀疏优化。Step (41) is based on fusion sparse optimization of source image detail information.

为了保留两幅源图像的细节信息,分别求误差EA的聚焦区域焦点细节最大值

Figure GDA0002661961580000121
和误差EB的聚焦区域焦点细节最大值
Figure GDA0002661961580000122
如果图像A的误差大于
Figure GDA0002661961580000123
的90%,则保留源图像A的像素;如果图像B的误差大于
Figure GDA0002661961580000124
的90%,则保留源图像B的像素。基于源图像细节信息的融合标签优化规则为In order to retain the detail information of the two source images, the maximum value of the focal detail in the focus area of the error EA is calculated respectively .
Figure GDA0002661961580000121
and error E B the focal area focus detail maximum
Figure GDA0002661961580000122
If the error of image A is greater than
Figure GDA0002661961580000123
90% of , then keep the pixels of source image A; if the error of image B is greater than
Figure GDA0002661961580000124
90%, the pixels of source image B are preserved. The fusion label optimization rule based on source image detail information is:

Figure GDA0002661961580000125
Figure GDA0002661961580000125

步骤(42)基于邻域图像块信息的融合优化。Step (42) is based on the fusion optimization of the neighborhood image block information.

按照标签优化规则公式(15)融合图像,计算融合图像中每一个图像块8邻域的图像块来源,ni A和ni B分别表示第i列所对应的图像块的8邻域中分别源从图像A,B中选择的个数。若该图像块8个邻域块从图像A中选择的图像块个数大于从源图像B中选择的图像块个数,则代表在源图像A中该图像块所对应的区域是聚焦区域;反之,源图像B在该图像块所对应的区域是聚焦区域;若数目相等,则是边界区域。即按照式(16)更新融合图像。According to the label optimization rule formula (15) to fuse the image, calculate the image block source of each image block in the 8 neighborhoods of the fused image, n i A and n i B respectively represent the 8 neighborhoods of the image block corresponding to the ith column, respectively The number of sources selected from images A,B. If the number of image blocks selected from the image A by the eight neighborhood blocks of the image block is greater than the number of image blocks selected from the source image B, it means that the area corresponding to the image block in the source image A is the focus area; On the contrary, the area corresponding to the image block of the source image B is the focus area; if the number is equal, it is the boundary area. That is, the fused image is updated according to formula (16).

Figure GDA0002661961580000126
Figure GDA0002661961580000126

其中-1代表边界,从而得到了最终融合图像标签yi Fwhere -1 represents the boundary, resulting in the final fused image label y i F .

(六)基于优化后的融合标签重构图像。(6) Image reconstruction based on the optimized fusion labels.

根据融合图像标签yi F赋值构造融合图像Fi∈Rd×N=[F1,F2,…,FN]。如果图像标签为1,则选用图像A对应的像素;如果图像标签为0,则选用图像B对应的像素;否则取图像A和B对应像素的平均值。The fusion image F i ∈ R d×N =[F 1 ,F 2 ,...,F N ] is constructed according to the assignment of the fusion image label y i F . If the image label is 1, the pixel corresponding to the image A is selected; if the image label is 0, the pixel corresponding to the image B is selected; otherwise, the average value of the pixels corresponding to the images A and B is taken.

Figure GDA0002661961580000131
Figure GDA0002661961580000131

最后将第i个图像块融合后的大小为d行1列向量Fi,从第一个元素开始,px个元素为一列,共计转换为py列,得到大小为Px×Py的图像块Di,假设待融合图像A和图像B的宽为w,则图像块Di在重构融合图像的行ix和列iy分别为Finally, the size of the ith image block after fusion is a vector F i of d row and 1 column, starting from the first element, p x elements are a column, and the total is converted into p y columns, and the size of P x ×P y is obtained. Image block D i , assuming that the width of image A and image B to be fused is w, then image block D i in the row i x and column i y of the reconstructed fused image are respectively:

Figure GDA0002661961580000132
Figure GDA0002661961580000132

Figure GDA0002661961580000133
Figure GDA0002661961580000133

其中mod为取余数函数。where mod is the remainder function.

待融合图像A和B分别如图2和3所示,采用本发明方法融合后的结果如图4所示。The images A and B to be fused are shown in Figures 2 and 3, respectively, and the result of fusion using the method of the present invention is shown in Figure 4.

本发明利用误差矩阵和显著稀疏矩阵的细节信息和独有特征提取了多聚焦图像的焦点区域,并结合邻域信息和源图像信息优化融合,比现实领域的技术有更精确的聚焦区域划分,保留了更多的图像细节信息,克服了方块效应,具有较好的平滑性和鲁棒性。本发明提供了一种基于显著稀疏表示和邻域信息的多聚焦图像融合方法,具体实现该技术方案的方法和途径很多,以上所述仅是本发明的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。本实施例中未明确的各组成部分均可用现有技术加以实现。The invention extracts the focal area of the multi-focus image by using the detailed information and unique features of the error matrix and the significant sparse matrix, and combines the neighborhood information and the source image information to optimize the fusion, and has a more accurate focal area division than the technology in the real field. It retains more image details, overcomes the block effect, and has better smoothness and robustness. The present invention provides a multi-focus image fusion method based on significant sparse representation and neighborhood information. There are many specific methods and approaches for implementing this technical solution. The above are only the preferred embodiments of the present invention. For those skilled in the art, without departing from the principles of the present invention, several improvements and modifications can also be made, and these improvements and modifications should also be regarded as the protection scope of the present invention. All components not specified in this embodiment can be implemented by existing technologies.

Claims (3)

1. A multi-focus image fusion method based on significant sparse representation and neighborhood information is characterized by comprising the following steps:
step 1, dividing an image to be fused into image blocks based on a uniform grid, and constructing a vectorized image fusion dictionary;
step 2, performing image significant sparse modeling to obtain an image significant sparse model;
step 3, solving parameters of the image significant sparse decomposition model;
step 4, carrying out initial fusion of image labels;
step 5, fusing and optimizing the image labels;
step 6, reconstructing a fused image based on the optimized image label;
the step 1 comprises the following steps:
two images A and B to be fused are taken as Px×PyIs divided evenly into non-overlapping image blocks, where PxAnd PyRespectively representing the number of pixel points in the abscissa direction and the number of pixel points in the ordinate direction, wherein the lengthening of the gray value matrix of the ith image block in the image A is d-Px×PyColumn vector of row 1 column
Figure FDA0002628838040000011
The gray value matrix of the ith image block in the image B is elongated to d-Px×PyColumn vector of row 1 column
Figure FDA0002628838040000012
I is more than or equal to 1 and less than or equal to N, N represents the total number of image blocks, and the image A and the image B are respectively converted into matrixes
Figure FDA0002628838040000013
And
Figure FDA0002628838040000014
wherein
Figure FDA0002628838040000015
And
Figure FDA0002628838040000016
respectively representing a column vector of an Nth image block of the image A and a column vector of an Nth image block of the image B; constructing an image fusion dictionary D by using image block column vectors of the images A and B as follows:
Figure FDA0002628838040000017
the step 2 comprises the following steps:
the image significant sparse modeling is as follows:
under constraint YA=DXA+ZAD+EALower minimization objective function min | | XA||1+||ZA||1+λ||EA||2,1Wherein λ is a constant coefficient, XA、ZAAnd EARespectively representing a public sparse coefficient matrix, a significant sparse coefficient matrix and an error of a significant sparse model of the image A;
under constraint YB=DXB+ZBD+EBLower minimization objective function min | | XB||1+||ZB||1+λ||EB||2,1,XB、ZBAnd EBRespectively representing a public sparse coefficient matrix, a significant sparse coefficient matrix and an error of the significant sparse model of the image B;
the step 3 comprises the following steps:
step 3-1: initializing significant sparse model parameters for image a and image B: the initial common sparse coefficient matrices of image A and image B are respectively
Figure FDA0002628838040000021
And is
Figure FDA0002628838040000022
The initial significant sparse coefficients of image A and image B are respectively
Figure FDA0002628838040000023
And
Figure FDA0002628838040000024
Figure FDA0002628838040000025
the initial errors of image A and image B are respectively
Figure FDA0002628838040000026
And
Figure FDA0002628838040000027
Figure FDA0002628838040000028
the initial Lagrange multiplier coefficients of image A and image B are respectively
Figure FDA0002628838040000029
And
Figure FDA00026288380400000210
Figure FDA00026288380400000211
the convergence speed factor rho is 1.1, the convergence factor is 0.05, and the penalty factor mu is0=10-6Maximum penalty parameter μmax=1010The iteration number j is 0, and the constant coefficient lambda is 30;
step 3-2: calculate the error of the (j + 1) th iteration image A
Figure FDA00026288380400000212
Figure FDA00026288380400000213
Wherein,
Figure FDA00026288380400000214
is composed of
Figure FDA00026288380400000215
I is more than or equal to 1 and less than or equal to N in the ith row,
Figure FDA00026288380400000216
error of image A under constraint, GA(i) is GAI column of (1), mujIs the penalty factor for the j-th iteration,
Figure FDA00026288380400000217
is the common sparse coefficient matrix for the jth iteration image a,
Figure FDA00026288380400000218
for the significant sparse coefficient matrix of the jth iteration image a,
Figure FDA00026288380400000219
a Lagrange multiplier for the jth iteration image A;
calculating the error of the (j + 1) th iteration image B
Figure FDA00026288380400000220
Figure FDA00026288380400000221
Wherein,
Figure FDA00026288380400000222
is composed of
Figure FDA00026288380400000223
The (c) th column of (a),
Figure FDA00026288380400000224
error of image B under constraint, GB(i) is GBThe (c) th column of (a),
Figure FDA00026288380400000225
is the common sparse coefficient matrix for the jth iteration image B,
Figure FDA00026288380400000226
the lagrangian multiplier for the jth iteration image B,
Figure FDA00026288380400000227
a significant sparse coefficient matrix of a jth iteration image B;
step 3-3: computingCommon sparse coefficient matrix of j +1 th iteration image A
Figure FDA00026288380400000228
Figure FDA0002628838040000031
Calculating a common sparse coefficient matrix of the (j + 1) th iteration image B
Figure FDA0002628838040000032
Figure FDA0002628838040000033
Wherein S is a function defined as
Figure FDA0002628838040000034
x and tau are function parameters;
step 3-4: calculating a significant sparse coefficient matrix of the (j + 1) th iteration image A
Figure FDA0002628838040000035
Figure FDA0002628838040000036
Calculating a significant sparse coefficient matrix of the (j + 1) th iteration image B
Figure FDA0002628838040000037
Figure FDA0002628838040000038
Step 3-5: computing Lagrange multiplier for j +1 th iteration image A
Figure FDA0002628838040000039
Figure FDA00026288380400000310
Computing Lagrange multiplier for j +1 th iteration image B
Figure FDA00026288380400000311
Figure FDA00026288380400000312
Step 3-6: calculating the (j + 1) th iteration penalty factor muj+1
μj+1=min(μjρ,μmax) (10)
Where ρ is the convergence rate factor, μmaxIs the maximum penalty factor;
step 3-7: if the convergence condition is satisfied
Figure FDA00026288380400000318
If true, output
Figure FDA00026288380400000314
Figure FDA00026288380400000315
Otherwise, updating j to j +1, and turning to the step 3-2;
if the convergence condition is satisfied
Figure FDA00026288380400000316
If true, output
Figure FDA00026288380400000317
Figure FDA0002628838040000041
Otherwise, update j to j+1, going to step 3-2;
step 4 comprises the following steps:
defining a focusing parameter J (A, i) of the ith image block of the image A to be fused as an error multiplied by a balance factor and a significant sparse coefficient matrix multiplied by a 2 norm of the ith column of the dictionary, wherein the calculation formula is as follows:
J(A,i)=||bEA(:,i)+ZAD(:,i)||2(11)
wherein EA(i) represents an error EARepresents the ith column of the dictionary D; the balance factor B is defined as the sum of products of significant sparse coefficient matrixes of 2 images A and B to be registered and an ith column dictionary matrix is divided by the sum of an ith column of an error matrix of the 2 images to be registered, and the calculation formula is as follows:
Figure FDA0002628838040000042
wherein EB(i) represents an error EBThe ith column;
defining a focusing parameter J (B, i) of the ith image block of the image B to be fused as an error multiplied by a balance factor and a significant sparse coefficient matrix multiplied by a 2 norm of the ith column of the dictionary, wherein the calculation formula is as follows:
J(B,i)=||bEB(:,i)+ZBD(:,i)||2(13)
fusing pixels by adopting a 2 norm maximization rule; construction of fusion tags Using formula (14)
Figure FDA0002628838040000043
Wherein
Figure FDA0002628838040000044
Labels representing the Nth image block, wherein each column vector comprises the labels of the corresponding column vector of the selected source image, 1 and 0 respectively represent that the column is from the source image A and the image B, and the label fusion rule is as follows:
Figure FDA0002628838040000045
2. the method of claim 1, wherein step 5 comprises:
step 5-1: respectively calculating the errors EAFocal zone focus detail maximum of
Figure FDA0002628838040000046
And error EBFocal zone focus detail maximum of
Figure FDA0002628838040000047
If the error of image A is larger than
Figure FDA0002628838040000048
90% of the original image A, retaining the pixels of the source image A; if the error of image B is larger than
Figure FDA0002628838040000049
And 90% of the original image B, retaining the pixels of the original image B, and the fusion label optimization rule based on the detail information of the original image is as follows:
Figure FDA00026288380400000410
step 5-2: fusing the images according to a label optimization rule formula (15), and calculating the image block source n of each 8 neighborhood of the image blocks in the fused imagesi AThe number n of image blocks corresponding to the ith column in the neighborhood of 8 selected from the image Ai BRepresenting the number of selected image blocks from the image B in 8 neighborhoods of the image block corresponding to the ith column, and if the number of the image blocks selected from the image A by the 8 neighborhoods of the image block is greater than the number of the image blocks selected from the source image B, representing that the area corresponding to the image block in the source image A is a focus area; if the number of the image blocks selected from the image A by the 8 adjacent domain blocks of the image block is less than the number of the image blocks selected from the source image B, the area corresponding to the image block of the source image B is a focusing area; if number of phasesAnd the boundary region is obtained, namely the fused image is updated according to the formula (16):
Figure FDA0002628838040000051
where-1 represents the boundary, resulting in the final fused image label yi F
3. The method of claim 2, wherein step 6 comprises:
from the fused image label yi FAssignment construct fusion image FiIf the image label is 1, selecting the pixel corresponding to the image A; if the image label is 0, selecting the pixel corresponding to the image B; otherwise take the average of image a and image B:
Figure FDA0002628838040000052
image F with d rows and 1 columns of vectors after fusion of ith image blockiStarting from the first element, pxEach element being a column and being converted to p in totalyColumn, get size Px×PyImage block D ofi
Setting the width of the image A and the image B to be fused as w, and then, setting the image block DiIn the row i of the reconstructed fused imagexAnd column iyRespectively as follows:
Figure FDA0002628838040000053
Figure FDA0002628838040000054
where mod is a remainder taking function.
CN201910126869.3A 2019-02-20 2019-02-20 A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information Active CN109934794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126869.3A CN109934794B (en) 2019-02-20 2019-02-20 A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910126869.3A CN109934794B (en) 2019-02-20 2019-02-20 A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information

Publications (2)

Publication Number Publication Date
CN109934794A CN109934794A (en) 2019-06-25
CN109934794B true CN109934794B (en) 2020-10-27

Family

ID=66985723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126869.3A Active CN109934794B (en) 2019-02-20 2019-02-20 A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information

Country Status (1)

Country Link
CN (1) CN109934794B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104077761A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focus image fusion method based on self-adaption sparse representation
CN107680070A (en) * 2017-09-15 2018-02-09 电子科技大学 A kind of layering weight image interfusion method based on original image content
CN109003256A (en) * 2018-06-13 2018-12-14 天津师范大学 A kind of multi-focus image fusion quality evaluating method indicated based on joint sparse

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9152881B2 (en) * 2012-09-13 2015-10-06 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries
CN104008533B (en) * 2014-06-17 2017-09-29 华北电力大学 Multisensor Image Fusion Scheme based on block adaptive signature tracking
CN106056564B (en) * 2016-05-27 2018-10-16 西华大学 Edge clear image interfusion method based on joint sparse model
CN106447640B (en) * 2016-08-26 2019-07-16 西安电子科技大学 Multi-focus image fusion method and device based on dictionary learning and rotation-guided filtering
CN108510465B (en) * 2018-01-30 2019-12-24 西安电子科技大学 Multi-focus Image Fusion Method Based on Consistency Constrained Nonnegative Sparse Representation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104077761A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focus image fusion method based on self-adaption sparse representation
CN107680070A (en) * 2017-09-15 2018-02-09 电子科技大学 A kind of layering weight image interfusion method based on original image content
CN109003256A (en) * 2018-06-13 2018-12-14 天津师范大学 A kind of multi-focus image fusion quality evaluating method indicated based on joint sparse

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Visual tracking via robust multi-task multi-feature joint sparse representation;Yong Wang等;《Multimedia Tools and Applications》;20181231;第77卷;第31447-31467页 *
结合稀疏表示与神经网络的医学图像融合;陈轶鸣等;《河南科技大学学报(自然科学版)》;20180430;第39卷(第2期);第40-47页 *
联合稀疏表示的医学图像融合及同步去噪;宗静静等;《中国生物医学工程学报》;20160430;第35卷(第2期);第133-140页 *

Also Published As

Publication number Publication date
CN109934794A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
US11450066B2 (en) 3D reconstruction method based on deep learning
CN112001960B (en) Monocular image depth estimation method based on multi-scale residual error pyramid attention network model
CN113962893B (en) Face image restoration method based on multiscale local self-attention generation countermeasure network
CN104361328B (en) A kind of facial image normalization method based on adaptive multiple row depth model
CN107766794A (en) The image, semantic dividing method that a kind of Fusion Features coefficient can learn
CN113870128B (en) Digital mural image restoration method based on depth convolution countermeasure network
Zhao et al. Deep equilibrium models for snapshot compressive imaging
CN108021930B (en) Self-adaptive multi-view image classification method and system
CN111553296B (en) A Binary Neural Network Stereo Vision Matching Method Based on FPGA
CN111814884A (en) An upgrade method of target detection network model based on deformable convolution
Dinesh et al. 3D point cloud color denoising using convex graph-signal smoothness priors
CN109461122A (en) A kind of compressed sensing image rebuilding method based on multi-view image
CN116152100B (en) Point cloud denoising method, device and storage medium based on feature analysis and scale selection
CN104952051B (en) Low-rank image repair method based on gauss hybrid models
CN106529604A (en) Adaptive image tag robust prediction method and system
CN115100406B (en) Weight information entropy fuzzy C-means clustering method based on superpixel processing
CN113822825B (en) Optical building target three-dimensional reconstruction method based on 3D-R2N2
CN109934794B (en) A Multi-Focus Image Fusion Method Based on Significant Sparse Representation and Neighborhood Information
CN118823342A (en) An event-RGB semantic segmentation method based on large model adaptation
Zhao et al. NormalNet: Learning-based normal filtering for mesh denoising
CN109583406B (en) Facial Expression Recognition Method Based on Feature Attention Mechanism
Fu et al. Monocular depth estimation based on multi-scale graph convolution networks
Alt et al. Learning sparse masks for diffusion-based image inpainting
CN118505755B (en) Dynamic target tracking method based on spatiotemporal graph representation and learning
Haiyun et al. Learning optical flow via deformable convolution and feature pyramid networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant