CN109934794B - Multi-focus image fusion method based on significant sparse representation and neighborhood information - Google Patents

Multi-focus image fusion method based on significant sparse representation and neighborhood information

Info

Publication number
CN109934794B
Authority
CN
China
Prior art keywords
image
column
error
sparse
significant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910126869.3A
Other languages
Chinese (zh)
Other versions
CN109934794A (en)
Inventor
谢从华
张冰
高蕴梅
刘在德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changshu Institute of Technology
Priority to CN201910126869.3A
Publication of CN109934794A
Application granted
Publication of CN109934794B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a multi-focus image fusion method based on significant sparse representation and neighborhood information, which comprises the following specific steps: step 1, dividing the images into blocks based on a uniform grid and constructing a vectorized image dictionary; step 2, establishing an image significant sparse model based on common sparse features, significant sparse features and an error representation; step 3, solving the common sparse features, significant sparse features, errors and other parameters of the image significant sparse decomposition model with a linearized alternating framework based on a dynamic penalty factor; step 4, label fusion based on the maximum balanced focusing parameter criterion; step 5, label fusion optimization based on the detail information of the source images and the statistical information of the neighborhood image blocks; and step 6, fusing and reconstructing the image based on the optimized labels.

Description

Multi-focus image fusion method based on significant sparse representation and neighborhood information
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a multi-focus image fusion method based on significant sparse representation and neighborhood information.
Background
Because an optical lens has a limited depth of field, a single captured image carries a limited amount of information and objects at different depths in the scene cannot all be rendered sharply at once. By setting different focal points on the same scene with a sensor, several images with different sharp regions can be obtained. Multi-focus image fusion exploits the complementary and redundant information among such multi-focus images to combine images with different focuses into a single uniformly sharp image, thereby retaining the important visual information; it is of great significance for obtaining a more comprehensive and accurate view of a scene.
In recent years, scholars at home and abroad have proposed various methods, such as a fusion algorithm based on guided filtering, multi-focus image fusion based on dense SIFT, and a multi-focus image fusion algorithm based on binary differential evolution and an adaptive blocking mechanism. Because sparse representation can efficiently extract the latent information of an image, this class of models is widely applied to multi-focus image fusion and shows good prospects.
Yang first introduced sparse representation into multi-focus image fusion, decomposing the images over a dictionary and taking the maximum of the absolute sparse-coefficient sums as the fusion rule. Liu Y et al. proposed an image fusion method combining multi-scale transformation and sparse representation and provided a general image fusion framework. Liu Y et al. also applied an adaptive sparse model to image fusion, classifying different sub-blocks according to structural features of the original image and selecting a dictionary adaptively. At present, scholars at home and abroad have proposed various sparse representation models and dictionary improvements and reconstruct the fused image from the sparse coefficients. The existing methods still have several problems:
(1) When the sparse coefficients are used directly, the dictionary cannot represent image details (such as textures and edges) well enough, so image detail information is lost.
(2) Reconstructing the fused image from the sparse coefficients produces a blocking effect.
(3) Multi-focus fusion methods based on robust sparse models detect the focus region from the local detail information of each image block; since some detail information may be defocused at the same time, erroneous detection results can be produced.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a multi-focus image fusion method based on a significant sparse representation model. The method is realized with techniques such as sparse representation, optimization theory and image processing. It overcomes the over-smoothing of detail information during image fusion and distinguishes focus regions from defocus regions more accurately.
The method comprises the following specific steps:
step 1, dividing an image to be fused into image blocks based on a uniform grid, and constructing a vectorized image fusion dictionary;
step 2, performing image significant sparse modeling to obtain an image significant sparse model;
step 3, solving parameters of the image significant sparse decomposition model;
step 4, carrying out initial fusion of image labels;
step 5, fusing and optimizing the image labels;
step 6, reconstructing a fused image based on the optimized image label;
the step 1 comprises the following steps:
Two images A and B to be fused are divided evenly into non-overlapping image blocks of size P_x × P_y, where P_x and P_y denote the number of pixels in the abscissa direction (i.e. the horizontal direction of the image) and in the ordinate direction (i.e. the vertical direction of the image), respectively. The gray-value matrix of the i-th image block of image A is stretched into a column vector y_i^A of d = P_x × P_y rows and 1 column, and the gray-value matrix of the i-th image block of image B is stretched into a column vector y_i^B of d = P_x × P_y rows and 1 column, 1 ≤ i ≤ N, where N denotes the total number of image blocks. Image A and image B are thus converted into the matrices Y_A = [y_1^A, y_2^A, ..., y_N^A] ∈ R^(d×N) and Y_B = [y_1^B, y_2^B, ..., y_N^B] ∈ R^(d×N), where y_N^A and y_N^B denote the column vectors of the N-th image block of image A and of image B, respectively. The image fusion dictionary D is constructed from the image-block column vectors of images A and B as:
D = [Y_A, Y_B] ∈ R^(d×2N)
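For illustration only, the following sketch (in Python with NumPy; the block size, data types and the row-major block ordering are assumptions not fixed by the description, and the function names are hypothetical) shows how step 1 can be realized: each source image is cut into non-overlapping P_x × P_y blocks, each block is stretched into a d-dimensional column, and the two column matrices are concatenated into the fusion dictionary D = [Y_A, Y_B].

import numpy as np

def blocks_to_columns(img, Px, Py):
    """Stack the vectorized non-overlapping blocks of img as the columns of a d x N matrix."""
    H, W = img.shape
    cols = []
    for y in range(0, H - Py + 1, Py):          # block rows (assumed row-major block ordering)
        for x in range(0, W - Px + 1, Px):      # block columns
            block = img[y:y + Py, x:x + Px]
            cols.append(block.reshape(-1, order="F"))   # column-major stretch of the block
    return np.stack(cols, axis=1)                # shape (Px*Py, N)

def build_dictionary(img_a, img_b, Px=8, Py=8):
    """Construct Y_A, Y_B and the fusion dictionary D = [Y_A, Y_B] (Px, Py are illustrative defaults)."""
    Y_A = blocks_to_columns(img_a.astype(np.float64), Px, Py)
    Y_B = blocks_to_columns(img_b.astype(np.float64), Px, Py)
    D = np.concatenate([Y_A, Y_B], axis=1)       # d x 2N dictionary
    return Y_A, Y_B, D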
the step 2 comprises the following steps:
The image to be fused is modeled as the sum of a common sparse term, a significant sparse term carrying the unique features of the image, and an error term carrying the image detail information; the common sparse term is the product of the data dictionary and the common sparse coefficients, and the significant sparse term is the product of the data dictionary and the significant sparse coefficients. The objective function is defined as the sum of the 1-norms of the common sparse coefficients and the significant sparse coefficients and the weighted l_{2,1}-norm of the error matrix; when the objective function reaches its minimum, the common sparse coefficients, the significant sparse coefficients and the error of the image to be fused are output.
The image significant sparse model is:
minimize the objective function min ||X_A||_1 + ||Z_A||_1 + λ||E_A||_{2,1} under the constraint Y_A = D X_A + Z_A D + E_A, where λ is a constant coefficient with λ = 30, and X_A, Z_A and E_A denote the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of image A, respectively;
minimize the objective function min ||X_B||_1 + ||Z_B||_1 + λ||E_B||_{2,1} under the constraint Y_B = D X_B + Z_B D + E_B, where X_B, Z_B and E_B denote the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of image B, respectively.
The step 3 comprises the following steps:
Step 3-1: initialize the significant sparse model parameters of image A and image B: the initial common sparse coefficient matrices X_A^0 and X_B^0, the initial significant sparse coefficient matrices Z_A^0 and Z_B^0, the initial errors E_A^0 and E_B^0, and the initial Lagrange multiplier coefficients L_A^0 and L_B^0 of image A and image B; the convergence speed factor is ρ = 1.1, the convergence factor is 0.05, the penalty factor is μ_0 = 10^-6, the maximum penalty parameter is μ_max = 10^10, the iteration number is j = 0, and the constant coefficient is λ = 30;
Step 3-2: compute the error E_A^(j+1) of the (j+1)-th iteration of image A column by column, where E_A^(j+1)(:,i) is the i-th column of E_A^(j+1), 1 ≤ i ≤ N, G_A is the error of image A under the constraint, G_A(:,i) is the i-th column of G_A, μ_j is the penalty factor of the j-th iteration, X_A^j is the common sparse coefficient matrix of the j-th iteration of image A, Z_A^j is the significant sparse coefficient matrix of the j-th iteration of image A, and L_A^j is the Lagrange multiplier of the j-th iteration of image A;
compute the error E_B^(j+1) of the (j+1)-th iteration of image B in the same way, where E_B^(j+1)(:,i) is the i-th column of E_B^(j+1), G_B is the error of image B under the constraint, G_B(:,i) is the i-th column of G_B, X_B^j is the common sparse coefficient matrix of the j-th iteration of image B, L_B^j is the Lagrange multiplier of the j-th iteration of image B, and Z_B^j is the significant sparse coefficient matrix of the j-th iteration of image B;
Step 3-3: compute the common sparse coefficient matrix X_A^(j+1) of the (j+1)-th iteration of image A and the common sparse coefficient matrix X_B^(j+1) of the (j+1)-th iteration of image B, where S is the thresholding (shrinkage) function and x and τ are its parameters;
Step 3-4: compute the significant sparse coefficient matrix Z_A^(j+1) of the (j+1)-th iteration of image A and the significant sparse coefficient matrix Z_B^(j+1) of the (j+1)-th iteration of image B;
Step 3-5: compute the Lagrange multiplier L_A^(j+1) of the (j+1)-th iteration of image A and the Lagrange multiplier L_B^(j+1) of the (j+1)-th iteration of image B;
Step 3-6: compute the (j+1)-th iteration penalty factor μ_(j+1):
μ_(j+1) = min(μ_j ρ, μ_max)   (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor;
Step 3-7: if the convergence condition of image A is satisfied, output X_A = X_A^(j+1), Z_A = Z_A^(j+1) and E_A = E_A^(j+1); otherwise update j = j + 1 and go to step 3-2;
if the convergence condition of image B is satisfied, output X_B = X_B^(j+1), Z_B = Z_B^(j+1) and E_B = E_B^(j+1); otherwise update j = j + 1 and go to step 3-2.
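As an illustration of the overall loop structure of step 3 (the exact update formulas of steps 3-2 to 3-5 are given only as images in the original publication and are not reproduced here), the following Python/NumPy sketch assumes the standard ingredients named in the text: an elementwise soft-thresholding operator S for the 1-norm terms, a column-wise shrink operator for the l_{2,1} error term, a Lagrange-multiplier update, and the dynamic penalty update of formula (10). It also follows the verbal model description, in which the significant term is the dictionary multiplied by the significant sparse coefficients; it is a plausible linearized alternating scheme, not the patented update rules themselves.

import numpy as np

def soft_threshold(M, tau):
    """Elementwise soft-thresholding, the assumed form of the function S(x, tau)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def column_shrink(G, tau):
    """Column-wise l2,1 shrink: shorten (or zero) each column of G by tau in l2 norm."""
    E = np.zeros_like(G)
    norms = np.linalg.norm(G, axis=0)
    keep = norms > tau
    E[:, keep] = G[:, keep] * (norms[keep] - tau) / norms[keep]
    return E

def salient_sparse_decompose(Y, D, lam=30.0, rho=1.1, tol=0.05,
                             mu=1e-6, mu_max=1e10, max_iter=500):
    """Assumed linearized alternating scheme for
    min ||X||_1 + ||Z||_1 + lam*||E||_{2,1}  s.t.  Y = D X + D Z + E."""
    d, N = Y.shape
    K = D.shape[1]
    X = np.zeros((K, N))            # common sparse coefficients
    Z = np.zeros((K, N))            # significant sparse coefficients
    E = np.zeros((d, N))            # error / detail term
    L = np.zeros((d, N))            # Lagrange multiplier
    eta = np.linalg.norm(D, 2) ** 2 # linearization constant (an assumption)
    for _ in range(max_iter):
        # error update: column-wise shrink of the residual plus scaled multiplier
        E = column_shrink(Y - D @ X - D @ Z + L / mu, lam / mu)
        # common and significant coefficients: linearized proximal (soft-threshold) steps
        R = Y - D @ X - D @ Z - E + L / mu
        X = soft_threshold(X + D.T @ R / eta, 1.0 / (mu * eta))
        R = Y - D @ X - D @ Z - E + L / mu
        Z = soft_threshold(Z + D.T @ R / eta, 1.0 / (mu * eta))
        # multiplier and dynamic penalty updates
        residual = Y - D @ X - D @ Z - E
        L = L + mu * residual
        mu = min(rho * mu, mu_max)                  # formula (10)
        if np.abs(residual).max() < tol:            # assumed convergence test
            break
    return X, Z, E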
Step 4 comprises the following steps:
The focusing parameter J(A,i) of the i-th image block of image A to be fused is defined as the 2-norm of the i-th error column multiplied by the balance factor plus the product of the significant sparse coefficient matrix and the i-th column of the dictionary, and is computed as:
J(A,i) = ||b E_A(:,i) + Z_A D(:,i)||_2   (11)
where E_A(:,i) denotes the i-th column of the error E_A and D(:,i) denotes the i-th column of the dictionary D. The balance factor b is defined in formula (12) as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered with the i-th dictionary column, divided by the sum of the i-th columns of the error matrices of the two images to be registered, where E_B(:,i) denotes the i-th column of the error E_B.
The focusing parameter J(B,i) of the i-th image block of image B to be fused is defined analogously as:
J(B,i) = ||b E_B(:,i) + Z_B D(:,i)||_2   (13)
The pixels are fused with the 2-norm maximization rule, and the fusion label Y^F = [y_1^F, y_2^F, ..., y_N^F] is constructed with formula (14), where y_N^F denotes the label of the N-th image block; each column vector carries the label of the corresponding column vector of the selected source image, with 1 and 0 indicating that the column comes from source image A and from image B, respectively. The label fusion rule assigns y_i^F = 1 when J(A,i) is the larger focusing parameter and y_i^F = 0 otherwise.
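A hedged sketch of this initial label fusion (formulas (11)-(14)): the exact expression for the balance factor b in formula (12) is only available as an image, so the sketch assumes it is the ratio of the pooled column norms of the significant components to the pooled column norms of the errors, following the verbal definition above. SA and SB stand for the significant components (the Z·D products) and EA, EB for the error matrices obtained in step 3; all function names are illustrative.

import numpy as np

def initial_labels(SA, SB, EA, EB):
    """Return one label per block: 1 -> take the block from image A, 0 -> from image B."""
    N = EA.shape[1]
    labels = np.zeros(N, dtype=int)
    for i in range(N):
        num = np.linalg.norm(SA[:, i]) + np.linalg.norm(SB[:, i])
        den = np.linalg.norm(EA[:, i]) + np.linalg.norm(EB[:, i]) + 1e-12
        b = num / den                                   # assumed form of formula (12)
        J_A = np.linalg.norm(b * EA[:, i] + SA[:, i])   # focusing parameter of A, formula (11)
        J_B = np.linalg.norm(b * EB[:, i] + SB[:, i])   # focusing parameter of B, formula (13)
        labels[i] = 1 if J_A >= J_B else 0              # 2-norm maximization rule
    return labels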
the step 5 comprises the following steps:
Step 5-1: in order to retain the detail information of the two source images, the focus-detail maximum of the focus region of the error E_A and the focus-detail maximum of the focus region of the error E_B are computed respectively. If the error of image A is larger than 90% of the maximum for E_A, the pixels of source image A are retained; if the error of image B is larger than 90% of the maximum for E_B, the pixels of source image B are retained. The fusion label optimization rule based on the source image detail information is given by formula (15).
Step 5-2: fuse the images according to the label optimization rule of formula (15), and count the sources of the image blocks in the 8-neighborhood of every image block of the fused image: n_i^A denotes the number of image blocks in the 8-neighborhood of the image block corresponding to the i-th column that are selected from image A, and n_i^B denotes the number of image blocks in that 8-neighborhood that are selected from image B. If the number of blocks selected from image A in the 8-neighborhood is larger than the number selected from source image B, the region corresponding to the image block in source image A is a focus region; if it is smaller, the region corresponding to the image block in source image B is a focus region; if the two numbers are equal, the block belongs to the boundary region. The fused image is then updated according to formula (16), where -1 denotes the boundary, yielding the final fused image label y_i^F.
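A hedged sketch of the step-5 refinement (formulas (15) and (16)), assuming the N blocks form an nby × nbx grid in row-major order and that the per-block error used in the 90% test is the l2 norm of the corresponding column of E (the exact comparisons are given as formula images in the source); the tie-handling and the function names are illustrative.

import numpy as np

def refine_labels(labels, EA, EB, nbx, nby, keep_ratio=0.9):
    """Refine the initial block labels: keep high-error source blocks, then apply an
    8-neighborhood majority vote; -1 marks boundary blocks."""
    err_a = np.linalg.norm(EA, axis=0)
    err_b = np.linalg.norm(EB, axis=0)
    lab = labels.copy()
    lab[err_a > keep_ratio * err_a.max()] = 1      # retain source-A detail (cf. formula (15))
    lab[err_b > keep_ratio * err_b.max()] = 0      # retain source-B detail
    grid = lab.reshape(nby, nbx)
    out = grid.copy()
    for r in range(nby):
        for c in range(nbx):
            nbhd = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            n_a = int((nbhd == 1).sum()) - int(grid[r, c] == 1)   # neighbours taken from A
            n_b = int((nbhd == 0).sum()) - int(grid[r, c] == 0)   # neighbours taken from B
            out[r, c] = 1 if n_a > n_b else (0 if n_b > n_a else -1)  # -1: boundary (cf. (16))
    return out.reshape(-1)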
The step 6 comprises the following steps:
The fused image F = [F_1, F_2, ..., F_N] is constructed by assignment from the fused image labels y_i^F: if the image label is 1, the pixel corresponding to image A is selected; if the image label is 0, the pixel corresponding to image B is selected; otherwise the average of image A and image B is taken (formula (17)).
The fused vector F_i of the i-th image block, which has d rows and 1 column, is converted, starting from the first element, into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y.
Let the width of the images A and B to be fused be w; the row i_x and the column i_y at which the image block D_i is placed in the reconstructed fused image are then given by formulas (18) and (19), respectively, where mod is the remainder (modulo) function.
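A hedged sketch of this reconstruction, consistent with the block ordering assumed in the earlier step-1 sketch: each fused block is taken from image A (label 1), image B (label 0), or their average (label -1, boundary), the d-vector is reshaped back into a block, and the block is written into the output image at the position determined by its index and the image width w. Names and the ordering convention are assumptions.

import numpy as np

def reconstruct(labels, Y_A, Y_B, Px, Py, w, h):
    """Assemble the fused image from the optimized block labels (cf. formulas (17)-(19))."""
    fused = np.zeros((h, w))
    n_blocks_x = w // Px
    for i, lab in enumerate(labels):
        if lab == 1:
            col = Y_A[:, i]                       # block taken from image A
        elif lab == 0:
            col = Y_B[:, i]                       # block taken from image B
        else:
            col = 0.5 * (Y_A[:, i] + Y_B[:, i])   # boundary block: average of A and B
        block = col.reshape(Py, Px, order="F")    # undo the column-major stretch
        iy, ix = divmod(i, n_blocks_x)            # block row / block column (row-major order)
        fused[iy * Py:(iy + 1) * Py, ix * Px:(ix + 1) * Px] = block
    return fused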
In summary, the present invention exploits the objective imaging properties of the images to be fused and combines, in each processing step, computer image processing methods that conform to these properties. The method extracts the focus regions of the multi-focus images from the differences in detail information and unique features, optimizes the fusion with neighborhood information and source image information, retains more image detail information, yields a fused image with better smoothness, overcomes the blocking effect, has better robustness, and falls within the scope of protection of the patent law.
Advantageous effects:
(1) The invention provides an image significant sparse model. The image is decomposed into common sparse features shared by all images to be fused, significant sparse features unique to a single image, and an error matrix containing the image detail information. The significant sparse features can effectively locate the focus region of a multi-focus image, which overcomes the lack of image-specific feature information in traditional image sparse models and yields higher robustness.
(2) The invention provides initial image-label fusion based on the maximum balanced focusing parameter criterion. Focusing parameters are defined from the converged error matrix, the significant sparse coefficient matrix and their balance factor, and the label fusion rule is determined by the 2-norm maximization principle. This makes effective use of image detail information and image-specific information, and the adjustment through the balance factor gives higher robustness.
(3) The invention provides label fusion optimization based on the detail information of the source images and the statistics of the neighborhood image blocks. Two rules are proposed: if the error of a source image pixel exceeds 90% of the maximum error, the fused image retains that source image pixel, preserving more source image information; and the pixel-source information of the 8 neighboring blocks around each fused image block is counted and the minority-obeys-majority principle is applied to optimize the label fusion. This effectively overcomes the blocking effect caused by grid division and ensures the smoothness and continuity of the image content.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an image a to be fused.
Fig. 3 is an image B to be fused.
Fig. 4 shows the result of fusing image a and image B by the method of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The method comprises six parts of image dictionary construction, image modeling, image significant sparse decomposition, label initial fusion, label fusion optimization and image reconstruction, and the specific work flow is shown in figure 1.
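For orientation, the illustrative helper functions sketched in the description above (build_dictionary, salient_sparse_decompose, initial_labels, refine_labels and reconstruct, all hypothetical names) can be strung together into the six-part workflow of figure 1 roughly as follows; this driver is only runnable together with those sketches and its defaults are assumptions.

import numpy as np

def fuse_multifocus(img_a, img_b, Px=8, Py=8, lam=30.0):
    """Illustrative end-to-end pipeline: dictionary, decomposition, labels, refinement, reconstruction."""
    h, w = img_a.shape
    Y_A, Y_B, D = build_dictionary(img_a, img_b, Px, Py)        # part (I): blocks and dictionary
    X_A, Z_A, E_A = salient_sparse_decompose(Y_A, D, lam=lam)   # parts (II)-(III): decomposition of A
    X_B, Z_B, E_B = salient_sparse_decompose(Y_B, D, lam=lam)   # decomposition of B
    S_A, S_B = D @ Z_A, D @ Z_B                                 # significant components
    labels = initial_labels(S_A, S_B, E_A, E_B)                 # part (IV): initial labels
    labels = refine_labels(labels, E_A, E_B, w // Px, h // Py)  # part (V): label optimization
    return reconstruct(labels, Y_A, Y_B, Px, Py, w, h)          # part (VI): reconstruction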
(I) Dividing the image into blocks based on a uniform grid and constructing a vectorized image dictionary.
Two images A and B to be fused of the same size are divided evenly into non-overlapping image blocks of size P_x × P_y (where P_x and P_y denote the number of pixel points in the abscissa and ordinate directions, respectively). The gray-value matrix of each image block of images A and B is stretched into a column vector y_i^A or y_i^B of d = P_x × P_y rows and 1 column, where N denotes the total number of image blocks. Images A and B are converted into the matrices Y_A = [y_1^A, ..., y_N^A] ∈ R^(d×N) and Y_B = [y_1^B, ..., y_N^B] ∈ R^(d×N), where y_N^A and y_N^B are the column vectors of the N-th image blocks of images A and B, respectively. The image fusion dictionary D is constructed from the image-block column vectors of images A and B as:
D = [Y_A, Y_B] ∈ R^(d×2N)
(II) Image significant sparse modeling based on the common sparse features, the significant sparse features and the error matrix representation.
The image to be fused is modeled as the sum of a common sparse term, a significant sparse term carrying the unique features of the image, and an error term carrying the image detail information; the common sparse term is the product of the data dictionary and the common sparse coefficients, and the significant sparse term is the product of the data dictionary and the significant sparse coefficients. The objective function is defined as the sum of the 1-norms of the common sparse coefficients and the significant sparse coefficients and the weighted l_{2,1}-norm of the error matrix; when the objective function reaches its minimum, the common sparse coefficients, the significant sparse coefficients and the error of the image to be fused are output.
The image significant sparse model minimizes the objective function min ||X_A||_1 + ||Z_A||_1 + λ||E_A||_{2,1} under the constraint Y_A = D X_A + Z_A D + E_A, where the constant coefficient is λ = 30, and solves for the common sparse coefficient matrix X_A, the significant sparse coefficient matrix Z_A and the error E_A. Under the constraint Y_B = D X_B + Z_B D + E_B, the objective function min ||X_B||_1 + ||Z_B||_1 + λ||E_B||_{2,1} is minimized, and the common sparse coefficient matrix X_B, the significant sparse coefficient matrix Z_B and the error E_B are solved for.
(III) Solving the common sparse features, significant sparse features, errors and other parameters of the image significant sparse model decomposition by a linearized alternating framework based on dynamic penalty factors. The specific steps are as follows:
Step (31): initialize the significant sparse model parameters of image A and image B.
The initial common sparse coefficient matrices X_A^0 and X_B^0, the initial significant sparse coefficients Z_A^0 and Z_B^0, the initial errors E_A^0 and E_B^0, and the initial Lagrange multipliers L_A^0 and L_B^0 of image A and image B are set; the convergence speed factor is ρ = 1.1, the convergence factor is 0.05, the penalty factor is μ_0 = 10^-6, the maximum penalty parameter is μ_max = 10^10, the iteration number is j = 0, and the constant coefficient is λ = 30.
Step (32): update the error using the shrink operator method.
Compute the error E_A^(j+1) of the (j+1)-th iteration of image A column by column, where 1 ≤ i ≤ N, G_A is the error of image A under the constraint, G_A(:,i) is the i-th column of G_A, μ_j is the penalty factor of the j-th iteration, X_A^j is the common sparse coefficient matrix of the j-th iteration of image A, Z_A^j is the significant sparse coefficient matrix of the j-th iteration of image A, and L_A^j is the Lagrange multiplier of the j-th iteration of image A.
Compute the error E_B^(j+1) of the (j+1)-th iteration of image B in the same way, where 1 ≤ i ≤ N, G_B is the error of image B under the constraint, G_B(:,i) is the i-th column of G_B, X_B^j is the common sparse coefficient matrix of the j-th iteration of image B, and L_B^j is the Lagrange multiplier of the j-th iteration of image B.
Step (33): update the common sparse coefficient matrices using the thresholding method.
Compute the common sparse coefficient matrix X_A^(j+1) of the (j+1)-th iteration of image A and the common sparse coefficient matrix X_B^(j+1) of the (j+1)-th iteration of image B, where S is the thresholding (shrinkage) function and x and τ are its parameters.
Step (34): update the significant sparse coefficient matrices Z_A and Z_B using the thresholding method.
Compute the significant sparse coefficient matrix Z_A^(j+1) of the (j+1)-th iteration of image A and the significant sparse coefficient matrix Z_B^(j+1) of the (j+1)-th iteration of image B.
Step (35): update the Lagrange multipliers L_A and L_B.
Compute the Lagrange multiplier L_A^(j+1) of the (j+1)-th iteration of image A and the Lagrange multiplier L_B^(j+1) of the (j+1)-th iteration of image B.
Step (36): update the penalty parameter μ.
Compute the (j+1)-th iteration penalty factor:
μ_(j+1) = min(μ_j ρ, μ_max)   (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor.
Step (37): judge iterative convergence.
If the convergence condition of image A is satisfied, output X_A = X_A^(j+1), Z_A = Z_A^(j+1) and E_A = E_A^(j+1); otherwise set j = j + 1 and go to step (32).
If the convergence condition of image B is satisfied, output X_B = X_B^(j+1), Z_B = Z_B^(j+1) and E_B = E_B^(j+1); otherwise set j = j + 1 and go to step (32).
And (IV) image label initial fusion based on the maximum balance focusing parameter criterion.
Defining the focusing parameter J (A, i) of the ith image sub-block of the image A to be fused as the 2-norm of the error multiplied by the balance factor and the significant sparse coefficient matrix multiplied by the ith column of the dictionary
J(A,i)=||bEA(:,i)+ZAD(:,i)||2(11)
Wherein EA(i) represents an error EAThe ith column of (1), D (: i) represents the ith column of the dictionary D. The balance factor B is defined as the sum of products of significant sparse coefficient matrixes of the 2 images A and B to be registered and the ith column dictionary matrix respectively divided by the sum of the ith column of the error matrix of the 2 images to be registered, and the calculation formula is
Figure GDA0002661961580000117
Wherein EB(i) represents an error EBColumn i.
Defining a focusing parameter J (B, i) of the ith image sub-block of the image B to be fused as an error multiplied by a balance factor and a significant sparse coefficient matrix multiplied by a 2-norm of the ith column of the dictionary
J(B,i)=||bEB(:,i)+ZBD(:,i)||2(13)
Fusing pixels by adopting 2 norm maximization rule, and constructing a fusion label by using a formula (14)
Figure GDA0002661961580000118
(wherein
Figure GDA0002661961580000119
Labels representing the nth image block), each column vector contains the label of the corresponding column vector of the selected source image, 1 and 0 represent that the column is from the source images a and B, respectively, and the fusion label rule is:
Figure GDA00026619615800001110
(V) Image-label fusion optimization based on the detail information of the source images and the statistical information of the neighborhood image blocks.
Because the in-focus content of an image clusters around the focal plane, combining the neighborhood information of the image allows the focus and defocus regions to be divided more accurately. According to the image neighborhood information and the detail-information errors E_A and E_B obtained from the sparse decomposition, the initial fusion Y^F is optimized so that the focus and defocus regions are divided more accurately while the edges between the defocus and focus regions are smoothed. The detailed steps are:
Step (41): fusion label optimization based on the source image detail information.
In order to retain the detail information of the two source images, the focus-detail maximum of the focus region of the error E_A and the focus-detail maximum of the focus region of the error E_B are computed respectively. If the error of image A is larger than 90% of the maximum for E_A, the pixels of source image A are retained; if the error of image B is larger than 90% of the maximum for E_B, the pixels of source image B are retained. The fusion label optimization rule based on the source image detail information is given by formula (15).
Step (42): fusion optimization based on the neighborhood image block information.
The images are fused according to the label optimization rule of formula (15), and the sources of the image blocks in the 8-neighborhood of every image block of the fused image are counted: n_i^A and n_i^B denote the numbers of image blocks in the 8-neighborhood of the image block corresponding to the i-th column that are selected from image A and from image B, respectively. If the number of blocks selected from image A in the 8-neighborhood of the image block is larger than the number selected from source image B, the region corresponding to the image block in source image A is a focus region; otherwise, the region corresponding to the image block in source image B is a focus region; if the two numbers are equal, the block belongs to the boundary region. The fused image is then updated according to formula (16), where -1 denotes the boundary, yielding the final fused image label y_i^F.
(VI) Reconstructing the image based on the optimized fusion labels.
The fused image F = [F_1, F_2, ..., F_N] ∈ R^(d×N) is constructed by assignment from the fused image labels y_i^F: if the image label is 1, the pixel corresponding to image A is selected; if the image label is 0, the pixel corresponding to image B is selected; otherwise the average of the corresponding pixels of images A and B is taken (formula (17)).
Finally, the fused vector F_i of the i-th image block, of size d rows and 1 column, is converted, starting from the first element, into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y. Let the width of the images A and B to be fused be w; the row i_x and the column i_y at which the image block D_i is placed in the reconstructed fused image are then given by formulas (18) and (19), respectively, where mod is the remainder (modulo) function.
Images A and B to be fused are shown in FIGS. 2 and 3, respectively, and the result after fusion by the method of the present invention is shown in FIG. 4.
According to the method, the focus regions of the multi-focus images are extracted using the detail information of the error matrix and the unique features of the significant sparse matrix, and the neighborhood information and the source image information are combined for optimized fusion; compared with existing techniques in the field, the method divides the focus regions more accurately, retains more image detail information, overcomes the blocking effect, and has better smoothness and robustness. The present invention provides a multi-focus image fusion method based on significant sparse representation and neighborhood information, and there are many ways to implement the technical solution; the above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make a number of improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. All components not specified in the present embodiment can be realized with the prior art.

Claims (3)

1. A multi-focus image fusion method based on significant sparse representation and neighborhood information is characterized by comprising the following steps:
step 1, dividing an image to be fused into image blocks based on a uniform grid, and constructing a vectorized image fusion dictionary;
step 2, performing image significant sparse modeling to obtain an image significant sparse model;
step 3, solving parameters of the image significant sparse decomposition model;
step 4, carrying out initial fusion of image labels;
step 5, fusing and optimizing the image labels;
step 6, reconstructing a fused image based on the optimized image label;
the step 1 comprises the following steps:
Two images A and B to be fused are divided evenly into non-overlapping image blocks of size P_x × P_y, where P_x and P_y denote the number of pixel points in the abscissa direction and in the ordinate direction, respectively; the gray-value matrix of the i-th image block of image A is stretched into a column vector y_i^A of d = P_x × P_y rows and 1 column, and the gray-value matrix of the i-th image block of image B is stretched into a column vector y_i^B of d = P_x × P_y rows and 1 column, 1 ≤ i ≤ N, where N denotes the total number of image blocks; image A and image B are respectively converted into the matrices Y_A = [y_1^A, y_2^A, ..., y_N^A] ∈ R^(d×N) and Y_B = [y_1^B, y_2^B, ..., y_N^B] ∈ R^(d×N), where y_N^A and y_N^B denote the column vector of the N-th image block of image A and the column vector of the N-th image block of image B, respectively; the image fusion dictionary D is constructed from the image-block column vectors of images A and B as:
D = [Y_A, Y_B] ∈ R^(d×2N)
the step 2 comprises the following steps:
the image significant sparse modeling is as follows:
minimize the objective function min ||X_A||_1 + ||Z_A||_1 + λ||E_A||_{2,1} under the constraint Y_A = D X_A + Z_A D + E_A, where λ is a constant coefficient, and X_A, Z_A and E_A respectively denote the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of image A;
minimize the objective function min ||X_B||_1 + ||Z_B||_1 + λ||E_B||_{2,1} under the constraint Y_B = D X_B + Z_B D + E_B, where X_B, Z_B and E_B respectively denote the common sparse coefficient matrix, the significant sparse coefficient matrix and the error of the significant sparse model of image B;
the step 3 comprises the following steps:
Step 3-1: initialize the significant sparse model parameters of image A and image B: the initial common sparse coefficient matrices X_A^0 and X_B^0, the initial significant sparse coefficient matrices Z_A^0 and Z_B^0, the initial errors E_A^0 and E_B^0, and the initial Lagrange multiplier coefficients L_A^0 and L_B^0 of image A and image B; the convergence speed factor is ρ = 1.1, the convergence factor is 0.05, the penalty factor is μ_0 = 10^-6, the maximum penalty parameter is μ_max = 10^10, the iteration number is j = 0, and the constant coefficient is λ = 30;
Step 3-2: compute the error E_A^(j+1) of the (j+1)-th iteration of image A column by column, where E_A^(j+1)(:,i) is the i-th column of E_A^(j+1), 1 ≤ i ≤ N, G_A is the error of image A under the constraint, G_A(:,i) is the i-th column of G_A, μ_j is the penalty factor of the j-th iteration, X_A^j is the common sparse coefficient matrix of the j-th iteration of image A, Z_A^j is the significant sparse coefficient matrix of the j-th iteration of image A, and L_A^j is the Lagrange multiplier of the j-th iteration of image A;
compute the error E_B^(j+1) of the (j+1)-th iteration of image B in the same way, where E_B^(j+1)(:,i) is the i-th column of E_B^(j+1), G_B is the error of image B under the constraint, G_B(:,i) is the i-th column of G_B, X_B^j is the common sparse coefficient matrix of the j-th iteration of image B, L_B^j is the Lagrange multiplier of the j-th iteration of image B, and Z_B^j is the significant sparse coefficient matrix of the j-th iteration of image B;
Step 3-3: compute the common sparse coefficient matrix X_A^(j+1) of the (j+1)-th iteration of image A and the common sparse coefficient matrix X_B^(j+1) of the (j+1)-th iteration of image B, where S is the thresholding (shrinkage) function and x and τ are its parameters;
Step 3-4: compute the significant sparse coefficient matrix Z_A^(j+1) of the (j+1)-th iteration of image A and the significant sparse coefficient matrix Z_B^(j+1) of the (j+1)-th iteration of image B;
Step 3-5: compute the Lagrange multiplier L_A^(j+1) of the (j+1)-th iteration of image A and the Lagrange multiplier L_B^(j+1) of the (j+1)-th iteration of image B;
Step 3-6: compute the (j+1)-th iteration penalty factor μ_(j+1):
μ_(j+1) = min(μ_j ρ, μ_max)   (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor;
Step 3-7: if the convergence condition of image A is satisfied, output X_A = X_A^(j+1), Z_A = Z_A^(j+1) and E_A = E_A^(j+1); otherwise update j = j + 1 and go to step 3-2;
if the convergence condition of image B is satisfied, output X_B = X_B^(j+1), Z_B = Z_B^(j+1) and E_B = E_B^(j+1); otherwise update j = j + 1 and go to step 3-2;
step 4 comprises the following steps:
the focusing parameter J(A,i) of the i-th image block of image A to be fused is defined as the 2-norm of the i-th error column multiplied by the balance factor plus the product of the significant sparse coefficient matrix and the i-th column of the dictionary, and is computed as:
J(A,i) = ||b E_A(:,i) + Z_A D(:,i)||_2   (11)
where E_A(:,i) denotes the i-th column of the error E_A and D(:,i) denotes the i-th column of the dictionary D; the balance factor b is defined in formula (12) as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered with the i-th dictionary column, divided by the sum of the i-th columns of the error matrices of the two images to be registered, where E_B(:,i) denotes the i-th column of the error E_B;
the focusing parameter J(B,i) of the i-th image block of image B to be fused is defined analogously as:
J(B,i) = ||b E_B(:,i) + Z_B D(:,i)||_2   (13)
the pixels are fused with the 2-norm maximization rule; the fusion label Y^F = [y_1^F, y_2^F, ..., y_N^F] is constructed with formula (14), where y_N^F denotes the label of the N-th image block, each column vector carries the label of the corresponding column vector of the selected source image, and 1 and 0 respectively indicate that the column comes from source image A and from image B; the label fusion rule assigns y_i^F = 1 when J(A,i) is the larger focusing parameter and y_i^F = 0 otherwise.
2. the method of claim 1, wherein step 5 comprises:
step 5-1: the focus-detail maximum of the focus region of the error E_A and the focus-detail maximum of the focus region of the error E_B are computed respectively; if the error of image A is larger than 90% of the maximum for E_A, the pixels of source image A are retained; if the error of image B is larger than 90% of the maximum for E_B, the pixels of source image B are retained; the fusion label optimization rule based on the source image detail information is given by formula (15);
step 5-2: the images are fused according to the label optimization rule of formula (15), and the sources of the image blocks in the 8-neighborhood of every image block of the fused image are counted: n_i^A denotes the number of image blocks in the 8-neighborhood of the image block corresponding to the i-th column that are selected from image A, and n_i^B denotes the number of image blocks in that 8-neighborhood that are selected from image B; if the number of blocks selected from image A in the 8-neighborhood of the image block is larger than the number selected from source image B, the region corresponding to the image block in source image A is a focus region; if the number of blocks selected from image A in the 8-neighborhood of the image block is smaller than the number selected from source image B, the region corresponding to the image block in source image B is a focus region; if the two numbers are equal, the block belongs to the boundary region; the fused image is then updated according to formula (16), where -1 denotes the boundary, yielding the final fused image label y_i^F.
3. The method of claim 2, wherein step 6 comprises:
the fused image F = [F_1, F_2, ..., F_N] is constructed by assignment from the fused image labels y_i^F: if the image label is 1, the pixel corresponding to image A is selected; if the image label is 0, the pixel corresponding to image B is selected; otherwise the average of image A and image B is taken (formula (17));
the fused vector F_i of the i-th image block, which has d rows and 1 column, is converted, starting from the first element, into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y;
let the width of the images A and B to be fused be w; the row i_x and the column i_y at which the image block D_i is placed in the reconstructed fused image are then given by formulas (18) and (19), respectively, where mod is the remainder (modulo) function.
CN201910126869.3A 2019-02-20 2019-02-20 Multi-focus image fusion method based on significant sparse representation and neighborhood information Active CN109934794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126869.3A CN109934794B (en) 2019-02-20 2019-02-20 Multi-focus image fusion method based on significant sparse representation and neighborhood information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910126869.3A CN109934794B (en) 2019-02-20 2019-02-20 Multi-focus image fusion method based on significant sparse representation and neighborhood information

Publications (2)

Publication Number Publication Date
CN109934794A CN109934794A (en) 2019-06-25
CN109934794B true CN109934794B (en) 2020-10-27

Family

ID=66985723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126869.3A Active CN109934794B (en) 2019-02-20 2019-02-20 Multi-focus image fusion method based on significant sparse representation and neighborhood information

Country Status (1)

Country Link
CN (1) CN109934794B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104077761A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focus image fusion method based on self-adaption sparse representation
CN107680070A (en) * 2017-09-15 2018-02-09 电子科技大学 A kind of layering weight image interfusion method based on original image content
CN109003256A (en) * 2018-06-13 2018-12-14 天津师范大学 A kind of multi-focus image fusion quality evaluating method indicated based on joint sparse

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9152881B2 (en) * 2012-09-13 2015-10-06 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries
CN104008533B (en) * 2014-06-17 2017-09-29 华北电力大学 Multisensor Image Fusion Scheme based on block adaptive signature tracking
CN106056564B (en) * 2016-05-27 2018-10-16 西华大学 Edge clear image interfusion method based on joint sparse model
CN106447640B (en) * 2016-08-26 2019-07-16 西安电子科技大学 Multi-focus image fusing method and device based on dictionary learning, rotation guiding filtering
CN108510465B (en) * 2018-01-30 2019-12-24 西安电子科技大学 Multi-focus image fusion method based on consistency constraint non-negative sparse representation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104077761A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focus image fusion method based on self-adaption sparse representation
CN107680070A (en) * 2017-09-15 2018-02-09 电子科技大学 A kind of layering weight image interfusion method based on original image content
CN109003256A (en) * 2018-06-13 2018-12-14 天津师范大学 A kind of multi-focus image fusion quality evaluating method indicated based on joint sparse

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Visual tracking via robust multi-task multi-feature joint sparse representation; Yong Wang et al.; Multimedia Tools and Applications; 2018-12-31; Vol. 77; pp. 31447-31467 *
Medical image fusion combining sparse representation and neural networks (结合稀疏表示与神经网络的医学图像融合); Chen Yiming et al.; Journal of Henan University of Science and Technology (Natural Science); 2018-04-30; Vol. 39, No. 2; pp. 40-47 *
Medical image fusion and simultaneous denoising based on joint sparse representation (联合稀疏表示的医学图像融合及同步去噪); Zong Jingjing et al.; Chinese Journal of Biomedical Engineering; 2016-04-30; Vol. 35, No. 2; pp. 133-140 *

Also Published As

Publication number Publication date
CN109934794A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
Yang et al. Unsupervised learning of geometry from videos with edge-aware depth-normal consistency
CN111784602B (en) Method for generating countermeasure network for image restoration
CN107766794B (en) Image semantic segmentation method with learnable feature fusion coefficient
CN112001960B (en) Monocular image depth estimation method based on multi-scale residual error pyramid attention network model
CN105513064B (en) A kind of solid matching method based on image segmentation and adaptive weighting
CN108038906B (en) Three-dimensional quadrilateral mesh model reconstruction method based on image
Zhou et al. FSAD-Net: feedback spatial attention dehazing network
CN114255238A (en) Three-dimensional point cloud scene segmentation method and system fusing image features
CN110570363A (en) Image defogging method based on Cycle-GAN with pyramid pooling and multi-scale discriminator
CN110728183A (en) Human body action recognition method based on attention mechanism neural network
CN111080591A (en) Medical image segmentation method based on combination of coding and decoding structure and residual error module
CN107169417A (en) Strengthened based on multinuclear and the RGBD images of conspicuousness fusion cooperate with conspicuousness detection method
CN111368637B (en) Transfer robot target identification method based on multi-mask convolutional neural network
CN112200752B (en) Multi-frame image deblurring system and method based on ER network
CN115393231B (en) Defect image generation method and device, electronic equipment and storage medium
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN115588237A (en) Three-dimensional hand posture estimation method based on monocular RGB image
CN111127353B (en) High-dynamic image ghost-removing method based on block registration and matching
CN116051936A (en) Chlorophyll concentration ordered complement method based on space-time separation external attention
CN115578574A (en) Three-dimensional point cloud completion method based on deep learning and topology perception
CN115049739A (en) Binocular vision stereo matching method based on edge detection
CN117078982B (en) Deep learning-based large-dip-angle stereoscopic image alignment dense feature matching method
CN111882495B (en) Image highlight processing method based on user-defined fuzzy logic and GAN
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
CN109934794B (en) Multi-focus image fusion method based on significant sparse representation and neighborhood information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant