CN109934794B - Multi-focus image fusion method based on significant sparse representation and neighborhood information
- Publication number
- CN109934794B (application CN201910126869.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- column
- error
- sparse
- significant
- Prior art date
- 2019-02-20
- Legal status: Active (assumed; not a legal conclusion)
Abstract
The invention discloses a multi-focus image fusion method based on significant sparse representation and neighborhood information, comprising the following specific steps: step 1, dividing the images into blocks on a uniform grid and constructing a vectorized image dictionary; step 2, establishing a significant sparse model of the images based on common sparse features, significant sparse features, and an error representation; step 3, solving the common sparse features, significant sparse features, errors, and other parameters of the significant sparse decomposition model with a linearized alternating framework using a dynamic penalty factor; step 4, performing label fusion under the maximum balanced focusing parameter criterion; step 5, optimizing the label fusion using source-image detail information and the statistics of neighboring image blocks; and step 6, fusing and reconstructing the image from the optimized labels.
Description
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a multi-focus image fusion method based on significant sparse representation and neighborhood information.
Background
Because an optical lens has a limited depth of field, a single captured image carries limited information: objects at different depths cannot all be in focus at once. By setting different focal planes on the same scene, a sensor can capture several images, each sharp in a different region. Multi-focus image fusion combines these differently focused images into a single uniformly sharp image by exploiting their complementary and redundant information, thereby retaining the important visual content; it is of great significance for obtaining a more comprehensive and accurate view of a scene.
In recent years, scholars at home and abroad have proposed various methods, such as fusion algorithms based on guided filtering, multi-focus image fusion based on dense SIFT, and a multi-focus fusion algorithm based on binary differential evolution with an adaptive blocking mechanism. Because sparse representation can efficiently extract the latent information of an image, such models are widely applied to multi-focus image fusion and show good prospects.
Yang first introduced sparse representation into multi-focus image fusion, decomposing the images over a dictionary and taking the maximum sum of absolute sparse coefficients as the fusion rule. Liu Y et al. proposed an image fusion method combining multi-scale transformation with sparse representation, providing a general image fusion framework. Liu Y et al. also applied an adaptive sparse model to image fusion, classifying the sub-blocks of the original image by their structural features and selecting a dictionary adaptively. At present, scholars at home and abroad have proposed various sparse representation models and dictionary improvements that reconstruct the fused image from sparse coefficients. The existing methods still have several problems:
(1) Using the sparse coefficients directly loses image detail, because the dictionary cannot sufficiently represent image details such as textures and edges.
(2) Reconstructing the fused image from the sparse coefficients produces blocking artifacts.
(3) Multi-focus fusion methods based on robust sparse models detect the focused region from the local detail information of each image block; since some detail information may appear in defocused regions as well, erroneous detection results can be produced.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a multi-focus image fusion method based on a significant sparse representation model. The method realizes image fusion using sparse representation, optimization theory, image processing, and related techniques. It can overcome the over-smoothing of detail information during image fusion and can distinguish focused regions from defocused regions more accurately.
The method comprises the following specific steps:
step 1, dividing an image to be fused into image blocks based on a uniform grid, and constructing a vectorized image fusion dictionary;
step 2, performing image significant sparse modeling to obtain an image significant sparse model;
step 3, solving parameters of the image significant sparse decomposition model;
step 4, carrying out initial fusion of image labels;
step 5, optimizing the fused image labels;
step 6, reconstructing a fused image based on the optimized image labels.
the step 1 comprises the following steps:
two images A and B to be fused are evenly divided into non-overlapping image blocks of size P_x × P_y, where P_x and P_y respectively represent the number of pixels in the abscissa direction (the horizontal direction of the image) and in the ordinate direction (the vertical direction of the image). The gray-value matrix of the i-th image block of image A is elongated into a column vector y_i^A of d = P_x × P_y rows and 1 column, and the gray-value matrix of the i-th image block of image B is elongated into a column vector y_i^B of d = P_x × P_y rows and 1 column, with 1 ≤ i ≤ N, where N denotes the total number of image blocks. Image A and image B are converted into the matrices Y_A = [y_1^A, y_2^A, …, y_N^A] and Y_B = [y_1^B, y_2^B, …, y_N^B], respectively, where y_N^A and y_N^B denote the column vectors of the N-th image blocks of images A and B. The image fusion dictionary D is constructed from the image block column vectors of images A and B as D = [Y_A, Y_B].
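As an illustration of step 1, the partition and dictionary construction can be sketched in Python as follows; this is a minimal sketch assuming grayscale images whose height and width are exact multiples of the block size, and all function names are illustrative rather than taken from the patent:

```python
import numpy as np

def image_to_columns(img, px, py):
    """Cut img into non-overlapping px-by-py blocks (row-major block order)
    and elongate each block into one column of a d x N matrix, d = px*py."""
    h, w = img.shape
    cols = []
    for y in range(0, h, py):          # vertical block position
        for x in range(0, w, px):      # horizontal block position
            block = img[y:y + py, x:x + px]
            cols.append(block.reshape(-1, order="F"))  # elongate column-wise
    return np.stack(cols, axis=1)

def build_dictionary(img_a, img_b, px, py):
    """Form Y_A and Y_B from the block columns and concatenate them into the
    fusion dictionary D = [Y_A, Y_B] (the reading of the construction used here)."""
    Y_A = image_to_columns(img_a, px, py)
    Y_B = image_to_columns(img_b, px, py)
    return Y_A, Y_B, np.hstack([Y_A, Y_B])
```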
the step 2 comprises the following steps:
the image to be fused is modeled as the sum of a common sparse term, a significant sparse term carrying the unique features of the image, and an error term containing the image detail information; the common sparse term is the product of the data dictionary and the common sparse coefficients, and the significant sparse term is the product of the significant sparse coefficients and the data dictionary. The objective function is defined as the sum of the l1 norms of the common and significant sparse coefficients and the weighted l2,1 norm of the error matrix. When the objective function is minimized, the common sparse coefficients, significant sparse coefficients, and errors of the images to be fused are output.
The image significant sparse modeling is as follows:
under the constraint Y_A = D X_A + Z_A D + E_A, minimize the objective function min ||X_A||_1 + ||Z_A||_1 + λ||E_A||_{2,1}, where λ is a constant coefficient, set to λ = 30, and X_A, Z_A, and E_A respectively represent the common sparse coefficient matrix, the significant sparse coefficient matrix, and the error of the significant sparse model of image A;
under the constraint Y_B = D X_B + Z_B D + E_B, minimize the objective function min ||X_B||_1 + ||Z_B||_1 + λ||E_B||_{2,1}, where X_B, Z_B, and E_B respectively represent the common sparse coefficient matrix, the significant sparse coefficient matrix, and the error of the significant sparse model of image B.
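For readability, the two optimization problems of step 2 can be restated in display form; this is only a clean transcription of the objectives and constraints above, with the usual column-wise definition of the l2,1 norm:

```latex
\min_{X_A,\,Z_A,\,E_A} \|X_A\|_1 + \|Z_A\|_1 + \lambda\,\|E_A\|_{2,1}
\quad \text{s.t.} \quad Y_A = D X_A + Z_A D + E_A,
\qquad
\min_{X_B,\,Z_B,\,E_B} \|X_B\|_1 + \|Z_B\|_1 + \lambda\,\|E_B\|_{2,1}
\quad \text{s.t.} \quad Y_B = D X_B + Z_B D + E_B,
\qquad
\|E\|_{2,1} = \sum_{i=1}^{N} \|E(:,i)\|_2, \quad \lambda = 30.
```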
The step 3 comprises the following steps:
step 3-1: initialize the significant sparse model parameters of image A and image B: the initial common sparse coefficient matrices X_A^0 and X_B^0, the initial significant sparse coefficient matrices Z_A^0 and Z_B^0, the initial errors E_A^0 and E_B^0, and the initial Lagrange multiplier coefficients L_A^0 and L_B^0 are set; the convergence speed factor ρ = 1.1, the convergence factor ε = 0.05, the penalty factor μ_0 = 10^-6, the maximum penalty parameter μ_max = 10^10, the iteration count j = 0, and the constant coefficient λ = 30;
step 3-2: update the errors with the shrink operator. The error E_A^{j+1} of image A at iteration j+1 is obtained by shrinking G_A column by column, where G_A(:,i) is the i-th column of G_A, 1 ≤ i ≤ N, G_A is the error of image A under the constraint, μ_j is the penalty factor of the j-th iteration, X_A^j is the common sparse coefficient matrix of image A at the j-th iteration, Z_A^j is its significant sparse coefficient matrix, and L_A^j is its Lagrange multiplier. The error E_B^{j+1} of image B is obtained analogously, where G_B(:,i) is the i-th column of G_B, G_B is the error of image B under the constraint, X_B^j is the common sparse coefficient matrix of image B at the j-th iteration, L_B^j is its Lagrange multiplier, and Z_B^j is its significant sparse coefficient matrix;
steps 3-3 to 3-5: update the common sparse coefficient matrices X_A^{j+1} and X_B^{j+1} and the significant sparse coefficient matrices Z_A^{j+1} and Z_B^{j+1} by thresholding, and update the Lagrange multipliers L_A^{j+1} and L_B^{j+1}, as in steps (33) to (35) of the detailed description;
Step 3-6: calculate the penalty factor of the (j+1)-th iteration:
μ_{j+1} = min(ρ μ_j, μ_max) (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor;
step 3-7: if the convergence condition of image A (the residual of the constraint Y_A = D X_A^{j+1} + Z_A^{j+1} D + E_A^{j+1} falling below the convergence factor) holds, output X_A^{j+1}, Z_A^{j+1}, and E_A^{j+1}; otherwise update j = j + 1 and go to step 3-2;
if the convergence condition of image B holds, output X_B^{j+1}, Z_B^{j+1}, and E_B^{j+1}; otherwise update j = j + 1 and go to step 3-2.
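The iteration of step 3 can be sketched as below. The patent's exact linearized update formulas (presumably equations (2) through (9), whose images are not reproduced in this text) are unavailable, so the sketch follows a generic inexact augmented-Lagrangian scheme built from the operators the steps name: a column-wise shrink for the l2,1 error term, element-wise soft-thresholding for the l1 terms, and the dynamic penalty of equation (10). The step sizes tx and tz and the restriction of the Z-term to the first N dictionary columns (which keeps the matrix shapes consistent) are assumptions, not the patent's prescription:

```python
import numpy as np

def soft_threshold(M, tau):
    """Element-wise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def column_shrink(G, tau):
    """Column-wise shrink, the proximal map of the l2,1 norm."""
    E = np.zeros_like(G)
    norms = np.linalg.norm(G, axis=0)
    keep = norms > tau
    E[:, keep] = G[:, keep] * (1.0 - tau / norms[keep])
    return E

def solve_significant_sparse(Y, D, lam=30.0, rho=1.1, eps=0.05,
                             mu=1e-6, mu_max=1e10, max_iter=2000):
    """Sketch of min ||X||_1 + ||Z||_1 + lam*||E||_{2,1}
    s.t. Y = D X + Z Ds + E, for one source image."""
    d, n = Y.shape
    Ds = D[:, :n]                   # assumed sub-dictionary for the Z-term
    X = np.zeros((D.shape[1], n))   # common sparse coefficients
    Z = np.zeros((d, d))            # significant sparse coefficients
    E = np.zeros((d, n))            # error (detail) term
    L = np.zeros((d, n))            # Lagrange multiplier
    tx = 1.0 / np.linalg.norm(D, 2) ** 2    # assumed step size for X
    tz = 1.0 / np.linalg.norm(Ds, 2) ** 2   # assumed step size for Z
    for _ in range(max_iter):
        # step 3-2: shrink the constraint residual seen by E
        E = column_shrink(Y - D @ X - Z @ Ds + L / mu, lam / mu)
        G = Y - D @ X - Z @ Ds - E + L / mu
        # steps 3-3 / 3-4: linearized proximal (soft-threshold) updates
        X = soft_threshold(X + tx * (D.T @ G), tx / mu)
        G = Y - D @ X - Z @ Ds - E + L / mu
        Z = soft_threshold(Z + tz * (G @ Ds.T), tz / mu)
        # step 3-5: multiplier ascent; step 3-6: dynamic penalty (eq. (10))
        R = Y - D @ X - Z @ Ds - E
        L = L + mu * R
        mu = min(mu * rho, mu_max)
        if np.max(np.abs(R)) < eps:  # step 3-7: residual-based stop
            break
    return X, Z, E
```

The solver would be run once per source image, e.g. X_A, Z_A, E_A = solve_significant_sparse(Y_A, D), and likewise for Y_B.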
Step 4 comprises the following steps:
the focusing parameter J(A,i) of the i-th image block of image A to be fused is defined as the 2-norm of the error column weighted by the balance factor plus the product of the significant sparse coefficient matrix and the i-th column of the dictionary, computed as:
J(A,i) = ||b E_A(:,i) + Z_A D(:,i)||_2 (11)
wherein E_A(:,i) represents the i-th column of the error E_A, and D(:,i) represents the i-th column of the dictionary D; the balance factor b is defined as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered with the i-th dictionary column, divided by the sum of the i-th columns of the error matrices of the two images, and is computed by formula (12):
wherein E_B(:,i) represents the i-th column of the error E_B;
the focusing parameter J(B,i) of the i-th image block of image B to be fused is defined analogously, computed as:
J(B,i) = ||b E_B(:,i) + Z_B D(:,i)||_2 (13)
pixels are fused by the 2-norm maximization rule, and the fusion label Y^F = [y_1^F, y_2^F, …, y_N^F] is constructed by formula (14), where y_N^F is the label of the N-th image block; each column's label records which source image the corresponding column vector is selected from, with 1 and 0 indicating that the column comes from source image A and from image B, respectively; the label fusion rule is: y_i^F = 1 if J(A,i) ≥ J(B,i), and y_i^F = 0 otherwise (14).
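A sketch of the step-4 initial label fusion follows. Since the image of equation (12) is not reproduced here, the balance factor b is reconstructed from the text, per column, as the summed significant-sparse responses divided by the summed error-column norms; the tie-break direction (label 1 when J(A,i) is at least J(B,i)) is likewise an assumption consistent with the 2-norm maximization rule:

```python
import numpy as np

def initial_labels(E_A, E_B, Z_A, Z_B, D):
    """Return one label per image block: 1 selects image A, 0 selects image B."""
    n = E_A.shape[1]
    SA = Z_A @ D[:, :n]   # columns Z_A D(:, i): significant-sparse response of A
    SB = Z_B @ D[:, :n]   # columns Z_B D(:, i): significant-sparse response of B
    labels = np.zeros(n, dtype=int)
    for i in range(n):
        num = np.linalg.norm(SA[:, i]) + np.linalg.norm(SB[:, i])
        den = np.linalg.norm(E_A[:, i]) + np.linalg.norm(E_B[:, i]) + 1e-12
        b = num / den                                   # textual reading of eq. (12)
        J_A = np.linalg.norm(b * E_A[:, i] + SA[:, i])  # eq. (11)
        J_B = np.linalg.norm(b * E_B[:, i] + SB[:, i])  # eq. (13)
        labels[i] = 1 if J_A >= J_B else 0              # eq. (14), maximum rule
    return labels
```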
the step 5 comprises the following steps:
step 5-1: to retain the detail information of the two source images, compute the focused-region detail maximum of the error E_A and the focused-region detail maximum of the error E_B, respectively. If the error of an image-A block exceeds 90% of the maximum for image A, the pixels of source image A are retained; if the error of an image-B block exceeds 90% of the maximum for image B, the pixels of source image B are retained; this is the fusion label optimization rule (15) based on source-image detail information.
step 5-2: after fusing the labels according to optimization rule (15), count the sources of the image blocks in each 8-neighborhood of every block of the fused image: n_i^A is the number of blocks in the 8-neighborhood of the block corresponding to the i-th column that are selected from image A, and n_i^B is the number selected from image B. If more blocks in the 8-neighborhood are selected from image A than from source image B, the region corresponding to the block is a focused region of source image A; if fewer, it is a focused region of source image B; if the numbers are equal, it is a boundary region. The fused labels are updated accordingly by formula (16), where -1 denotes the boundary, yielding the final fused image label y_i^F.
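The two optimization passes of step 5 can be sketched as follows; the per-block detail measure (a column 2-norm of the error) is an assumption, since the text does not spell out the norm behind the "focused-region detail maximum", and the block grid must use the same row-major block order as the step-1 partition:

```python
import numpy as np

def optimize_labels(labels, E_A, E_B, blocks_per_row):
    """Apply eq. (15) (90%-of-maximum detail retention) and eq. (16)
    (8-neighborhood majority vote, with ties marked -1 as boundary)."""
    labels = np.asarray(labels).copy()
    eA = np.linalg.norm(E_A, axis=0)   # assumed per-block detail measure, image A
    eB = np.linalg.norm(E_B, axis=0)   # assumed per-block detail measure, image B
    labels[eA > 0.9 * eA.max()] = 1    # retain source-A pixels (eq. (15))
    labels[eB > 0.9 * eB.max()] = 0    # retain source-B pixels (eq. (15))
    grid = labels.reshape(-1, blocks_per_row)
    h, w = grid.shape
    out = grid.copy()
    for r in range(h):
        for c in range(w):
            nb = [grid[rr, cc]
                  for rr in range(max(r - 1, 0), min(r + 2, h))
                  for cc in range(max(c - 1, 0), min(c + 2, w))
                  if (rr, cc) != (r, c)]
            n_a, n_b = nb.count(1), nb.count(0)
            out[r, c] = 1 if n_a > n_b else (0 if n_b > n_a else -1)  # eq. (16)
    return out.reshape(-1)
```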
The step 6 comprises the following steps:
construct the fused image F_i by assignment from the fused image label y_i^F: if the image label is 1, select the pixels corresponding to image A; if the image label is 0, select the pixels corresponding to image B; otherwise take the average of the corresponding pixels of images A and B.
The fused vector F_i of the i-th image block, of d rows and 1 column, is reshaped starting from its first element into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y;
setting the width of the images A and B to be fused as w, the row i_x and the column i_y of image block D_i in the reconstructed fused image are then computed from the block index i and the width w by a remainder-based formula, where mod is the remainder function.
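Step 6 then reduces to a bookkeeping pass, sketched below with illustrative names; the block placement uses the same row-major block order and column-wise elongation as the earlier sketches, which is one consistent reading of the remainder-based index rule:

```python
import numpy as np

def reconstruct(labels, Y_A, Y_B, px, py, width):
    """Select each block's pixels by label (1 -> image A, 0 -> image B,
    -1 / boundary -> the average), then place the reshaped blocks."""
    d, n = Y_A.shape
    sel = np.where(labels == 1, 1.0, np.where(labels == 0, 0.0, 0.5))
    cols = sel * Y_A + (1.0 - sel) * Y_B    # per-column selection / averaging
    blocks_per_row = width // px
    out = np.zeros(((n // blocks_per_row) * py, width))
    for i in range(n):
        ix, iy = divmod(i, blocks_per_row)  # block row / column (remainder rule)
        out[ix * py:(ix + 1) * py, iy * px:(iy + 1) * px] = \
            cols[:, i].reshape(py, px, order="F")
    return out
```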
In summary, the invention exploits the objective imaging laws of the images to be fused and combines, in its processing steps, computer image processing methods that conform to these laws. The method extracts the focused regions of the multi-focus images using the differences in detail information and unique features, optimally fuses neighborhood information with source-image information, retains more image detail, produces a fused image with better smoothness, overcomes the blocking effect, and has better robustness; it belongs to the scope of protection of the patent law.
Advantageous effects:
(1) The invention provides an image significant sparse model. An image is decomposed into common sparse features shared by all images to be fused, significant sparse features unique to the single image, and an error matrix containing image detail information. The significant sparse features can effectively locate the focused regions of multi-focus images, remedying the traditional image sparse model's lack of image-specific feature information and giving stronger robustness.
(2) The invention provides initial image-label fusion under the maximum balanced focusing parameter criterion. Focusing parameters are defined from the converged error matrix, the significant sparse coefficient matrix, and their balance factor, and the label fusion rule follows the 2-norm maximization principle. Image detail information and image-specific information are thus used effectively, and adjusting the balance factor yields higher robustness.
(3) The invention provides label fusion optimization based on source-image detail information and the statistics of neighboring image blocks. A rule is proposed: if the error of a source-image pixel exceeds 90% of the maximum error, the fused image retains that source pixel, preserving more source-image information. The pixel-source information of the 8 neighboring blocks around each fused block is counted, and a minority-obeys-majority principle optimizes the label fusion. This effectively overcomes the blocking effect caused by grid partitioning and ensures the smoothness and continuity of the image content.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an image a to be fused.
Fig. 3 is an image B to be fused.
Fig. 4 shows the result of fusing image a and image B by the method of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The method comprises six parts: image dictionary construction, image modeling, image significant sparse decomposition, initial label fusion, label fusion optimization, and image reconstruction; the specific workflow is shown in FIG. 1.
(I) Dividing the image into blocks on a uniform grid and constructing a vectorized image dictionary.
The two images A and B to be fused are divided, at the same block size P_x × P_y (where P_x and P_y respectively represent the numbers of pixels in the abscissa and ordinate directions), into non-overlapping image blocks. The gray-value matrix of each block of images A and B is elongated into a column vector of d = P_x × P_y rows and 1 column, y_i^A and y_i^B, where N denotes the total number of image blocks. Images A and B are converted into the matrices Y_A = [y_1^A, …, y_N^A] and Y_B = [y_1^B, …, y_N^B], where y_N^A and y_N^B are the column vectors of the N-th image blocks of A and B, respectively. The image fusion dictionary D is constructed from the image block column vectors of images A and B as D = [Y_A, Y_B].
and (II) image significant sparse modeling based on the common sparse features, the significant sparse features and the error matrix representation.
The modeling of the image to be fused is decomposed into a public sparse term, a remarkable sparse term with unique characteristics of the image and an error term containing image detail information, the public sparse term is the product of a data dictionary and a public sparse coefficient, and the remarkable sparse term is the product of the data dictionary and the remarkable sparse coefficient. And defining the sum of the 1 norm of the public sparse coefficient, the obvious sparse coefficient and the error matrix as an objective function. And when the target function is minimum, outputting a common sparse coefficient, a significant sparse coefficient and an error of the fused image.
Image significant sparse modeling as being under constraint YA=DXA+ZAD+EALower minimization objective function min | | XA||1+||ZA||1+λ||EA||2,1Wherein the constant coefficient lambda is 30, solving the public sparse coefficient matrix XASignificant sparse coefficient matrix ZAAnd error EA. Under constraint YB=DXB+ZBD+EBLower minimization objective function min | | XB||1+||ZB||1+λ||E||B2,1Solving a common sparse coefficient matrix XBSignificant sparse coefficient matrix ZBError EB。
(III) Solving the common sparse features, significant sparse features, errors, and other parameters of the image significant sparse decomposition with a linearized alternating framework using dynamic penalty factors. The specific steps are as follows:
step (31) initializes salient sparse model parameters for image a and image B.
The common sparse coefficient matrices X_A^0 and X_B^0, the significant sparse coefficient matrices Z_A^0 and Z_B^0, the errors E_A^0 and E_B^0, and the Lagrange multipliers L_A^0 and L_B^0 of images A and B are set; the convergence speed factor ρ = 1.1, the convergence factor ε = 0.05, the penalty factor μ_0 = 10^-6, the maximum penalty parameter μ_max = 10^10, the iteration count j = 0, and the constant coefficient λ = 30.
step (32) updates the error using a shrink operator method.
Calculate the error E_A^{j+1} of image A at iteration j+1 by applying the shrink operator to G_A column by column, where 1 ≤ i ≤ N, G_A is the error of image A under the constraint, G_A(:,i) is its i-th column, μ_j is the penalty factor of the j-th iteration, X_A^j is the common sparse coefficient matrix of image A at the j-th iteration, Z_A^j is its significant sparse coefficient matrix, and L_A^j is its Lagrange multiplier.
Calculate the error E_B^{j+1} of image B at iteration j+1 analogously, where 1 ≤ i ≤ N, G_B is the error of image B under the constraint, G_B(:,i) is its i-th column, X_B^j is the common sparse coefficient matrix of image B at the j-th iteration, and L_B^j is its Lagrange multiplier.
Step (33) updates the common sparse coefficient matrix using a thresholding method.
Calculate the common sparse coefficient matrices X_A^{j+1} and X_B^{j+1} of images A and B at iteration j+1.
Step (34) updates the significant sparse coefficient matrices Z_A and Z_B using a thresholding method: calculate the significant sparse coefficient matrices Z_A^{j+1} and Z_B^{j+1} of images A and B at iteration j+1.
Step (35) updates the Lagrange multipliers L_A and L_B: calculate the Lagrange multiplier L_A^{j+1} of image A and the Lagrange multiplier L_B^{j+1} of image B at iteration j+1.
step (36) updates the penalty parameter mu.
Calculate the penalty factor of the (j+1)-th iteration:
μ_{j+1} = min(ρ μ_j, μ_max) (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor.
Step (37) performs the iterative convergence judgment.
If the convergence condition of image A holds, output X_A^{j+1}, Z_A^{j+1}, and E_A^{j+1}; otherwise set j = j + 1 and go to step (32).
If the convergence condition of image B holds, output X_B^{j+1}, Z_B^{j+1}, and E_B^{j+1}; otherwise set j = j + 1 and go to step (32).
(IV) Initial image-label fusion under the maximum balanced focusing parameter criterion.
The focusing parameter J(A,i) of the i-th image sub-block of image A to be fused is defined as the 2-norm of the error column weighted by the balance factor plus the product of the significant sparse coefficient matrix and the i-th column of the dictionary:
J(A,i) = ||b E_A(:,i) + Z_A D(:,i)||_2 (11)
Wherein E_A(:,i) represents the i-th column of the error E_A, and D(:,i) represents the i-th column of the dictionary D. The balance factor b is defined as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered with the i-th dictionary column, divided by the sum of the i-th columns of the error matrices of the two images, and is computed by formula (12):
Wherein E_B(:,i) represents the i-th column of the error E_B.
The focusing parameter J(B,i) of the i-th image sub-block of image B to be fused is defined analogously:
J(B,i) = ||b E_B(:,i) + Z_B D(:,i)||_2 (13)
Pixels are fused by the 2-norm maximization rule, and the fusion label Y^F = [y_1^F, …, y_N^F] (y_N^F being the label of the N-th image block) is constructed by formula (14); each column vector's label records the source image of the corresponding column, with 1 and 0 indicating that the column comes from source image A and from image B, respectively; the fusion label rule is: y_i^F = 1 if J(A,i) ≥ J(B,i), and y_i^F = 0 otherwise (14).
and (IV) image label fusion optimization based on the detail information of the source image and the statistical information of the neighborhood image block.
When detecting the focused region, the focused content of an image clusters near the focal plane, so combining the neighborhood information of the image divides the focused and defocused regions more accurately. According to the image neighborhood information and the detail-information errors E_A and E_B decomposed by the sparse model, the initial fusion Y^F is optimized, dividing the focused and defocused regions more accurately while smoothing the boundary between them. The detailed steps are:
and (41) fusion sparse optimization based on the source image detail information.
To retain the detail information of the two source images, compute the focused-region detail maximum of the error E_A and the focused-region detail maximum of the error E_B, respectively. If the error of an image-A block exceeds 90% of the maximum for image A, the pixels of source image A are retained; if the error of an image-B block exceeds 90% of the maximum for image B, the pixels of source image B are retained. This is the fusion label optimization rule (15) based on source-image detail information.
Step (42): fusion optimization based on neighborhood image block information.
After fusing the labels according to optimization rule (15), count the block sources n_i^A and n_i^B in each 8-neighborhood of every block of the fused image, i.e., the numbers of blocks in the 8-neighborhood of the block corresponding to the i-th column that are selected from images A and B, respectively. If more blocks in the 8-neighborhood are selected from image A than from source image B, the region corresponding to the block is a focused region of source image A; otherwise, it is a focused region of source image B; if the numbers are equal, it is a boundary region. The fused labels are then updated according to formula (16), where -1 represents the boundary, resulting in the final fused image label y_i^F.
(VI) Image reconstruction based on the optimized fusion labels.
From the fused image labels y_i^F, construct the fused image F ∈ R^{d×N} = [F_1, F_2, …, F_N] by assignment: if the image label is 1, select the pixels corresponding to image A; if the image label is 0, select the pixels corresponding to image B; otherwise take the average of the corresponding pixels of images A and B.
Finally, the fused vector F_i of the i-th image block, of d rows and 1 column, is reshaped starting from its first element into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y. Letting the width of the images A and B to be fused be w, the row i_x and the column i_y of image block D_i in the reconstructed fused image are computed from the block index i and the width w by a remainder-based formula, where mod is the remainder function.
Images A and B to be fused are shown in FIGS. 2 and 3, respectively, and the result after fusion by the method of the present invention is shown in FIG. 4.
The method extracts the focused regions of the multi-focus images using the detail information of the error matrix and the unique features of the significant sparse matrix, and combines neighborhood information with source-image information for optimized fusion; compared with the prior art, it divides focused regions more accurately, retains more image detail, overcomes the blocking effect, and has better smoothness and robustness. The invention provides a multi-focus image fusion method based on significant sparse representation and neighborhood information, and there are many methods and ways to implement this technical solution; the above is only a preferred embodiment of the invention, and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as within the scope of protection of the invention. Any components not specified in this embodiment can be realized with the prior art.
Claims (3)
1. A multi-focus image fusion method based on significant sparse representation and neighborhood information is characterized by comprising the following steps:
step 1, dividing an image to be fused into image blocks based on a uniform grid, and constructing a vectorized image fusion dictionary;
step 2, performing image significant sparse modeling to obtain an image significant sparse model;
step 3, solving parameters of the image significant sparse decomposition model;
step 4, carrying out initial fusion of image labels;
step 5, fusing and optimizing the image labels;
step 6, reconstructing a fused image based on the optimized image label;
the step 1 comprises the following steps:
two images A and B to be fused are evenly divided into non-overlapping image blocks of size P_x × P_y, where P_x and P_y respectively represent the numbers of pixels in the abscissa direction and the ordinate direction; the gray-value matrix of the i-th image block of image A is elongated into a column vector y_i^A of d = P_x × P_y rows and 1 column, and the gray-value matrix of the i-th image block of image B is elongated into a column vector y_i^B of d = P_x × P_y rows and 1 column, with 1 ≤ i ≤ N, N representing the total number of image blocks; image A and image B are respectively converted into the matrices Y_A = [y_1^A, …, y_N^A] and Y_B = [y_1^B, …, y_N^B], where y_N^A and y_N^B respectively represent the column vector of the N-th image block of image A and the column vector of the N-th image block of image B; the image fusion dictionary D is constructed from the image block column vectors of images A and B as D = [Y_A, Y_B];
the step 2 comprises the following steps:
the image significant sparse modeling is as follows:
under the constraint Y_A = D X_A + Z_A D + E_A, minimize the objective function min ||X_A||_1 + ||Z_A||_1 + λ||E_A||_{2,1}, where λ is a constant coefficient and X_A, Z_A, and E_A respectively represent the common sparse coefficient matrix, the significant sparse coefficient matrix, and the error of the significant sparse model of image A;
under the constraint Y_B = D X_B + Z_B D + E_B, minimize the objective function min ||X_B||_1 + ||Z_B||_1 + λ||E_B||_{2,1}, where X_B, Z_B, and E_B respectively represent the common sparse coefficient matrix, the significant sparse coefficient matrix, and the error of the significant sparse model of image B;
the step 3 comprises the following steps:
step 3-1: initializing the significant sparse model parameters of image A and image B: the initial common sparse coefficient matrices X_A^0 and X_B^0, the initial significant sparse coefficient matrices Z_A^0 and Z_B^0, the initial errors E_A^0 and E_B^0, and the initial Lagrange multiplier coefficients L_A^0 and L_B^0 are set; the convergence speed factor ρ = 1.1, the convergence factor ε = 0.05, the penalty factor μ_0 = 10^-6, the maximum penalty parameter μ_max = 10^10, the iteration count j = 0, and the constant coefficient λ = 30;
step 3-2: updating the errors with the shrink operator, wherein the error E_A^{j+1} of image A at the (j+1)-th iteration is obtained by shrinking G_A column by column, G_A(:,i) being the i-th column of G_A, 1 ≤ i ≤ N, G_A being the error of image A under the constraint, μ_j the penalty factor of the j-th iteration, X_A^j the common sparse coefficient matrix, Z_A^j the significant sparse coefficient matrix, and L_A^j the Lagrange multiplier of image A at the j-th iteration; the error E_B^{j+1} of image B is obtained analogously, G_B(:,i) being the i-th column of G_B, G_B being the error of image B under the constraint, X_B^j the common sparse coefficient matrix, L_B^j the Lagrange multiplier, and Z_B^j the significant sparse coefficient matrix of image B at the j-th iteration;
steps 3-3 to 3-5: updating the common sparse coefficient matrices X_A^{j+1} and X_B^{j+1} and the significant sparse coefficient matrices Z_A^{j+1} and Z_B^{j+1} by thresholding, and updating the Lagrange multipliers L_A^{j+1} and L_B^{j+1};
step 3-6: calculating the penalty factor of the (j+1)-th iteration:
μ_{j+1} = min(ρ μ_j, μ_max) (10)
where ρ is the convergence speed factor and μ_max is the maximum penalty factor;
step 3-7: if the convergence condition of image A holds, outputting X_A^{j+1}, Z_A^{j+1}, and E_A^{j+1}; otherwise updating j = j + 1 and going to step 3-2;
if the convergence condition of image B holds, outputting X_B^{j+1}, Z_B^{j+1}, and E_B^{j+1}; otherwise updating j = j + 1 and going to step 3-2;
step 4 comprises the following steps:
the focusing parameter J(A,i) of the i-th image block of image A to be fused is defined as the 2-norm of the error column weighted by the balance factor plus the product of the significant sparse coefficient matrix and the i-th column of the dictionary, computed as:
J(A,i) = ||b E_A(:,i) + Z_A D(:,i)||_2 (11)
wherein E_A(:,i) represents the i-th column of the error E_A, and D(:,i) represents the i-th column of the dictionary D; the balance factor b is defined as the sum of the products of the significant sparse coefficient matrices of the two images A and B to be registered with the i-th dictionary column, divided by the sum of the i-th columns of the error matrices of the two images, and is computed by formula (12):
wherein E_B(:,i) represents the i-th column of the error E_B;
the focusing parameter J(B,i) of the i-th image block of image B to be fused is defined analogously, computed as:
J(B,i) = ||b E_B(:,i) + Z_B D(:,i)||_2 (13)
pixels are fused by the 2-norm maximization rule; the fusion label Y^F = [y_1^F, …, y_N^F] is constructed by formula (14), y_N^F representing the label of the N-th image block, each column vector's label recording the source image of the corresponding column, with 1 and 0 respectively indicating that the column comes from source image A and from image B; the label fusion rule is: y_i^F = 1 if J(A,i) ≥ J(B,i), and y_i^F = 0 otherwise (14).
2. the method of claim 1, wherein step 5 comprises:
step 5-1: respectively calculating the focused-region detail maximum of the error E_A and the focused-region detail maximum of the error E_B; if the error of an image-A block is larger than 90% of the maximum for image A, retaining the pixels of source image A; if the error of an image-B block is larger than 90% of the maximum for image B, retaining the pixels of source image B; this is the fusion label optimization rule (15) based on the detail information of the source images;
step 5-2: after fusing the labels according to optimization rule (15), counting the image block sources in each 8-neighborhood of every image block of the fused image, n_i^A being the number of image blocks in the 8-neighborhood of the block corresponding to the i-th column that are selected from image A, and n_i^B being the number selected from image B; if more blocks in the 8-neighborhood are selected from image A than from source image B, the region corresponding to the block is a focused region of source image A; if fewer, it is a focused region of source image B; if the numbers are equal, it is a boundary region, and the fused image is updated according to formula (16), where -1 represents the boundary, resulting in the final fused image label y_i^F.
3. The method of claim 2, wherein step 6 comprises:
constructing the fused image F_i by assignment from the fused image label y_i^F: if the image label is 1, selecting the pixels corresponding to image A; if the image label is 0, selecting the pixels corresponding to image B; otherwise taking the average of the corresponding pixels of images A and B;
the fused vector F_i of the i-th image block, of d rows and 1 column, is reshaped starting from its first element into columns of P_x elements each, P_y columns in total, giving an image block D_i of size P_x × P_y;
letting the width of the images A and B to be fused be w, the row i_x and the column i_y of image block D_i in the reconstructed fused image are computed from the block index i and the width w by a remainder-based formula, where mod is the remainder function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910126869.3A | 2019-02-20 | 2019-02-20 | Multi-focus image fusion method based on significant sparse representation and neighborhood information
Publications (2)
Publication Number | Publication Date |
---|---|
CN109934794A CN109934794A (en) | 2019-06-25 |
CN109934794B true CN109934794B (en) | 2020-10-27 |
Family
ID=66985723
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910126869.3A | Multi-focus image fusion method based on significant sparse representation and neighborhood information | 2019-02-20 | 2019-02-20
Country Status (1)
Country | Link |
---|---|
CN | CN109934794B
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9152881B2 (en) * | 2012-09-13 | 2015-10-06 | Los Alamos National Security, Llc | Image fusion using sparse overcomplete feature dictionaries |
CN104008533B (en) * | 2014-06-17 | 2017-09-29 | 华北电力大学 | Multisensor Image Fusion Scheme based on block adaptive signature tracking |
CN106056564B (en) * | 2016-05-27 | 2018-10-16 | 西华大学 | Edge clear image interfusion method based on joint sparse model |
CN106447640B (en) * | 2016-08-26 | 2019-07-16 | 西安电子科技大学 | Multi-focus image fusing method and device based on dictionary learning, rotation guiding filtering |
CN108510465B (en) * | 2018-01-30 | 2019-12-24 | 西安电子科技大学 | Multi-focus image fusion method based on consistency constraint non-negative sparse representation |
- 2019-02-20: application CN201910126869.3A filed (CN); granted as CN109934794B, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103295196A (en) * | 2013-05-21 | 2013-09-11 | 西安电子科技大学 | Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms |
CN104077761A (en) * | 2014-06-26 | 2014-10-01 | 桂林电子科技大学 | Multi-focus image fusion method based on self-adaption sparse representation |
CN107680070A (en) * | 2017-09-15 | 2018-02-09 | 电子科技大学 | A kind of layering weight image interfusion method based on original image content |
CN109003256A (en) * | 2018-06-13 | 2018-12-14 | 天津师范大学 | A kind of multi-focus image fusion quality evaluating method indicated based on joint sparse |
Non-Patent Citations (3)
Title |
---|
Visual tracking via robust multi-task multi-feature joint sparse representation; Yong Wang et al.; Multimedia Tools and Applications; 2018-12-31; vol. 77; pp. 31447-31467 *
Medical image fusion combining sparse representation and neural networks; Chen Yiming et al.; Journal of Henan University of Science and Technology (Natural Science); 2018-04-30; vol. 39, no. 2; pp. 40-47 *
Joint sparse representation based medical image fusion and simultaneous denoising; Zong Jingjing et al.; Chinese Journal of Biomedical Engineering; 2016-04-30; vol. 35, no. 2; pp. 133-140 *
Also Published As
Publication number | Publication date |
---|---|
CN109934794A (en) | 2019-06-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |