CN111784572B - Image fusion and super-resolution joint implementation method based on discriminant dictionary learning - Google Patents

Image fusion and super-resolution joint implementation method based on discriminant dictionary learning

Info

Publication number
CN111784572B
CN111784572B (application CN202010425926.0A)
Authority
CN
China
Prior art keywords
image
low
dictionary
resolution
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010425926.0A
Other languages
Chinese (zh)
Other versions
CN111784572A (en)
Inventor
李华锋 (Li Huafeng)
陈怡文 (Chen Yiwen)
杨默远 (Yang Moyuan)
余正涛 (Yu Zhengtao)
张亚飞 (Zhang Yafei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology
Priority to CN202010425926.0A priority Critical patent/CN111784572B/en
Publication of CN111784572A publication Critical patent/CN111784572A/en
Application granted granted Critical
Publication of CN111784572B publication Critical patent/CN111784572B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for jointly realizing image fusion and super-resolution based on discriminative dictionary learning, belonging to the technical field of digital image processing. Specifically, two pairs of low-rank/sparse dictionaries and a transformation matrix between the coding coefficients of high- and low-resolution images are first jointly trained. One pair of dictionaries represents the low-rank and sparse components of the input images, the other pair reconstructs the low-rank and sparse components of the high-resolution fused image, and the transformation matrix establishes the latent relationship between the high-resolution and low-resolution images. A sparse and low-rank separation model is then constructed to decompose the input images into low-rank and sparse components, so that a high-resolution fused image can be reconstructed with the different dictionaries. The invention thus realizes image fusion and super-resolution reconstruction jointly. Experimental results show that the invention achieves better fusion performance in both visual effect and objective indexes.

Description

Image fusion and super-resolution joint implementation method based on discriminant dictionary learning
Technical Field
The invention relates to a method for jointly realizing image fusion and super-resolution based on discriminative dictionary learning, and belongs to the technical field of digital image processing.
Background
Image fusion can integrate complementary information acquired by different sensors about the same scene into one image, and can provide more comprehensive and accurate description for the scene, thereby facilitating identification of events and objects. In recent years, this technology has been receiving more and more attention from researchers, and has made significant research progress.
Existing image fusion methods can be roughly classified into three categories: multiscale transform (MST)-based methods, dictionary learning (DL)-based methods, and deep learning-based methods. Among the MST-based methods, commonly used transforms include the wavelet transform, the dual-tree complex wavelet transform (DTCWT), the shearlet transform, the curvelet transform, the contourlet transform, and the non-subsampled contourlet transform (NSCT). The bases of an MST are usually fixed and only weakly sparse, so the local information of an image cannot be represented adaptively and sparsely. Different from MST methods, sparse representation (SR) based on dictionary learning can effectively overcome these defects and shows good fusion performance. With the development of deep learning, deep-learning-based image fusion has attracted increasing attention, and some excellent fusion methods have appeared accordingly. However, the above methods perform well only when the source images have high resolution. If the input images are of low resolution, the resolution of the fusion result will also be low, which hinders its application.
To improve the resolution of the fused image, a common solution is to implement fusion and super-resolution reconstruction step by step. However, this approach is likely to carry the artifacts created in the first step into the next step, degrading the visual quality of the final result.
Disclosure of Invention
The invention aims to provide a multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition that addresses the defects and shortcomings of the prior art.
The technical scheme adopted by the invention is as follows: a multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition comprises the following steps:
1) Construct training samples for dictionary learning: select 8 HR training images for training the high-resolution image discriminative dictionary pair; down-sample them, then up-sample them by bicubic interpolation back to the size of the high-resolution images, and use the results as the corresponding LR training images;
2) Randomly generate an initial dictionary. A new dictionary learning method is then proposed for decomposing the input image into low-rank and sparse components. To realize image fusion and super-resolution reconstruction synchronously, a pair of HR dictionaries $D_{h,l}$ and $D_{h,s}$, a pair of LR dictionaries $D_{l,l}$ and $D_{l,s}$, and a coefficient conversion matrix H between the coding coefficients of high-resolution and low-resolution image blocks are jointly trained;
3) With the low-rank dictionary $D_{l,l}$ and the sparse dictionary $D_{l,s}$ obtained in step 2), take the LR images in pairs and decompose the LR image blocks $Y_l$ to obtain the low-rank and sparse coding coefficients $A_{l,l}$ and $A_{l,s}$;
4) From the LR coding coefficients $A_{l,s}$ and $A_{l,l}$ obtained in step 3) and the conversion matrix H obtained in step 2), construct the coding coefficients of the HR fused image, denoted $A_F^l$ and $A_F^s$, by the maximum-absolute-value rule, and finally obtain the high-resolution fused image blocks (a sketch illustrating steps 1) and 4) is given after this list).
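For concreteness, the following Python/NumPy sketch shows how an LR training image of step 1) can be built and how the maximum-absolute-value rule of step 4) selects fused coefficients; the helper names are illustrative assumptions, and `scipy.ndimage.zoom` with `order=3` is used only as a stand-in for bicubic interpolation.

```python
import numpy as np
from scipy.ndimage import zoom  # cubic spline interpolation (order=3), bicubic-like

def make_lr_training_image(x_h, factor=2):
    """Step 1) sketch: down-sample an HR image, then up-sample it back to the
    HR size so the LR training image matches the HR image pixel-for-pixel."""
    x_down = zoom(x_h, 1.0 / factor, order=3)
    scale = (x_h.shape[0] / x_down.shape[0], x_h.shape[1] / x_down.shape[1])
    return zoom(x_down, scale, order=3)

def max_abs_fuse(coeff_list):
    """Step 4) sketch: for every coefficient position, keep the value with the
    largest magnitude over the input images (maximum-absolute-value rule)."""
    stack = np.stack(coeff_list)                # (M, K, N): M inputs' coefficients
    winner = np.argmax(np.abs(stack), axis=0)   # index of max-|.| per position
    return np.take_along_axis(stack, winner[None], axis=0)[0]
```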
Specifically, the step 2) includes the following steps:
Step2.1 Using the constructed training samples to learn the dictionaries, the proposed discriminative dictionary learning model is:

[Equation (1), rendered as an image in the original]

$D_{h,l}$ and $D_{h,s}$ are the low-rank and sparse dictionaries of the HR images. $X_h$ is a high-resolution training image and $X_l$ is the corresponding low-resolution image. The low-resolution low-rank and sparse dictionaries are denoted $D_{l,l}$ and $D_{l,s}$, and each dictionary lies in $\mathbb{R}^{M\times K}$, where $M\times K$ denotes a matrix of M rows and K columns and $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ and $\varepsilon_4$ are constants controlling the amplitude of each atom in the different dictionaries. $Z_{l,l}$ and $Z_{l,s}$ are the coding coefficients of $X_l$ over the dictionaries $D_{l,l}$ and $D_{l,s}$, and $Z_{h,l}$ and $Z_{h,s}$ are the coding coefficients of $X_h$ over $D_{h,l}$ and $D_{h,s}$. $H$ is the conversion matrix between $Z_{l,i}$ and $Z_{h,i}$ ($i=l,s$); $\|\cdot\|_F$ is the Frobenius-norm operator; $\Psi(H,Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s})$ and $\Phi(D_{l,l},D_{h,l},Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s})$ are discriminative regularization terms that ensure the discrimination ability of the learned dictionaries $D_{l,l}$, $D_{h,l}$, $D_{l,s}$ and $D_{h,s}$.
For convenience of processing, the LR image is brought to the same size as the HR image by bicubic interpolation. The relationship between the coding coefficients of the LR image and the HR image can then be described as:

[Equation (2), rendered as an image in the original]

In equation (2), $\lambda_1$, $\lambda_2$ and $\lambda_5$ are regularization parameters.
Within equation (2), the term $\|H\|_F^2$ serves to avoid over-fitting, and the relational transform term $\|Z_{h,s}-HZ_{l,s}\|_F^2$ represents the relationship between the sparse-component coding coefficients, where $\|\cdot\|_F$ is the Frobenius-norm operator. Between the LR image and its corresponding HR image, a corresponding term establishes the relationship between $Z_{h,l}$ and $Z_{l,l}$. The low-rank components clearly have a strong linear correlation with each other, and the elements within the same coding-coefficient vector take similar values. Based on this fact, an all-ones matrix $M$ is introduced, and regularization terms on $Z_{h,l}$ and $Z_{l,l}$ built from $M$ are used to characterize the low-rank-component coding coefficients.
To improve the discrimination of the learned dictionaries, the following regularization term is defined:

$$\Phi(D_{l,l},D_{h,l},Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s}) = \lambda_3\|D_{h,l}Z_{h,l}\|_* + \lambda_4\|D_{l,l}Z_{l,l}\|_* \quad (3)$$

In equation (3), $\lambda_3$ and $\lambda_4$ are regularization parameters. $\|D_{h,l}Z_{h,l}\|_*$ and $\|D_{l,l}Z_{l,l}\|_*$ ensure that the components $D_{h,l}Z_{h,l}$ and $D_{l,l}Z_{l,l}$ separated from an input image pair are low-rank, where $\|\cdot\|_*$ is the nuclear-norm operator. After the low-rank component is obtained, the sparse component can be obtained as $X_i - D_{i,l}Z_{i,l}$, where $X_i$ denotes the input source data. The objective function of the discriminative dictionary learning can therefore be expressed as:
[Equation (4), rendered as an image in the original]
Step2.2 Optimization of the dictionary learning model.
The variables to be solved, $D_{l,l}$, $D_{l,s}$, $D_{h,l}$, $D_{h,s}$, $Z_{h,s}$, $Z_{h,l}$, $Z_{l,s}$, $Z_{l,l}$ and H, make the problem non-convex and difficult to solve directly. If one of the variables is solved while the other variables are fixed, however, each sub-problem is a convex function. Therefore, each variable in (4) is solved in turn by an alternating iterative method.
Step2.2.1 For ease of optimization, four variable matrices $X_{h,s}$, $X_{h,l}$, $X_{l,s}$ and $X_{l,l}$ are introduced, namely the sparse and low-rank components of the input high-resolution training image and the sparse and low-rank components of the low-resolution image. The optimization problem in equation (4) is then converted into:

[Equation (5), rendered as an image in the original]
Step2.2.2 Update $X_{h,s}$, $X_{h,l}$ and the coding-coefficient matrices $Z_{h,s}$ and $Z_{h,l}$. First, fixing the other variables and updating $X_{h,s}$, equation (5) is converted into:

[Equation (6), rendered as an image in the original]
The above problem has the following closed-form solution:

[Equation (7), rendered as an image in the original]
Similarly, fixing the other variables, the solution for $X_{h,l}$ takes the form:

[Equation (8), rendered as an image in the original]
Equation (8) can be solved efficiently by the singular value thresholding (SVT) algorithm.
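For reference, nuclear-norm subproblems of this kind are what SVT addresses; the following is a minimal sketch of the singular value thresholding operator, the closed-form solution of $\min_X \tau\|X\|_* + \tfrac{1}{2}\|X-B\|_F^2$ (a standard result, not the patent's exact update):

```python
import numpy as np

def svt(B, tau):
    """Singular value thresholding: closed-form solution of
    min_X tau*||X||_* + 0.5*||X - B||_F^2 (soft-threshold the singular values)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink each singular value by tau
    return (U * s_shrunk) @ Vt
```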
After $X_{h,s}$ and $X_{h,l}$ are updated, the coding-coefficient matrices $Z_{h,s}$ and $Z_{h,l}$ are updated in turn:

[Equation (9), rendered as an image in the original]

[Equation (10), rendered as an image in the original]
In equation (9), [an auxiliary definition was rendered as an image in the original]. Equation (9) is an $\ell_1$-minimization problem and is solved with the two-step iterative shrinkage/thresholding (TwIST) algorithm. For equation (10), the solution is as follows:
[Equation (11), rendered as an image in the original]
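For orientation, the elementwise soft-thresholding operator below is the building block of IST/TwIST iterations for $\ell_1$ problems such as equation (9); TwIST additionally combines the two previous iterates with relaxation weights, which this sketch omits:

```python
import numpy as np

def soft_threshold(x, lam):
    """Prox of lam*||.||_1: shrink each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ist_step(Z, D, X, lam, step):
    """One iterative shrinkage/thresholding step for
    min_Z 0.5*||X - D Z||_F^2 + lam*||Z||_1 (gradient step + shrinkage)."""
    grad = D.T @ (D @ Z - X)
    return soft_threshold(Z - step * grad, lam * step)
```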
Step2.2.3 Update $X_{l,s}$, $X_{l,l}$ and the coding-coefficient matrices $Z_{l,s}$ and $Z_{l,l}$. First, fixing the other variables, the expressions for optimizing $X_{l,s}$ and $X_{l,l}$ are respectively:

[Equation (12), rendered as an image in the original]

[Equation (13), rendered as an image in the original]
where [an auxiliary definition was rendered as an image in the original].
Equation (13) can be solved efficiently using the SVT algorithm.
Further, by fixing the other variables, the update expression for $Z_{l,s}$ and the closed-form solution for $Z_{l,l}$ are obtained:

[Equation (14), rendered as an image in the original]

[Equation (15), rendered as an image in the original]
where [an auxiliary definition was rendered as an image in the original]. Equation (14) is likewise solved with the TwIST algorithm.
Step2.2.4 Update the conversion matrix H. With the other variable matrices fixed, the update problem for H obtained from equation (5) is:

[Equation (16), rendered as an image in the original]
Equation (16) involves only Frobenius norms; taking the partial derivative with respect to H and setting it to zero yields the following equation in H:

[Equation (17), rendered as an image in the original]
This equation is a Sylvester equation, and H can be solved efficiently with MATLAB's sylvester function.
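An equivalent route in Python, assuming equation (17) takes the standard Sylvester form $AH + HB = C$ with coefficient matrices gathered from the fixed variables (the matrices below are random placeholders, used only to make the call concrete):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# A, B, C stand for the coefficient matrices assembled from equation (17);
# random values are used here purely for illustration.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
H = solve_sylvester(A, B, C)              # solves A @ H + H @ B = C
assert np.allclose(A @ H + H @ B, C)
```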
Step2.2.5 Update the dictionary pairs $D_{h,s}$, $D_{h,l}$ and $D_{l,s}$, $D_{l,l}$. Keeping the other variables fixed, the dictionary variables are solved with the Lagrange dual method, giving the update expressions of $D_{h,s}$ and $D_{h,l}$:

[Equation (18), rendered as an image in the original]

[Equation (19), rendered as an image in the original]
In the same way, the closed-form expressions of $D_{l,s}$ and $D_{l,l}$ are:

[Equation (20), rendered as an image in the original]

[Equation (21), rendered as an image in the original]
In the above four expressions, $\Lambda_1$, $\Lambda_2$, $\Lambda_3$ and $\Lambda_4$ are diagonal matrices of the corresponding optimal dual variables.
Specifically, the steps of step 3) are as follows:
Building on step 2), a new image decomposition model is proposed to ensure the successful separation of the different components; it decomposes a source image into a low-rank part and a sparse part.
Step3.1 With the low-rank dictionary $D_{l,l}$ and the sparse dictionary $D_{l,s}$ obtained, the paired LR images are taken and N image blocks are collected with a sliding window of size $n\times n$; each image block is vectorized as a column, and the columns are assembled into a matrix (a sketch of this patch extraction follows below). The LR image-block matrix $Y_l$ is decomposed to obtain the low-rank and sparse coding coefficients $A_{l,l}$ and $A_{l,s}$. The low-rank/sparse decomposition model is as follows:

[Equation (22), rendered as an image in the original]

where $A_{l,l}(i,j)$ is the $(i,j)$-th entry of $A_{l,l}$. Treating each row of $A_{l,l}$ as a group, minimizing $\|A_{l,l}\|_{2,1}$ makes the values within each row of $A_{l,l}$ the same. From the inequality $R(AB)\le\min\{R(A),R(B)\}$, where $R$ denotes the rank of a matrix, minimizing $\|A_{l,l}\|_{2,1}$ further ensures that $D_{l,l}A_{l,l}$ is low-rank while avoiding the drawbacks caused by directly minimizing $\|D_{l,l}A_{l,l}\|_*$.
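The sliding-window collection described in Step3.1 can be sketched as follows (Python/NumPy; the window size and stride are illustrative parameters):

```python
import numpy as np

def image_to_patch_matrix(img, n=8, stride=4):
    """Collect n x n blocks with a sliding window and stack each block as a
    column vector, giving the (n*n) x N matrix Y_l of Step 3.1."""
    h, w = img.shape
    cols = []
    for i in range(0, h - n + 1, stride):
        for j in range(0, w - n + 1, stride):
            cols.append(img[i:i + n, j:j + n].reshape(-1))
    return np.stack(cols, axis=1)   # one column per image block
```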
Step3.2 The decomposition model is likewise solved by an alternating iterative method, applied to the image decomposition model (22) to update $A_{l,s}$ and $A_{l,l}$.
Step3.2.1 For ease of optimization, the variables $Y_{l,l}$ and $Y_{l,s}$, i.e. the low-rank and sparse components of the LR image blocks, are introduced; equation (22) can then be transformed into:

[Equation (23), rendered as an image in the original]
Step3.2.2 Update $Y_{l,s}$ and $Y_{l,l}$: first fixing $Y_{l,l}$, $A_{l,l}$ and $A_{l,s}$, the update expression for $Y_{l,s}$ is:

[Equation (24), rendered as an image in the original]
As before, the TwIST algorithm is used to solve the $\ell_1$-norm optimization for $Y_{l,s}$.
Further, fixing $Y_{l,s}$, $A_{l,l}$ and $A_{l,s}$, the update expression for $Y_{l,l}$ is:

[Equation (25), rendered as an image in the original]

In equation (25), [an auxiliary definition was rendered as an image in the original].
Step3.2.3 Optimize the sparse coding coefficients $A_{l,s}$ and $A_{l,l}$. The update expressions are as follows:

[Equation (26), rendered as an image in the original]

[Equation (27), rendered as an image in the original]
Equation (26) is an $\ell_1$-norm optimization problem and is solved with the TwIST algorithm of this section; equation (27) is an $\ell_{2,1}$-norm optimization problem (a standard row-wise shrinkage operator for such problems is sketched below).
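For reference, the proximal operator of the $\ell_{2,1}$ norm, which underlies subproblems like equation (27), shrinks each row by its Euclidean length; this is a standard sketch, not the patent's exact update:

```python
import numpy as np

def prox_l21(A, tau):
    """Row-wise shrinkage: solution of min_X tau*||X||_{2,1} + 0.5*||X - A||_F^2.
    Each row is scaled by max(0, 1 - tau/||row||_2)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return A * scale
```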
Specifically, the steps of step 4) are as follows:
Assume there are N LR images to be fused, with the low-rank dictionary $D_{l,l}$, the sparse dictionary $D_{l,s}$ and the conversion matrix H learned in step 2). The decomposition model of step 3) is used to decompose the images to be fused, yielding the coding coefficients $A_{l,l}$ and $A_{l,s}$ of the low-rank and sparse components. Let $A_{l,l}^k$ and $A_{l,s}^k$ denote the low-rank and sparse coding coefficients of the k-th LR input image. The HR low-rank and sparse fused coding coefficients, denoted $A_F^l$ and $A_F^s$, are then constructed as follows:
[Equation (28), rendered as an image in the original: the maximum-absolute-value selection rule]
In equation (28), M denotes the number of input images. With the fused coefficients $A_F^l$ and $A_F^s$, the high-resolution fused image blocks can be obtained:

[Equation (29), rendered as an image in the original]
In equation (29), [the fused HR image-block matrix was rendered as an image in the original]; Rec is the reconstruction operation used to convert the image blocks into an image (an illustrative sketch follows).
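Putting the pieces together, the following hedged sketch of step 4) reuses `max_abs_fuse` from the earlier sketch; `reconstruct_from_patches` is an illustrative Rec with overlap averaging, and whether H is applied before or after the maximum-absolute-value selection is an assumption here:

```python
import numpy as np

def reconstruct_from_patches(P, shape, n=8, stride=4):
    """Rec operation sketch: paste each column of P back as an n x n block
    and average overlapping pixels to form the output image."""
    out, weight = np.zeros(shape), np.zeros(shape)
    k = 0
    for i in range(0, shape[0] - n + 1, stride):
        for j in range(0, shape[1] - n + 1, stride):
            out[i:i + n, j:j + n] += P[:, k].reshape(n, n)
            weight[i:i + n, j:j + n] += 1.0
            k += 1
    return out / np.maximum(weight, 1.0)

def fuse_step4(A_ll_list, A_ls_list, D_hl, D_hs, H, shape):
    """Step 4) sketch: map LR coefficients to HR via H, fuse by the
    maximum-absolute-value rule, reconstruct HR blocks, then apply Rec."""
    A_F_l = max_abs_fuse([H @ A for A in A_ll_list])  # fused low-rank coefficients
    A_F_s = max_abs_fuse([H @ A for A in A_ls_list])  # fused sparse coefficients
    Y_F = D_hl @ A_F_l + D_hs @ A_F_s                 # HR fused image-block matrix
    return reconstruct_from_patches(Y_F, shape)
```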
The invention has the beneficial effects that:
(1) The invention provides an effective discriminative dictionary learning model that learns a conversion matrix between high and low resolutions together with two pairs of discriminative dictionaries, jointly representing the low-rank and sparse components of low-resolution and high-resolution image pairs.
(2) The conversion matrix reveals the relationship between the coding coefficients of a low-resolution image and its corresponding high-resolution version, and raises the resolution of the fused result so that image details become clearer.
(3) The invention realizes fusion and high-resolution reconstruction of low-resolution images simultaneously and obtains fused images with better visual and objective evaluation performance, thereby providing a more comprehensive and accurate image description for the identification of events and objects.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 shows the HR source images, in which the first row is the "street" infrared and visible image pair (240 × 320), the second row is the MR-T1/MR-T2 medical image pair (256 × 256), and the third row is the "bookshelf"/"clock" multi-focus image pair (240 × 320);
FIG. 3 shows the LR source images, in which (a) is the infrared and visible image pair (120 × 160), (b) is the medical image pair (128 × 128), and (c) is the multi-focus image pair (120 × 160);
FIG. 4 shows the fusion and high-resolution results (240 × 320) for the "street" infrared and visible images, in which (a)-(d) are the fusion and super-resolution reconstruction results of Ours, Li's, Zhu's (bicubic) and Zhu's (SRSR), respectively;
FIG. 5 shows the fusion and high-resolution results (256 × 256) for the MR-T1/MR-T2 medical images, in which (a)-(d) are the fusion and super-resolution reconstruction results of Ours, Li's, Zhu's (bicubic) and Zhu's (SRSR), respectively;
FIG. 6 shows the fusion and high-resolution results (240 × 320) for the "bookshelf" multi-focus images, in which (a)-(d) are the fusion and super-resolution reconstruction results of Ours, Li's, Zhu's (bicubic) and Zhu's (SRSR), respectively.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Example 1: As shown in FIGS. 1 to 6, the multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition is carried out as follows.
Steps 1) to 4) are carried out exactly as set out above: the training samples are constructed as in step 1); the dictionary pairs and the conversion matrix H are learned with the model and the optimization of Step2.1 to Step2.2.5; the LR images to be fused are decomposed with the model of Step3.1 to Step3.2.3; and the fused HR coding coefficients and image blocks are obtained with equations (28) and (29).
The present invention will be described in detail with reference to specific examples.
In the dictionary learning part, 8 training samples are selected as HR training images; they are down-sampled and then up-sampled by bicubic interpolation back to the size of the high-resolution images to serve as the corresponding LR training images. From these training samples, the required low-rank dictionary, sparse dictionary and conversion matrix are obtained by iterative updating according to the proposed dictionary learning algorithm. For the test images, a pair of HR infrared and visible images (first row of FIG. 2), a pair of HR medical images (second row of FIG. 2) and a pair of HR multi-focus images (last row of FIG. 2) are selected; the low-resolution versions of these source images are shown in FIG. 3. Dictionary learning and image decomposition involve six parameters in total: the five regularization parameters $\lambda_1,\ldots,\lambda_5$ and the maximum iteration number K. Following experimental experience, $\lambda_i$ ($i=1,2,\ldots,5$) are set to 1, 0.01, 1.5, 0.01 and 0.00001, and K = 10. To verify the effectiveness of the method, experiments are carried out on medical images, infrared and visible images, and multi-focus images.
To verify the superiority of the invention, the proposed method is compared with state-of-the-art image fusion and super-resolution methods. The method of Li can realize image fusion and super-resolution simultaneously, so it is taken as one comparison method. Since few methods share this joint capability, fusion results produced by an excellent fusion algorithm, the method of Zhu, are super-resolved afterwards: high-resolution results are constructed using bicubic interpolation and sparse-representation-based super-resolution (SRSR). The super-resolved results of the method of Zhu are therefore also compared with the results obtained here.
To evaluate the quality of the fusion results generated by the different methods objectively and fairly, four objective evaluation indexes are adopted in addition to visual comparison: the spatial-frequency-based metric $Q_{SF}$, the quality-aware clustering metric $Q_{QAC}$, the image information entropy $Q_{ENT}$, and the image mean gradient $Q_{GD}$. $Q_{SF}$ uses the ratio of SF errors to measure the quality of the fused image: if $Q_{SF}$ is below zero, active image information has been lost in the fusion result, while $Q_{SF}$ greater than zero, in the absence of artifacts and noise, indicates that the details of the source images are enhanced. For the other three indexes, higher values indicate better fusion quality.
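The spatial frequency underlying $Q_{SF}$ is a standard quantity; the sketch below gives its usual definition (root mean square of horizontal and vertical first differences), while the exact $Q_{SF}$ ratio used in the patent is not reproduced:

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), where RF/CF are RMS row/column differences."""
    rf2 = np.mean(np.diff(img, axis=1) ** 2)   # row (horizontal) frequency
    cf2 = np.mean(np.diff(img, axis=0) ** 2)   # column (vertical) frequency
    return np.sqrt(rf2 + cf2)
```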
Experiment 1: fusion and super-resolution reconstruction of infrared and visible light images.
The first set of experiments was performed on the low resolution infrared and visible images of fig. 3(a) as test images. The fusion and super-resolution reconstruction results of the different methods are shown in fig. 4.
"Ours" in fig. 4 indicates the fusion and hyper-resolution reconstruction effect of the proposed method. As can be seen from fig. 4 and table 1 objective evaluation index data, the fusion results of Ours are superior to other methods in terms of visual perception and objective evaluation. The digital detail in the upper left corner of fig. 4 shows that the image after fusion reconstruction by the method of the present invention is clearer and brighter. The visual result of the method of the invention at the details proves to be better, i.e. the proposed method is very efficient and rational. In addition, three objective evaluation results of different methods are shown in table 1. From these data, it can be seen that objective evaluation leads to a conclusion that is consistent with subjective evaluation, which further verifies the superiority of the method of the invention.
TABLE 1 Quantitative evaluation of different fusion and super-resolution reconstruction methods for the "street" infrared and visible images

[Table 1, rendered as an image in the original]
Experiment 2: fusion and super-resolution reconstruction of medical images
In the second experiment, the low-resolution medical image pair of FIG. 3(b) is fused and super-resolution reconstructed. These source images show that the image information of the same scene obtained by different devices is complementary. The purpose of medical image fusion and super-resolution is to extract the complementary information in the low-resolution source images, inject it into the fused image, and improve the resolution of the fusion result. FIG. 5 shows the fusion and super-resolution reconstruction results of the different methods. As can be seen from FIGS. 5(a) to 5(d), the proposed Ours is visually clearly superior to the method of Li and to the step-wise methods Zhu's (bicubic) and Zhu's (SRSR). In addition, Table 2 shows that, except for the index $Q_{ENT}$ being slightly lower than the other three results, the proposed method outperforms the results of the Li method and of the Zhu method after super-resolution in the remaining evaluation indexes.
TABLE 2 quantitative evaluation of the Performance of different fusion and super-resolution reconstruction methods for MR-T1/MR-T2 medical images
[Table 2, rendered as an image in the original]
Experiment 3: fusion and super-resolution reconstruction of multi-focus images
In the third experiment, the image pair of FIG. 3(c) is tested. Each multi-focus image is captured by the same sensor modality but with a different focus area; multi-focus image fusion aims to obtain all in-focus objects and merge them into one image. FIG. 6 gives the fusion and super-resolution results of the different fusion methods. Comparing FIG. 6(a) with FIGS. 6(c)-(d), the result of the proposed method has higher resolution for both the alarm clock in the foreground and the bookshelf in the background, which proves that the method fully fuses the differently focused images; the proposed Ours method is also superior to the other methods in image contrast. As Table 3 shows, $Q_{SF}$ of the proposed method is also below 0, but less information is lost than with the other three methods. The objective evaluation data in Table 3 further demonstrate the superiority of the proposed method over the other methods.
TABLE 3 quantitative evaluation of different fusion and super-resolution reconstruction method performances of "Bookshelf" multi-focus image
[Table 3, rendered as an image in the original]
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.

Claims (4)

1. A multi-source image fusion method based on discriminant dictionary learning and morphological component decomposition, characterized in that the method comprises the following specific steps:
1) constructing training samples for dictionary learning: selecting 8 HR training images to train the high-resolution image discriminative dictionary pair, down-sampling them, and then up-sampling them by bicubic interpolation back to the size of the high-resolution images to serve as the corresponding LR training images;
2) randomly generating an initial dictionary: a new dictionary learning method is proposed for decomposing the input image into low-rank and sparse components; to realize image fusion and super-resolution reconstruction synchronously, a pair of HR dictionaries $D_{h,l}$ and $D_{h,s}$, a pair of LR dictionaries $D_{l,l}$ and $D_{l,s}$, and a coefficient conversion matrix H between the coding coefficients of high-resolution and low-resolution image blocks are jointly trained;
3) with the low-rank dictionary $D_{l,l}$ and the sparse dictionary $D_{l,s}$ obtained in step 2), taking the LR image pairs and decomposing the LR image blocks $Y_l$ to obtain the low-rank and sparse coding coefficients $A_{l,l}$ and $A_{l,s}$;
4) from the LR coding coefficients $A_{l,s}$ and $A_{l,l}$ obtained in step 3) and the conversion matrix H obtained in step 2), constructing the coding coefficients of the HR fused image, denoted $A_F^l$ and $A_F^s$, by the maximum-absolute-value rule, and finally obtaining the high-resolution fused image blocks.
2. The multi-source image fusion method based on discriminative dictionary learning and morphological component decomposition according to claim 1, wherein the step 2) comprises the following steps:
Step2.1 the dictionaries are learned with the constructed training samples, and the proposed discriminative dictionary learning model is as follows:

[Equation (1), rendered as an image in the original]

$D_{h,l}$ and $D_{h,s}$ are the low-rank and sparse dictionaries of the HR images, $X_h$ is a high-resolution training image, and $X_l$ is the corresponding low-resolution image; the low-resolution low-rank and sparse dictionaries are denoted $D_{l,l}$ and $D_{l,s}$, each lying in $\mathbb{R}^{M\times K}$, where $M\times K$ denotes a matrix of M rows and K columns and $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ and $\varepsilon_4$ are constants controlling the amplitude of each atom in the different dictionaries; $Z_{l,l}$ and $Z_{l,s}$ are the coding coefficients of $X_l$ over the dictionaries $D_{l,l}$ and $D_{l,s}$, and $Z_{h,l}$ and $Z_{h,s}$ are the coding coefficients of $X_h$ over $D_{h,l}$ and $D_{h,s}$; H is the conversion matrix between $Z_{l,i}$ and $Z_{h,i}$ ($i=l,s$); $\|\cdot\|_F$ is the Frobenius-norm operator; $\Psi(H,Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s})$ and $\Phi(D_{l,l},D_{h,l},Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s})$ are discriminative regularization terms that ensure the discrimination ability of the learned dictionaries $D_{l,l}$, $D_{h,l}$, $D_{l,s}$ and $D_{h,s}$;
the LR image is set to the same size as the HR image by bicubic interpolation, and the relationship between the coding coefficients of the LR image and the HR image is then described as:

[Equation (2), rendered as an image in the original]

in equation (2), $\lambda_1$, $\lambda_2$ and $\lambda_5$ are regularization parameters;
within equation (2), the term $\|H\|_F^2$ serves to avoid over-fitting, and the relational transform term $\|Z_{h,s}-HZ_{l,s}\|_F^2$ represents the relationship between the sparse-component coding coefficients, where $\|\cdot\|_F$ is the Frobenius-norm operator; between the LR image and its corresponding HR image, a corresponding term establishes the relationship between $Z_{h,l}$ and $Z_{l,l}$; the low-rank components clearly have strong linear correlation, and the elements in the same coding-coefficient vector have similar values; based on this fact, an all-ones matrix $M$ is introduced, and regularization terms on $Z_{h,l}$ and $Z_{l,l}$ built from $M$ are used to characterize the low-rank-component coding coefficients;
the following regularization term is defined:

$$\Phi(D_{l,l},D_{h,l},Z_{l,l},Z_{h,l},Z_{l,s},Z_{h,s}) = \lambda_3\|D_{h,l}Z_{h,l}\|_* + \lambda_4\|D_{l,l}Z_{l,l}\|_* \quad (3)$$

in equation (3), $\lambda_3$ and $\lambda_4$ are regularization parameters; $\|D_{h,l}Z_{h,l}\|_*$ and $\|D_{l,l}Z_{l,l}\|_*$ ensure that $D_{h,l}Z_{h,l}$ and $D_{l,l}Z_{l,l}$ separated from an input image pair are low-rank, where $\|\cdot\|_*$ is the nuclear-norm operator; after the low-rank component is obtained, the sparse component is obtained as $X_i - D_{i,l}Z_{i,l}$, where $X_i$ denotes the input source data; therefore, the objective function of the discriminative dictionary learning is expressed as:

[Equation (4), rendered as an image in the original]
Step2.2 optimized solution of the dictionary learning model:
the variables to be solved, $D_{l,l}$, $D_{l,s}$, $D_{h,l}$, $D_{h,s}$, $Z_{h,s}$, $Z_{h,l}$, $Z_{l,s}$, $Z_{l,l}$ and H, make the problem non-convex and difficult to solve directly; if one of the variables is solved while the other variables are fixed, each sub-problem is a convex function, so each variable in (4) is solved in turn by an alternating iterative method;
Step2.2.1 four variable matrices $X_{h,s}$, $X_{h,l}$, $X_{l,s}$ and $X_{l,l}$ are introduced, namely the sparse and low-rank components of the input high-resolution training image and of the low-resolution image, and the optimization problem in equation (4) is converted into:

[Equation (5), rendered as an image in the original]
Step2.2.2 update $X_{h,s}$, $X_{h,l}$ and the coding-coefficient matrices $Z_{h,s}$ and $Z_{h,l}$: first, fixing the other variables and updating $X_{h,s}$, equation (5) is converted into:

[Equation (6), rendered as an image in the original]
the above problem has the following closed-form solution:

[Equation (7), rendered as an image in the original]
similarly, fixing the other variables, the solution for $X_{h,l}$ takes the form:

[Equation (8), rendered as an image in the original]
the formula (8) can be solved effectively through SVT algorithm;
after $X_{h,s}$ and $X_{h,l}$ are updated, the coding-coefficient matrices $Z_{h,s}$ and $Z_{h,l}$ are further updated:

[Equation (9), rendered as an image in the original]

[Equation (10), rendered as an image in the original]
in equation (9), [an auxiliary definition was rendered as an image in the original]; equation (9) is an $\ell_1$-minimization problem and is solved with the two-step iterative shrinkage/thresholding (TwIST) algorithm, and for equation (10) the solution is as follows:
[Equation (11), rendered as an image in the original]
Step2.2.3 update $X_{l,s}$, $X_{l,l}$ and the coding-coefficient matrices $Z_{l,s}$ and $Z_{l,l}$: first, fixing the other variables, the expressions for optimizing $X_{l,s}$ and $X_{l,l}$ are respectively:

[Equation (12), rendered as an image in the original]

[Equation (13), rendered as an image in the original]
where [an auxiliary definition was rendered as an image in the original];
The formula (13) can be effectively solved by using an SVT algorithm;
further, by fixing the other variables, the update expression for $Z_{l,s}$ and the closed-form solution for $Z_{l,l}$ are obtained:

[Equation (14), rendered as an image in the original]

[Equation (15), rendered as an image in the original]
where [an auxiliary definition was rendered as an image in the original], and equation (14) is solved with the TwIST algorithm;
Step2.2.4 update the conversion matrix H: with the other variable matrices fixed, the update problem for H obtained from equation (5) is:

[Equation (16), rendered as an image in the original]
equation (16) involves only Frobenius norms, and taking the partial derivative with respect to H yields the following equation in H:

[Equation (17), rendered as an image in the original]
according to this equation, H can be solved efficiently with MATLAB's sylvester function;
Step2.2.5 update the dictionary pairs $D_{h,s}$, $D_{h,l}$ and $D_{l,s}$, $D_{l,l}$: keeping the other variables fixed, the dictionary variables are solved with the Lagrange dual method, giving the update expressions of $D_{h,s}$ and $D_{h,l}$:

[Equation (18), rendered as an image in the original]

[Equation (19), rendered as an image in the original]
in the same way, the closed-form expressions of $D_{l,s}$ and $D_{l,l}$ are:

[Equation (20), rendered as an image in the original]

[Equation (21), rendered as an image in the original]
in the above four expressions, $\Lambda_1$, $\Lambda_2$, $\Lambda_3$ and $\Lambda_4$ are diagonal matrices of the corresponding optimal dual variables.
3. The multi-source image fusion method based on the discriminant dictionary learning and morphological component decomposition as claimed in claim 2, wherein the step 3) comprises the steps of:
Step3.1 after obtaining the low-rank dictionary $D_{l,l}$ and the sparse dictionary $D_{l,s}$, the paired LR images are taken, N image blocks are collected with a sliding window of size $n\times n$, each image block is vectorized as a column, and the columns are assembled into a matrix; the LR image-block matrix $Y_l$ is decomposed to obtain the low-rank and sparse coding coefficients $A_{l,l}$ and $A_{l,s}$; the low-rank/sparse decomposition model is as follows:

[Equation (22), rendered as an image in the original]

where $A_{l,l}(i,j)$ is the $(i,j)$-th entry of $A_{l,l}$; treating each row of $A_{l,l}$ as a group, minimizing $\|A_{l,l}\|_{2,1}$ makes the values within each row of $A_{l,l}$ the same; from the inequality $R(AB)\le\min\{R(A),R(B)\}$, where $R$ denotes the rank of a matrix, minimizing $\|A_{l,l}\|_{2,1}$ further ensures that $D_{l,l}A_{l,l}$ is low-rank while avoiding the drawbacks caused by directly minimizing $\|D_{l,l}A_{l,l}\|_*$;
the solution of Step3.2 decomposition model is carried out by adopting an alternative iteration method for the image decomposition model (22) and updating Al,sAnd Al,l
Step3.2.1 the variables $Y_{l,l}$ and $Y_{l,s}$, i.e. the low-rank and sparse components of the LR image blocks, are introduced, and equation (22) is transformed into:

[Equation (23), rendered as an image in the original]
Step3.2.2 update $Y_{l,s}$ and $Y_{l,l}$: first fixing $Y_{l,l}$, $A_{l,l}$ and $A_{l,s}$, the update expression for $Y_{l,s}$ is:

[Equation (24), rendered as an image in the original]
similarly, the TwIST algorithm is used to solve the $\ell_1$-norm optimization problem for $Y_{l,s}$;
further, fixing $Y_{l,s}$, $A_{l,l}$ and $A_{l,s}$, the update expression for $Y_{l,l}$ is:

[Equation (25), rendered as an image in the original]
in equation (25), [an auxiliary definition was rendered as an image in the original];
Step3.2.3 optimize the sparse coding coefficients $A_{l,s}$ and $A_{l,l}$; the update expressions are as follows:

[Equation (26), rendered as an image in the original]

[Equation (27), rendered as an image in the original]
equation (26) is an $\ell_1$-norm optimization problem solved with the TwIST algorithm, and equation (27) is an $\ell_{2,1}$-norm optimization problem.
4. The multi-source image fusion method based on the discriminant dictionary learning and morphological component decomposition as claimed in claim 3, wherein the step 4) comprises the steps of:
assume there are N LR images to be fused, with the low-rank dictionary $D_{l,l}$, the sparse dictionary $D_{l,s}$ and the conversion matrix H learned in step 2); the decomposition model of step 3) is used to decompose the images to be fused, yielding the coding coefficients $A_{l,l}$ and $A_{l,s}$ of the low-rank and sparse components; let $A_{l,l}^k$ and $A_{l,s}^k$ denote the low-rank and sparse coding coefficients of the k-th LR input image; the HR low-rank and sparse fused coding coefficients, denoted $A_F^l$ and $A_F^s$, are then constructed as follows:
[Equation (28), rendered as an image in the original: the maximum-absolute-value selection rule]
in equation (28), M denotes the number of input images; with the fused coefficients $A_F^l$ and $A_F^s$, the high-resolution fused image blocks can be obtained:

[Equation (29), rendered as an image in the original]
In formula (29)
Figure FDA0002498653840000078
Rec is a reconstruction operation used to convert image blocks into an image.
CN202010425926.0A 2020-05-19 2020-05-19 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning Active CN111784572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010425926.0A CN111784572B (en) 2020-05-19 2020-05-19 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010425926.0A CN111784572B (en) 2020-05-19 2020-05-19 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning

Publications (2)

Publication Number Publication Date
CN111784572A CN111784572A (en) 2020-10-16
CN111784572B true CN111784572B (en) 2022-06-28

Family

ID=72754253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010425926.0A Active CN111784572B (en) 2020-05-19 2020-05-19 Image fusion and super-resolution joint implementation method based on discriminant dictionary learning

Country Status (1)

Country Link
CN (1) CN111784572B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561842B (en) * 2020-12-07 2022-12-09 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563968A (en) * 2017-07-26 2018-01-09 昆明理工大学 A kind of method based on the group medicine image co-registration denoising for differentiating dictionary learning
CN107977949A (en) * 2017-07-26 2018-05-01 昆明理工大学 A kind of method improved based on projection dictionary to the Medical image fusion quality of study
CN108985320A (en) * 2018-05-31 2018-12-11 昆明理工大学 Based on the multisource image anastomosing method for differentiating that dictionary learning and anatomic element decompose
CN109410157A (en) * 2018-06-19 2019-03-01 昆明理工大学 The image interfusion method with PCNN is decomposed based on low-rank sparse
CN110706156A (en) * 2019-09-16 2020-01-17 昆明理工大学 Image fusion and super-resolution reconstruction combined implementation method based on multi-component analysis and residual compensation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10409888B2 (en) * 2017-06-02 2019-09-10 Mitsubishi Electric Research Laboratories, Inc. Online convolutional dictionary learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563968A (en) * 2017-07-26 2018-01-09 昆明理工大学 A kind of method based on the group medicine image co-registration denoising for differentiating dictionary learning
CN107977949A (en) * 2017-07-26 2018-05-01 昆明理工大学 A kind of method improved based on projection dictionary to the Medical image fusion quality of study
CN108985320A (en) * 2018-05-31 2018-12-11 昆明理工大学 Based on the multisource image anastomosing method for differentiating that dictionary learning and anatomic element decompose
CN109410157A (en) * 2018-06-19 2019-03-01 昆明理工大学 The image interfusion method with PCNN is decomposed based on low-rank sparse
CN110706156A (en) * 2019-09-16 2020-01-17 昆明理工大学 Image fusion and super-resolution reconstruction combined implementation method based on multi-component analysis and residual compensation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Analysis-synthesis dictionary pair learning and patch saliency measure for image fusion";Yafei Zhang等;《Signal Processing》;20200229;第167卷;1-13 *
"Super-resolution images fusion via compressed sensing and low-rank matrix decomposition";Kan Ren等;《Infrared Physics & Technology》;20150131;第68卷;61-68 *
"基于判别低秩稀疏字典学习的医学图像融合质量改善算法研究";和晓歌;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20190115;I138-3746 *
"基于判别字典学习与形态成分分解的多源图像融合";王一棠 等;《光学技术》;20190131;第45卷(第1期);63-69 *

Also Published As

Publication number Publication date
CN111784572A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
Yin et al. Simultaneous image fusion and super-resolution using sparse representation
CN103854262B (en) Medical image denoising method based on documents structured Cluster with sparse dictionary study
CN106485656B (en) A kind of method of image super-resolution reconstruct
CN107301630B (en) CS-MRI image reconstruction method based on ordering structure group non-convex constraint
Chen et al. A novel medical image fusion method based on rolling guidance filtering
CN106920214B (en) Super-resolution reconstruction method for space target image
CN109522971B (en) CS-MRI image reconstruction method based on classified image block sparse representation
CN105046672A (en) Method for image super-resolution reconstruction
CN110047058A (en) A kind of image interfusion method based on residual pyramid
CN106097253B (en) A kind of single image super resolution ratio reconstruction method based on block rotation and clarity
CN104574456B (en) A kind of super lack sampling K data imaging method of magnetic resonance based on figure regularization sparse coding
CN106251320A (en) Remote sensing image fusion method based on joint sparse Yu structure dictionary
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN105654425A (en) Single-image super-resolution reconstruction method applied to medical X-ray image
CN105590296B (en) A kind of single-frame images Super-Resolution method based on doubledictionary study
CN110570387A (en) image fusion method based on feature level Copula model similarity
CN111784572B (en) Image fusion and super-resolution joint implementation method based on discriminant dictionary learning
CN105931181B (en) Super resolution image reconstruction method and system based on non-coupled mapping relations
CN104200439B (en) Image super-resolution method based on adaptive filtering and regularization constraint
CN108596866B (en) Medical image fusion method based on combination of sparse low-rank decomposition and visual saliency
CN111242873A (en) Image denoising method based on sparse representation
CN109886869A (en) A kind of unreal structure method of face of the non-linear expansion based on contextual information
CN110706156B (en) Image fusion and super-resolution reconstruction combined implementation method based on multi-component analysis and residual compensation
CN117333365A (en) Image super-resolution method based on hybrid transducer super-resolution network
CN107993205A (en) A kind of MRI image reconstructing method based on study dictionary with the constraint of non-convex norm minimum

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant