CN113870159A - Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization

Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization

Info

Publication number
CN113870159A
CN113870159A (application CN202111019135.9A)
Authority
CN
China
Prior art keywords
resolution
image
matrix
low
hyperspectral
Prior art date
Legal status
Pending
Application number
CN202111019135.9A
Other languages
Chinese (zh)
Inventor
Tian Xin (田昕)
Zhang Wei (张玮)
Li Kun (李坤)
Li Song (李松)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN202111019135.9A
Publication of CN113870159A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing
    • G06T2207/10036: Multispectral image; Hyperspectral image


Abstract

The invention provides a hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization. The method acquires a low-resolution hyperspectral image with a hyperspectral sensor, acquires a low-resolution multispectral image and a high-resolution panchromatic image of the same scene with a multispectral sensor, constructs a fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image using dynamic gradient group sparsity regularization and low-rank regularization, solves the fusion model with the alternating direction method of multipliers to obtain a coefficient matrix, and then multiplies the coefficient matrix by a subspace matrix to obtain the high-resolution hyperspectral image. The fusion method combines subspace regularization of the image with image fusion, so that the solution of the fusion target is converted into the solution of a low-dimensional coefficient matrix, which improves computational efficiency and yields a high-resolution hyperspectral image that is superior to the comparison methods both qualitatively and quantitatively.

Description

Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization
Technical Field
The invention belongs to the field of remote sensing image fusion, and particularly relates to a hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization.
Background
In the field of remote sensing, panchromatic (PAN), multispectral (MS) and hyperspectral (HS) data have wide applications, including land classification, change detection, and the like. Owing to the cost limitations of optical sensors and the limitations of data storage and transmission bandwidth, the design of optical sensors requires a trade-off between spatial and spectral resolution. For example, hyperspectral data with up to several hundred bands provide high spectral resolution, but their spatial resolution is generally low; a panchromatic image with a single band carries little spectral information but is rich in spatial detail. Driven by applications that require remote sensing data with both high spatial and high spectral resolution, various image fusion techniques have been developed.
Multispectral fusion fuses a Low-Resolution Multispectral Image (LR-MSI) with a High-Resolution Panchromatic Image (HR-PAN) to improve the spatial resolution of the multispectral image. This fusion technology is relatively mature and mainly comprises component-substitution methods, multiresolution-analysis methods, variational methods and machine-learning-based methods. Hyperspectral fusion generally fuses a Low-Resolution Hyperspectral Image (LR-HSI) with a High-Resolution Multispectral Image (HR-MSI) to improve the spatial resolution of the hyperspectral image; compared with multispectral image fusion, it involves higher data dimensionality and more variables to be estimated. Hyperspectral image fusion methods mainly include extensions of multispectral image fusion methods, Bayesian methods and methods based on linear spectral unmixing.
The information contained in hyperspectral, multispectral and panchromatic images is complementary, so fusing the three kinds of data of the same scene can better exploit the spatial and spectral information they carry. Research on this kind of three-way data fusion is still limited. To make better use of the joint information of a hyperspectral image, a multispectral image and a panchromatic image, the invention proposes a new model-based method that combines dynamic gradient group sparsity and low-rank regularization with the fusion of hyperspectral, multispectral and panchromatic data.
Disclosure of Invention
Aiming at the shortcomings of existing remote sensing image fusion methods, the invention provides a hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization.
Step 1: acquiring a low-resolution hyperspectral image with a hyperspectral sensor, and acquiring a low-resolution multispectral image and a high-resolution panchromatic image of the same scene with a multispectral sensor;
step 2: constructing a fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image according to the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image in the step 1;
and step 3: solving a coefficient matrix X(3) by the alternating direction method of multipliers based on the fusion model of step 2, and multiplying the coefficient matrix by the subspace matrix to obtain the high-resolution hyperspectral image F(3).
Preferably, the low-resolution hyperspectral image of step 1 is recorded as a tensor $\mathcal{H} \in \mathbb{R}^{W_h \times H_h \times L_h}$, whose 3-mode unfolding matrix is $H_{(3)} \in \mathbb{R}^{L_h \times W_h H_h}$, indicating that the image has $L_h$ bands and $W_h \times H_h$ pixels;

the low-resolution multispectral image of step 1 is recorded as a tensor $\mathcal{M} \in \mathbb{R}^{W_m \times H_m \times L_m}$, whose 3-mode unfolding matrix is $M_{(3)} \in \mathbb{R}^{L_m \times W_m H_m}$, indicating that the image has $L_m$ bands and $W_m \times H_m$ pixels;

the high-resolution panchromatic image of step 1 is recorded as a tensor $\mathcal{P} \in \mathbb{R}^{W_p \times H_p \times 1}$, whose 3-mode unfolding matrix is $P_{(3)} \in \mathbb{R}^{1 \times W_p H_p}$, indicating that the image has 1 band and $W_p \times H_p$ pixels;

the resolution ratio between the high-resolution panchromatic image and the low-resolution hyperspectral image is $s_h$, and the resolution ratio between the high-resolution panchromatic image and the low-resolution multispectral image is $s_m$, i.e.

$s_h = W_p / W_h = H_p / H_h, \qquad s_m = W_p / W_m = H_p / H_m.$
Preferably, the fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image in step 2 is composed of a hyperspectral image fitting term, a multispectral image fitting term, a dynamic gradient group sparsity regularization term and a low-rank regularization term;

the hyperspectral image fitting term and the multispectral image fitting term are constructed from the spatial and spectral degradation relations between the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution hyperspectral image: the low-resolution hyperspectral image H(3) is equivalent to a spatially down-sampled version of the high-resolution hyperspectral image, and the low-resolution multispectral image M(3) is equivalent to a spatially and spectrally down-sampled version of the high-resolution hyperspectral image;

the hyperspectral image fitting term, the multispectral image fitting term, the dynamic gradient group sparsity regularization term and the low-rank regularization term together form a target energy function with respect to the coefficient matrix;

Preferably, the construction of the fusion model among the hyperspectral image, the multispectral image and the panchromatic image in step 2 specifically comprises the following sub-steps:

Step 2.1: the low-resolution hyperspectral image H(3) can be regarded as a spatially down-sampled version of the high-resolution hyperspectral image F(3):

$H_{(3)} = F_{(3)} B_h S_h$

where the matrix $B_h$ is a spatial blur matrix and $S_h$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image.

Step 2.2: the low-resolution multispectral image M(3) can be regarded as a spatially and spectrally down-sampled version of the high-resolution hyperspectral image F(3):

$M_{(3)} = R_m F_{(3)} B_m S_m$

where $R_m$ is the spectral response matrix of the multispectral instrument, the matrix $B_m$ is a spatial blur matrix, and $S_m$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image.
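To make the two degradation relations concrete, the following is a minimal NumPy sketch that produces H(3)-like and M(3)-like data from a high-resolution cube. The Gaussian blur, its standard deviation and the decimation stride are illustrative assumptions standing in for the blur matrices B_h, B_m and the down-sampling matrices S_h, S_m; the spectral response matrix passed as R plays the role of R_m.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(F, R=None, blur_sigma=2.0, stride=16):
    """Apply the degradation models of steps 2.1 and 2.2 to a HR-HSI cube
    F of shape (W, H, L): optional spectral degradation (R ~ Rm), band-wise
    spatial blur (B), and pixel decimation (S)."""
    cube = F
    if R is not None:                       # spectral degradation: (W, H, L) -> (W, H, Lm)
        cube = np.tensordot(cube, R.T, axes=([2], [0]))
    # spatial blur: band-wise Gaussian filtering
    blurred = np.stack([gaussian_filter(cube[:, :, b], blur_sigma)
                        for b in range(cube.shape[2])], axis=2)
    # spatial down-sampling: keep one pixel every `stride` pixels
    return blurred[::stride, ::stride, :]

# usage with illustrative sizes: F is a 320x320x93 HR-HSI cube, Rm is (4, 93)
# H = degrade(F, R=None, stride=16)   -> 20x20x93  LR-HSI
# M = degrade(F, R=Rm,  stride=4)     -> 80x80x4   LR-MSI
```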
Step 2.3: the high-resolution panchromatic image P and the high-resolution hyperspectral image F share the same spatial edge information, and the difference between their gradients satisfies a group sparsity property, so a dynamic gradient group sparsity regularization term can be established.

The high-resolution hyperspectral image is recorded as a tensor $\mathcal{F} \in \mathbb{R}^{W_p \times H_p \times L_h}$, whose 3-mode unfolding matrix is $F_{(3)} \in \mathbb{R}^{L_h \times W_p H_p}$, indicating that the image has $L_h$ bands and $W_p \times H_p$ pixels. The regularization term is

$\phi(F_{(3)}) = \|\nabla (R_m F_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1}$

where $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m$ bands and $\nabla$ denotes the gradient operator.
The matrix B_h of step 2.1 and the matrices R_m and B_m of step 2.2 can be estimated from the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution panchromatic image P(3); the invention uses the estimation procedure provided by the hyperspectral image fusion method named HySure.
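For illustration, the dynamic gradient group sparsity term of step 2.3 can be evaluated as in the sketch below, which groups all gradient entries of one pixel together and sums the per-group Euclidean norms; the forward-difference discretization of the gradient operator used here is an assumption, not a choice fixed by the invention.

```python
import numpy as np

def grad_group_sparsity(X, P):
    """L2,1 norm of the gradient difference between a multi-band image X of
    shape (W, H, Lm) and the panchromatic image P of shape (W, H) replicated
    to Lm bands: per pixel, all gradient entries form one group, and the
    per-group Euclidean norms are summed."""
    P_rep = np.repeat(P[:, :, None], X.shape[2], axis=2)   # replicate P to Lm bands
    D = X - P_rep
    gx = np.diff(D, axis=0, append=D[-1:, :, :])           # horizontal forward differences
    gy = np.diff(D, axis=1, append=D[:, -1:, :])           # vertical forward differences
    group = np.concatenate([gx, gy], axis=2)               # one group per pixel
    return np.sqrt((group ** 2).sum(axis=2)).sum()         # sum of per-pixel L2 norms
```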
Step 2.4: building the low-rank regularization term on the 3D coefficient tensor $\mathcal{X}$:

$\|X\|_*$

Step 2.5: converting the high-resolution hyperspectral image into a coefficient matrix model in a low-dimensional subspace by means of the product model of the subspace matrix and the coefficient matrix;

the product model of the subspace matrix and the coefficient matrix in step 2.5 is

$\mathcal{F} = \mathcal{X} \times_3 E$

where $\times_3$ denotes the mode-3 product and $E \in \mathbb{R}^{L_h \times D}$ is the subspace matrix composed of $D$ pure spectral feature vectors; the subspace matrix is obtained directly from the low-resolution hyperspectral image by image decomposition, and the decomposition method selected by the invention is vertex component analysis. The tensor $\mathcal{X} \in \mathbb{R}^{W_p \times H_p \times D}$ is the coefficient tensor, meaning that each pixel spectrum $\mathcal{F}(i,j,:)$ is represented by a linear combination of the vector members of the subspace matrix with the corresponding coefficients $\mathcal{X}(i,j,:)$, where $i$ is the spatial horizontal coordinate and $j$ the spatial vertical coordinate of the pixel.

In step 2.5 the coefficient matrix model of the high-resolution hyperspectral image in the low-dimensional subspace is

$F_{(3)} = E X_{(3)}$

where $X_{(3)} \in \mathbb{R}^{D \times W_p H_p}$ is the 3-mode unfolding of the tensor $\mathcal{X}$;
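A minimal sketch of this subspace representation is given below; the invention obtains E by vertex component analysis, and the truncated SVD used here is only a simple stand-in for that decomposition.

```python
import numpy as np

def subspace_and_coefficients(H3, D=10):
    """Estimate a D-dimensional spectral subspace E (Lh, D) from the 3-mode
    unfolding H3 (Lh, N) of the LR-HSI and project the data onto it."""
    U, s, Vt = np.linalg.svd(H3, full_matrices=False)
    E = U[:, :D]                    # subspace matrix (columns ~ spectral basis)
    X3 = E.T @ H3                   # coefficient matrix in the subspace
    return E, X3

# reconstruction: F3 = E @ X3, where X3 is the (high-resolution) coefficient matrix
```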
step 2.6: based on steps 2.1 to 2.5, the fusion model of the three kinds of data is established as

$\min_{X_{(3)}} \tfrac{1}{2}\|H_{(3)} - E X_{(3)} B_h S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E X_{(3)} B_m S_m\|_F^2 + \lambda_\phi \|\nabla(R_m E X_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|X_{(3)}\|_*$

where $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, and $\lambda_m$, $\lambda_\phi$ and $\lambda_l$ are parameters balancing the respective terms. X(3) is the two-dimensional matrix form of the coefficients.
Preferably, the specific implementation of step 3 comprises the following sub-steps:

Step 3.1: introduce an auxiliary variable O satisfying O = X(3)B_h; an auxiliary variable U satisfying U = X(3)B_m; an auxiliary variable V satisfying V = X(3); an auxiliary variable W satisfying W = R_m E V; and an auxiliary variable Q satisfying Q = X(3). The three-data fusion model is then expressed as

$\min_{X_{(3)},O,U,V,W,Q} \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_*$

s.t. $O = X_{(3)} B_h$, $U = X_{(3)} B_m$, $V = X_{(3)}$, $W = R_m E V$, $Q = X_{(3)}$.

The augmented Lagrangian function of the three-data fusion model is expressed as

$\mathcal{L} = \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_* + \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O + Y_1\|_F^2 + \|X_{(3)} B_m - U + Y_2\|_F^2 + \|X_{(3)} - V + Y_3\|_F^2 + \|R_m E V - W + Y_4\|_F^2 + \|X_{(3)} - Q + Y_5\|_F^2\big)$

where Y_1, Y_2, Y_3, Y_4 and Y_5 are scaled dual variables, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, $\lambda_m$, $\lambda_\phi$ and $\lambda_l$ are parameters balancing the respective terms, and $\mu$ is a penalty parameter. X(3) is the two-dimensional matrix form of the coefficients, H(3) is the two-dimensional matrix representation of the low-resolution hyperspectral image, M(3) is the two-dimensional matrix representation of the low-resolution multispectral image, $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m$ bands, $\nabla$ denotes the gradient operator, $B_h$ and $B_m$ are spatial blur matrices, $S_h$ and $S_m$ are spatial down-sampling matrices, $R_m$ is the spectral response matrix of the multispectral instrument, and $E$ is the subspace matrix. The optimization of the augmented Lagrangian function of the three-data fusion model can be decomposed into an X sub-problem, an O sub-problem, a U sub-problem, a V sub-problem, a W sub-problem, a Q sub-problem and a Y sub-problem.
Step 3.2: solving the X sub-problem:

$X_{(3)}^{t+1} = \arg\min_{X_{(3)}} \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O^t + Y_1^t\|_F^2 + \|X_{(3)} B_m - U^t + Y_2^t\|_F^2 + \|X_{(3)} - V^t + Y_3^t\|_F^2 + \|X_{(3)} - Q^t + Y_5^t\|_F^2\big)$

where the superscript $t$ denotes the iteration number, Y_1, Y_2, Y_3 and Y_5 are scaled dual variables, $\|\cdot\|_F$ denotes the Frobenius norm, $\mu$ is the penalty parameter, X(3) is the two-dimensional matrix form of the coefficients, and $B_h$ and $B_m$ are spatial blur matrices.

The solution of the above problem satisfies the linear system

$X_{(3)}^{t+1}(B_h B_h^{T} + B_m B_m^{T} + 2I) = (O^t - Y_1^t) B_h^{T} + (U^t - Y_2^t) B_m^{T} + (V^t - Y_3^t) + (Q^t - Y_5^t).$
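A minimal sketch of this X update, assuming B_h and B_m are available as explicit matrices, is given below; in practice they model circular convolutions and the same linear system is usually solved in the frequency domain.

```python
import numpy as np

def update_X(O, U, V, Q, Y1, Y2, Y3, Y5, Bh, Bm):
    """Closed-form X update of step 3.2: solve
       X (Bh Bh^T + Bm Bm^T + 2I) = (O-Y1)Bh^T + (U-Y2)Bm^T + (V-Y3) + (Q-Y5),
    i.e. the normal equation of the quadratic sub-problem."""
    N = Bh.shape[0]
    A = Bh @ Bh.T + Bm @ Bm.T + 2.0 * np.eye(N)
    rhs = (O - Y1) @ Bh.T + (U - Y2) @ Bm.T + (V - Y3) + (Q - Y5)
    return np.linalg.solve(A.T, rhs.T).T      # X A = rhs  <=>  A^T X^T = rhs^T
```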
step 3.3: solving an O subproblem:
Figure BDA0003241124940000061
wherein, Y1In the form of a dual-scale variable,
Figure BDA0003241124940000062
represents Frobenius norm, mu is penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients. H(3)Is a two-dimensional matrix representation of a low-resolution hyperspectral image.
Figure BDA0003241124940000063
In the form of a spatial blur matrix, the matrix is,
Figure BDA0003241124940000064
in order to perform a spatial down-sampling operation,
Figure BDA0003241124940000065
is a subspace matrix.
Divide O into OShAnd
Figure BDA0003241124940000066
wherein
Figure BDA0003241124940000067
Representation is not represented by matrix ShAnd (3) solving the O subproblem by the selected pixel point:
Figure BDA0003241124940000068
Figure BDA0003241124940000069
step 3.4: solving the U sub-problem:
Figure BDA00032411249400000610
wherein, Y2In the form of a dual-scale variable,
Figure BDA00032411249400000611
denotes the Frobenius norm, λmTo balance the parameters of the various items. μ is a penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients. M(3)Is a two-dimensional matrix representation of the low resolution multispectral image.
Figure BDA00032411249400000612
In the form of a spatial blur matrix, the matrix is,
Figure BDA00032411249400000613
in order to perform a spatial down-sampling operation,
Figure BDA00032411249400000614
is a spectral response matrix of a multi-spectral instrument,
Figure BDA00032411249400000615
is a subspace matrix.
Divide U into USmAnd
Figure BDA00032411249400000616
wherein
Figure BDA00032411249400000617
Representation is not represented by matrix SmThe solution of the selected pixel point and the U subproblem is as follows:
Figure BDA00032411249400000618
Figure BDA00032411249400000619
step 3.5: solving the V subproblem:
Figure BDA00032411249400000620
wherein, Y3,Y4In the form of a dual-scale variable,
Figure BDA0003241124940000071
represents Frobenius norm, mu is penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients.
Figure BDA0003241124940000072
Is a spectral response matrix of a multi-spectral instrument,
Figure BDA0003241124940000073
is a subspace matrix.
The solution to the above problem is:
Figure BDA0003241124940000074
step 3.6: solving the W sub-problem:
Figure BDA0003241124940000075
wherein, Y4In the form of a dual-scale variable,
Figure BDA0003241124940000076
representing Frobenius norm, | | · |. luminance2,1Denotes L2,1 norm,. lambda.φTo balance the parameters of the various items. μ is a penalty parameter.
Figure BDA0003241124940000077
Indicating the replication of high resolution panchromatic images to LmAnd (4) a plurality of wave bands.
Figure BDA0003241124940000078
The sign of the gradient is indicated.
Figure BDA0003241124940000079
Is a spectral response matrix of a multi-spectral instrument,
Figure BDA00032411249400000710
is a subspace matrix.
Let G be W-P, then the W sub-problem becomes
Figure BDA00032411249400000711
The W subproblem) can be solved directly by a vector Total Variation (vector Total Variation) algorithm.
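The core operation inside such a vector total variation solver is the group (L2,1) shrinkage sketched below; this is only the proximal operator of the L2,1 norm, not a complete vector total variation algorithm.

```python
import numpy as np

def group_soft_threshold(G, tau):
    """Proximal operator of tau*||.||_{2,1}: each column of G (one pixel's
    gradient group) is shrunk towards zero by tau in Euclidean norm."""
    norms = np.linalg.norm(G, axis=0, keepdims=True)              # per-group L2 norms
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return G * scale
```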
Step 3.7: solving the Q sub-problem:

$Q^{t+1} = \arg\min_{Q} \lambda_l \|Q\|_* + \tfrac{\mu}{2}\|X_{(3)}^{t+1} - Q + Y_5^t\|_F^2$

where Y_5 is a scaled dual variable, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_*$ denotes the nuclear norm, $\lambda_l$ is a balancing parameter, $\mu$ is the penalty parameter and X(3) is the two-dimensional matrix form of the coefficients.

The Q sub-problem can be solved directly by the singular value shrinkage algorithm.
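A minimal sketch of that singular value shrinkage step is given below; the threshold lam_l/mu is the standard one for the nuclear-norm proximal operator and is assumed here rather than quoted from the original filing.

```python
import numpy as np

def update_Q(X3, Y5, lam_l, mu):
    """Q update of step 3.7: singular value shrinkage (SVT) of X + Y5 with
    threshold lam_l / mu, i.e. the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X3 + Y5, full_matrices=False)
    s_shrunk = np.maximum(s - lam_l / mu, 0.0)   # soft-threshold the singular values
    return (U * s_shrunk) @ Vt
```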
Step 3.8: solving the Y sub-problem. The dual variables are updated directly by

$Y_1^{t+1} = Y_1^{t} + X_{(3)}^{t+1} B_h - O^{t+1}$
$Y_2^{t+1} = Y_2^{t} + X_{(3)}^{t+1} B_m - U^{t+1}$
$Y_3^{t+1} = Y_3^{t} + X_{(3)}^{t+1} - V^{t+1}$
$Y_4^{t+1} = Y_4^{t} + R_m E V^{t+1} - W^{t+1}$
$Y_5^{t+1} = Y_5^{t} + X_{(3)}^{t+1} - Q^{t+1}$

where Y_1, Y_2, Y_3, Y_4 and Y_5 are scaled dual variables, $\mu$ is the penalty parameter, X(3) is the two-dimensional matrix form of the coefficients, $B_h$ and $B_m$ are spatial blur matrices, $R_m$ is the spectral response matrix of the multispectral instrument, $E$ is the subspace matrix, and O, U, V, W and Q are the auxiliary variables.
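These scaled dual updates translate directly into code; a minimal sketch with all quantities stored as NumPy arrays:

```python
def update_duals(Y1, Y2, Y3, Y4, Y5, X, O, U, V, W, Q, Bh, Bm, Rm, E):
    """Scaled dual-variable updates of step 3.8: each Yi accumulates the
    residual of its constraint."""
    Y1 = Y1 + X @ Bh - O
    Y2 = Y2 + X @ Bm - U
    Y3 = Y3 + X - V
    Y4 = Y4 + Rm @ E @ V - W
    Y5 = Y5 + X - Q
    return Y1, Y2, Y3, Y4, Y5
```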
Step 3.9: the coefficient tensor $\hat{\mathcal{X}}$ is obtained by iterating the above sub-problems; according to $\mathcal{F} = \mathcal{X} \times_3 E$ of step 2, the high-resolution hyperspectral image $\hat{\mathcal{F}}$ is then obtained from $\hat{\mathcal{X}}$ and $E$.
The method meets the application requirement of obtaining a high-resolution hyperspectral image by fusing a low-resolution hyperspectral image with a low-resolution multispectral image and a high-resolution panchromatic image. The fusion method combines subspace regularization of the image with image fusion, thereby converting the solution of the fusion target into the solution of a low-dimensional coefficient matrix. The proposed model comprises two quadratic data fitting terms, a dynamic gradient group sparsity regularization term and a low-rank regularization term, and is solved by the ADMM algorithm, yielding a hyperspectral fusion image that is superior to the comparison methods both qualitatively and quantitatively.
Drawings
FIG. 1: experimental flow chart of the invention.
FIG. 2: fusion result of the comparison method CNMF.
FIG. 3: fusion result of the comparison method GSA.
FIG. 4: fusion result of the method provided by the invention.
FIG. 5: reference image of the invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are only intended to illustrate the present invention and are not to be construed as limiting it.
The data used in the present invention is the University of Pavia scene acquired by the German ROSIS airborne imaging spectrometer. The invention selects 93 of the 115 bands of this data to obtain the HR-HSI (320 × 320 × 93), which serves as the reference image Ref for quality evaluation after fusion. To generate the LR-HSI (20 × 20 × 93), the invention first blurs the HR-HSI with a 7 × 7 Gaussian kernel (zero mean, standard deviation 2) and then keeps one pixel every sixteen pixels along the width and height modes of the HR-HSI. To simulate the LR-MSI (80 × 80 × 4) of the same scene, the HR-HSI is sampled along the spectral mode using an IKONOS-like reflectance spectral response filter, then blurred with a 7 × 7 Gaussian kernel (zero mean, standard deviation 2), and finally one pixel is kept every four pixels along the width and height modes. The HR-PAN (320 × 320) of the same scene is generated by first sampling the HR-HSI into a four-band image along the spectral mode using the IKONOS-like reflectance spectral response filter, and then linearly combining the first three bands with the weights [0.114, 0.587, 0.299]. The flow chart of the fusion experiment is shown in FIG. 1.
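As a small illustration of the HR-PAN simulation just described, the sketch below forms the panchromatic band from the first three bands of the IKONOS-like four-band image using the stated weights; the band ordering is an assumption.

```python
import numpy as np

def simulate_pan(msi):
    """Simulate the HR-PAN: linearly combine the first three bands of the
    IKONOS-like 4-band image with the weights [0.114, 0.587, 0.299]."""
    w = np.array([0.114, 0.587, 0.299])
    return np.tensordot(msi[:, :, :3], w, axes=([2], [0]))   # (320, 320)
```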
The embodiment provides a hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization, which comprises the following steps:
Step 1: acquiring a low-resolution hyperspectral image with a hyperspectral sensor, and acquiring a low-resolution multispectral image and a high-resolution panchromatic image of the same scene with a multispectral sensor;
In step 1 the low-resolution hyperspectral image is recorded as a tensor $\mathcal{H} \in \mathbb{R}^{W_h \times H_h \times L_h}$, whose 3-mode unfolding matrix is $H_{(3)} \in \mathbb{R}^{L_h \times W_h H_h}$, indicating that the image has $L_h = 93$ bands and $W_h \times H_h = 20 \times 20 = 400$ pixels;

in step 1 the low-resolution multispectral image is recorded as a tensor $\mathcal{M} \in \mathbb{R}^{W_m \times H_m \times L_m}$, whose 3-mode unfolding matrix is $M_{(3)} \in \mathbb{R}^{L_m \times W_m H_m}$, indicating that the image has $L_m = 4$ bands and $W_m \times H_m = 80 \times 80 = 6400$ pixels;

in step 1 the high-resolution panchromatic image is recorded as a tensor $\mathcal{P} \in \mathbb{R}^{W_p \times H_p \times 1}$, whose 3-mode unfolding matrix is $P_{(3)} \in \mathbb{R}^{1 \times W_p H_p}$, indicating that the image has 1 band and $W_p \times H_p = 320 \times 320 = 102400$ pixels;

the resolution ratio between the high-resolution panchromatic image and the low-resolution hyperspectral image is $s_h = W_p / W_h = H_p / H_h = 16$, and the resolution ratio between the high-resolution panchromatic image and the low-resolution multispectral image is $s_m = W_p / W_m = H_p / H_m = 4$.
Step 2: constructing a fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image according to the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image in the step 1;
preferably, the fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image in step 2 is composed of a hyperspectral image fitting term, a multispectral image fitting term, a dynamic gradient group sparsity regularization term and a low-rank regularization term;
the hyperspectral image fitting term and the multispectral image fitting term are constructed from the spatial and spectral degradation relations between the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution hyperspectral image: the low-resolution hyperspectral image H(3) is equivalent to a spatially down-sampled version of the high-resolution hyperspectral image, and the low-resolution multispectral image M(3) is equivalent to a spatially and spectrally down-sampled version of the high-resolution hyperspectral image;
the hyperspectral image fitting term, the multispectral image fitting term, the dynamic gradient group sparsity regularization term and the low-rank regularization term together form a target energy function with respect to the coefficient matrix.
The specific implementation of the construction of the fusion model among the hyperspectral image, the multispectral image and the panchromatic image in step 2 comprises the following sub-steps:

Step 2.1: the low-resolution hyperspectral image H(3) can be regarded as a spatially down-sampled version of the high-resolution hyperspectral image F(3):

$H_{(3)} = F_{(3)} B_h S_h$

where the matrix $B_h$ is a spatial blur matrix and $S_h$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image.

Step 2.2: the low-resolution multispectral image M(3) can be regarded as a spatially and spectrally down-sampled version of the high-resolution hyperspectral image F(3):

$M_{(3)} = R_m F_{(3)} B_m S_m$

where $R_m$ is the spectral response matrix of the multispectral instrument, the matrix $B_m$ is a spatial blur matrix, and $S_m$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image.
Step 2.3: the high-resolution panchromatic image P and the high-resolution hyperspectral image F share the same spatial edge information, and the difference between their gradients satisfies a group sparsity property, so a dynamic gradient group sparsity regularization term can be established.

The high-resolution hyperspectral image is recorded as a tensor $\mathcal{F} \in \mathbb{R}^{W_p \times H_p \times L_h}$, whose 3-mode unfolding matrix is $F_{(3)} \in \mathbb{R}^{L_h \times W_p H_p}$, indicating that the image has $L_h = 93$ bands and $W_p \times H_p = 320 \times 320 = 102400$ pixels. The regularization term is

$\phi(F_{(3)}) = \|\nabla (R_m F_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1}$

where $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m = 4$ bands and $\nabla$ denotes the gradient operator.
The matrix B_h of step 2.1 and the matrices R_m and B_m of step 2.2 can be estimated from the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution panchromatic image P(3), using the estimation method provided by the hyperspectral image fusion method named HySure.
Step 2.4: building the low-rank regularization term on the 3D coefficient tensor $\mathcal{X}$:

$\|X\|_*$

Step 2.5: converting the high-resolution hyperspectral image into a coefficient matrix model in a low-dimensional subspace by means of the product model of the subspace matrix and the coefficient matrix;

the product model of the subspace matrix and the coefficient matrix in step 2.5 is

$\mathcal{F} = \mathcal{X} \times_3 E$

where $\times_3$ denotes the mode-3 product and $E \in \mathbb{R}^{L_h \times D}$ is the subspace matrix composed of $D = 10$ pure spectral feature vectors; the subspace matrix is obtained directly from the low-resolution hyperspectral image by image decomposition, the selected decomposition method being vertex component analysis. The tensor $\mathcal{X} \in \mathbb{R}^{W_p \times H_p \times D}$ is the coefficient tensor, meaning that each pixel spectrum $\mathcal{F}(i,j,:)$ is represented by a linear combination of the vector members of the subspace matrix with the corresponding coefficients $\mathcal{X}(i,j,:)$, where $i$ is the spatial horizontal coordinate and $j$ the spatial vertical coordinate of the pixel.

In step 2.5 the coefficient matrix model of the high-resolution hyperspectral image in the low-dimensional subspace is

$F_{(3)} = E X_{(3)}$

where $X_{(3)} \in \mathbb{R}^{D \times W_p H_p}$ is the 3-mode unfolding of the tensor $\mathcal{X}$;
step 2.6: based on steps 2.1 to 2.5, the fusion model of the three kinds of data is established as

$\min_{X_{(3)}} \tfrac{1}{2}\|H_{(3)} - E X_{(3)} B_h S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E X_{(3)} B_m S_m\|_F^2 + \lambda_\phi \|\nabla(R_m E X_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|X_{(3)}\|_*$

where $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, and $\lambda_m = 0.01$, $\lambda_\phi = 0.01$ and $\lambda_l = 0.001$ are the parameters balancing the respective terms. X(3) is the two-dimensional matrix form of the coefficients.
And step 3: solving the coefficient matrix X(3) by the alternating direction method of multipliers based on the fusion model of step 2, and multiplying the coefficient matrix by the subspace matrix to obtain the high-resolution hyperspectral image F(3).
The specific implementation of step 3 comprises the following sub-steps:

Step 3.1: introduce an auxiliary variable O satisfying O = X(3)B_h; an auxiliary variable U satisfying U = X(3)B_m; an auxiliary variable V satisfying V = X(3); an auxiliary variable W satisfying W = R_m E V; and an auxiliary variable Q satisfying Q = X(3). The three-data fusion model is then expressed as

$\min_{X_{(3)},O,U,V,W,Q} \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_*$

s.t. $O = X_{(3)} B_h$, $U = X_{(3)} B_m$, $V = X_{(3)}$, $W = R_m E V$, $Q = X_{(3)}$.

The augmented Lagrangian function of the three-data fusion model is expressed as

$\mathcal{L} = \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_* + \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O + Y_1\|_F^2 + \|X_{(3)} B_m - U + Y_2\|_F^2 + \|X_{(3)} - V + Y_3\|_F^2 + \|R_m E V - W + Y_4\|_F^2 + \|X_{(3)} - Q + Y_5\|_F^2\big)$

where Y_1, Y_2, Y_3, Y_4 and Y_5 are scaled dual variables, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, $\lambda_m = 0.01$, $\lambda_\phi = 0.01$ and $\lambda_l = 0.001$ are the parameters balancing the respective terms, and $\mu = 0.001$ is the penalty parameter. X(3) is the two-dimensional matrix form of the coefficients, H(3) is the two-dimensional matrix representation of the low-resolution hyperspectral image, M(3) is the two-dimensional matrix representation of the low-resolution multispectral image, $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m = 4$ bands, $\nabla$ denotes the gradient operator, $B_h$ and $B_m$ are spatial blur matrices, $S_h$ and $S_m$ are spatial down-sampling matrices, $R_m$ is the spectral response matrix of the multispectral instrument, and $E$ is the subspace matrix. The optimization of the augmented Lagrangian function of the three-data fusion model can be decomposed into an X sub-problem, an O sub-problem, a U sub-problem, a V sub-problem, a W sub-problem, a Q sub-problem and a Y sub-problem.
Step 3.2: solving the X sub-problem:

$X_{(3)}^{t+1} = \arg\min_{X_{(3)}} \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O^t + Y_1^t\|_F^2 + \|X_{(3)} B_m - U^t + Y_2^t\|_F^2 + \|X_{(3)} - V^t + Y_3^t\|_F^2 + \|X_{(3)} - Q^t + Y_5^t\|_F^2\big)$

where the superscript $t$ denotes the iteration number, Y_1, Y_2, Y_3 and Y_5 are scaled dual variables, $\|\cdot\|_F$ denotes the Frobenius norm, $\mu = 0.001$ is the penalty parameter, X(3) is the two-dimensional matrix form of the coefficients, and $B_h$ and $B_m$ are spatial blur matrices.

The solution of the above problem satisfies the linear system

$X_{(3)}^{t+1}(B_h B_h^{T} + B_m B_m^{T} + 2I) = (O^t - Y_1^t) B_h^{T} + (U^t - Y_2^t) B_m^{T} + (V^t - Y_3^t) + (Q^t - Y_5^t).$
step 3.3: solving an O subproblem:
Figure BDA0003241124940000135
wherein, Y1Is a scale pairThe amount of the even variable is,
Figure BDA0003241124940000136
denotes Frobenius norm, with μ ═ 0.001 as a penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients. H(3)Is a two-dimensional matrix representation of a low-resolution hyperspectral image.
Figure BDA0003241124940000137
In the form of a spatial blur matrix, the matrix is,
Figure BDA0003241124940000138
in order to perform a spatial down-sampling operation,
Figure BDA0003241124940000139
is a subspace matrix.
Divide O into OShAnd
Figure BDA00032411249400001310
wherein
Figure BDA00032411249400001311
Representation is not represented by matrix ShAnd (3) solving the O subproblem by the selected pixel point:
Figure BDA00032411249400001312
Figure BDA00032411249400001313
step 3.4: solving the U sub-problem:
Figure BDA0003241124940000141
wherein, Y2In the form of a dual-scale variable,
Figure BDA0003241124940000142
denotes the Frobenius norm, λm0.01 is a parameter for balancing the respective items. μ ═ 0.001 is a penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients. M(3)Is a two-dimensional matrix representation of the low resolution multispectral image.
Figure BDA0003241124940000143
In the form of a spatial blur matrix, the matrix is,
Figure BDA0003241124940000144
in order to perform a spatial down-sampling operation,
Figure BDA0003241124940000145
is a spectral response matrix of a multi-spectral instrument,
Figure BDA0003241124940000146
is a subspace matrix.
Divide U into USmAnd
Figure BDA0003241124940000147
wherein
Figure BDA0003241124940000148
Representation is not represented by matrix SmThe solution of the selected pixel point and the U subproblem is as follows:
Figure BDA0003241124940000149
Figure BDA00032411249400001410
step 3.5: solving the V subproblem:
Figure BDA00032411249400001411
wherein, Y3,Y4In the form of a dual-scale variable,
Figure BDA00032411249400001412
denotes Frobenius norm, with μ ═ 0.001 as a penalty parameter. X(3)In the form of a two-dimensional matrix representation of the coefficients.
Figure BDA00032411249400001413
Is a spectral response matrix of a multi-spectral instrument,
Figure BDA00032411249400001414
is a subspace matrix.
The solution to the above problem is:
Figure BDA00032411249400001415
step 3.6: solving the W sub-problem:
Figure BDA00032411249400001416
wherein, Y4In the form of a dual-scale variable,
Figure BDA00032411249400001417
representing Frobenius norm, | | · |. luminance2,1Denotes L2,1 norm,. lambda.φ0.01 is a parameter for balancing the respective items. μ ═ 0.001 is a penalty parameter.
Figure BDA00032411249400001418
Indicating the replication of high resolution panchromatic images to Lm4 bands.
Figure BDA00032411249400001419
The sign of the gradient is indicated.
Figure BDA00032411249400001420
Is a spectral response matrix of a multi-spectral instrument,
Figure BDA0003241124940000151
is a subspace matrix.
Let G be W-P, then the W sub-problem becomes
Figure BDA0003241124940000152
The W subproblem) can be solved directly by a vector Total Variation (vector Total Variation) algorithm.
Step 3.7: solving the Q sub-problem:

$Q^{t+1} = \arg\min_{Q} \lambda_l \|Q\|_* + \tfrac{\mu}{2}\|X_{(3)}^{t+1} - Q + Y_5^t\|_F^2$

where Y_5 is a scaled dual variable, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_*$ denotes the nuclear norm, $\lambda_l = 0.001$ is a balancing parameter, $\mu = 0.001$ is the penalty parameter and X(3) is the two-dimensional matrix form of the coefficients.

The Q sub-problem can be solved directly by the singular value shrinkage algorithm.
Step 3.8: solving the Y sub-problem. The dual variables are updated directly by

$Y_1^{t+1} = Y_1^{t} + X_{(3)}^{t+1} B_h - O^{t+1}$
$Y_2^{t+1} = Y_2^{t} + X_{(3)}^{t+1} B_m - U^{t+1}$
$Y_3^{t+1} = Y_3^{t} + X_{(3)}^{t+1} - V^{t+1}$
$Y_4^{t+1} = Y_4^{t} + R_m E V^{t+1} - W^{t+1}$
$Y_5^{t+1} = Y_5^{t} + X_{(3)}^{t+1} - Q^{t+1}$

where Y_1, Y_2, Y_3, Y_4 and Y_5 are scaled dual variables, $\mu = 0.001$ is the penalty parameter, X(3) is the two-dimensional matrix form of the coefficients, $B_h$ and $B_m$ are spatial blur matrices, $R_m$ is the spectral response matrix of the multispectral instrument, $E$ is the subspace matrix, and O, U, V, W and Q are the auxiliary variables.
Step 3.9: the coefficient tensor $\hat{\mathcal{X}}$ is obtained by iterating the above sub-problems; according to $\mathcal{F} = \mathcal{X} \times_3 E$ of step 2, the high-resolution hyperspectral image $\hat{\mathcal{F}}$ is then obtained.
The flow of the optimization solution is shown in table 1.
TABLE 1: flow of the optimization solution (initialize X(3), the auxiliary variables O, U, V, W, Q and the dual variables Y_1 to Y_5; iterate the X, O, U, V, W, Q and Y updates of steps 3.2 to 3.8 until convergence; output the coefficient matrix X(3)).
And step 4: based on the above steps, the invention fuses the three kinds of data, namely the hyperspectral image $\mathcal{H}$, the multispectral image $\mathcal{M}$ and the panchromatic image $\mathcal{P}$, to obtain the high-resolution hyperspectral image. In order to evaluate the fused image quantitatively and qualitatively, the invention selects CNMF and GSA as comparison methods; these comparison methods are used to fuse the hyperspectral image and the multispectral image, and their results are compared with the fusion result of the invention. Four quality evaluation indices are used: ERGAS, SAM, RMSE and PSNR. The quantitative analysis results are shown in Table 2, where bold indicates the best result, and the visual comparison results are shown in FIGS. 2-5. FIG. 2 is the fusion result of the comparison method CNMF, FIG. 3 is the fusion result of the comparison method GSA, FIG. 4 is the fusion result of the invention, and FIG. 5 is the reference image. It can be seen that FIGS. 2 and 3 are clearly blurred and much detail is lost, whereas FIG. 4 is much clearer; taking the small house in the upper right corner as an example, the grid division can be seen in FIG. 4 but not in FIGS. 2 and 3. FIG. 4 is also closer to the reference image of FIG. 5 than FIGS. 2 and 3 are.
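For reference, the four indices can be computed as in the sketch below; the exact conventions (for example per-band versus global PSNR) may differ from those used in the experiments, so this is an assumed implementation rather than the one that produced Table 2.

```python
import numpy as np

def quality_metrics(ref, fused, ratio=16):
    """Reference-based quality indices RMSE, PSNR, SAM (degrees) and ERGAS
    for (W, H, L) cubes; `ratio` is the spatial resolution ratio used by ERGAS."""
    diff = ref - fused
    rmse = np.sqrt(np.mean(diff ** 2))
    psnr = 20 * np.log10(ref.max() / rmse)
    # SAM: mean spectral angle between corresponding pixel spectra
    num = np.sum(ref * fused, axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(fused, axis=2) + 1e-12
    sam = np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
    # ERGAS: band-wise relative RMSE weighted by band means
    band_rmse = np.sqrt(np.mean(diff ** 2, axis=(0, 1)))
    band_mean = np.mean(ref, axis=(0, 1)) + 1e-12
    ergas = 100.0 / ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2))
    return {"RMSE": rmse, "PSNR": psnr, "SAM": sam, "ERGAS": ergas}
```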
TABLE 2: quantitative comparison of CNMF, GSA and the proposed method in terms of ERGAS, SAM, RMSE and PSNR.
It can be seen that, because the fusion method of the invention also exploits the information in the panchromatic image, its fusion result is sharper than those of the HSI/MSI and HSI/PAN fusion algorithms, each index is closest to the ideal value, and the invention is superior to the comparison methods both qualitatively and quantitatively.
It should be understood that parts of the description not set forth in detail are of prior art.
It should be understood that the above-mentioned embodiments are described in some detail, and not intended to limit the scope of the invention, and those skilled in the art will be able to make alterations and modifications without departing from the scope of the invention as defined by the appended claims.

Claims (5)

1. A hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization, characterized by comprising:
step 1: acquiring a low-resolution hyperspectral image with a hyperspectral sensor, and acquiring a low-resolution multispectral image and a high-resolution panchromatic image of the same scene with a multispectral sensor;
step 2: constructing a fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image obtained in step 1;
and step 3: solving a coefficient matrix X(3) by the alternating direction method of multipliers based on the fusion model of step 2, and multiplying the coefficient matrix by the subspace matrix to obtain the high-resolution hyperspectral image F(3).
2. The hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization according to claim 1, characterized in that the low-resolution hyperspectral image of step 1 is recorded as a tensor $\mathcal{H} \in \mathbb{R}^{W_h \times H_h \times L_h}$, whose 3-mode unfolding matrix is $H_{(3)} \in \mathbb{R}^{L_h \times W_h H_h}$, indicating that the image has $L_h$ bands and $W_h \times H_h$ pixels;
the low-resolution multispectral image of step 1 is recorded as a tensor $\mathcal{M} \in \mathbb{R}^{W_m \times H_m \times L_m}$, whose 3-mode unfolding matrix is $M_{(3)} \in \mathbb{R}^{L_m \times W_m H_m}$, indicating that the image has $L_m$ bands and $W_m \times H_m$ pixels;
the high-resolution panchromatic image of step 1 is recorded as a tensor $\mathcal{P} \in \mathbb{R}^{W_p \times H_p \times 1}$, whose 3-mode unfolding matrix is $P_{(3)} \in \mathbb{R}^{1 \times W_p H_p}$, indicating that the image has 1 band and $W_p \times H_p$ pixels;
the resolution ratio between the high-resolution panchromatic image and the low-resolution hyperspectral image is $s_h$ and the resolution ratio between the high-resolution panchromatic image and the low-resolution multispectral image is $s_m$, i.e. $s_h = W_p / W_h = H_p / H_h$ and $s_m = W_p / W_m = H_p / H_m$.
3. The hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization according to claim 1, characterized in that the fusion model among the low-resolution hyperspectral image, the low-resolution multispectral image and the high-resolution panchromatic image in step 2 is composed of a hyperspectral image fitting term, a multispectral image fitting term, a dynamic gradient group sparsity regularization term and a low-rank regularization term;
the hyperspectral image fitting term and the multispectral image fitting term are constructed from the spatial and spectral degradation relations between the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution hyperspectral image: the low-resolution hyperspectral image H(3) is equivalent to a spatially down-sampled version of the high-resolution hyperspectral image, and the low-resolution multispectral image M(3) is equivalent to a spatially and spectrally down-sampled version of the high-resolution hyperspectral image;
the hyperspectral image fitting term, the multispectral image fitting term, the dynamic gradient group sparsity regularization term and the low-rank regularization term together form a target energy function with respect to the coefficient matrix.
4. The hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization according to claim 1, characterized in that the construction of the fusion model among the hyperspectral image, the multispectral image and the panchromatic image in step 2 specifically comprises the following sub-steps:
step 2.1: the low-resolution hyperspectral image H(3) can be regarded as a spatially down-sampled version of the high-resolution hyperspectral image F(3):
$H_{(3)} = F_{(3)} B_h S_h$
where the matrix $B_h$ is a spatial blur matrix and $S_h$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image;
step 2.2: the low-resolution multispectral image M(3) can be regarded as a spatially and spectrally down-sampled version of the high-resolution hyperspectral image F(3):
$M_{(3)} = R_m F_{(3)} B_m S_m$
where $R_m$ is the spectral response matrix of the multispectral instrument, the matrix $B_m$ is a spatial blur matrix and $S_m$ is a spatial down-sampling matrix used to reduce the spatial resolution of the hyperspectral image;
step 2.3: the high-resolution panchromatic image P and the high-resolution hyperspectral image F share the same spatial edge information, and the difference between their gradients satisfies a group sparsity property, so a dynamic gradient group sparsity regularization term can be established;
the high-resolution hyperspectral image is recorded as a tensor $\mathcal{F} \in \mathbb{R}^{W_p \times H_p \times L_h}$, whose 3-mode unfolding matrix is $F_{(3)} \in \mathbb{R}^{L_h \times W_p H_p}$, indicating that the image has $L_h$ bands and $W_p \times H_p$ pixels; the regularization term is
$\phi(F_{(3)}) = \|\nabla (R_m F_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1}$
where $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m$ bands and $\nabla$ denotes the gradient operator;
the matrix B_h of step 2.1 and the matrices R_m and B_m of step 2.2 can be estimated from the low-resolution hyperspectral image H(3), the low-resolution multispectral image M(3) and the high-resolution panchromatic image P(3); the estimation method provided by the hyperspectral image fusion method named HySure is used in the invention;
step 2.4: building the low-rank regularization term on the 3D coefficient tensor $\mathcal{X}$:
$\|X\|_*$
step 2.5: converting the high-resolution hyperspectral image into a coefficient matrix model in a low-dimensional subspace by means of the product model of the subspace matrix and the coefficient matrix;
the product model of the subspace matrix and the coefficient matrix in step 2.5 is
$\mathcal{F} = \mathcal{X} \times_3 E$
where $\times_3$ denotes the mode-3 product and $E \in \mathbb{R}^{L_h \times D}$ is the subspace matrix composed of $D$ pure spectral feature vectors; the subspace matrix is obtained directly from the low-resolution hyperspectral image by image decomposition, the selected decomposition method being vertex component analysis; the tensor $\mathcal{X} \in \mathbb{R}^{W_p \times H_p \times D}$ is the coefficient tensor, meaning that each pixel spectrum $\mathcal{F}(i,j,:)$ is represented by a linear combination of the vector members of the subspace matrix with the corresponding coefficients $\mathcal{X}(i,j,:)$, where $i$ is the spatial horizontal coordinate and $j$ the spatial vertical coordinate of the pixel;
in step 2.5 the coefficient matrix model of the high-resolution hyperspectral image in the low-dimensional subspace is
$F_{(3)} = E X_{(3)}$
where $X_{(3)} \in \mathbb{R}^{D \times W_p H_p}$ is the 3-mode unfolding of the tensor $\mathcal{X}$;
step 2.6: based on steps 2.1 to 2.5, the fusion model of the three kinds of data is established as
$\min_{X_{(3)}} \tfrac{1}{2}\|H_{(3)} - E X_{(3)} B_h S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E X_{(3)} B_m S_m\|_F^2 + \lambda_\phi \|\nabla(R_m E X_{(3)}) - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|X_{(3)}\|_*$
where $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, and $\lambda_m$, $\lambda_\phi$ and $\lambda_l$ are the parameters balancing the respective terms; X(3) is the two-dimensional matrix form of the coefficients.
5. The hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization according to claim 1, characterized in that the specific implementation of step 3 comprises the following sub-steps:
step 3.1: introduce an auxiliary variable O satisfying O = X(3)B_h; an auxiliary variable U satisfying U = X(3)B_m; an auxiliary variable V satisfying V = X(3); an auxiliary variable W satisfying W = R_m E V; and an auxiliary variable Q satisfying Q = X(3); the three-data fusion model is then expressed as
$\min_{X_{(3)},O,U,V,W,Q} \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_*$
s.t. $O = X_{(3)} B_h$, $U = X_{(3)} B_m$, $V = X_{(3)}$, $W = R_m E V$, $Q = X_{(3)}$;
the augmented Lagrangian function of the three-data fusion model is expressed as
$\mathcal{L} = \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \lambda_l \|Q\|_* + \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O + Y_1\|_F^2 + \|X_{(3)} B_m - U + Y_2\|_F^2 + \|X_{(3)} - V + Y_3\|_F^2 + \|R_m E V - W + Y_4\|_F^2 + \|X_{(3)} - Q + Y_5\|_F^2\big)$
where Y_1, Y_2, Y_3, Y_4 and Y_5 are scaled dual variables, $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_{2,1}$ denotes the L2,1 norm, $\|\cdot\|_*$ denotes the nuclear norm, $\lambda_m$, $\lambda_\phi$ and $\lambda_l$ are parameters balancing the respective terms, and $\mu$ is a penalty parameter; X(3) is the two-dimensional matrix form of the coefficients; H(3) is the two-dimensional matrix representation of the low-resolution hyperspectral image; M(3) is the two-dimensional matrix representation of the low-resolution multispectral image; $\tilde{P}_{(3)}$ denotes the high-resolution panchromatic image replicated to $L_m$ bands; $\nabla$ denotes the gradient operator; $B_h$ and $B_m$ are spatial blur matrices; $S_h$ and $S_m$ are spatial down-sampling matrices; $R_m$ is the spectral response matrix of the multispectral instrument; $E$ is the subspace matrix; the optimization of the augmented Lagrangian function can be decomposed into an X sub-problem, an O sub-problem, a U sub-problem, a V sub-problem, a W sub-problem, a Q sub-problem and a Y sub-problem;
step 3.2: solving the X sub-problem:
$X_{(3)}^{t+1} = \arg\min_{X_{(3)}} \tfrac{\mu}{2}\big(\|X_{(3)} B_h - O^t + Y_1^t\|_F^2 + \|X_{(3)} B_m - U^t + Y_2^t\|_F^2 + \|X_{(3)} - V^t + Y_3^t\|_F^2 + \|X_{(3)} - Q^t + Y_5^t\|_F^2\big)$
where the superscript t denotes the iteration number; the solution of the above problem satisfies
$X_{(3)}^{t+1}(B_h B_h^{T} + B_m B_m^{T} + 2I) = (O^t - Y_1^t) B_h^{T} + (U^t - Y_2^t) B_m^{T} + (V^t - Y_3^t) + (Q^t - Y_5^t);$
step 3.3: solving the O sub-problem:
$O^{t+1} = \arg\min_{O} \tfrac{1}{2}\|H_{(3)} - E O S_h\|_F^2 + \tfrac{\mu}{2}\|X_{(3)}^{t+1} B_h - O + Y_1^t\|_F^2;$
splitting O into the columns $O S_h$ selected by the matrix $S_h$ and the columns $O \bar{S}_h$ not selected by $S_h$, the solution of the O sub-problem is
$O^{t+1} S_h = (E^{T} E + \mu I)^{-1}\big(E^{T} H_{(3)} + \mu (X_{(3)}^{t+1} B_h + Y_1^t) S_h\big),$
$O^{t+1} \bar{S}_h = (X_{(3)}^{t+1} B_h + Y_1^t)\,\bar{S}_h;$
step 3.4: solving the U sub-problem:
$U^{t+1} = \arg\min_{U} \tfrac{\lambda_m}{2}\|M_{(3)} - R_m E U S_m\|_F^2 + \tfrac{\mu}{2}\|X_{(3)}^{t+1} B_m - U + Y_2^t\|_F^2;$
splitting U into the columns $U S_m$ selected by the matrix $S_m$ and the columns $U \bar{S}_m$ not selected by $S_m$, the solution of the U sub-problem is
$U^{t+1} S_m = (\lambda_m E^{T} R_m^{T} R_m E + \mu I)^{-1}\big(\lambda_m E^{T} R_m^{T} M_{(3)} + \mu (X_{(3)}^{t+1} B_m + Y_2^t) S_m\big),$
$U^{t+1} \bar{S}_m = (X_{(3)}^{t+1} B_m + Y_2^t)\,\bar{S}_m;$
step 3.5: solving the V sub-problem:
$V^{t+1} = \arg\min_{V} \tfrac{\mu}{2}\|X_{(3)}^{t+1} - V + Y_3^t\|_F^2 + \tfrac{\mu}{2}\|R_m E V - W^t + Y_4^t\|_F^2;$
the solution of the above problem is
$V^{t+1} = (I + E^{T} R_m^{T} R_m E)^{-1}\big(X_{(3)}^{t+1} + Y_3^t + E^{T} R_m^{T} (W^t - Y_4^t)\big);$
step 3.6: solving the W sub-problem:
$W^{t+1} = \arg\min_{W} \lambda_\phi \|\nabla W - \nabla \tilde{P}_{(3)}\|_{2,1} + \tfrac{\mu}{2}\|R_m E V^{t+1} - W + Y_4^t\|_F^2;$
let $G = W - \tilde{P}_{(3)}$; the W sub-problem then becomes
$G^{t+1} = \arg\min_{G} \lambda_\phi \|\nabla G\|_{2,1} + \tfrac{\mu}{2}\|G - (R_m E V^{t+1} + Y_4^t - \tilde{P}_{(3)})\|_F^2,$
which can be solved directly by a vector total variation algorithm, after which $W^{t+1} = G^{t+1} + \tilde{P}_{(3)}$;
step 3.7: solving the Q sub-problem:
$Q^{t+1} = \arg\min_{Q} \lambda_l \|Q\|_* + \tfrac{\mu}{2}\|X_{(3)}^{t+1} - Q + Y_5^t\|_F^2;$
the Q sub-problem can be solved directly by the singular value shrinkage algorithm;
step 3.8: solving the Y sub-problem, which is updated directly by
$Y_1^{t+1} = Y_1^{t} + X_{(3)}^{t+1} B_h - O^{t+1},$
$Y_2^{t+1} = Y_2^{t} + X_{(3)}^{t+1} B_m - U^{t+1},$
$Y_3^{t+1} = Y_3^{t} + X_{(3)}^{t+1} - V^{t+1},$
$Y_4^{t+1} = Y_4^{t} + R_m E V^{t+1} - W^{t+1},$
$Y_5^{t+1} = Y_5^{t} + X_{(3)}^{t+1} - Q^{t+1},$
where O, U, V, W and Q are the auxiliary variables;
step 3.9: the coefficient tensor $\hat{\mathcal{X}}$ is obtained by iterating the above sub-problems, and according to $\mathcal{F} = \mathcal{X} \times_3 E$ of step 2 the high-resolution hyperspectral image $\hat{\mathcal{F}}$ is then obtained.
CN202111019135.9A 2021-09-01 2021-09-01 Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization Pending CN113870159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111019135.9A CN113870159A (en) 2021-09-01 2021-09-01 Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization

Publications (1)

Publication Number Publication Date
CN113870159A true CN113870159A (en) 2021-12-31

Family

ID=78989033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111019135.9A Pending CN113870159A (en) 2021-09-01 2021-09-01 Hyperspectral fusion method based on dynamic gradient group sparsity and low-rank regularization

Country Status (1)

Country Link
CN (1) CN113870159A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998167A (en) * 2022-05-16 2022-09-02 电子科技大学 Hyperspectral and multispectral image fusion method based on space-spectrum combined low rank
CN114998167B (en) * 2022-05-16 2024-04-05 电子科技大学 High-spectrum and multi-spectrum image fusion method based on space-spectrum combined low rank
CN115880199A (en) * 2023-03-03 2023-03-31 湖南大学 Long-wave infrared hyperspectral and multispectral image fusion method, system and medium
CN116402726A (en) * 2023-06-08 2023-07-07 四川工程职业技术学院 Denoising fusion method of hyperspectral-multispectral image
CN116402726B (en) * 2023-06-08 2023-08-22 四川工程职业技术学院 Denoising fusion method of hyperspectral-multispectral image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination