CN115855839A - Improved spatial-spectral fusion hyperspectral computational reconstruction method based on ADMM framework - Google Patents

Improved spatial-spectral fusion hyperspectral computational reconstruction method based on ADMM framework

Info

Publication number
CN115855839A
Authority
CN
China
Prior art keywords
convolution
output
hyperspectral
reconstruction
spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310151149.9A
Other languages
Chinese (zh)
Other versions
CN115855839B (en)
Inventor
王耀南
苏学叁
毛建旭
张辉
朱青
陈煜嵘
刘彩苹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202310151149.9A priority Critical patent/CN115855839B/en
Publication of CN115855839A publication Critical patent/CN115855839A/en
Application granted granted Critical
Publication of CN115855839B publication Critical patent/CN115855839B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework. A medical hyperspectral image is acquired by coded aperture snapshot spectral imaging to obtain a medical hyperspectral measurement, and the measurement is vectorized. The vectorized medical hyperspectral measurement is then solved by optimization using a Lagrange multiplier method with a constraint term, giving an optimization solution expression. The prior term in the optimization solution expression is solved and iterated with an improved spatial-spectral fusion denoising method, and the reconstructed value of the medical hyperspectral image is obtained when the number of iterations reaches a preset count. The inverse problem of computational imaging is reconstructed and solved within an ADMM optimization framework, and a dual constraint term based on hyperspectral image priors is added to restrict the direction of image reconstruction. The hyperspectral image is denoised by the improved spatial-spectral fusion network, and the denoised prior constraint corrects the optimization direction of the ADMM, so that the computational reconstruction achieves a satisfactory effect.

Description

Improved spatial-spectral fusion hyperspectral computational reconstruction method based on ADMM framework
Technical Field
The invention belongs to the technical field of hyperspectral computational reconstruction, and particularly relates to an improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework.
Background
Traditional RGB cameras capture spectral information in only three channels; the remaining bands are filtered out by a Bayer array placed in front of the sensor. RGB cameras therefore have an inherent disadvantage when it comes to exploring spectral information across more bands. For this reason, research on new optical transmission models and image reconstruction methods has become a hot topic in recent years, and sensing high-dimensional spectral signals and building portable spectral cameras are the mainstream of future imaging research.
Compared with conventional RGB images, high-dimensional spectral images have a great advantage in revealing material composition. Natural light is reflected from or transmitted through the surface of a material, and each substance has a different reflectance and transmittance at different wavelengths. By analyzing the ratio of reflected or transmitted light, the material composition can therefore be identified without contact or damage. This is applied in many scenes that use spectral information, such as hyperspectral imaging of medicinal materials for traditional Chinese medicine classification and geographical origin identification, and anomaly detection of drug samples.
Hyperspectral imaging therefore has broad application prospects. However, hyperspectral cameras are complex to design, and the spectral information they acquire for the same scene is dozens of times that of an RGB camera. Hyperspectral computational imaging is therefore of great significance for handling this imaging problem and making it practical for daily life. Computational imaging has a long history. Researchers have proposed pixel-level filter-array cameras with an extremely compact design that perform spectral filtering at the CCD level; this technique places high demands on the fabrication of pixel-level filters and imposes a trade-off between spatial and spectral resolution, so filter-array snapshot imaging has not been widely adopted. The other approach is CASSI, in which a coded aperture plate is added behind the objective lens to compress the spatial information; a dispersive element then spreads the composite light, so that the encoded spatial information is dispersed into multiple bands and superimposed on the detector. Inspired by compressive sensing, CASSI can efficiently reconstruct a hyperspectral image from a small number of measurements. However, CASSI still has shortcomings: although a single measurement is small, the result is limited by the computational reconstruction algorithm, and the reconstruction is often of poor quality or blurred. Mainstream methods for reconstructing CASSI images currently include iterative-optimization computational reconstruction algorithms and end-to-end deep-learning computational reconstruction algorithms; the former are model-driven, with limited reconstruction quality that cannot be adjusted flexibly, while the latter are learning-based, easily constrained by the characteristics of the training set and prone to poor generalization. A method suitable for medical hyperspectral computational reconstruction is therefore urgently needed.
Disclosure of Invention
In view of these technical problems, the invention provides an improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework.
The technical solution adopted by the invention to solve the technical problem is as follows:
An improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework comprises the following steps:
s100: medicine highlightSpectral image X is belonged to R H×W×B After coded aperture snapshot type spectral imaging, a medical hyperspectral measured value Y epsilon R is obtained H×(W+B-1) Vectorizing the medical hyperspectral measured value to obtain a vectorized medical hyperspectral measured value y; wherein, H, W and B respectively represent the height, width and spectral band number of the image;
s200: performing optimization solution on the medical hyperspectral measured value y subjected to the vector quantization by adopting a Lagrange multiplier method with constraint terms to obtain an optimization solution expression;
s300: solving and iterating the prior terms in the optimization solving expression by using an improved space-spectrum fusion denoising method, and obtaining a reconstruction value of the medical hyperspectral image when the iteration times reach preset times
Figure SMS_1
Preferably, S100 includes:
s110: medical hyperspectral image X belongs to R H×W×B After coded aperture snapshot type spectral imaging, a medical hyperspectral measured value Y epsilon R is obtained H×(W+B-1) Defining the coding matrix as H, and defining the functional relationship between the medicine hyperspectral measured value Y and the medicine hyperspectral data X as follows:
Figure SMS_2
(1)
wherein ,X∈RH×W×B ,Y∈R H×(W+B-1)
S120: when calculating, the image in the form of matrix is vectorized, and the vectorized functional relation can be obtained:
Figure SMS_3
(2)
let N = H × W × B, M = H × (W + B-1), then
Figure SMS_4
,/>
Figure SMS_5
,/>
Figure SMS_6
Represents the vectorized medical hyperspectral measurement value->
Figure SMS_7
And representing the vectorized medical hyperspectral image.
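To make the forward model of formulas (1)-(2) concrete, the following sketch simulates a CASSI-style measurement: a coded aperture mask modulates each band, each band is sheared along the width by its band index, and the sheared bands are summed on the detector before vectorization. This is an illustrative assumption rather than the patent's reference implementation; the function names, mask generation and one-pixel-per-band shear are assumptions.

```python
import numpy as np

def cassi_forward(X, mask):
    """Simulate a CASSI measurement: mask each band, shear it by its band
    index along the width, and sum all bands on the detector.
    X: (H, W, B) hyperspectral cube; mask: (H, W) coded aperture."""
    H, W, B = X.shape
    Y = np.zeros((H, W + B - 1))
    for b in range(B):
        Y[:, b:b + W] += mask * X[:, :, b]   # shift-and-sum per band
    return Y

# Illustrative example with a random binary aperture (assumption).
H, W, B = 64, 64, 31
X = np.random.rand(H, W, B)
mask = (np.random.randn(H, W) > 0).astype(float)
Y = cassi_forward(X, mask)
y = Y.reshape(-1)   # vectorized measurement, formula (2)
x = X.reshape(-1)   # vectorized hyperspectral image
print(Y.shape, y.shape, x.shape)   # (64, 94) (6016,) (126976,)
```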
Preferably, S200 is specifically:
$\hat{x} = \arg\min_{x} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(x)$ (3)
where $g(\cdot)$ is used to express the inherent prior knowledge of the medical hyperspectral data $X$, and $\lambda$ denotes a Lagrange multiplier.
Preferably, solving the prior term in the optimization solution expression with the improved spatial-spectral fusion denoising method in S300 includes:
S310: let $v$ be an auxiliary dual variable, and let the prior constraint on the variable $v$ take the form of a function $g(v)$; the optimization solution expression is then written as:
$(\hat{x}, \hat{v}) = \arg\min_{x,\,v} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(v), \quad \text{s.t. } x = v$ (4)
The augmented Lagrange function of formula (4) is:
$L_{\rho}(x, v, u) = \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(v) + \tfrac{\rho}{2}\left\| x - v + u \right\|_2^2$ (5)
where $u$ in formula (5) denotes the (scaled) Lagrange multiplier and $\rho$ denotes a penalty factor;
S320: after decomposing formula (5) into sub-problems, the sub-problem solving functions of the variables $x$, $v$ and $u$ are obtained:
$x^{k+1} = \arg\min_{x} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \tfrac{\rho}{2}\left\| x - (v^{k} - u^{k}) \right\|_2^2$ (6)
$v^{k+1} = \arg\min_{v} \lambda\, g(v) + \tfrac{\rho}{2}\left\| (x^{k+1} + u^{k}) - v \right\|_2^2$ (7)
$u^{k+1} = u^{k} + (x^{k+1} - v^{k+1})$ (8)
s330: in pair (6)
Figure SMS_20
A partial derivative is determined and made 0, a variable->
Figure SMS_21
Analytic solution form of (2):
Figure SMS_22
(9)
s340: for the matrix inversion in equation (9), the conversion is performed using the Woodbury matrix theorem:
Figure SMS_23
(10)
s350: substituting equation (10) into equation (9) to obtain a variable
Figure SMS_24
Iterative function form of (c):
Figure SMS_25
(11)
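The x-update of formula (11) is attractive because, for a CASSI-type sensing matrix, $HH^{\top}$ is diagonal, so the matrix inversion reduces to an element-wise division. The sketch below shows one way this update could be implemented with an explicit sparse sensing matrix; it is an illustrative assumption consistent with formulas (9)-(11), not the patent's reference implementation, and `build_H` assumes the shift-and-sum forward model sketched earlier.

```python
import numpy as np
import scipy.sparse as sp

def build_H(mask, B):
    """Assemble the sparse sensing matrix H (M x N) implied by cassi_forward."""
    Hh, W = mask.shape
    M, N = Hh * (W + B - 1), Hh * W * B
    rows, cols, vals = [], [], []
    for b in range(B):
        for i in range(Hh):
            for j in range(W):
                rows.append(i * (W + B - 1) + j + b)   # sheared detector pixel
                cols.append((i * W + j) * B + b)       # voxel index in vec(X)
                vals.append(mask[i, j])
    return sp.csr_matrix((vals, (rows, cols)), shape=(M, N))

def x_update(H, y, v, u, rho):
    """Formula (11): x = (v - u) + H^T (H H^T + rho I)^(-1) (y - H (v - u)).
    For CASSI each voxel hits exactly one detector pixel, so H H^T is
    diagonal and the inverse becomes an element-wise division."""
    b = v - u
    HHt_diag = np.asarray(H.multiply(H).sum(axis=1)).ravel()  # diag(H H^T)
    return b + H.T @ ((y - H @ b) / (HHt_diag + rho))
```

With `H = build_H(mask, B)`, a single call `x_update(H, y, v, u, rho)` performs one iteration of formula (11).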
s360: solution method for auxiliary dual variable by adopting de-noising network
Figure SMS_26
And carrying out iterative solution.
Preferably, S360 includes:
the iterate $x^{k+1} + u^{k}$ obtained from formula (11) is taken as the input to the denoising network $\mathcal{D}(\cdot)$; the iteration form of the variable $v$ is then:
$v^{k+1} = \mathcal{D}\!\left( x^{k+1} + u^{k} \right)$ (12)
preferably, the denoising network comprises a spectrum denoising module, a space denoising module and a spectrum reconstruction module which are connected in sequence, wherein the spectrum denoising module is used for extracting the spectral characteristics of the input image, the space denoising module is used for extracting the spatial characteristics, and the spectrum reconstruction module is used for reconstructing the spectrum after denoising information is completed.
Preferably, the spectral denoising module is used to extract the spectral features of the input image, specifically:
the matrix $A$ to be denoised (the input to the denoising network reshaped into a matrix) is converted by SVD decomposition into:
$A = U \Sigma V^{\top}$ (13)
where $U$ denotes an m-order orthogonal matrix, $\Sigma$ denotes a non-negative diagonal matrix, and $V$ denotes an n-order orthogonal matrix.
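The following sketch illustrates how an SVD of the band-unfolded cube separates a spectral basis from spatial coefficient images, which is the decomposition used in formula (13). Reshaping the cube to an $HW \times B$ matrix and the optional truncation rank are assumptions for illustration; `spectral_reconstruct` mirrors the spectral reconstruction step of formula (15).

```python
import numpy as np

def spectral_decompose(cube, rank=None):
    """Formula (13): unfold the (H, W, B) cube into an HW x B matrix and
    take its SVD, A = U @ diag(s) @ Vt. The rows of Vt span the spectral
    subspace; U * s are spatial coefficient ("eigen") images to be denoised."""
    H, W, B = cube.shape
    A = cube.reshape(H * W, B)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if rank is not None:                      # optional spectral truncation
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    return U, s, Vt

def spectral_reconstruct(U_denoised, s, Vt, shape):
    """Formula (15): map denoised coefficients back to the full spectrum."""
    return (U_denoised @ np.diag(s) @ Vt).reshape(shape)
```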
Preferably, the spatial denoising module is used to extract spatial features, and includes:
let $Z = U\Sigma$ denote the spatial coefficient images obtained from formula (13); with the spatial denoising module defined as the function $S(\cdot)$, then:
$\hat{Z} = S(Z)$ (14)
The network structure of the spatial denoising module comprises a feature extraction layer, a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer, a sixth coding layer, a first decoding layer, a second decoding layer, a third decoding layer and a reconstruction layer.
The input of the feature extraction layer is X0, and features are extracted by a first 3D convolution to obtain the output X1; the number of input channels is 1, the number of output channels of the first 3D convolution is 16, the convolution kernel size is (3, 3), the stride is (1, 1), and the zero padding is (1, 1).
The first coding layer concatenates the output X1 and the input X0 of the feature extraction layer along the channel dimension through a skip connection, and applies a second 3D convolution to the concatenated value to obtain the output X2; the second 3D convolution has 17 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The second coding layer concatenates X1, X0 and the output X2 of the first coding layer along the channel dimension through a skip connection, and applies a third 3D convolution to the concatenated value to obtain the output X3; the third 3D convolution has 34 input channels, 68 output channels, kernel size (3, 3), stride (1, 2), and zero padding (1, 1).
The third coding layer feeds X3 into a fourth 3D convolution to obtain the output X4; the fourth 3D convolution has 68 input channels, 68 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The fourth coding layer concatenates X3 and X4 along the channel dimension through a skip connection, and applies a fifth 3D convolution to the concatenated value to obtain the output X5; the fifth 3D convolution has 136 input channels, 136 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The fifth coding layer concatenates X3, X4 and X5 along the channel dimension through a skip connection, and applies a sixth 3D convolution to the concatenated value to obtain the output X6; the sixth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The sixth coding layer feeds X6 into a seventh 3D convolution to obtain the output X7; the seventh 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The first decoding layer feeds X7 into an eighth 3D convolution to obtain the output X8; the eighth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
X8 and X6 are added to obtain X9.
The second decoding layer feeds X9 into a first up-sampling 3D convolution to obtain the output X10; the first up-sampling 3D convolution has 272 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2).
X10 and X2 are added to obtain X11.
The third decoding layer feeds X11 into a second up-sampling 3D convolution to obtain the output X12; the second up-sampling 3D convolution has 17 input channels, 16 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2).
X12 is fed into a non-local self-similarity module, and the output of the non-local self-similarity module is added to X1 to obtain X13.
The reconstruction layer feeds X13 into a ninth 3D convolution to obtain the output X14; the ninth 3D convolution has 16 input channels, 1 output channel, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
X14 and X0 are added to obtain X15, which is the output of the network structure of the spatial denoising module; X15 is assigned to $\hat{Z}$. A code sketch of this structure is given below.
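For illustration, the following PyTorch sketch mirrors the channel counts and skip pattern of the spatial denoising module described above. It is a simplified assumption rather than the patent's exact network: the third element of each 3D kernel/stride/padding tuple is not given in the text, so unit strides with no down- or up-sampling are used throughout so that every skip addition is shape-consistent, and the non-local self-similarity module is sketched as a standard embedded-Gaussian non-local block.

```python
import torch
import torch.nn as nn

class NonLocalBlock3D(nn.Module):
    """Simple non-local self-similarity block (embedded-Gaussian form)."""
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv3d(ch, ch // 2, 1)
        self.phi = nn.Conv3d(ch, ch // 2, 1)
        self.g = nn.Conv3d(ch, ch // 2, 1)
        self.out = nn.Conv3d(ch // 2, ch, 1)

    def forward(self, x):
        b, c, *dims = x.shape
        t = self.theta(x).flatten(2)                          # (b, c/2, N)
        p = self.phi(x).flatten(2)
        g = self.g(x).flatten(2)
        attn = torch.softmax(t.transpose(1, 2) @ p, dim=-1)   # (b, N, N)
        y = (g @ attn.transpose(1, 2)).view(b, c // 2, *dims)
        return x + self.out(y)                                # residual connection

def conv3d(cin, cout):
    # Kernel/stride/padding triples are assumptions; unit strides keep all
    # feature maps the same size so the residual additions match.
    return nn.Conv3d(cin, cout, kernel_size=3, stride=1, padding=1)

class SpatialDenoiser(nn.Module):
    """Simplified sketch of the spatial denoising module S(.): dense skip
    concatenations in the coding layers, residual additions in the decoding
    layers, a non-local block, and a global residual from the input X0."""
    def __init__(self):
        super().__init__()
        self.feat = conv3d(1, 16)        # X0 -> X1
        self.enc1 = conv3d(17, 17)       # [X1, X0] -> X2
        self.enc2 = conv3d(34, 68)       # [X1, X0, X2] -> X3
        self.enc3 = conv3d(68, 68)       # X3 -> X4
        self.enc4 = conv3d(136, 136)     # [X3, X4] -> X5
        self.enc5 = conv3d(272, 272)     # [X3, X4, X5] -> X6
        self.enc6 = conv3d(272, 272)     # X6 -> X7
        self.dec1 = conv3d(272, 272)     # X7 -> X8
        self.dec2 = conv3d(272, 17)      # X9 -> X10 (up-sampling omitted)
        self.dec3 = conv3d(17, 16)       # X11 -> X12
        self.nonlocal_block = NonLocalBlock3D(16)
        self.recon = conv3d(16, 1)       # X13 -> X14

    def forward(self, x0):               # x0: (batch, 1, bands, H, W)
        x1 = self.feat(x0)
        x2 = self.enc1(torch.cat([x1, x0], dim=1))
        x3 = self.enc2(torch.cat([x1, x0, x2], dim=1))
        x4 = self.enc3(x3)
        x5 = self.enc4(torch.cat([x3, x4], dim=1))
        x6 = self.enc5(torch.cat([x3, x4, x5], dim=1))
        x7 = self.enc6(x6)
        x8 = self.dec1(x7)
        x9 = x8 + x6
        x10 = self.dec2(x9)
        x11 = x10 + x2
        x12 = self.dec3(x11)
        x13 = self.nonlocal_block(x12) + x1
        x14 = self.recon(x13)
        return x14 + x0                  # X15: global residual output
```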
Preferably, the spectral reconstruction module is used to reconstruct the spectrum after denoising is completed, specifically:
$\hat{A} = \hat{Z}\, V^{\top} = S(U\Sigma)\, V^{\top}$ (15)
Preferably, the spectral reconstruction module being used to reconstruct the spectrum after denoising is completed further includes:
the value of formula (15) is assigned to the auxiliary dual variable and substituted into formula (12), giving
$v^{k+1} = \operatorname{vec}\!\left( S(U\Sigma)\, V^{\top} \right)$ (16)
Formulas (11), (16) and (8) are iterated in sequence until the number of iterations reaches the preset count, and the final output is the reconstructed value $\hat{x}$ of the medical hyperspectral image.
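Putting the pieces together, the overall reconstruction loop alternates formulas (11), (16) and (8). The following sketch is an illustrative assumption of how that loop could look, reusing the helpers from the earlier sketches (`build_H`, `x_update`, `spectral_decompose`) and treating the spatial denoiser as an abstract callable; `rho`, `n_iters` and `rank` are placeholder parameters, not values taken from the patent.

```python
import numpy as np

def admm_reconstruct(H, y, shape, spatial_denoiser, rho=1.0, n_iters=50, rank=8):
    """Plug-and-play ADMM loop: x-update (11), denoising v-update (16),
    multiplier update (8). `spatial_denoiser` maps coefficient images
    (H, W, rank) -> (H, W, rank)."""
    Hh, W, B = shape
    x = H.T @ y                      # crude initialization
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iters):
        x = x_update(H, y, v, u, rho)                       # formula (11)
        cube = (x + u).reshape(Hh, W, B)
        U, s, Vt = spectral_decompose(cube, rank=rank)      # formula (13)
        Z = (U * s).reshape(Hh, W, -1)                      # Z = U @ diag(s)
        Z_hat = spatial_denoiser(Z)                         # formula (14)
        v = (Z_hat.reshape(Hh * W, -1) @ Vt).reshape(-1)    # formulas (15)-(16)
        u = u + (x - v)                                     # formula (8)
    return v.reshape(shape)

# Example (identity denoiser as a stand-in for the trained network):
# x_hat = admm_reconstruct(build_H(mask, B), y, (64, 64, 31), lambda Z: Z)
```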
The method reconstructs and solves the inverse problem of computational imaging based on an alternating direction method of multipliers (ADMM) optimization framework, and adds a dual constraint term based on hyperspectral image priors to restrict the direction of image reconstruction. The prior constraint is solved with a data-driven image denoising method: the hyperspectral image is denoised by an improved spatial-spectral fusion network, which denoises well on a traditional Chinese medicinal material hyperspectral data set, and the denoised prior constraint corrects the optimization direction of the ADMM, so the computational reconstruction achieves a satisfactory effect.
Drawings
FIG. 1 is a flowchart of an improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the denoising network for the auxiliary dual variable according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a hyperspectral dataset of a traditional Chinese medicine according to an embodiment of the invention;
FIG. 4 is a diagram illustrating the effect of some experiments in accordance with one embodiment of the present invention; fig. 4 (a) shows a spectral image calculation reconstruction effect map of a violet wavelength, fig. 4 (b) shows a spectral image calculation reconstruction effect map of a blue wavelength, fig. 4 (c) shows a spectral image calculation reconstruction effect map of a cyan wavelength, fig. 4 (d) shows a spectral image calculation reconstruction effect map of a green wavelength, fig. 4 (e) shows a spectral image calculation reconstruction effect map of a yellow wavelength, fig. 4 (f) shows a spectral image calculation reconstruction effect map of an orange wavelength, and fig. 4 (g) shows a spectral image calculation reconstruction effect map of a red wavelength.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is further described in detail below with reference to the accompanying drawings.
In one embodiment, as shown in FIG. 1, an improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework comprises the following steps:
S100: a medical hyperspectral image $X \in \mathbb{R}^{H\times W\times B}$ is subjected to coded aperture snapshot spectral imaging to obtain a medical hyperspectral measurement $Y \in \mathbb{R}^{H\times (W+B-1)}$; the medical hyperspectral measurement is vectorized to obtain the vectorized medical hyperspectral measurement $y$; here $H$, $W$ and $B$ denote the height, width and number of spectral bands of the image, respectively;
S200: the vectorized medical hyperspectral measurement $y$ is solved by optimization using a Lagrange multiplier method with a constraint term, giving an optimization solution expression;
S300: the prior term in the optimization solution expression is solved and iterated with an improved spatial-spectral fusion denoising method, and the reconstructed value $\hat{x}$ of the medical hyperspectral image is obtained when the number of iterations reaches a preset count.
Specifically, hyperspectral data cover a wide band range and are large in volume, so image data with both high spatial resolution and a wide band range cannot be acquired simultaneously. Inspired by compressed sensing, coded aperture snapshot spectral imaging (CASSI) has been widely applied. The invention performs computational reconstruction of the spectral measurement obtained by imaging on the basis of the CASSI imaging model.
The method reconstructs and solves the inverse problem of computational imaging based on an alternating direction method of multipliers (ADMM) optimization framework, and adds a dual constraint term based on hyperspectral image priors to restrict the direction of image reconstruction. The prior constraint is solved with a data-driven image denoising method: the hyperspectral image is denoised by an improved spatial-spectral fusion network, which denoises well on a traditional Chinese medicinal material hyperspectral data set, and the denoised prior constraint corrects the optimization direction of the ADMM, so the computational reconstruction achieves a satisfactory effect. The method is suitable for rapid hyperspectral computational imaging in pharmaceutical, food and medical scenes, and can be extended to fields such as hyperspectral video imaging on unmanned aerial vehicles.
In one embodiment, S100 includes:
s110: medical hyperspectral image X belongs to R H×W×B After coded aperture snapshot type spectral imaging, a medical hyperspectral measured value Y epsilon R is obtained H×(W+B-1) The coding matrix is defined as H, and the functional relationship between the medical hyperspectral measured value Y and the medical hyperspectral data X can be defined as:
Figure SMS_39
(1)
wherein ,X∈RH×W×B ,Y∈R H×(W+B-1)
S120: when calculating, the image in the form of matrix is vectorized, and the vectorized functional relation can be obtained:
Figure SMS_40
(2)
let N = H × W × B, M = H × (W + B-1), then
Figure SMS_41
,/>
Figure SMS_42
,/>
Figure SMS_43
Represents a vectorized medical hyperspectral measured value>
Figure SMS_44
And representing the vectorized medical hyperspectral image.
Specifically, reconstructing the medical hyperspectral measurement $Y$ requires consideration of the forward model of computational imaging. When a medical sample is imaged, the coded aperture plate of the CASSI imaging system compresses and encodes the spatial information from the scene, and the aperture matrix distribution of the coded aperture plate obeys a Gaussian normal distribution. The coding matrix is generally defined as $H$, and the functional relationship between the measurement $Y$ of the imaging system and the original medical hyperspectral data $X$ is as defined above.
In one embodiment, S200 is specifically:
$\hat{x} = \arg\min_{x} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(x)$ (3)
where $g(\cdot)$ is used to express the inherent prior knowledge of the medical hyperspectral data $X$, and $\lambda$ denotes a Lagrange multiplier.
Specifically, solving equation (2) is a typical underdetermined problem, and the solution is optimized with a Lagrange multiplier method carrying a constraint term. $g(X)$ represents the inherent prior knowledge of the medical hyperspectral data $X$, such as sparsity and low rank, and $\lambda$ represents a Lagrange multiplier used to balance the influence of the constraint term.
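As a small illustration of such prior terms (not part of the claimed method), sparsity can be measured with an L1 norm and low rank with a nuclear norm; the band-unfolding used below is an assumption for illustration.

```python
import numpy as np

def sparsity_prior(x):
    """g(x) as an L1 sparsity measure of the vectorized image."""
    return np.abs(x).sum()

def low_rank_prior(cube):
    """g(X) as the nuclear norm of the band-unfolded (HW x B) matrix."""
    A = cube.reshape(-1, cube.shape[-1])
    return np.linalg.svd(A, compute_uv=False).sum()
```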
In one embodiment, solving the prior term in the optimization solution expression with the improved spatial-spectral fusion denoising method in S300 includes:
S310: let $v$ be an auxiliary dual variable, and let the prior constraint on the variable $v$ take the form of a function $g(v)$; the optimization solution expression is then written as:
$(\hat{x}, \hat{v}) = \arg\min_{x,\,v} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(v), \quad \text{s.t. } x = v$ (4)
The augmented Lagrange function of formula (4) is:
$L_{\rho}(x, v, u) = \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(v) + \tfrac{\rho}{2}\left\| x - v + u \right\|_2^2$ (5)
where $u$ in formula (5) denotes the (scaled) Lagrange multiplier and $\rho$ denotes a penalty factor;
S320: after decomposing formula (5) into sub-problems, the sub-problem solving functions of the variables $x$, $v$ and $u$ are obtained:
$x^{k+1} = \arg\min_{x} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \tfrac{\rho}{2}\left\| x - (v^{k} - u^{k}) \right\|_2^2$ (6)
$v^{k+1} = \arg\min_{v} \lambda\, g(v) + \tfrac{\rho}{2}\left\| (x^{k+1} + u^{k}) - v \right\|_2^2$ (7)
$u^{k+1} = u^{k} + (x^{k+1} - v^{k+1})$ (8)
S330: taking the partial derivative of formula (6) with respect to $x$ and setting it to 0 gives the analytic solution of the variable $x$:
$x^{k+1} = \left( H^{\top}H + \rho I \right)^{-1}\left( H^{\top}y + \rho\,(v^{k} - u^{k}) \right)$ (9)
S340: the matrix inversion in formula (9) is converted using the Woodbury matrix identity:
$\left( H^{\top}H + \rho I \right)^{-1} = \rho^{-1} I - \rho^{-1} H^{\top}\left( I + \rho^{-1} H H^{\top} \right)^{-1} H \rho^{-1}$ (10)
S350: substituting formula (10) into formula (9) gives the iterative function form of the variable $x$:
$x^{k+1} = (v^{k} - u^{k}) + H^{\top}\left( H H^{\top} + \rho I \right)^{-1}\left( y - H\,(v^{k} - u^{k}) \right)$ (11)
S360: the auxiliary dual variable $v$ is solved iteratively with a denoising network.
Specifically, when facing a specific imaging background, a single model-based method generally cannot achieve high-precision image reconstruction with the imaging model. The invention therefore adopts a data-driven image prior, i.e., an improved spatial-spectral fusion denoising method is used to optimize the prior term.
In one embodiment, S360 includes:
the iterate $x^{k+1} + u^{k}$ obtained from formula (11) is taken as the input to the denoising network $\mathcal{D}(\cdot)$; the iteration form of the variable $v$ is then:
$v^{k+1} = \mathcal{D}\!\left( x^{k+1} + u^{k} \right)$ (12)
in one embodiment, the denoising network comprises a spectrum denoising module, a space denoising module and a spectrum reconstruction module which are connected in sequence, wherein the spectrum denoising module is used for extracting the spectral characteristics of an input image, the space denoising module is used for extracting the spatial characteristics, and the spectrum reconstruction module is used for reconstructing the spectrum after denoising information is completed.
Specifically, the network structure of the denoising network $\mathcal{D}(\cdot)$ is shown in FIG. 2.
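As an illustrative composition consistent with formulas (13)-(15) (an assumption, not the patent's reference implementation), the denoising network $\mathcal{D}(\cdot)$ of formula (12) can be viewed as SVD-based spectral decomposition, spatial denoising of the coefficient images, and spectral reconstruction. The sketch below wraps the earlier helpers into one callable; the rank and the NumPy/PyTorch conversion are assumptions.

```python
import numpy as np
import torch

def make_denoiser(spatial_net, rank=8):
    """Wrap spectral decomposition (13), spatial denoising (14) and spectral
    reconstruction (15) into a single callable D(cube) for formula (12)."""
    spatial_net.eval()

    def denoise(cube):                       # cube: (H, W, B) numpy array
        H, W, B = cube.shape
        U, s, Vt = spectral_decompose(cube, rank=rank)
        Z = (U * s).reshape(H, W, -1)        # spatial coefficient images
        with torch.no_grad():                # run the spatial 3D-CNN on Z
            t = torch.from_numpy(Z.transpose(2, 0, 1).copy()).float()[None, None]
            Z_hat = spatial_net(t)[0, 0].numpy().transpose(1, 2, 0)
        return (Z_hat.reshape(H * W, -1) @ Vt).reshape(H, W, B)

    return denoise
```

For example, `make_denoiser(SpatialDenoiser())` returns a function that realizes $\mathcal{D}(\cdot)$ of formula (12) on a full cube.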
In one embodiment, the spectral denoising module is used to extract the spectral features of the input image, specifically:
the matrix $A$ to be denoised (the input to the denoising network reshaped into a matrix) is converted by SVD decomposition into:
$A = U \Sigma V^{\top}$ (13)
where $U$ denotes an m-order orthogonal matrix, $\Sigma$ denotes a non-negative diagonal matrix, and $V$ denotes an n-order orthogonal matrix.
In one embodiment, the spatial denoising module is used to extract spatial features, and includes:
let $Z = U\Sigma$ denote the spatial coefficient images obtained from formula (13); with the spatial denoising module defined as the function $S(\cdot)$, then:
$\hat{Z} = S(Z)$ (14)
The network structure of the spatial denoising module comprises a feature extraction layer, a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer, a sixth coding layer, a first decoding layer, a second decoding layer, a third decoding layer and a reconstruction layer.
The input of the feature extraction layer is X0, and features are extracted by a first 3D convolution to obtain the output X1; the number of input channels is 1, the number of output channels of the first 3D convolution is 16, the convolution kernel size is (3, 3), the stride is (1, 1), and the zero padding is (1, 1).
The first coding layer concatenates the output X1 and the input X0 of the feature extraction layer along the channel dimension through a skip connection, and applies a second 3D convolution to the concatenated value to obtain the output X2; the second 3D convolution has 17 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The second coding layer concatenates X1, X0 and the output X2 of the first coding layer along the channel dimension through a skip connection, and applies a third 3D convolution to the concatenated value to obtain the output X3; the third 3D convolution has 34 input channels, 68 output channels, kernel size (3, 3), stride (1, 2), and zero padding (1, 1).
The third coding layer feeds X3 into a fourth 3D convolution to obtain the output X4; the fourth 3D convolution has 68 input channels, 68 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The fourth coding layer concatenates X3 and X4 along the channel dimension through a skip connection, and applies a fifth 3D convolution to the concatenated value to obtain the output X5; the fifth 3D convolution has 136 input channels, 136 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The fifth coding layer concatenates X3, X4 and X5 along the channel dimension through a skip connection, and applies a sixth 3D convolution to the concatenated value to obtain the output X6; the sixth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The sixth coding layer feeds X6 into a seventh 3D convolution to obtain the output X7; the seventh 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
The first decoding layer feeds X7 into an eighth 3D convolution to obtain the output X8; the eighth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
X8 and X6 are added to obtain X9.
The second decoding layer feeds X9 into a first up-sampling 3D convolution to obtain the output X10; the first up-sampling 3D convolution has 272 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2).
X10 and X2 are added to obtain X11.
The third decoding layer feeds X11 into a second up-sampling 3D convolution to obtain the output X12; the second up-sampling 3D convolution has 17 input channels, 16 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2).
X12 is fed into a non-local self-similarity module, and the output of the non-local self-similarity module is added to X1 to obtain X13.
The reconstruction layer feeds X13 into a ninth 3D convolution to obtain the output X14; the ninth 3D convolution has 16 input channels, 1 output channel, kernel size (3, 3), stride (1, 1), and zero padding (1, 1).
X14 and X0 are added to obtain X15, which is the output of the network structure of the spatial denoising module; X15 is assigned to $\hat{Z}$.
In one embodiment, the spectral reconstruction module is used to reconstruct the spectrum after denoising is completed, specifically:
$\hat{A} = \hat{Z}\, V^{\top} = S(U\Sigma)\, V^{\top}$ (15)
in one embodiment, the spectrum reconstruction module is configured to reconstruct the spectrum after the denoising information is completed, and includes:
the value of the formula (15) is assigned to the auxiliary dual variable and substituted into the formula (12) to obtain
Figure SMS_74
(16)
Sequentially iterating and finishing the formulas (11), (16) and (8) until the iteration times reach the preset times, and outputting the final value to obtain the reconstruction value of the medical hyperspectral image
Figure SMS_75
The medical hyperspectral computational reconstruction experiment of the invention uses the traditional Chinese medicine hyperspectral data set shown in FIG. 3, which contains imaging data of traditional Chinese medicinal materials such as immature bitter orange, tuckahoe and Chinese yam.
Using the traditional Chinese medicinal material hyperspectral data shown in FIG. 3, a simulation test is carried out with the CASSI forward mathematical imaging model, and the measurement is then reconstructed with the effect shown in FIG. 4.
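A hypothetical end-to-end simulation in the spirit of this experiment, reusing the earlier sketches (`cassi_forward`, `build_H`, `admm_reconstruct`), is shown below; all names, sizes and parameter values are illustrative, and the trained network weights are not reproduced here.

```python
import numpy as np

# Simulate a CASSI measurement of a (64, 64, 31) test cube and reconstruct it.
H_img, W_img, B = 64, 64, 31
cube_gt = np.random.rand(H_img, W_img, B)          # stand-in for a real sample
mask = (np.random.randn(H_img, W_img) > 0).astype(float)

Y = cassi_forward(cube_gt, mask)                    # forward model, formula (1)
y = Y.reshape(-1)                                   # vectorized measurement
H = build_H(mask, B)                                # sparse sensing matrix

# Identity denoiser as a stand-in for the trained spatial-spectral network.
x_hat = admm_reconstruct(H, y, (H_img, W_img, B),
                         spatial_denoiser=lambda Z: Z, rho=1.0, n_iters=20)
psnr = 10 * np.log10(1.0 / np.mean((x_hat - cube_gt) ** 2))
print(f"reconstruction PSNR: {psnr:.2f} dB")
```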
The invention reconstructs and solves the inverse problem of computational imaging based on an alternating direction method of multipliers (ADMM) optimization framework, and adds a dual constraint term based on hyperspectral image priors to restrict the direction of image reconstruction. The prior constraint is solved with a data-driven image denoising method: the hyperspectral image is denoised by an improved spatial-spectral fusion network, which denoises well on a traditional Chinese medicinal material hyperspectral data set, and the denoised prior constraint corrects the optimization direction of the ADMM, so the computational reconstruction achieves a satisfactory effect. The method can be applied to fields such as hyperspectral imaging inspection of medicines, and achieves fast and accurate imaging.
The improved spatial-spectral fusion hyperspectral computational reconstruction method based on the ADMM framework provided by the invention has been described in detail above. The principles and embodiments of the invention are explained herein using specific examples, which are presented only to assist in understanding the core concept of the invention. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the invention.

Claims (10)

1. An improved spatial-spectral fusion hyperspectral computational reconstruction method based on an ADMM framework, characterized by comprising the following steps:
S100: a medical hyperspectral image $X \in \mathbb{R}^{H\times W\times B}$ is subjected to coded aperture snapshot spectral imaging to obtain a medical hyperspectral measurement $Y \in \mathbb{R}^{H\times (W+B-1)}$; the medical hyperspectral measurement is vectorized to obtain the vectorized medical hyperspectral measurement $y$; here $H$, $W$ and $B$ denote the height, width and number of spectral bands of the image, respectively;
S200: the vectorized medical hyperspectral measurement $y$ is solved by optimization using a Lagrange multiplier method with a constraint term, giving an optimization solution expression;
S300: the prior term in the optimization solution expression is solved and iterated with an improved spatial-spectral fusion denoising method, and the reconstructed value $\hat{x}$ of the medical hyperspectral image is obtained when the number of iterations reaches a preset count.
2. The method of claim 1, wherein S100 comprises:
s110: medical hyperspectral image X belongs to R H×W×B After coded aperture snapshot type spectral imaging, a medical hyperspectral measured value Y epsilon R is obtained H×(W+B-1) Defining the coding matrix as H, and defining the functional relationship between the medicine hyperspectral measured value Y and the medicine hyperspectral data X as follows:
Figure QLYQS_2
(1)
wherein ,X∈RH×W×B ,Y∈R H×(W+B-1)
S120: when calculating, the image in the form of matrix is vectorized, and the vectorized functional relation can be obtained:
Figure QLYQS_3
(2)
let N = H × W × B, M = H × (W + B-1), then
Figure QLYQS_4
,/>
Figure QLYQS_5
,/>
Figure QLYQS_6
Represents the vectorized medical hyperspectral measurement value->
Figure QLYQS_7
And representing the vectorized medical hyperspectral image.
3. The method according to claim 2, wherein S200 is specifically:
$\hat{x} = \arg\min_{x} \tfrac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda\, g(x)$ (3)
where $g(\cdot)$ is used to express the inherent prior knowledge of the medical hyperspectral data $X$, and $\lambda$ denotes a Lagrange multiplier.
4. The method of claim 3, wherein solving the prior terms in the optimized solution expression using the improved spatio-spectral fusion denoising method in S300 comprises:
s310: let v be an auxiliary dual variable, the prior constrained form of the variable v be a function g (v), and the optimization solution expression is written in the form:
Figure QLYQS_10
(4)
in this case, the augmented lagrange function of equation (4) is:
Figure QLYQS_11
(5)
wherein (5) is
Figure QLYQS_12
Represents a Lagrangian multiplier, <' > or>
Figure QLYQS_13
Representing a penalty factor;
s320: after the sub-problem decomposition is carried out on the formula (5), variables can be obtained
Figure QLYQS_14
,/>
Figure QLYQS_15
and />
Figure QLYQS_16
The sub-problem solving function of (2): />
Figure QLYQS_17
(6)
Figure QLYQS_18
(7)
Figure QLYQS_19
(8)
S330: in pair (6)
Figure QLYQS_20
The partial derivative is calculated and set to 0, and a variable is obtained>
Figure QLYQS_21
Analytic solution form of (2):
Figure QLYQS_22
(9)
s340: for the matrix inversion in equation (9), we transform using the Woodbury matrix theorem:
Figure QLYQS_23
(10)
s350: substituting equation (10) into equation (9) to obtain a variable
Figure QLYQS_24
The iterative function form of (1):
Figure QLYQS_25
(11)
s360: solution method for auxiliary dual variable by adopting de-noising network
Figure QLYQS_26
And carrying out iterative solution.
5. The method of claim 4, wherein S360 comprises:
the iterate $x^{k+1} + u^{k}$ obtained from formula (11) is taken as the input to the denoising network $\mathcal{D}(\cdot)$; the iteration form of the variable $v$ is then:
$v^{k+1} = \mathcal{D}\!\left( x^{k+1} + u^{k} \right)$ (12).
6. The method according to claim 5, wherein the denoising network comprises a spectral denoising module, a spatial denoising module and a spectral reconstruction module connected in sequence; the spectral denoising module is used to extract the spectral features of the input image, the spatial denoising module is used to extract the spatial features, and the spectral reconstruction module is used to reconstruct the spectrum after denoising is completed.
7. The method according to claim 6, wherein the spectral denoising module is configured to extract spectral features of the input image, and in particular:
the matrix $A$ to be denoised (the input to the denoising network reshaped into a matrix) is converted by SVD decomposition into:
$A = U \Sigma V^{\top}$ (13)
where $U$ denotes an m-order orthogonal matrix, $\Sigma$ denotes a non-negative diagonal matrix, and $V$ denotes an n-order orthogonal matrix.
8. The method of claim 7, wherein the spatial denoising module is configured to extract spatial features, and comprises:
let $Z = U\Sigma$ denote the spatial coefficient images obtained from formula (13); with the spatial denoising module defined as the function $S(\cdot)$, then:
$\hat{Z} = S(Z)$ (14)
the network structure of the spatial denoising module comprises a feature extraction layer, a first coding layer, a second coding layer, a third coding layer, a fourth coding layer, a fifth coding layer, a sixth coding layer, a first decoding layer, a second decoding layer, a third decoding layer and a reconstruction layer;
the input of the feature extraction layer is X0, and features are extracted by a first 3D convolution to obtain the output X1; the number of input channels is 1, the number of output channels of the first 3D convolution is 16, the convolution kernel size is (3, 3), the stride is (1, 1), and the zero padding is (1, 1);
the first coding layer concatenates the output X1 and the input X0 of the feature extraction layer along the channel dimension through a skip connection, and applies a second 3D convolution to the concatenated value to obtain the output X2; the second 3D convolution has 17 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
the second coding layer concatenates X1, X0 and the output X2 of the first coding layer along the channel dimension through a skip connection, and applies a third 3D convolution to the concatenated value to obtain the output X3; the third 3D convolution has 34 input channels, 68 output channels, kernel size (3, 3), stride (1, 2), and zero padding (1, 1);
the third coding layer feeds X3 into a fourth 3D convolution to obtain the output X4; the fourth 3D convolution has 68 input channels, 68 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
the fourth coding layer concatenates X3 and X4 along the channel dimension through a skip connection, and applies a fifth 3D convolution to the concatenated value to obtain the output X5; the fifth 3D convolution has 136 input channels, 136 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
the fifth coding layer concatenates X3, X4 and X5 along the channel dimension through a skip connection, and applies a sixth 3D convolution to the concatenated value to obtain the output X6; the sixth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
the sixth coding layer feeds X6 into a seventh 3D convolution to obtain the output X7; the seventh 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
the first decoding layer feeds X7 into an eighth 3D convolution to obtain the output X8; the eighth 3D convolution has 272 input channels, 272 output channels, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
X8 and X6 are added to obtain X9;
the second decoding layer feeds X9 into a first up-sampling 3D convolution to obtain the output X10; the first up-sampling 3D convolution has 272 input channels, 17 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2);
X10 and X2 are added to obtain X11;
the third decoding layer feeds X11 into a second up-sampling 3D convolution to obtain the output X12; the second up-sampling 3D convolution has 17 input channels, 16 output channels, kernel size (3, 3), stride (1, 1), zero padding (1, 1), and up-sampling (1, 2);
X12 is fed into a non-local self-similarity module, and the output of the non-local self-similarity module is added to X1 to obtain X13;
the reconstruction layer feeds X13 into a ninth 3D convolution to obtain the output X14; the ninth 3D convolution has 16 input channels, 1 output channel, kernel size (3, 3), stride (1, 1), and zero padding (1, 1);
X14 and X0 are added to obtain X15, which is the output of the network structure of the spatial denoising module; X15 is assigned to $\hat{Z}$.
9. The method according to claim 8, wherein the spectral reconstruction module is used to reconstruct the spectrum after denoising is completed, specifically:
$\hat{A} = \hat{Z}\, V^{\top} = S(U\Sigma)\, V^{\top}$ (15).
10. The method as claimed in claim 9, wherein the spectral reconstruction module being used to reconstruct the spectrum after denoising is completed further includes:
the value of formula (15) is assigned to the auxiliary dual variable and substituted into formula (12), giving
$v^{k+1} = \operatorname{vec}\!\left( S(U\Sigma)\, V^{\top} \right)$ (16)
Formulas (11), (16) and (8) are iterated in sequence until the number of iterations reaches the preset count, and the final output is the reconstructed value $\hat{x}$ of the medical hyperspectral image.
CN202310151149.9A 2023-02-22 2023-02-22 Improved spatial spectrum fusion hyperspectral calculation reconstruction method based on ADMM framework Active CN115855839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310151149.9A CN115855839B (en) 2023-02-22 2023-02-22 Improved spatial spectrum fusion hyperspectral calculation reconstruction method based on ADMM framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310151149.9A CN115855839B (en) 2023-02-22 2023-02-22 Improved spatial spectrum fusion hyperspectral calculation reconstruction method based on ADMM framework

Publications (2)

Publication Number Publication Date
CN115855839A true CN115855839A (en) 2023-03-28
CN115855839B CN115855839B (en) 2023-06-02

Family

ID=85658673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310151149.9A Active CN115855839B (en) 2023-02-22 2023-02-22 Improved spatial spectrum fusion hyperspectral calculation reconstruction method based on ADMM framework

Country Status (1)

Country Link
CN (1) CN115855839B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116990243A (en) * 2023-09-26 2023-11-03 湖南大学 GAP frame-based light-weight attention hyperspectral calculation reconstruction method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011138393A (en) * 2009-12-28 2011-07-14 Canon Inc Image processor and image processing method
CN107451956A (en) * 2017-07-19 2017-12-08 北京理工大学 A kind of reconstructing method of code aperture spectrum imaging system
US20190096049A1 (en) * 2017-09-27 2019-03-28 Korea Advanced Institute Of Science And Technology Method and Apparatus for Reconstructing Hyperspectral Image Using Artificial Intelligence
CN109697697A (en) * 2019-03-05 2019-04-30 北京理工大学 The reconstructing method of the spectrum imaging system of neural network based on optimization inspiration
CN111161199A (en) * 2019-12-13 2020-05-15 中国地质大学(武汉) Spatial-spectral fusion hyperspectral image mixed pixel low-rank sparse decomposition method
CN111260576A (en) * 2020-01-14 2020-06-09 哈尔滨工业大学 Hyperspectral unmixing algorithm based on de-noising three-dimensional convolution self-coding network
US20200242740A1 (en) * 2019-12-17 2020-07-30 Guangxi university of science and technology Wide spectrum denoising method for microscopic images
CN114998167A (en) * 2022-05-16 2022-09-02 电子科技大学 Hyperspectral and multispectral image fusion method based on space-spectrum combined low rank
CN115561182A (en) * 2022-11-09 2023-01-03 湖南大学 Priori image guidance-based snapshot type spectral imaging system reconstruction method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011138393A (en) * 2009-12-28 2011-07-14 Canon Inc Image processor and image processing method
CN107451956A (en) * 2017-07-19 2017-12-08 北京理工大学 A kind of reconstructing method of code aperture spectrum imaging system
US20190096049A1 (en) * 2017-09-27 2019-03-28 Korea Advanced Institute Of Science And Technology Method and Apparatus for Reconstructing Hyperspectral Image Using Artificial Intelligence
CN109697697A (en) * 2019-03-05 2019-04-30 北京理工大学 The reconstructing method of the spectrum imaging system of neural network based on optimization inspiration
CN111161199A (en) * 2019-12-13 2020-05-15 中国地质大学(武汉) Spatial-spectral fusion hyperspectral image mixed pixel low-rank sparse decomposition method
US20200242740A1 (en) * 2019-12-17 2020-07-30 Guangxi university of science and technology Wide spectrum denoising method for microscopic images
CN111260576A (en) * 2020-01-14 2020-06-09 哈尔滨工业大学 Hyperspectral unmixing algorithm based on de-noising three-dimensional convolution self-coding network
CN114998167A (en) * 2022-05-16 2022-09-02 电子科技大学 Hyperspectral and multispectral image fusion method based on space-spectrum combined low rank
CN115561182A (en) * 2022-11-09 2023-01-03 湖南大学 Priori image guidance-based snapshot type spectral imaging system reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EDWIN VARGAS, ET AL: "Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and Pixel Exposures for Compressive Imaging Systems" *
王旭; 陈强; 孙权森: "Diffractive spectral image restoration algorithm based on multi-channel spatial-spectral total variation", Journal of Computer Research and Development
陈煜嵘 et al.: "Research and application of key technologies for medical hyperspectral microscopic imaging and intelligent analysis"

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116990243A (en) * 2023-09-26 2023-11-03 湖南大学 GAP frame-based light-weight attention hyperspectral calculation reconstruction method
CN116990243B (en) * 2023-09-26 2024-01-19 湖南大学 GAP frame-based light-weight attention hyperspectral calculation reconstruction method

Also Published As

Publication number Publication date
CN115855839B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
Xiong et al. Hscnn: Cnn-based hyperspectral image recovery from spectrally undersampled projections
Nie et al. Deeply learned filter response functions for hyperspectral reconstruction
Liebel et al. Single-image super resolution for multispectral remote sensing data using convolutional neural networks
Chen et al. Hyperspectral image compressive sensing reconstruction using subspace-based nonlocal tensor ring decomposition
Sara et al. Hyperspectral and multispectral image fusion techniques for high resolution applications: A review
CN109697697B (en) Reconstruction method of spectral imaging system based on optimization heuristic neural network
Peng et al. Residual pixel attention network for spectral reconstruction from RGB images
CN109741407A (en) A kind of high quality reconstructing method of the spectrum imaging system based on convolutional neural networks
He et al. DsTer: A dense spectral transformer for remote sensing spectral super-resolution
CN114998167B (en) High-spectrum and multi-spectrum image fusion method based on space-spectrum combined low rank
CN115855839B (en) Improved spatial spectrum fusion hyperspectral calculation reconstruction method based on ADMM framework
CN116228912B (en) Image compressed sensing reconstruction method based on U-Net multi-scale neural network
Wang et al. A frequency-separated 3D-CNN for hyperspectral image super-resolution
CN115984155A (en) Hyperspectral, multispectral and panchromatic image fusion method based on spectrum unmixing
Aetesam et al. Bayesian approach in a learning-based hyperspectral image denoising framework
Cai et al. Binarized spectral compressive imaging
Qi et al. Morphology-based visible-infrared image fusion framework for smart city
WO2023241188A1 (en) Data compression method for quantitative remote sensing application of unmanned aerial vehicle
Zhu et al. Adaptive local sparse representation for compressive hyperspectral imaging
Gundogan et al. Computational spectral imaging with diffractive lenses and spectral filter arrays
Zeng et al. U-net-based multispectral image generation from an rgb image
Cohen et al. Deep neural network classification in the compressively sensed spectral image domain
Schambach Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
Wang et al. A general paradigm with detail-preserving conditional invertible network for image fusion
CN114022364A (en) Multispectral image spectrum hyper-segmentation method and system based on spectrum library optimization learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant