CN114820352A - Hyperspectral image denoising method and device and storage medium - Google Patents

Hyperspectral image denoising method and device and storage medium Download PDF

Info

Publication number
CN114820352A
Authority
CN
China
Prior art keywords
hyperspectral image
model
hyperspectral
noise
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210365831.3A
Other languages
Chinese (zh)
Inventor
杨敏
徐辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210365831.3A priority Critical patent/CN114820352A/en
Publication of CN114820352A publication Critical patent/CN114820352A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a hyperspectral image denoising method, device and storage medium. Hyperspectral image data to be denoised are input into a pre-trained and optimized hyperspectral image denoising model, and the denoised hyperspectral image result is output. The denoising model is constructed as follows: first, low-rank tensor decomposition is used to separate the noise from the original image and obtain a degradation model; an improved weighted total-variation regularizer is then added to the model to fully represent the spatial and spectral correlation among the bands of the hyperspectral image; the L1 norm is then used to regularize sparse noise and the Frobenius norm to regularize Gaussian noise; finally, the model is solved with the augmented Lagrange multiplier method and iteratively optimized on input training-set images containing sparse and Gaussian noise. The disclosed hyperspectral image denoising method achieves a better restoration effect, with clear improvements in both quantitative evaluation and visual comparison.

Description

Hyperspectral image denoising method and device and storage medium
Technical Field
The invention belongs to the technical field of image processing, and relates to a hyperspectral image denoising method, device and storage medium based on improved weighted total-variation tensor decomposition.
Background
The hyperspectral image has abundant spatial and spectral structure information and is widely applied in many fields such as military, urban planning and aerospace. However, the image is contaminated by various kinds of noise during acquisition, such as Gaussian, salt-and-pepper and stripe noise, so the quality of the hyperspectral image is seriously degraded. It is therefore necessary to denoise the hyperspectral image and recover an image close to the original from the degraded one.
A natural method for denoising hyperspectral images is to regard each band as a grayscale image and then denoise band by band with conventional two-dimensional or one-dimensional methods. Later methods exploit the similarity and spatial characteristics of adjacent pixels, achieve piecewise spatial smoothing via total-variation regularization, handle image edge information, and improve restoration accuracy. These methods target only one or two noise types, e.g. Gaussian noise or impulse noise. In practice, however, hyperspectral acquisition is usually corrupted by several different noise types at once, such as Gaussian noise, impulse noise, dead lines and stripes. Although methods based on low-rank matrix modeling have been proposed to remove mixed noise, their recovery performance is not ideal.
Disclosure of Invention
Purpose: to solve the above problems, the invention provides a hyperspectral image denoising method, device and storage medium based on improved weighted total-variation tensor decomposition.
Converting multidimensional hyperspectral data into vectors or matrices usually destroys the correlation of the spectral-spatial structure, so tensor modeling has advantages over matricization. A third-order tensor captures the non-local similarity and the spectral correlation of the spectral space simultaneously. Tensor-based methods essentially preserve the inherent structural correlation and give better recovery results.
The technical scheme is as follows: in order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect, a hyperspectral image denoising method is provided, including:
acquiring hyperspectral image data to be denoised;
inputting hyperspectral image data to be denoised into a hyperspectral image denoising model which is pre-trained and optimized, and obtaining an output denoised hyperspectral image result;
the construction method of the hyperspectral image denoising model comprises the following steps:
1) decomposing and separating noise terms by using a low-rank tensor to obtain a hyperspectral degradation model;
2) adding an improved weighted total variation regularizer (w-SSTV) into the hyperspectral degradation model obtained in the step 1), and fully representing the spatial and spectral correlation among hyperspectral image wave bands;
3) in the model obtained in the step 2), regularizing sparse noise by using the L1 norm, regularizing Gaussian noise by using the Frobenius norm, and determining the constraint conditions;
4) solving the model obtained in the step 3) by using an augmented Lagrange multiplier method, and performing iterative optimization on the model by inputting a training set image with sparse noise and Gaussian noise to obtain a trained and optimized hyperspectral image denoising model.
In some embodiments, the hyperspectral degradation model comprises:
Y = X + N + S
where Y denotes the noisy third-order hyperspectral cube Y = {Y_1, Y_2, …, Y_B}, with Y_i ∈ R^(h×w), h the height, w the width, i the band index and B the number of bands; X denotes the clean image, N Gaussian noise and S sparse noise; X, N, S and Y have the same tensor size.
In some embodiments, separating the noise terms using a low rank tensor decomposition includes:
the hyperspectral image is divided into overlapped three-dimensional blocks, and for an n-order tensor, the n-order tensor is decomposed into n factor matrixes and a core tensor by the Tucker decomposition, so that the spectrum and space low-rank characteristics of the image are fully utilized; the factor matrix on each modality is called the base matrix or principal component of the tensor; the Tucker decomposition equation is expressed as:
X = C ×_1 U_1 ×_2 U_2 × … ×_n U_n,  U_n^T U_n = I
where C is the core tensor controlling the interaction between the factor matrices U_1, U_2, …, U_n; U is a coefficient matrix.
In some embodiments, adding a modified weighted total variation regularizer to the hyperspectral degradation model comprises:
||X||_SSTV = w_1 ||D_x X|| + w_2 ||D_y X|| + w_3 ||D_z X||
where ||X||_SSTV is the weighted total-variation model; D_x, D_y and D_z denote the first-order forward finite-difference operators along the spatial horizontal direction x, the spatial vertical direction y and the spectral direction z, respectively; w_1, w_2 and w_3 denote the weights of the difference operators in the x, y and z directions, respectively;
D_x X = X(i+1, j, k) - X(i, j, k)
D_y X = X(i, j+1, k) - X(i, j, k)
D_z X = X(i, j, k+1) - X(i, j, k)
where, in X(i, j, k), i and j denote the spatial positions of image X in the horizontal and vertical directions, respectively, and k denotes the k-th band of image X.
In some embodiments, sparse noise is regularized with the L1 norm, Gaussian noise is regularized with the Frobenius norm, and the constraint conditions are determined, including:
min_{X, S, N}  τ ||X||_SSTV + λ ||S||_1 + β ||N||_F^2
s.t.  Y = X + S + N,
X = C ×_1 U_1 ×_2 U_2 ×_3 U_3,  U_n^T U_n = I
where X denotes the clean image, N Gaussian noise and S sparse noise; τ, λ and β denote the control coefficient factors of X, S and N, respectively, and s.t. denotes the constraint conditions of the model; C is the core tensor controlling the interaction between the factor matrices U_1, U_2 and U_3; U is a coefficient matrix.
In some embodiments, the model is solved using the augmented Lagrange multiplier method, comprising:
letting X = Z and D_w(Z) = F, where D_w is the weighted three-dimensional difference operator composed of three first-order difference operators in different directions, and Z and F both denote auxiliary variables introduced in the augmented Lagrange multiplier method; the augmented Lagrangian function L is expressed as follows:
L(X, Z, F, S, N, μ_1, μ_2, μ_3) = τ ||F||_1 + λ ||S||_1 + β ||N||_F^2 + ⟨μ_1, Y - X - S - N⟩ + (μ/2) ||Y - X - S - N||_F^2 + ⟨μ_2, X - Z⟩ + (μ/2) ||X - Z||_F^2 + ⟨μ_3, D_w(Z) - F⟩ + (μ/2) ||D_w(Z) - F||_F^2
where μ is a penalty parameter and μ_1, μ_2, μ_3 denote the augmented Lagrange multipliers.
In some embodiments, iterative optimization of the model is performed by inputting a training set image with sparse noise and gaussian noise, including:
firstly, inputting a training-set image with sparse noise and Gaussian noise, the rank r of the matrix and the weights w_1, w_2, w_3; initializing the hyperspectral image X and, according to the augmented Lagrange multiplier method, setting X = Z = S = N = 0, μ_1 = μ_2 = μ_3 = 0 and k = 0; then updating X, Z, F, S, N in turn and updating the augmented Lagrange multipliers μ_1, μ_2, μ_3, and performing the (k+1)-th iterative optimization of the model until the iteration stopping condition is met.
In a second aspect, the invention provides a hyperspectral image denoising device, which comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
In a third aspect, the present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
Beneficial effects: the hyperspectral image denoising method and device provided by the invention have the following advantages: compared with existing image denoising methods, the hyperspectral image denoising method based on weighted total-variation tensor decomposition achieves a better image restoration effect when removing various mixed noises; the structural similarity and peak signal-to-noise ratio are markedly improved, the smoothness of the spectral space is increased, and a better visual result is obtained.
Drawings
FIG. 1 is a flowchart of a method for constructing a hyperspectral image denoising model according to an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the figures and examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood to exclude the stated number, while "above", "below", "within", etc. are understood to include it. Where "first" and "second" are used, they only serve to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or their order.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Example 1
The hyperspectral image denoising method comprises the following steps:
acquiring hyperspectral image data to be denoised;
inputting hyperspectral image data to be denoised into a hyperspectral image denoising model which is pre-trained and optimized, and obtaining an output denoised hyperspectral image result;
the construction method of the hyperspectral image denoising model comprises the following steps:
1) decomposing and separating noise terms by using a low-rank tensor to obtain a hyperspectral degradation model;
2) adding an improved weighted total variation regularizer (w-SSTV) into the hyperspectral degradation model obtained in the step 1), and fully representing the spatial and spectral correlation among hyperspectral image wave bands;
3) in the model obtained in the step 2), regularizing sparse noise by using the L1 norm, regularizing Gaussian noise by using the Frobenius norm, and determining the constraint conditions;
4) solving the model obtained in the step 3) by using an augmented Lagrange multiplier method, and performing iterative optimization on the model by inputting a training set image with sparse noise and Gaussian noise to obtain a trained and optimized hyperspectral image denoising model.
In some embodiments, a method for denoising hyperspectral images using weighted total-variation tensor decomposition, as shown in FIG. 1, comprises the following steps:
S1, separating the noise terms using low-rank tensor decomposition to obtain the hyperspectral degradation model. Specifically,
the hyperspectral image processing method includes the steps that firstly, a hyperspectral image is divided into overlapped three-dimensional blocks, and for an n-order tensor, the n-order tensor is decomposed into n factor matrixes and a core tensor through Tucker decomposition, and the spectrum and space low-rank characteristics of the image are fully utilized. The factor matrix on each modality is called the basis matrix or principal component of the tensor. The Tucker decomposition can also be considered as a higher order PCA to describe a low rank tensor approximation. The Tucker decomposition equation is expressed as:
X=C× 1 U 1 × 2 U 2 ×…× n U n ,U n T U n =I
c is a control factor matrix
Figure BDA0003586995080000061
The core tensor of the interaction between them. U is a coefficient matrix, a classical high-order orthogonal iteration (HOOI) algorithm is adopted, and low-rank Tucker decomposition is realized on the premise of not increasing the calculated amount to obtain a hyperspectral degradation model:
Y=X+N+S
hyperspectral cube Y ═ Y for which Y represents the noisy third order tensor 1 ,Y 1 ,…Y B In which Y is i ∈R h×w H represents height, i represents frequency band, width represents w, and B represents frequency band number; x represents a clean image, N and S represent gaussian noise and sparse noise, respectively, which have the same tensor size as Y.
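As an illustration of the degradation model and the low-rank Tucker approximation described above, the following Python sketch simulates Y = X + N + S on a synthetic cube and recovers a low-rank approximation with the tensorly library; the cube size, Tucker ranks and noise levels are illustrative assumptions, not the invention's settings.

```python
# Minimal sketch (not the patent's implementation): simulate the degradation
# model Y = X + N + S and compute a low-rank Tucker approximation with tensorly.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

h, w, B = 64, 64, 31                                  # assumed cube dimensions
rng = np.random.default_rng(0)

# Build a low-rank clean cube X from a small core and three factor matrices.
core = rng.standard_normal((5, 5, 3))
U1, U2, U3 = (rng.standard_normal((h, 5)),
              rng.standard_normal((w, 5)),
              rng.standard_normal((B, 3)))
X = tl.to_numpy(tl.tucker_to_tensor((tl.tensor(core),
                                     [tl.tensor(U1), tl.tensor(U2), tl.tensor(U3)])))

N = 0.05 * rng.standard_normal(X.shape)               # dense Gaussian noise
S = np.zeros_like(X)                                  # sparse (impulse-like) noise
mask = rng.random(X.shape) < 0.05
S[mask] = rng.choice([-1.0, 1.0], size=int(mask.sum())) * np.abs(X).max()
Y = X + N + S                                         # observed degraded cube

# Low-rank Tucker approximation of the observation (HOOI-style algorithm).
core_hat, factors_hat = tucker(tl.tensor(Y), rank=[5, 5, 3])
X_hat = tl.to_numpy(tl.tucker_to_tensor((core_hat, factors_hat)))
print("relative recovery error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```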
Step S2, adding an improved weighted total variation regularizer (w-SSTV). Specifically,
the hyperspectral image has a strong locally smooth structure along the spectral mode: the differences between adjacent bands in the spectral domain are mostly close to zero, and the number of zeros is significantly larger than in the spatial domain. The SSTV regularization is therefore used to model piecewise smooth structures in both the spatial and spectral domains.
An anisotropic total-variation model ||X||_SSTV is introduced, where ||X||_SSTV = ||D_x X|| + ||D_y X|| + ||D_z X||, and D_x, D_y and D_z denote the first-order forward finite-difference operators along the spatial horizontal, spatial vertical and spectral directions, respectively:
D_x X = X(i+1, j, k) - X(i, j, k)
D_y X = X(i, j+1, k) - X(i, j, k)
D_z X = X(i, j, k+1) - X(i, j, k)
where, in X(i, j, k), i and j denote the spatial positions of image X in the horizontal and vertical directions, respectively, and k denotes the k-th band of image X.
Weighting this model with w gives ||X||_SSTV = w_1 ||D_x X|| + w_2 ||D_y X|| + w_3 ||D_z X||. The weights control the degree of regularization of X; w_1, w_2 and w_3 denote the weights of the difference operators in the x, y and z directions, respectively.
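As a concrete illustration of the weighted SSTV term above, the short sketch below evaluates it for a cube stored as a NumPy array of shape (height, width, bands); the anisotropic terms are taken as L1 norms of the forward differences, and the weight values are illustrative assumptions.

```python
# Minimal sketch (illustrative only): weighted SSTV of a hyperspectral cube
# X of shape (height, width, bands), using first-order forward differences.
import numpy as np

def weighted_sstv(X, w1=1.0, w2=1.0, w3=0.5):
    # np.diff computes X(i+1, ...) - X(i, ...) along the chosen axis.
    Dy = np.diff(X, axis=0)    # vertical spatial difference (y)
    Dx = np.diff(X, axis=1)    # horizontal spatial difference (x)
    Dz = np.diff(X, axis=2)    # spectral difference (z)
    # Anisotropic (L1) total-variation terms, weighted per direction.
    return w1 * np.abs(Dx).sum() + w2 * np.abs(Dy).sum() + w3 * np.abs(Dz).sum()

X = np.random.rand(64, 64, 31)     # assumed cube for demonstration
print(weighted_sstv(X))
```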
S3, regularizing sparse noise with the L1 norm and Gaussian noise with the Frobenius norm. Specifically, the sparse noise S is modeled with the L1 norm and the Gaussian noise N with the Frobenius norm, giving the following expression:
min_{X, S, N}  τ ||X||_SSTV + λ ||S||_1 + β ||N||_F^2
s.t.  Y = X + S + N,
X = C ×_1 U_1 ×_2 U_2 ×_3 U_3,  U_n^T U_n = I
where τ, λ and β denote the control coefficient factors of X, S and N, respectively, and s.t. denotes the constraint conditions of the model. In this way the similarity of spectral-image pixels and the piecewise smoothness of the spatial spectrum are fully exploited, artifacts are reduced, and mixed noise is removed.
Step S4, image denoising by the augmented Lagrange multiplier method. Specifically,
the model obtained in step S3 is optimized by letting X = Z and D_w(Z) = F, where D_w is the weighted three-dimensional difference operator composed of three first-order difference operators in different directions; the ALM model expression is as follows:
L(X, Z, F, S, N, μ_1, μ_2, μ_3) = τ ||F||_1 + λ ||S||_1 + β ||N||_F^2 + ⟨μ_1, Y - X - S - N⟩ + (μ/2) ||Y - X - S - N||_F^2 + ⟨μ_2, X - Z⟩ + (μ/2) ||X - Z||_F^2 + ⟨μ_3, D_w(Z) - F⟩ + (μ/2) ||D_w(Z) - F||_F^2
where μ is a penalty parameter and μ_1, μ_2, μ_3 denote the augmented Lagrange multipliers. Denoising algorithm: first, input an image contaminated with Gaussian and sparse noise, together with the rank r of the matrix, the weight values w, the iteration stopping criterion, and so on; initialize the hyperspectral image X and, according to the augmented Lagrange multiplier method, set X = Z = S = N = 0 and μ_1 = μ_2 = μ_3 = 0; then update X, Z, F, S, N and the Lagrange multipliers μ_1, μ_2, μ_3 in turn, fixing the other variables during each update, and perform the (k+1)-th iteration of the model until the stopping criterion is met.
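The loop below sketches the overall structure of this ALM iteration. It is a simplified, assumption-laden illustration rather than the invention's solver: the w-SSTV term and its auxiliary variables Z and F are omitted, the X sub-problem is approximated by a truncated Tucker decomposition (tensorly), and the S and N sub-problems use the standard closed forms for the L1 and Frobenius-norm terms; all parameter values are hypothetical.

```python
# Structural sketch of an ALM loop for Y = X + S + N (w-SSTV term omitted).
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def soft_threshold(A, tau):
    """Closed-form solution of the L1 proximal (shrinkage) step."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def denoise_alm(Y, rank, lam=0.1, beta=10.0, mu=1e-2, n_iter=50, tol=1e-4):
    X = np.zeros_like(Y); S = np.zeros_like(Y); N = np.zeros_like(Y)
    Lam = np.zeros_like(Y)                       # multiplier for Y = X + S + N
    for _ in range(n_iter):
        # X-update: low-rank Tucker (HOOI-style) approximation of the residual.
        core, factors = tucker(tl.tensor(Y - S - N + Lam / mu), rank=rank)
        X_new = tl.to_numpy(tl.tucker_to_tensor((core, factors)))
        # S-update: the L1 sub-problem has a soft-thresholding closed form.
        S = soft_threshold(Y - X_new - N + Lam / mu, lam / mu)
        # N-update: the Frobenius-norm sub-problem has a linear closed form.
        N = mu * (Y - X_new - S + Lam / mu) / (2.0 * beta + mu)
        # Multiplier update and relative-change stopping test.
        Lam = Lam + mu * (Y - X_new - S - N)
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1e-12):
            X = X_new
            break
        X = X_new
    return X, S, N

# Example call on a (h, w, B) cube with an assumed Tucker rank:
# X_hat, S_hat, N_hat = denoise_alm(Y, rank=[40, 40, 5])
```

With the weighted SSTV term included, two further sub-problems (for Z and F) and their multipliers μ_2, μ_3 would be updated inside the same loop.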
A mixed-noise experiment is performed on the Indian dataset, and the final output is the restored hyperspectral image.
Tables 1 and 2 below compare the denoising method of the invention with existing methods under the two experiments:
Experiment 1: Gaussian noise and impulse noise are added to all bands of the Indian dataset. The variance of the Gaussian noise (G) is 0.02, 0.06 and 0.1, respectively. Impulse noise (P) is also added to all bands to simulate sparse noise, with percentages of 0.04, 0.12 and 0.2, respectively.
Experiment 2: dead-line noise (deadlines) is added on top of Experiment 1, with the other parameters kept unchanged.
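The sketch below shows one way the simulated degradations of Experiments 1 and 2 could be generated (Gaussian noise of a given variance, impulse noise at a given percentage, and dead lines); the cube size, the number of affected bands and the number of dead columns are illustrative assumptions, not the exact experimental protocol.

```python
# Illustrative sketch of the mixed-noise simulation used in the experiments
# (Gaussian + impulse noise on all bands, plus dead lines in Experiment 2).
import numpy as np

def add_mixed_noise(X, gauss_var=0.06, impulse_ratio=0.12, dead_lines=False, seed=0):
    rng = np.random.default_rng(seed)
    Y = X + np.sqrt(gauss_var) * rng.standard_normal(X.shape)    # Gaussian noise (G)
    # Impulse (salt-and-pepper style) noise (P) on the given fraction of pixels.
    mask = rng.random(X.shape) < impulse_ratio
    Y[mask] = rng.choice([X.min(), X.max()], size=int(mask.sum()))
    if dead_lines:                                                # Experiment 2 only
        for k in rng.choice(X.shape[2], size=10, replace=False):  # 10 affected bands (assumed)
            cols = rng.choice(X.shape[1], size=3, replace=False)  # 3 dead columns per band (assumed)
            Y[:, cols, k] = 0.0
    return Y

X = np.clip(np.random.rand(145, 145, 220), 0, 1)                  # assumed cube size
Y1 = add_mixed_noise(X, gauss_var=0.02, impulse_ratio=0.04)                    # Experiment 1, mildest case
Y2 = add_mixed_noise(X, gauss_var=0.1, impulse_ratio=0.2, dead_lines=True)     # Experiment 2, strongest case
```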
The three different simulated observations are recovered with the method described herein and with the comparison methods. The SSIM and PSNR of all channels of the restoration result are averaged and denoted MSSIM and MPSNR, and these two averages serve as the evaluation criteria of the final restoration quality. The peak signal-to-noise ratio (PSNR) is an error-sensitive image-quality measure. Given a clean image X and a noisy image Y of the same size, PSNR is defined as:
PSNR_k = 10 · log_10( MAX^2 / ( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [X(i, j, k) - Y(i, j, k)]^2 ) )
MPSNR = (1/B) Σ_{k=1}^{B} PSNR_k
m, N represents the spatial dimension, i.e. width and height, of the hyperspectral image, i, j represents the spatial position, and B represents the number of bands of the hyperspectral image. When the value of the peak signal-to-noise ratio PNSR is larger, the distortion of the image is smaller, and the recovered image is closer to the real image.
Structural similarity SSIM is defined as:
SSIM_k = ( (2 μ_u μ_u′ + C1)(2 σ_{uu′} + C2) ) / ( (μ_u^2 + μ_u′^2 + C1)(σ_u^2 + σ_u′^2 + C2) )
MSSIM = (1/B) Σ_{k=1}^{B} SSIM_k
where μ_u and μ_u′ denote the pixel means of image Y and image X, σ_u^2 and σ_u′^2 their variances and σ_{uu′} their covariance; C1 and C2 are constants, and B denotes the number of bands of the hyperspectral image. The structural similarity SSIM takes values in [0, 1]; the larger the value, the smaller the image distortion and the better the restoration.
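A minimal sketch of the band-wise SSIM average (MSSIM), using scikit-image's structural_similarity and assuming images scaled to [0, 1].

```python
# Band-wise SSIM averaged over all bands (MSSIM), using scikit-image.
import numpy as np
from skimage.metrics import structural_similarity

def mssim(X, Y, data_range=1.0):
    """X: clean cube, Y: restored cube, both of shape (M, N, B)."""
    scores = [structural_similarity(X[:, :, k], Y[:, :, k], data_range=data_range)
              for k in range(X.shape[2])]
    return float(np.mean(scores))
```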
Table 1. Results of Experiment 1 on the Indian dataset
Table 2. Results of Experiment 2 on the Indian dataset
In the tables, the model of the invention is denoted OURS, and the evaluation criteria are MSSIM and MPSNR (mean structural similarity and mean peak signal-to-noise ratio). Table 1 compares different image denoising methods: the restoration quality decreases as the Gaussian and impulse noise grow stronger, and the model of the invention performs excellently on the Indian dataset. Table 2 shows the restoration results under the three mixed noises; the model again performs well, showing that the influence of the various mixed noises on the image is handled effectively. Compared with traditional restoration methods, the proposed method has a clear competitive advantage, so it performs excellently on the mixed-noise problem of hyperspectral datasets and is of practical significance.
Example 2
In a second aspect, the present embodiment provides a hyperspectral image denoising device, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to embodiment 1.
Example 3
In a third aspect, the present embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (9)

1. A hyperspectral image denoising method is characterized by comprising the following steps:
acquiring hyperspectral image data to be denoised;
inputting hyperspectral image data to be denoised into a hyperspectral image denoising model which is pre-trained and optimized, and obtaining an output denoised hyperspectral image result;
the construction method of the hyperspectral image denoising model comprises the following steps:
1) decomposing and separating noise terms by using a low-rank tensor to obtain a hyperspectral degradation model;
2) adding an improved weighted total variation regularizer into the hyperspectral degradation model obtained in the step 1), and fully representing the spatial and spectral correlation among the hyperspectral image wave bands;
3) in the model obtained in the step 2), regularizing sparse noise by using the L1 norm, regularizing Gaussian noise by using the Frobenius norm, and determining the constraint conditions;
4) solving the model obtained in the step 3) by using an augmented Lagrange multiplier method, and performing iterative optimization on the model by inputting a training set image with sparse noise and Gaussian noise to obtain a trained and optimized hyperspectral image denoising model.
2. The hyperspectral image denoising method according to claim 1, wherein the hyperspectral degradation model comprises:
Y = X + N + S
where Y denotes the noisy third-order hyperspectral cube Y = {Y_1, Y_2, …, Y_B}, with Y_i ∈ R^(h×w), h the height, w the width, i the band index and B the number of bands; X denotes the clean image, N Gaussian noise and S sparse noise; X, N, S and Y have the same tensor size.
3. The hyperspectral image denoising method according to claim 1 or 2, wherein separating noise terms by using low rank tensor decomposition comprises:
the hyperspectral image is divided into overlapped three-dimensional blocks, and for an n-order tensor, the n-order tensor is decomposed into n factor matrixes and a core tensor by the Tucker decomposition, so that the spectrum and space low-rank characteristics of the image are fully utilized; the factor matrix on each modality is called the base matrix or principal component of the tensor; the Tucker decomposition equation is expressed as:
X = C ×_1 U_1 ×_2 U_2 × … ×_n U_n,  U_n^T U_n = I
where C is the core tensor controlling the interaction between the factor matrices U_1, U_2, …, U_n; U is a coefficient matrix.
4. The hyperspectral image denoising method according to claim 1, wherein adding an improved weighted total-variation regularizer to the hyperspectral degradation model comprises:
||X||_SSTV = w_1 ||D_x X|| + w_2 ||D_y X|| + w_3 ||D_z X||
where ||X||_SSTV is the weighted total-variation model; D_x, D_y and D_z denote the first-order forward finite-difference operators along the spatial horizontal direction x, the spatial vertical direction y and the spectral direction z, respectively; w_1, w_2 and w_3 denote the weights of the difference operators in the x, y and z directions, respectively;
D_x X = X(i+1, j, k) - X(i, j, k)
D_y X = X(i, j+1, k) - X(i, j, k)
D_z X = X(i, j, k+1) - X(i, j, k)
where, in X(i, j, k), i and j denote the spatial positions of image X in the horizontal and vertical directions, respectively, and k denotes the k-th band of image X.
5. The hyperspectral image denoising method of claim 1, wherein the L1 norm is used to regularize sparse noise, the Frobenius norm is used to regularize Gaussian noise, and determining the constraint conditions comprises:
min_{X, S, N}  τ ||X||_SSTV + λ ||S||_1 + β ||N||_F^2
s.t.  Y = X + S + N,
X = C ×_1 U_1 ×_2 U_2 ×_3 U_3,  U_n^T U_n = I
where X denotes the clean image, N Gaussian noise and S sparse noise; τ, λ and β denote the control coefficient factors of X, S and N, respectively, and s.t. denotes the constraint conditions of the model; C is the core tensor controlling the interaction between the factor matrices U_1, U_2 and U_3; U is a coefficient matrix.
6. The hyperspectral image denoising method of claim 5, wherein solving using the augmented Lagrange multiplier method comprises:
letting X = Z and D_w(Z) = F, where D_w is the weighted three-dimensional difference operator composed of three first-order difference operators in different directions, and Z and F both denote auxiliary variables introduced in the augmented Lagrange multiplier method; the expression of the augmented Lagrangian function L is as follows:
L(X, Z, F, S, N, μ_1, μ_2, μ_3) = τ ||F||_1 + λ ||S||_1 + β ||N||_F^2 + ⟨μ_1, Y - X - S - N⟩ + (μ/2) ||Y - X - S - N||_F^2 + ⟨μ_2, X - Z⟩ + (μ/2) ||X - Z||_F^2 + ⟨μ_3, D_w(Z) - F⟩ + (μ/2) ||D_w(Z) - F||_F^2
where μ is a penalty parameter and μ_1, μ_2, μ_3 denote the augmented Lagrange multipliers.
7. The hyperspectral image denoising method of claim 6, wherein the iterative optimization of the model by inputting a training set image with sparse noise and Gaussian noise comprises:
firstly, inputting a training-set image with sparse noise and Gaussian noise, the rank r of the matrix and the weights w_1, w_2, w_3; initializing the hyperspectral image X and, according to the augmented Lagrange multiplier method, setting X = Z = S = N = 0, μ_1 = μ_2 = μ_3 = 0 and k = 0; then updating X, Z, F, S, N in turn and updating the augmented Lagrange multipliers μ_1, μ_2, μ_3, and performing the (k+1)-th iterative optimization of the model until the iteration stopping condition is met.
8. A hyperspectral image denoising device, characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 7.
9. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, performing the steps of the method of any one of claims 1 to 7.
CN202210365831.3A 2022-04-08 2022-04-08 Hyperspectral image denoising method and device and storage medium Pending CN114820352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210365831.3A CN114820352A (en) 2022-04-08 2022-04-08 Hyperspectral image denoising method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210365831.3A CN114820352A (en) 2022-04-08 2022-04-08 Hyperspectral image denoising method and device and storage medium

Publications (1)

Publication Number Publication Date
CN114820352A true CN114820352A (en) 2022-07-29

Family

ID=82533763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210365831.3A Pending CN114820352A (en) 2022-04-08 2022-04-08 Hyperspectral image denoising method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114820352A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272141A (en) * 2022-09-29 2022-11-01 深圳大学 Hyperspectral image denoising method, device, system and medium
CN116433534A (en) * 2023-06-09 2023-07-14 四川工程职业技术学院 Hyperspectral image restoration method and device, storage medium and electronic equipment
CN116429709A (en) * 2023-06-09 2023-07-14 季华实验室 Spectrum detection method, spectrum detection device and computer-readable storage medium
CN116433534B (en) * 2023-06-09 2023-08-22 四川工程职业技术学院 Hyperspectral image restoration method and device, storage medium and electronic equipment
CN116429709B (en) * 2023-06-09 2023-09-12 季华实验室 Spectrum detection method, spectrum detection device and computer-readable storage medium
CN117541495A (en) * 2023-09-04 2024-02-09 长春理工大学 Image stripe removing method, device and medium for automatically optimizing model weight

Similar Documents

Publication Publication Date Title
Yoo et al. Photorealistic style transfer via wavelet transforms
CN109671029B (en) Image denoising method based on gamma norm minimization
Deng et al. Wavelet domain style transfer for an effective perception-distortion tradeoff in single image super-resolution
CN114820352A (en) Hyperspectral image denoising method and device and storage medium
Li et al. Diffusion Models for Image Restoration and Enhancement--A Comprehensive Survey
Zhao et al. Detail-preserving image denoising via adaptive clustering and progressive PCA thresholding
CN103854262B (en) Medical image denoising method based on documents structured Cluster with sparse dictionary study
CN105631807B (en) The single-frame image super-resolution reconstruction method chosen based on sparse domain
CN110992292B (en) Enhanced low-rank sparse decomposition model medical CT image denoising method
CN110400276B (en) Hyperspectral image denoising method and device
Zhang et al. Kernel Wiener filtering model with low-rank approximation for image denoising
CN112734763B (en) Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
CN110830043B (en) Image compressed sensing reconstruction method based on mixed weighted total variation and non-local low rank
Shahdoosti et al. Combined ripplet and total variation image denoising methods using twin support vector machines
CN105590296B (en) A kind of single-frame images Super-Resolution method based on doubledictionary study
Wen et al. The power of complementary regularizers: Image recovery via transform learning and low-rank modeling
CN114612297A (en) Hyperspectral image super-resolution reconstruction method and device
Shahdoosti et al. A new compressive sensing based image denoising method using block-matching and sparse representations over learned dictionaries
Baraha et al. Speckle removal using dictionary learning and pnp-based fast iterative shrinkage threshold algorithm
Su et al. Graph neural net using analytical graph filters and topology optimization for image denoising
CN115131226B (en) Image restoration method based on wavelet tensor low-rank regularization
Liu et al. Image restoration via wavelet-based low-rank tensor regularization
CN114331853A (en) Single image restoration iteration framework based on target vector updating module
He et al. MRWM: A Multiple Residual Wasserstein Driven Model for Image Denoising
Yang et al. Hyperspectral image denoising with collaborative total variation and low rank regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination