CN116245761A - Low-rank tensor completion method based on total variation regularization - Google Patents
- Publication number
- CN116245761A CN116245761A CN202310190187.5A CN202310190187A CN116245761A CN 116245761 A CN116245761 A CN 116245761A CN 202310190187 A CN202310190187 A CN 202310190187A CN 116245761 A CN116245761 A CN 116245761A
- Authority
- CN
- China
- Prior art keywords
- tensor
- low
- rank
- total variation
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 5/70 — Image enhancement or restoration: denoising; smoothing
- G06T 5/77 — Image enhancement or restoration: retouching; inpainting; scratch removal
- G06T 2207/10024 — Indexing scheme for image analysis or image enhancement; image acquisition modality: color image
- Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a low-rank tensor completion method based on total variation regularization, which comprises the following steps: S1, characterizing the low-rank property of a tensor with the transform tensor Schatten-p norm, combining total variation regularization with the transform tensor Schatten-p norm, and constructing a low-rank tensor completion model; S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems separately, and iterating until the convergence condition is reached, finally obtaining the completed tensor. The invention can complete image data with a large number of missing values and finally obtain accurate completion results.
Description
Technical Field
The invention relates to the field of computer vision and image processing, and in particular to a low-rank tensor completion method based on total variation regularization.
Background
The rapid development of computer technology has produced more and more multidimensional data; multidimensional data such as color images are now often stored as tensors. Tensors are widely used in computer vision and image processing by virtue of their multidimensional structure and properties. In the real world, however, tensor data contain missing values for various reasons, so how to complete the missing values of a tensor in order to restore an image is a problem that needs to be solved.
Low-rank tensor completion is one of the most commonly used methods for completing missing image values. It exploits the fact that the underlying tensor is low-rank to construct a model that minimizes the tensor rank function. However, minimizing the tensor rank function is an NP-hard problem, so researchers replace it with either convex or non-convex surrogate functions. The classical convex surrogate, the tensor nuclear norm, penalizes too heavily the larger singular values that carry the main information of an image, and it is not the tightest envelope for estimating the tensor rank function, so the recovery quality of this classical low-rank tensor completion model is not optimal. Many current low-rank tensor completion studies are nevertheless based on tensor nuclear norms, such as the transform tubal nuclear norm, the tensor nuclear norm based on nonlinear transforms, and so on.
However, the existing low-rank tensor completion methods share a defect: they rely only on the global low-rank property of tensors and neglect the local smoothness of tensors along the spatial and tube dimensions, so the local smooth information of the restored image is lost and the final image restoration quality still needs to be improved.
Disclosure of Invention
The invention aims to: provide a low-rank tensor completion method based on total variation regularization that preserves the local smoothness of tensors along the spatial and tube dimensions while satisfying the global low-rank property, thereby improving the effectiveness and performance of low-rank tensor completion.
The technical scheme is as follows: the low-rank tensor completion method of the invention comprises the following steps:
S1, characterizing the low-rank property of a tensor with the transform tensor Schatten-p norm, combining total variation regularization with the transform tensor Schatten-p norm, and constructing a low-rank tensor completion model;
S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems separately, and iterating until the convergence condition is reached, finally obtaining the completed tensor.
Further, in step S1, combining total variation regularization with the transform tensor Schatten-p norm, the constructed low-rank tensor completion model is expressed as:

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_{\Phi,S_p}^p+\alpha\|\mathcal{X}\|_{\mathrm{TV}}\quad\text{s.t.}\ \mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})$$

where $\mathcal{X}$ denotes the tensor to be completed and $\mathcal{O}$ the observed tensor, both of size $n_1\times n_2\times n_3$; $\Omega$ denotes the observation set, and the constraint $\mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})$ states that the element values of $\mathcal{X}$ and $\mathcal{O}$ are equal on the observation set; $\|\mathcal{X}\|_{\Phi,S_p}^p$ is the transform tensor Schatten-p norm, characterizing the low-rank property of $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term; $\alpha$ is a regularization parameter.

At the same time, according to the definition of total variation, the differences of adjacent elements of the tensor to be completed $\mathcal{X}$ are computed along each of its three dimensions and then summed:

$$\|\mathcal{X}\|_{\mathrm{TV}}=\|F(\mathcal{X})\|_1=\|F_h(\mathcal{X})\|_1+\|F_v(\mathcal{X})\|_1+\|F_t(\mathcal{X})\|_1$$

where $F$ denotes the overall difference operator between adjacent elements, decomposed into difference operators $F_h$, $F_v$ and $F_t$ along the three dimensions, i.e. $F=[F_h,F_v,F_t]$; through the $\Phi$-product the tensor to be completed $\mathcal{X}$ is decomposed into $n$ sub-tensors $\mathcal{U}_i$, $i=1,2,\dots,n$, with $0<p<1$, weights $p_i>0$ and $\sum_i p_i=1$.
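The three difference operators and the resulting anisotropic total variation term can be sketched in a few lines of numpy. This is a minimal illustration assuming circular (periodic) boundary differences; the function names `diff_ops` and `tv_norm` are ours, not the patent's:

```python
import numpy as np

def diff_ops(X):
    """Adjacent-element differences of a 3-D tensor along each of its three
    dimensions (horizontal, vertical, tube), with periodic boundaries --
    one possible reading of F_h, F_v and F_t."""
    Fh = np.roll(X, -1, axis=0) - X
    Fv = np.roll(X, -1, axis=1) - X
    Ft = np.roll(X, -1, axis=2) - X
    return Fh, Fv, Ft

def tv_norm(X):
    """Anisotropic total variation: the l1 norm of all adjacent differences,
    summed over the three dimensions."""
    return float(sum(np.abs(D).sum() for D in diff_ops(X)))
```

A constant tensor has zero total variation, which is why this term rewards locally smooth reconstructions.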
Further, in step S2, the augmented Lagrangian function of the low-rank tensor completion model is constructed as follows:

Three auxiliary variable tensors $\mathcal{Z}_h$, $\mathcal{Z}_v$ and $\mathcal{Z}_t$ are introduced, one per difference direction, so that the formula summing the adjacent-element differences of the tensor to be completed $\mathcal{X}$ along its three dimensions is converted into the constraints $\mathcal{Z}_d=F_d(\mathcal{X})$, $d\in\{h,v,t\}$.

The augmented Lagrangian function then takes the form

$$\mathcal{L}=\|\mathcal{X}\|_{\Phi,S_p}^p+\sum_{d\in\{h,v,t\}}\Big(\alpha\|\mathcal{Z}_d\|_1+\langle\Lambda_d,F_d(\mathcal{X})-\mathcal{Z}_d\rangle+\frac{\mu}{2}\|F_d(\mathcal{X})-\mathcal{Z}_d\|_F^2\Big)$$

where the $\Lambda_d$ are Lagrange multiplier tensors and $\mu>0$ is a penalty parameter. This function is further transformed into separate sub-problems that are solved in turn.

First, each matrix $\bar{U}_i$ is solved based on the matrix derivative; the solved matrices are combined along the tube dimension and unfolded along the third mode into a matrix $U$; $U$ is multiplied on the left by the transpose of the transformation matrix $\Phi$, whose size is $n_3\times n_3$; the inverse of the third-mode unfolding is then applied to the product $\Phi^{\mathsf T}U$ to obtain the updated tensor.

Next, according to the definition of the transform tensor Schatten-p norm, the corresponding sub-problem reduces to proximal problems for the matrix Schatten-p norm in the transform domain; the solved matrices are again combined along the tube dimension, unfolded along the third mode into a matrix, multiplied on the left by the transpose of the transformation matrix $\Phi$ of size $n_3\times n_3$, and folded back by the inverse of the third-mode unfolding to obtain the updated tensor.

The total variation sub-problem is solved in closed form by elementwise soft-thresholding, where $\mathrm{sign}(\cdot)$ is the sign function, $|\cdot|$ is the elementwise absolute value, and $\circ$ denotes the elementwise (Hadamard) product.

According to the properties of the Frobenius norm, the remaining sub-problem is solved by taking the derivative, and the finally recovered tensor to be completed $\mathcal{X}$ is computed.

Each variable is updated in every iteration; iteration stops when the maximum number of iterations is reached, and the final completed tensor $\mathcal{X}$ is output. Alternatively, iteration stops when the following condition is met, and the final completed tensor $\mathcal{X}$ is output:

$$\max(\mathrm{Con}_1,\mathrm{Con}_2,\mathrm{Con}_3,\mathrm{Con}_4)<\epsilon$$
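A stopping rule of this shape can be sketched as follows; treating each Con value as the relative Frobenius-norm change of one tracked variable is our assumption, since the text does not define the Con quantities explicitly:

```python
import numpy as np

def converged(pairs, eps=1e-4):
    """Mirror of the stopping test max(Con1, ..., Con4) < eps: one Con value
    per tracked variable, here taken as the relative Frobenius-norm change
    between consecutive iterates. `pairs` is a list of (previous, current)."""
    cons = [np.linalg.norm(cur - prev) / max(np.linalg.norm(prev), 1.0)
            for prev, cur in pairs]
    return max(cons) < eps
```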
Compared with the prior art, the invention has the following significant advantages:
1. To characterize the tensor's low-rank property, the usual convex surrogate, the tensor nuclear norm, is not chosen; instead the transform tensor Schatten-p norm is selected, whose penalty on larger singular values is smaller than that of the tensor nuclear norm and which estimates the tensor rank more tightly than the tensor nuclear norm;
2. Existing low-rank tensor completion methods consider only the low-rank property of tensors and neglect their local smoothness along the spatial and tube dimensions. Total variation is a tool well suited to studying the local smoothness of tensors; the invention combines total variation regularization with the transform tensor Schatten-p norm and proposes a low-rank tensor completion model, so that the restored tensor further preserves local smoothness along the spatial and tube dimensions while protecting the low-rank property;
3. The proposed low-rank tensor completion model is optimized and solved with the alternating direction method of multipliers; compared with traditional classical methods, the tensor completion quality obtained by the method is further improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 (a) is an original color image employed in an embodiment of the present invention;
FIG. 2 (b) is an observation image of a color image with a sampling rate of 15% in an embodiment of the present invention;
FIG. 2 (c) is a result diagram of repairing the color image with a sampling rate of 15% in the embodiment of the present invention when the transform Φ is the discrete Fourier transform;
FIG. 2 (d) is a result diagram of repairing the color image with a sampling rate of 15% in the embodiment of the present invention when the transform Φ is the discrete cosine transform;
FIG. 2 (e) is a result diagram of repairing the color image with a sampling rate of 15% in the embodiment of the present invention when the transform Φ is a data-based transform;
FIG. 2 (f) is a result diagram of repairing the color image with a sampling rate of 15% in the embodiment of the present invention when the transform Φ is the discrete Fourier transform;
FIG. 2 (g) is a result diagram of repairing the color image with a sampling rate of 15% in the embodiment of the present invention when the transform Φ is the discrete cosine transform;
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
To overcome the shortcomings of the convex surrogate tensor nuclear norm, the invention selects the transform tensor Schatten-p norm to better study the low-rank property of tensors.
Advantages of the transform tensor Schatten-p norm: its penalty on larger singular values is smaller than that of the tensor nuclear norm, and it estimates the tensor rank more tightly than the tensor nuclear norm.
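The smaller penalty on large singular values is easy to see numerically. The following sketch (our own illustration, not taken from the patent) compares the nuclear-norm penalty, i.e. p = 1, with a Schatten-p penalty at p = 0.5 on the same singular values:

```python
import numpy as np

def schatten_p(sigmas, p):
    """Schatten-p penalty: the sum of singular values raised to the power p
    (p = 1 recovers the nuclear norm)."""
    return float(np.sum(np.asarray(sigmas, dtype=float) ** p))

sig = [10.0, 1.0, 0.1]
nuclear = schatten_p(sig, 1.0)   # nuclear-norm penalty: dominated by sigma = 10
sp_half = schatten_p(sig, 0.5)   # non-convex Schatten-1/2 penalty: much flatter
```

For p < 1 the marginal penalty p·σ^(p−1) decreases as σ grows, so the dominant singular values, which carry the main image information, are penalized relatively less than under the nuclear norm.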
Meanwhile, total variation regularization along the spatial and tube dimensions of the tensor is combined with the transform tensor Schatten-p norm, so that the local smoothness of the tensor along the spatial and tube dimensions is further protected while the global low-rank property is satisfied, further improving the effectiveness and performance of low-rank tensor completion.
Total variation protects the local smoothness of tensors: the differences of adjacent elements are computed along the spatial and tube dimensions of the tensor and then summed. Combining total variation regularization with the transform tensor Schatten-p norm yields a new mathematical model; the task is finally converted into the minimization of this model, which is optimized within the framework of the alternating direction method of multipliers to obtain the completed tensor.
The invention provides a low-rank tensor completion method based on total variation regularization, which is shown in a flow chart in fig. 1 and specifically comprises the following steps:
step 1, describing low rank property of tensors by utilizing a transformation tensor Schatten-p norm;
In order to further protect the local smoothness of tensors along the spatial and tube dimensions on the basis of the low-rank property, total variation regularization is combined with the transform tensor Schatten-p norm to construct a new low-rank tensor completion model. The model is constructed as follows:

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_{\Phi,S_p}^p+\alpha\|\mathcal{X}\|_{\mathrm{TV}}\quad\text{s.t.}\ \mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})\qquad(1)$$

where $\mathcal{X}$ denotes the tensor to be completed and $\mathcal{O}$ the observed tensor, both of size $n_1\times n_2\times n_3$ (the tensor to be completed is the final completed tensor obtained by solving; the observed tensor contains the missing values and is the input to the solving method from the beginning); $\Omega$ denotes the observation set, and $\mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})$ states that the element values of $\mathcal{X}$ and $\mathcal{O}$ are equal on the observation set; $\|\mathcal{X}\|_{\Phi,S_p}^p$ is the transform tensor Schatten-p norm, characterizing the low-rank property of $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term, protecting the local smoothness of $\mathcal{X}$ along the spatial and tube dimensions; $\alpha$ is a regularization parameter.

According to the definition of the transform tensor Schatten-p norm, the transform tensor Schatten-p norm of the tensor to be completed $\mathcal{X}$ can be decomposed into the weighted sum of the transform tensor Schatten-p norms of several sub-tensors; at the same time, according to the definition of total variation, the differences of adjacent elements of $\mathcal{X}$ along its three dimensions are computed and summed, converting equation (1) into equation (2):

$$\min_{\mathcal{X}}\ \sum_{i=1}^{n}p_i\|\mathcal{U}_i\|_{\Phi,S_p}^p+\alpha\|F(\mathcal{X})\|_1\quad\text{s.t.}\ \mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})\qquad(2)$$

where $F$ denotes the overall difference operator between adjacent elements, decomposable into difference operators $F_h$, $F_v$ and $F_t$ along three different dimensions (i.e. $F=[F_h,F_v,F_t]$); the tensor $\mathcal{X}$ is decomposed by the $\Phi$-product into $n$ sub-tensors $\mathcal{U}_i$, $i=1,2,\dots,n$; $0<p<1$, $p_i>0$ and $\sum_i p_i=1$.
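As a concrete, illustrative instance: when Φ is taken as the unitary discrete Fourier transform along the third mode, a transform tensor Schatten-p value can be evaluated slice by slice in the transform domain. The numpy sketch below is our reading of the definition, with the frontal slices weighted equally for simplicity:

```python
import numpy as np

def transform_tensor_schatten_p(X, p=0.5):
    """Transform tensor Schatten-p value (to the p-th power), sketched with
    Phi = unitary DFT along mode 3: apply the transform tube-wise, then sum
    the Schatten-p penalties of the frontal slices of the transformed tensor."""
    Xhat = np.fft.fft(X, axis=2, norm="ortho")   # Phi applied along the tube dimension
    total = 0.0
    for t in range(X.shape[2]):
        s = np.linalg.svd(Xhat[:, :, t], compute_uv=False)
        total += np.sum(s ** p)
    return float(total)
```

Because singular values scale linearly with the tensor, the value scales as 2^p when the tensor is doubled, which is one quick sanity check on such an implementation.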
Step 2: to obtain the optimized solution of the model, three auxiliary variables are introduced, the augmented Lagrangian function of the proposed low-rank tensor completion model is constructed within the framework of the alternating direction method of multipliers (ADMM), the sub-problems are solved separately, and iteration continues until the convergence condition is reached, finally yielding the completed tensor. The implementation is as follows:

Because the variables of the model in equation (2) are interdependent, three auxiliary variable tensors $\mathcal{Z}_h$, $\mathcal{Z}_v$ and $\mathcal{Z}_t$ need to be introduced, one per difference operator, converting equation (2) into the constrained form of equation (3) with $\mathcal{Z}_d=F_d(\mathcal{X})$, $d\in\{h,v,t\}$.

Equation (3) is optimized within the ADMM framework. Its augmented Lagrangian function is first constructed, in the form

$$\mathcal{L}=\sum_{i=1}^{n}p_i\|\mathcal{U}_i\|_{\Phi,S_p}^p+\sum_{d\in\{h,v,t\}}\Big(\alpha\|\mathcal{Z}_d\|_1+\langle\Lambda_d,F_d(\mathcal{X})-\mathcal{Z}_d\rangle+\frac{\mu}{2}\|F_d(\mathcal{X})-\mathcal{Z}_d\|_F^2\Big)$$

where the $\Lambda_d$ are Lagrange multiplier tensors and $\mu>0$ is a penalty parameter.

Based on the definition of the $\Phi$-product and the unitary invariance of the Frobenius norm, equation (5) can be transformed into equation (6).

From equation (7), each matrix $\bar{U}_i$ can be solved based on the matrix derivative; the solved matrices are combined along the tube dimension and unfolded along the third mode into a matrix $U$, which is multiplied on the left by the transpose of the transformation matrix $\Phi$ (the transformation matrix $\Phi$ has size $n_3\times n_3$); the inverse of the third-mode unfolding is then applied to the product $\Phi^{\mathsf T}U$ to obtain the updated tensor.
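The unfold-transform-fold step can be sketched with numpy. The helper names and the column-major (MATLAB-style) mode-3 unfolding convention below are our assumptions:

```python
import numpy as np

def unfold3(X):
    """Mode-3 unfolding: an n1 x n2 x n3 tensor becomes an n3 x (n1*n2)
    matrix whose rows are the vectorized (column-major) frontal slices."""
    n1, n2, n3 = X.shape
    return X.reshape(n1 * n2, n3, order="F").T

def fold3(U, shape):
    """Inverse of the mode-3 unfolding."""
    n1, n2, n3 = shape
    return U.T.reshape(n1, n2, n3, order="F")

def transform_fold(U, Phi, shape):
    """Left-multiply the mode-3 unfolding by Phi^T, then fold back."""
    return fold3(Phi.T @ U, shape)
```

With an orthogonal Φ, left-multiplying by Φᵀ exactly inverts a previous mode-3 transform by Φ, so the round trip tensor → transform → inverse transform recovers the original tensor.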
equation (8) can be transformed into equation (9) according to the definition of the transform tensor Schatten-p norm:
Solving the matrix obtained according to the formula (10)Combining tensors along the tube dimension>And->Expanded into matrix according to the third mode>Then in matrix->To the left, the transpose of the transform matrix Φ (the size of the transform matrix Φ is n 3 ×n 3 ) And then (2) to->The inverse operation of the third mode expansion is used to obtain +.>
Where sign (·) is a sign function, |·| is the absolute value of the element, ° represents the product between the element and the element.
According to the properties of the Frobenius norm, equation (13) can be solved by taking the derivative; equation (14) is the derived form, in which $F^{*}$ denotes the adjoint operator of $F$. The resulting linear system can be solved with FFT3 (the 3-D Fourier transform) and iFFT3 (the 3-D inverse Fourier transform), since $F^{*}F$ is diagonalized in the 3-D Fourier domain.
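When the difference operators use periodic boundaries, $F^{*}F$ is diagonal in the 3-D Fourier domain, so a system of the form $(\rho I+\mu F^{*}F)\mathcal{X}=\mathcal{B}$ is solved elementwise after a forward FFT3. The sketch below rests on that periodic-boundary assumption, and the parameter names `mu` and `rho` are of our choosing:

```python
import numpy as np

def solve_normal_eq(B, mu, rho):
    """Solve (rho*I + mu*F*F) X = B for a 3-D array B, where F stacks
    circular first differences along the three dimensions; F*F is diagonal
    in the 3-D Fourier domain, so fftn/ifftn (FFT3/iFFT3) suffice."""
    shape = B.shape
    eig = np.zeros(shape)
    for axis in range(3):
        # eigenvalues of D^T D for a circular difference along `axis`:
        # |1 - exp(-2*pi*i*k/n)|^2 = 2 - 2*cos(2*pi*k/n)
        n = shape[axis]
        lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)
        eig += lam.reshape([-1 if a == axis else 1 for a in range(3)])
    Xhat = np.fft.fftn(B) / (rho + mu * eig)
    return np.real(np.fft.ifftn(Xhat))
```

Applying the operator in the spatial domain (each $D^{\mathsf T}D$ acts as $2x_i-x_{i-1}-x_{i+1}$ circularly) recovers the right-hand side, which verifies the Fourier-domain solve.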
The fifth sub-problem, updating the tensor to be completed $\mathcal{X}$, can be expressed as equation (16). According to the properties of the Frobenius norm, equation (16) is first differentiated, and the finally recovered tensor to be completed $\mathcal{X}$ is then computed according to equation (17): the observed values are kept on $\Omega$ and the estimated values are taken on $\Omega^{\perp}$, where $\Omega^{\perp}$ denotes the complement of the set $\Omega$.
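The final assembly on Ω and its complement amounts to a masked selection; a one-line numpy sketch (the function name is ours):

```python
import numpy as np

def assemble(X_est, O, mask):
    """Keep the observed entries of O on Omega (mask True) and the
    estimated entries of X_est on the complement of Omega."""
    return np.where(mask, O, X_est)
```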
The Lagrange multipliers are updated according to equations (18), (19), (20) and (21).

Each variable is updated in every iteration; iteration stops when the maximum number of iterations is reached or the convergence condition of equation (22) is satisfied, and the final completed tensor $\mathcal{X}$ is output:

$$\max(\mathrm{Con}_1,\mathrm{Con}_2,\mathrm{Con}_3,\mathrm{Con}_4)<\epsilon\qquad(22)$$
A classical color image is selected below to illustrate the effect of the low-rank tensor completion method based on total variation regularization on color images containing missing values.

The experimental data come from a classical color image of size 300×300×3, which can be regarded as a third-order tensor. In the experiment, pixel values at random positions on each channel of the color image are set to 0 according to the sampling rate, forming a tensor containing missing values. The experimental task is to complete the tensor containing missing values from the observable pixel values at a sampling rate of 15%.

FIG. 2 (a) shows the original color image (processed for grayscale display), FIG. 2 (b) shows the color image containing missing values at a sampling rate of 15% (likewise processed), and FIGS. 2 (c) to 2 (h) show the recovery results of the method of the present invention at the 15% sampling rate. Before recovery, the color image containing missing values has a peak signal-to-noise ratio of 3.8947 dB and a structural similarity of 0.0161. After recovery, the results are:

- FIG. 2 (c), transform Φ the discrete Fourier transform: PSNR 27.2272 dB, SSIM 0.7925;
- FIG. 2 (d), transform Φ the discrete cosine transform: PSNR 27.313 dB, SSIM 0.8143;
- FIG. 2 (e), transform Φ a data-based transform: PSNR 27.7915 dB, SSIM 0.8222;
- FIG. 2 (f), transform Φ the discrete Fourier transform: PSNR 27.2433 dB, SSIM 0.7932;
- FIG. 2 (g), transform Φ the discrete cosine transform: PSNR 27.2947 dB, SSIM 0.8136;
- FIG. 2 (h), transform Φ a data-based transform: PSNR 27.7913 dB, SSIM 0.8227.

Meanwhile, in terms of visual restoration quality, the method restores the overall structure, main information and part of the details of the color image; the comparison before and after restoration verifies the effectiveness of the method.
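The peak signal-to-noise ratios quoted above are standard 8-bit PSNR values; for reference, this is how such a value might be computed (a generic sketch, not the patent's evaluation code):

```python
import numpy as np

def psnr(X, Xhat, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image/tensor X
    and its reconstruction Xhat, for a given peak intensity."""
    mse = np.mean((np.asarray(X, dtype=float) - np.asarray(Xhat, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```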
In summary, the method of the invention has better recovery effect on the color image with the missing value and the low sampling rate.
Claims (8)
1. The low-rank tensor completion method based on total variation regularization is characterized by comprising the following steps of:
S1, characterizing the low-rank property of a tensor with the transform tensor Schatten-p norm, combining total variation regularization with the transform tensor Schatten-p norm, and constructing a low-rank tensor completion model;
S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems separately, and iterating until the convergence condition is reached, finally obtaining the completed tensor.
2. The low-rank tensor completion method based on total variation regularization according to claim 1, wherein in step S1, combining total variation regularization with the transform tensor Schatten-p norm, the constructed low-rank tensor completion model is expressed as:

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_{\Phi,S_p}^p+\alpha\|\mathcal{X}\|_{\mathrm{TV}}\quad\text{s.t.}\ \mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})$$

where $\mathcal{X}$ denotes the tensor to be completed and $\mathcal{O}$ the observed tensor, both of size $n_1\times n_2\times n_3$; $\Omega$ denotes the observation set, and $\mathcal{P}_\Omega(\mathcal{X})=\mathcal{P}_\Omega(\mathcal{O})$ states that the element values of $\mathcal{X}$ and $\mathcal{O}$ are equal on the observation set; $\|\mathcal{X}\|_{\Phi,S_p}^p$ is the transform tensor Schatten-p norm, characterizing the low-rank property of $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term; $\alpha$ is a regularization parameter;

at the same time, according to the definition of total variation, the differences of adjacent elements of the tensor to be completed $\mathcal{X}$ along its three dimensions are computed and then summed:

$$\|\mathcal{X}\|_{\mathrm{TV}}=\|F_h(\mathcal{X})\|_1+\|F_v(\mathcal{X})\|_1+\|F_t(\mathcal{X})\|_1$$
3. The low-rank tensor completion method based on total variation regularization according to claim 2, wherein in step S2, the augmented Lagrangian function of the low-rank tensor completion model is constructed as follows:

three auxiliary variable tensors $\mathcal{Z}_h$, $\mathcal{Z}_v$ and $\mathcal{Z}_t$ are introduced, so that the formula summing the adjacent-element differences of the tensor to be completed $\mathcal{X}$ along its three dimensions, with sub-tensors indexed $i=1,2,\dots,n$, is converted into constraints of the form $\mathcal{Z}_d=F_d(\mathcal{X})$, $d\in\{h,v,t\}$;

the augmented Lagrangian function then takes the form

$$\mathcal{L}=\|\mathcal{X}\|_{\Phi,S_p}^p+\sum_{d\in\{h,v,t\}}\Big(\alpha\|\mathcal{Z}_d\|_1+\langle\Lambda_d,F_d(\mathcal{X})-\mathcal{Z}_d\rangle+\frac{\mu}{2}\|F_d(\mathcal{X})-\mathcal{Z}_d\|_F^2\Big)$$

where the $\Lambda_d$ are Lagrange multiplier tensors and $\mu>0$ is a penalty parameter.
4. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the first updated tensor is further transformed so that each matrix $\bar{U}_i$ is solved based on the matrix derivative; the solved matrices are combined along the tube dimension and unfolded along the third mode into a matrix $U$; $U$ is multiplied on the left by the transpose of the transformation matrix $\Phi$, whose size is $n_3\times n_3$; the inverse of the third-mode unfolding is then applied to the product $\Phi^{\mathsf T}U$ to obtain the updated tensor.
5. The low-rank tensor completion method based on total variation regularization of claim 3, wherein the sub-problem of the second updated tensor is solved according to the definition of the transform tensor Schatten-p norm, wherein $\mathrm{prox}_{\lambda f}(\cdot)$ is the proximal operator, $k$ denotes the $k$-th iteration and $k+1$ the $(k+1)$-th iteration; the solved matrices are combined along the tube dimension, $t=1,\dots,n_3$, unfolded along the third mode into a matrix, multiplied on the left by the transpose of the transformation matrix $\Phi$, whose size is $n_3\times n_3$, and folded back by the inverse of the third-mode unfolding to obtain the updated tensor.
6. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the third updated tensor is solved elementwise by soft-thresholding, wherein $\mathrm{sign}(\cdot)$ is the sign function, $|\cdot|$ is the elementwise absolute value, and $\circ$ denotes the elementwise (Hadamard) product; $k$ denotes the $k$-th iteration and $k+1$ the $(k+1)$-th iteration.
7. The low-rank tensor completion method based on total variation regularization of claim 3, wherein the sub-problem of the fourth updated tensor is solved with FFT3 (the 3-D Fourier transform) and iFFT3 (the 3-D inverse Fourier transform).
8. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the tensor to be completed $\mathcal{X}$ is solved, according to the properties of the Frobenius norm, by taking the derivative, and the finally recovered tensor to be completed $\mathcal{X}$ is then computed;

each variable is updated in every iteration; iteration stops when the maximum number of iterations is reached, and the final completed tensor $\mathcal{X}$ is output; or iteration stops when the following condition is met, and the final completed tensor $\mathcal{X}$ is output:

$$\max(\mathrm{Con}_1,\mathrm{Con}_2,\mathrm{Con}_3,\mathrm{Con}_4)<\epsilon$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310190187.5A CN116245761A (en) | 2023-03-02 | 2023-03-02 | Low-rank tensor completion method based on total variation regularization |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116245761A true CN116245761A (en) | 2023-06-09 |
Family
ID=86629299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310190187.5A Pending CN116245761A (en) | 2023-03-02 | 2023-03-02 | Low-rank tensor completion method based on total variation regularization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116245761A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117271988A (en) * | 2023-11-23 | 2023-12-22 | 广东工业大学 | Tensor wheel-based high-dimensional signal recovery method and device |
CN117271988B (en) * | 2023-11-23 | 2024-02-09 | 广东工业大学 | Tensor wheel-based high-dimensional signal recovery method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |