CN116245761A - Low-rank tensor completion method based on total variation regularization - Google Patents


Info

Publication number: CN116245761A
Application number: CN202310190187.5A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: tensor, low rank, total variation, matrix
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 刘佳慧 (Liu Jiahui), 朱玉莲 (Zhu Yulian)
Original and current assignee: Nanjing University of Aeronautics and Astronautics (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Nanjing University of Aeronautics and Astronautics

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06T — Image Data Processing or Generation, in General
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/70 — Denoising; Smoothing
    • G06T 5/77 — Retouching; Inpainting; Scratch removal
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10024 — Color image
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a low-rank tensor completion method based on total variation regularization, which comprises the following steps: S1, characterizing the low-rank property of tensors with the transform tensor Schatten-p norm, and combining total variation regularization with the transform tensor Schatten-p norm to construct a low-rank tensor completion model; S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems one by one, and iterating until the convergence condition is reached, finally obtaining the completed tensor. The invention can complete image data with a large number of missing values and obtain accurate completion results.

Description

Low-rank tensor completion method based on total variation regularization
Technical Field
The invention relates to the fields of computer vision and image processing, and in particular to a low-rank tensor completion method based on total variation regularization.
Background
The rapid development of computer technology has produced more and more multidimensional data; multidimensional data such as color images are now often stored as tensors. Tensors are widely used in computer vision and image processing by virtue of their multidimensional structure and properties. In the real world, however, tensor data contain missing values for various reasons, so completing the missing entries of a tensor in order to restore an image is a problem that needs to be solved.
Low-rank tensor completion is one of the most commonly used methods for filling in missing image values. It exploits the fact that the underlying tensor is low-rank and builds a model that minimizes the tensor rank function. Minimizing the tensor rank function is NP-hard, however, so researchers replace it with either convex or non-convex surrogate functions. The classical convex surrogate, the tensor nuclear norm, over-penalizes the larger singular values that carry the main information of the image, and it is not the tightest envelope of the tensor rank function, so the recovery quality of the classical low-rank tensor completion model is not optimal; many current low-rank tensor completion studies are therefore based on variants of the tensor nuclear norm, such as the transform tube nuclear norm and nonlinear-transform-based tensor nuclear norms.
However, the existing low-rank tensor completion methods share a defect: they rely only on the global low-rank property of tensors and neglect the local smoothness of tensors along the spatial and tube dimensions, so local smooth information of the restored image is lost and the final restoration quality needs further improvement.
Disclosure of Invention
The invention aims to provide a low-rank tensor completion method based on total variation regularization which, while satisfying the global low-rank property, preserves the local smoothness of tensors along the spatial and tube dimensions and improves the effectiveness and performance of low-rank tensor completion.
The technical scheme is as follows. The low-rank tensor completion method of the invention comprises the following steps:
S1, characterizing the low-rank property of tensors with the transform tensor Schatten-p norm, and combining total variation regularization with the transform tensor Schatten-p norm to construct a low-rank tensor completion model;
S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems one by one, and iterating until the convergence condition is reached, finally obtaining the completed tensor.
Further, in step S1, combining total variation regularization with the transform tensor Schatten-p norm, the constructed low-rank tensor completion model expression is (the source renders all formulas as images; standard notation is used here):

$$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{S_p,\Phi}^{p} + \alpha\,\|\mathcal{X}\|_{\mathrm{TV}} \quad \text{s.t.}\ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M})$$

where $\mathcal{X}$ denotes the tensor to be completed and $\mathcal{M}$ the observed tensor, both of size $n_1 \times n_2 \times n_3$; $\Omega$ denotes the observation set, and the constraint requires the element values of $\mathcal{X}$ and $\mathcal{M}$ to be equal on the observation set; $\|\mathcal{X}\|_{S_p,\Phi}^{p}$ is the transform tensor Schatten-p norm, characterizing the low-rank property of $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term; $\alpha$ is a regularization parameter.

At the same time, according to the definition of total variation, the differences of adjacent elements of $\mathcal{X}$ along each of the three dimensions are computed and then summed:

$$\|\mathcal{X}\|_{\mathrm{TV}} = \|F_h(\mathcal{X})\|_1 + \|F_v(\mathcal{X})\|_1 + \|F_t(\mathcal{X})\|_1$$

where $F$ denotes the overall adjacent-element difference operator, decomposed into the difference operators $F_h$, $F_v$ and $F_t$ along the three dimensions ($F = [F_h, F_v, F_t]$); via the $\Phi$-product, $\mathcal{X}$ is decomposed into $n$ sub-tensors $\mathcal{U}_i$; $0 < p < 1$, $p_i > 0$, and the weights $p_i$ satisfy the normalization condition given in the source (equation image).
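As a concrete illustration of the total variation term above, the sketch below evaluates $\|F_h(\mathcal{X})\|_1 + \|F_v(\mathcal{X})\|_1 + \|F_t(\mathcal{X})\|_1$ for a third-order array. Plain forward differences without wrap-around are an assumption; the patent's boundary convention is only given in equation images.

```python
import numpy as np

def anisotropic_tv(x: np.ndarray) -> float:
    """Sum of absolute adjacent-element differences along all three dimensions.

    Forward differences without wrap-around are assumed here; the patent
    leaves the boundary convention to the images of its equations.
    """
    return sum(np.abs(np.diff(x, axis=d)).sum() for d in range(3))

x = np.zeros((2, 2, 2))
x[1, :, :] = 1.0  # one jump of height 1 across dimension 0 at each of 4 positions
print(anisotropic_tv(x))  # 4 differences of magnitude 1 along axis 0 -> 4.0
```

A constant tensor has total variation 0, so the regularizer indeed rewards local smoothness along all three dimensions.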
Further, in step S2, the augmented Lagrangian function of the low-rank tensor completion model is constructed as follows. Three auxiliary variable tensors $\mathcal{G}_h$, $\mathcal{G}_v$ and $\mathcal{G}_t$ are introduced, one per difference direction, converting the formula with the summed adjacent-element differences along the three dimensions into the constrained form

$$\min \ \sum_{i=1}^{n} p_i\,\|\mathcal{U}_i\|_{S_p}^{p} + \alpha\left(\|\mathcal{G}_h\|_1 + \|\mathcal{G}_v\|_1 + \|\mathcal{G}_t\|_1\right) \quad \text{s.t.}\ \ \mathcal{G}_d = F_d(\mathcal{X})\ (d \in \{h, v, t\}),\ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M})$$

where $F = [F_h, F_v, F_t]$; $0 < p < 1$, $p_i > 0$, with the same weight condition on the $p_i$ as in step S1; $i = 1, 2, \dots, n$.

The augmented Lagrangian function then takes the form given in equation (4) of the source (image), with Lagrange multipliers $\mathcal{Y}_1, \dots, \mathcal{Y}_{n+3}$, $i = 1, 2, \dots, n$, and penalty parameter $\mu$.
Further, the first tensor-update sub-problem can be expressed as in the source (equation image), where $k$ denotes the $k$-th iteration. The problem further reduces to one with the frontal slices of the transformed tensor as variables; each matrix slice is then solved in closed form via the matrix derivative. The solved matrices are combined along the tube dimension into a tensor, the tensor is unfolded along the third mode into a matrix $U$, $U$ is left-multiplied by the transpose of the transform matrix $\Phi$ (of size $n_3 \times n_3$), and applying the inverse of the third-mode unfolding to $\Phi^{\mathsf T}U$ yields the updated tensor.
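The unfold–transform–fold procedure described above can be sketched as follows. The helper names `unfold3`/`fold3` are my own, and an orthogonal $\Phi$ is assumed for the round-trip; the patent specifies only that $\Phi$ has size $n_3 \times n_3$ and is applied via the third-mode unfolding.

```python
import numpy as np

def unfold3(x: np.ndarray) -> np.ndarray:
    """Third-mode unfolding: an n1 x n2 x n3 tensor becomes an n3 x (n1*n2) matrix."""
    return x.reshape(-1, x.shape[2]).T

def fold3(u: np.ndarray, shape) -> np.ndarray:
    """Inverse of the third-mode unfolding."""
    return u.T.reshape(shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 3))
phi, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # an orthogonal transform matrix

u = phi @ unfold3(x)                 # apply the transform along the tube dimension
x_back = fold3(phi.T @ u, x.shape)   # left-multiply by the transpose, fold back
print(np.allclose(x_back, x))        # True: orthogonal transforms round-trip exactly
```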
Further, the second tensor-update sub-problem is expressed as in the source (equation image). According to the definition of the transform tensor Schatten-p norm, it reduces to slice-wise problems solved with the proximal operator $\operatorname{prox}_{\lambda f}(\cdot)$, defined by $\operatorname{prox}_{\lambda f}(y) = \arg\min_{x}\ f(x) + \tfrac{1}{2\lambda}\|x - y\|^{2}$ (the standard definition; the source gives it only as an image). The matrices obtained from the proximal step are combined along the tube dimension into a tensor, the tensor is unfolded along the third mode into a matrix, the matrix is left-multiplied by the transpose of the transform matrix $\Phi$ (of size $n_3 \times n_3$), and applying the inverse of the third-mode unfolding yields the updated tensor.
Further, the third tensor-update sub-problem is expressed as in the source (equation image). Letting $\mathcal{T}$ denote the combined input tensor defined in the source, the solution is the element-wise soft-thresholding

$$\mathcal{G} = \operatorname{sign}(\mathcal{T}) \circ \max\!\left(|\mathcal{T}| - \tfrac{\alpha}{\mu},\ 0\right)$$

where $\operatorname{sign}(\cdot)$ is the sign function, $|\cdot|$ the element-wise absolute value, and $\circ$ the element-wise (Hadamard) product; the threshold $\alpha/\mu$ is the standard $\ell_1$ proximal step (reconstructed, since the source formula is an image).
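The element-wise shrinkage in this sub-problem is the standard $\ell_1$ proximal operator; a minimal sketch:

```python
import numpy as np

def soft_threshold(t: np.ndarray, tau: float) -> np.ndarray:
    """sign(t) * max(|t| - tau, 0): the proximal operator of tau * ||.||_1."""
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

t = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
print(soft_threshold(t, 1.0))  # magnitudes shrink by 1, entries below 1 vanish
```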
Further, the fourth tensor-update sub-problem is expressed as in the source (equation image). By the properties of the Frobenius norm, it is solved by setting the derivative to zero, which yields a linear system involving $F^{*}F$, where $F^{*}$ denotes the adjoint operator of $F$. The system is solved by FFT3 (3D Fourier transform) and iFFT3 (3D inverse Fourier transform); the source's closed form (equation image) involves $\mathbf{1}$, the tensor whose elements are all 1.
Further, the sub-problem for the tensor to be completed $\mathcal{X}$ is expressed as in the source (equation image). By the properties of the Frobenius norm, it is solved by derivation, and the finally recovered tensor $\mathcal{X}$ is computed: entries off the observation set come from the derived closed form, while entries on $\Omega$ keep the observed values.

The Lagrange multipliers $\mathcal{Y}_1, \dots, \mathcal{Y}_{n+3}$ are updated by standard ascent steps of the form $\mathcal{Y}^{k+1} = \mathcal{Y}^{k} + \mu\,(\text{residual of the corresponding constraint at iteration } k+1)$ (the exact residuals are given as images in the source).

Each variable is updated in every iteration; the iteration stops when the maximum iteration number is reached, or when the convergence condition

$$\max(\mathrm{Con}_1, \mathrm{Con}_2, \mathrm{Con}_3, \mathrm{Con}_4) < \epsilon$$

is met, where $\mathrm{Con}_1, \dots, \mathrm{Con}_4$ are the relative residuals defined in the source (equation images) and $\epsilon$ is a set threshold; the output is the final completed tensor $\mathcal{X}$.
Compared with the prior art, the invention has the following notable advantages:
1. To characterize the tensor low-rank property, the usual convex surrogate, the tensor nuclear norm, is not used; instead the transform tensor Schatten-p norm is chosen, which penalizes larger singular values less than the tensor nuclear norm does and estimates the tensor rank more tightly;
2. Existing low-rank tensor completion methods consider only the low-rank property of tensors and neglect their local smoothness along the spatial and tube dimensions; total variation is a suitable tool for studying local smoothness, and the invention combines total variation regularization with the transform tensor Schatten-p norm in a low-rank tensor completion model, so that the restored tensor preserves local smoothness along the spatial and tube dimensions in addition to the low-rank property;
3. The proposed low-rank tensor completion model is optimized and solved with the alternating direction method of multipliers, and the resulting completion quality improves on classical methods.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2(a) is the original color image employed in the embodiment of the invention;
FIG. 2(b) is the observed image of the color image at a sampling rate of 15%;
FIGS. 2(c)–2(e) are restoration results for the color image at a sampling rate of 15% under the first parameter setting (shown only as an image in the source), when the transform $\Phi$ is the discrete Fourier transform, the discrete cosine transform, and a data-based transform, respectively;
FIGS. 2(f)–2(h) are the corresponding restoration results at a sampling rate of 15% under the second parameter setting (likewise shown only as an image in the source).
Detailed Description
The invention is described in further detail below with reference to the drawings.
To overcome the shortcomings of the convex surrogate tensor nuclear norm, the invention adopts the transform tensor Schatten-p norm to better capture the low-rank property of tensors. Its advantages are that it penalizes larger singular values less than the tensor nuclear norm and estimates the tensor rank more tightly.
Meanwhile, total variation regularization along the spatial and tube dimensions of the tensor is combined with the transform tensor Schatten-p norm, so that local smoothness along the spatial and tube dimensions is further protected while the global low-rank property is satisfied, improving the effectiveness and performance of low-rank tensor completion.
Total variation protects the local smoothness of tensors: the differences of adjacent elements are computed along the spatial and tube dimensions and then summed. Combining total variation regularization with the transform tensor Schatten-p norm gives a new mathematical model, which is cast as a minimization problem and optimized within the framework of the alternating direction method of multipliers, finally yielding the completed tensor.
The invention provides a low-rank tensor completion method based on total variation regularization, whose flow chart is shown in FIG. 1; it specifically comprises the following steps.

Step 1: the low-rank property of the tensor is characterized by the transform tensor Schatten-p norm. To further protect local smoothness along the spatial and tube dimensions on top of the low-rank property, total variation regularization is combined with the transform tensor Schatten-p norm, constructing a new low-rank tensor completion model (the source renders all formulas as images; standard notation is used here):

$$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{S_p,\Phi}^{p} + \alpha\,\|\mathcal{X}\|_{\mathrm{TV}} \quad \text{s.t.}\ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M}) \tag{1}$$

where $\mathcal{X}$ denotes the tensor to be completed and $\mathcal{M}$ the observed tensor, both of size $n_1 \times n_2 \times n_3$ (the tensor to be completed is the final completed tensor obtained by solving; the observed tensor contains the missing values and is the input to the method); $\Omega$ denotes the observation set, and the constraint requires the element values of $\mathcal{X}$ and $\mathcal{M}$ to be equal on the observation set; $\|\mathcal{X}\|_{S_p,\Phi}^{p}$ is the transform tensor Schatten-p norm characterizing the low-rank property of $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term protecting the local smoothness of $\mathcal{X}$ along the spatial and tube dimensions; $\alpha$ is a regularization parameter.

By the definition of the transform tensor Schatten-p norm, the norm of $\mathcal{X}$ decomposes into a weighted sum of the transform tensor Schatten-p norms of several sub-tensors; at the same time, by the definition of total variation, the differences of adjacent elements of $\mathcal{X}$ along the three dimensions are computed and summed, converting equation (1) into equation (2):

$$\min \ \sum_{i=1}^{n} p_i\,\|\mathcal{U}_i\|_{S_p}^{p} + \alpha\left(\|F_h(\mathcal{X})\|_1 + \|F_v(\mathcal{X})\|_1 + \|F_t(\mathcal{X})\|_1\right) \quad \text{s.t.}\ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M}) \tag{2}$$

where $F$ denotes the overall adjacent-element difference operator, which can be decomposed into the difference operators $F_h$, $F_v$ and $F_t$ along the three dimensions (i.e. $F = [F_h, F_v, F_t]$); via the $\Phi$-product, $\mathcal{X}$ can be decomposed into $n$ sub-tensors $\mathcal{U}_i$, $i = 1, 2, \dots, n$; $0 < p < 1$, $p_i > 0$, and the weights $p_i$ satisfy the normalization condition given in the source (equation image).
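A sketch of evaluating a transform tensor Schatten-p norm: apply an invertible transform along the tube dimension and sum the $p$-th powers of the singular values of the transformed frontal slices. Using the DFT as $\Phi$ and unit weights is an illustrative assumption; the patent's transform and weights $p_i$ are given only as images.

```python
import numpy as np

def transform_schatten_p(x: np.ndarray, p: float) -> float:
    """Sum over transformed frontal slices of the p-th powers of singular values."""
    xf = np.fft.fft(x, axis=2)  # DFT along the tube (third) dimension as Phi
    total = 0.0
    for k in range(x.shape[2]):
        s = np.linalg.svd(xf[:, :, k], compute_uv=False)
        total += np.sum(s ** p)
    return total

rng = np.random.default_rng(1)
a, b = rng.standard_normal((6, 1)), rng.standard_normal((1, 7))
# a tensor constant along the tube dimension: only the zero-frequency slice survives
x = np.repeat((a @ b)[:, :, None], 3, axis=2)
print(transform_schatten_p(x, 0.5) > 0)  # True; every transformed slice has rank <= 1
```

For $p = 1$ this reduces to the sum of nuclear norms of the transformed slices, which is easy to check against the rank-1 structure above.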
Step 2: to solve the model, three auxiliary variables are introduced, the augmented Lagrangian function of the proposed low-rank tensor completion model is constructed within the framework of the Alternating Direction Method of Multipliers (ADMM), the sub-problems are solved one by one, and the iteration continues until the convergence condition is reached, finally giving the completed tensor. The implementation is as follows.

Because the variables of the model in equation (2) are interdependent, three auxiliary variable tensors $\mathcal{G}_h$, $\mathcal{G}_v$ and $\mathcal{G}_t$ are introduced, converting equation (2) into equation (3):

$$\min \ \sum_{i=1}^{n} p_i\,\|\mathcal{U}_i\|_{S_p}^{p} + \alpha\left(\|\mathcal{G}_h\|_1 + \|\mathcal{G}_v\|_1 + \|\mathcal{G}_t\|_1\right) \quad \text{s.t.}\ \ \mathcal{G}_d = F_d(\mathcal{X})\ (d \in \{h, v, t\}),\ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M}) \tag{3}$$

where $F = [F_h, F_v, F_t]$, $0 < p < 1$, $p_i > 0$ with the same weight condition as in step 1.

Equation (3) is optimized with ADMM; its augmented Lagrangian function is constructed first, of the form given in equation (4) of the source (image), where $\mathcal{Y}_1, \dots, \mathcal{Y}_{n+3}$ are the Lagrange multipliers and $\mu$ is the penalty parameter.
The first tensor update: its sub-problem can be expressed as equation (5) of the source (image), where $k$ denotes the $k$-th iteration and $k+1$ the $(k+1)$-th. Based on the definition of the $\Phi$-product and the unitary invariance of the Frobenius norm, equation (5) transforms into equation (6) (image). Taking the frontal slices of the transformed tensor as variables converts equation (6) into equation (7) (image). From equation (7), each matrix slice can be solved in closed form via the matrix derivative; the solved matrices are combined along the tube dimension into a tensor, the tensor is unfolded along the third mode into a matrix $U$, $U$ is left-multiplied by the transpose of the transform matrix $\Phi$ (the size of $\Phi$ is $n_3 \times n_3$), and applying the inverse of the third-mode unfolding to $\Phi^{\mathsf T}U$ yields the updated tensor.
The second tensor update: its sub-problem can be expressed as equation (8) of the source (image). According to the definition of the transform tensor Schatten-p norm, equation (8) transforms into equation (9) (image), which takes the frontal slices of the transformed tensor as the objects of study. Each slice is solved via equation (10) with the proximal operator $\operatorname{prox}_{\lambda f}(\cdot)$, defined by $\operatorname{prox}_{\lambda f}(y) = \arg\min_{x}\ f(x) + \tfrac{1}{2\lambda}\|x - y\|^{2}$ (the standard definition; the source gives it only as an image). The matrices obtained from equation (10) are combined along the tube dimension into a tensor, the tensor is unfolded along the third mode into a matrix, the matrix is left-multiplied by the transpose of the transform matrix $\Phi$ (of size $n_3 \times n_3$), and applying the inverse of the third-mode unfolding yields the updated tensor.
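One published scheme for the scalar $\ell_p$ proximal problem with $0 < p < 1$ is generalized soft-thresholding (GST); applying it to singular values gives a Schatten-p proximal step. The sketch below shows that standard construction, not the exact operator defined in the source's equation images.

```python
import numpy as np

def gst(y: np.ndarray, lam: float, p: float, iters: int = 20) -> np.ndarray:
    """Generalized soft-thresholding: argmin_x lam*|x|^p + 0.5*(x - y)^2, 0 < p < 1."""
    # hard threshold below which the minimizer is exactly zero
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    x = np.zeros_like(y, dtype=float)
    big = np.abs(y) > tau
    z = np.abs(y[big])
    for _ in range(iters):  # fixed-point iteration on the stationarity condition
        z = np.abs(y[big]) - lam * p * z ** (p - 1)
    x[big] = np.sign(y[big]) * z
    return x

def schatten_p_prox(m: np.ndarray, lam: float, p: float) -> np.ndarray:
    """Shrink the singular values of m with GST: a Schatten-p proximal step."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(gst(s, lam, p)) @ vt

y = np.array([0.05, 0.5, 2.0])
out = gst(y, lam=0.2, p=0.5)
print(out[0] == 0.0, out[2] < 2.0)  # True True: small values zeroed, large ones shrink
```

Unlike the $\ell_1$ soft threshold, GST shrinks large singular values only slightly, which matches the patent's motivation for preferring the Schatten-p norm over the nuclear norm.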
The third tensor update: its sub-problem can be expressed as equation (11) of the source (image). Let $\mathcal{T}$ denote the combined input tensor defined in the source; the tensor is then solved element-wise according to equation (12):

$$\mathcal{G}^{k+1} = \operatorname{sign}(\mathcal{T}) \circ \max\!\left(|\mathcal{T}| - \tfrac{\alpha}{\mu},\ 0\right) \tag{12}$$

where $\operatorname{sign}(\cdot)$ is the sign function, $|\cdot|$ the element-wise absolute value, and $\circ$ the element-wise (Hadamard) product; the threshold $\alpha/\mu$ is the standard $\ell_1$ proximal step (reconstructed, since the source formula is an image).
The fourth tensor update: its sub-problem can be expressed as equation (13) of the source (image). By the properties of the Frobenius norm, equation (13) is solved by setting the derivative to zero; the derived form, equation (14) (image), is a linear system involving $F^{*}F$, where $F^{*}$ denotes the adjoint operator of $F$. The solution, equation (15) (image), is found via FFT3 (3D Fourier transform) and iFFT3 (3D inverse Fourier transform); its denominator involves $\mathbf{1}$, the tensor whose elements are all 1.
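Under periodic boundary conditions the difference operators are diagonalized by the 3D FFT, so a system of the form $(I + \beta F^{*}F)\,\mathcal{X} = \mathcal{B}$ can be solved entry-wise in the Fourier domain. The periodic assumption and the particular system below are illustrative, since equations (14)–(15) are given only as images.

```python
import numpy as np

def solve_fft(b: np.ndarray, beta: float) -> np.ndarray:
    """Solve (I + beta * F*F) x = b, F = periodic forward differences on 3 axes."""
    shape = b.shape
    denom = np.ones(shape, dtype=float)
    for d in range(3):
        freqs = np.arange(shape[d]) / shape[d]
        eig = np.abs(1 - np.exp(-2j * np.pi * freqs)) ** 2  # eigenvalues of F_d^* F_d
        expand = [1, 1, 1]; expand[d] = shape[d]
        denom = denom + beta * eig.reshape(expand)
    return np.real(np.fft.ifftn(np.fft.fftn(b) / denom))

def apply_op(x: np.ndarray, beta: float) -> np.ndarray:
    """Apply (I + beta * F*F) directly, for verification."""
    out = x.copy()
    for d in range(3):
        fx = np.roll(x, -1, axis=d) - x                 # periodic forward difference
        out = out + beta * (np.roll(fx, 1, axis=d) - fx)  # adjoint difference of fx
    return out

rng = np.random.default_rng(2)
b = rng.standard_normal((4, 5, 6))
x = solve_fft(b, beta=0.7)
print(np.allclose(apply_op(x, 0.7), b))  # True: the FFT solve inverts the operator
```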
The fifth tensor update, for the tensor to be completed $\mathcal{X}$: its sub-problem can be expressed as equation (16) of the source (image). By the properties of the Frobenius norm, equation (16) is first differentiated, and the finally recovered tensor is then computed according to equation (17) (image), where $\Omega^{\perp}$ denotes the complement of the set $\Omega$: entries on $\Omega^{\perp}$ are taken from the closed-form solution, while entries on $\Omega$ keep the observed values.
The Lagrange multipliers $\mathcal{Y}_1, \dots, \mathcal{Y}_{n+3}$ are updated according to equations (18)–(21), the standard ADMM ascent steps of the form $\mathcal{Y}^{k+1} = \mathcal{Y}^{k} + \mu\,(\text{residual of the corresponding constraint at iteration } k+1)$ (the exact residuals are given as images in the source).

Each variable is updated in every iteration; the iteration stops when the maximum iteration number is reached or when the convergence condition of equation (22) is met, and the final completed tensor $\mathcal{X}$ is output:

$$\max(\mathrm{Con}_1, \mathrm{Con}_2, \mathrm{Con}_3, \mathrm{Con}_4) < \epsilon \tag{22}$$

where $\mathrm{Con}_1, \dots, \mathrm{Con}_4$ are the relative residuals defined in the source (equation images) and $\epsilon$ is a set threshold.
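The overall iteration can be sketched end to end with a simplified model that keeps only the total variation term and drops the Schatten-p term: a G-step by soft-thresholding, an X-step by an FFT solve, re-imposition of the observed entries, and multiplier ascent. All names, the periodic differences, and the small ridge `eps` (added so the X-step system is invertible) are my implementation choices, not details from the source.

```python
import numpy as np

def soft_threshold(t, tau):
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

def diff(x, d):       # periodic forward difference along axis d
    return np.roll(x, -1, axis=d) - x

def diff_adj(y, d):   # its adjoint
    return np.roll(y, 1, axis=d) - y

def tv_inpaint(m, mask, mu=1.0, eps=1e-2, iters=100):
    """ADMM-style TV inpainting sketch in the spirit of the patent's pipeline."""
    shape = m.shape
    x = np.where(mask, m, 0.0)
    g = [np.zeros(shape) for _ in range(3)]
    y = [np.zeros(shape) for _ in range(3)]
    denom = np.full(shape, eps, dtype=float)              # eps*I + mu*F*F in Fourier domain
    for d in range(3):
        eig = np.abs(1 - np.exp(-2j * np.pi * np.arange(shape[d]) / shape[d])) ** 2
        expand = [1, 1, 1]; expand[d] = shape[d]
        denom = denom + mu * eig.reshape(expand)
    for _ in range(iters):
        for d in range(3):                                 # G-step: l1 proximal operator
            g[d] = soft_threshold(diff(x, d) + y[d] / mu, 1.0 / mu)
        rhs = eps * x + mu * sum(diff_adj(g[d] - y[d] / mu, d) for d in range(3))
        x = np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))  # X-step via FFT
        x = np.where(mask, m, x)                           # keep observed entries exact
        for d in range(3):                                 # multiplier ascent
            y[d] = y[d] + mu * (diff(x, d) - g[d])
    return x

rng = np.random.default_rng(3)
truth = np.linspace(0, 1, 6 * 6 * 3).reshape(6, 6, 3)  # a smooth ramp "image"
mask = rng.random(truth.shape) < 0.5
rec = tv_inpaint(truth, mask)
print(np.allclose(rec[mask], truth[mask]))  # True: observed entries are preserved
```

The full method would add the transform tensor Schatten-p prox and its multipliers as extra blocks inside the same loop.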
A classical color image is selected below to illustrate the effect of the low-rank tensor completion method based on total variation regularization on color images containing missing values.

The experimental data come from a classical color image of size 300 × 300 × 3, which can be regarded as a third-order tensor. In the experiment, pixel values at randomly chosen positions on each channel of the color image are set to 0 according to the sampling rate, forming the tensor $\mathcal{M}$ containing missing values; the experimental task is to complete the tensor from the observable pixel values at a sampling rate of 15%.

FIG. 2(a) shows the original color image (processed for grayscale display), FIG. 2(b) the color image containing missing values at a 15% sampling rate (likewise processed), and FIGS. 2(c)–2(h) the recovery results of the method at the 15% sampling rate. Before recovery, the image with missing values has a peak signal-to-noise ratio of 3.8947 dB and a structural similarity of 0.0161. After recovery, under the first parameter setting (FIGS. 2(c)–(e)): when the transform is the discrete Fourier transform, the peak signal-to-noise ratio is 27.2272 dB and the structural similarity 0.7925; with the discrete cosine transform, 27.313 dB and 0.8143; with the data-based transform, 27.7915 dB and 0.8222. Under the second parameter setting (FIGS. 2(f)–(h)): with the discrete Fourier transform, 27.2433 dB and 0.7932; with the discrete cosine transform, 27.2947 dB and 0.8136; with the data-based transform, 27.7913 dB and 0.8227. Meanwhile, in terms of visual restoration, the method recovers the overall structure, main information and part of the details of the color image; the before/after comparison verifies its effectiveness.
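The peak signal-to-noise ratios quoted above can be computed as follows; the peak value of 1.0 assumes images scaled to [0, 1] (8-bit images would use 255 instead).

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4, 3))
est = np.full((4, 4, 3), 0.1)    # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, est), 1))  # 20.0
```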
In summary, the method of the invention achieves a good recovery effect on color images with missing values at low sampling rates.

Claims (8)

1. A low-rank tensor completion method based on total variation regularization, characterized by comprising the following steps:
S1, characterizing the low-rank property of tensors with the transform tensor Schatten-p norm, and combining total variation regularization with the transform tensor Schatten-p norm to construct a low-rank tensor completion model;
S2, introducing auxiliary variables, constructing the augmented Lagrangian function of the low-rank tensor completion model within the framework of the alternating direction method of multipliers, solving the resulting sub-problems one by one, and iterating until the convergence condition is reached, finally obtaining the completed tensor.
2. The low-rank tensor completion method based on total variation regularization according to claim 1, wherein in step S1 the total variation regularization and the transform tensor Schatten-p norm are combined, and the constructed low-rank tensor completion model is

$$\min_{\mathcal{X}}\ \|\mathcal{X}\|_{S_p,\Phi}^{p}+\alpha\|\mathcal{X}\|_{\mathrm{TV}}\qquad \mathrm{s.t.}\ \ \mathcal{P}_{\Omega}(\mathcal{X})=\mathcal{P}_{\Omega}(\mathcal{M}),$$

wherein $\mathcal{X}$ represents the tensor to be completed and $\mathcal{M}$ represents the observed tensor, both of size $n_1\times n_2\times n_3$; $\Omega$ represents the observation set, and the constraint $\mathcal{P}_{\Omega}(\mathcal{X})=\mathcal{P}_{\Omega}(\mathcal{M})$ means that the element values of the tensor to be completed $\mathcal{X}$ and the observed tensor $\mathcal{M}$ are equal on the observation set; $\|\mathcal{X}\|_{S_p,\Phi}^{p}$ is the transform tensor Schatten-p norm, which characterizes the low rank of the tensor to be completed $\mathcal{X}$; $\|\mathcal{X}\|_{\mathrm{TV}}$ is the total variation regularization term; $\alpha$ is a regularization parameter.

Meanwhile, according to the definition of total variation, the differences between adjacent elements of the tensor to be completed $\mathcal{X}$ along its three dimensions are computed and then summed:

$$\|\mathcal{X}\|_{\mathrm{TV}}=\|F(\mathcal{X})\|_{1}=\|F_h(\mathcal{X})\|_{1}+\|F_v(\mathcal{X})\|_{1}+\|F_t(\mathcal{X})\|_{1},$$

wherein $F$ represents the overall adjacent-element difference operator, decomposed into the difference operators $F_h$, $F_v$ and $F_t$ along three different dimensions, $F=[F_h,F_v,F_t]$; the transform $\Phi$ decomposes the tensor to be completed $\mathcal{X}$ into $n$ sub-tensors $\mathcal{X}_1,\dots,\mathcal{X}_n$, with $0<p<1$ and $p_i>0$, $i=1,2,\dots,n$.
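As an illustration of the adjacent-element differences named in claim 2, the operators $F_h$, $F_v$, $F_t$ and the resulting total variation value can be sketched in numpy. Periodic (circular) boundaries and the anisotropic $\ell_1$ form are assumptions here, since the claim only names the operators:

```python
import numpy as np

def diff_ops(X):
    """Forward differences of a 3-way tensor along each of its three
    dimensions, with periodic boundary (each operator is then circulant).
    Returns the tuple (F_h(X), F_v(X), F_t(X))."""
    Fh = np.roll(X, -1, axis=0) - X
    Fv = np.roll(X, -1, axis=1) - X
    Ft = np.roll(X, -1, axis=2) - X
    return Fh, Fv, Ft

def tv_norm(X):
    """Anisotropic total variation: the l1 norm of all adjacent-element
    differences, summed over the three dimensions."""
    return sum(np.abs(D).sum() for D in diff_ops(X))
```

For a single nonzero element in an otherwise zero tensor, each axis contributes two unit jumps, so the value is six times the element's magnitude.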
3. The low-rank tensor completion method based on total variation regularization according to claim 2, wherein in step S2 the augmented Lagrangian function of the low-rank tensor completion model is constructed as follows:

three auxiliary variable tensors are introduced and substituted for the tensor to be completed $\mathcal{X}$ in the transform tensor Schatten-p norm term and in the summed adjacent-element differences along the three dimensions, so that the model is converted into an equivalent equality-constrained form, wherein $F=[F_h,F_v,F_t]$, $0<p<1$ and $p_i>0$, $i=1,2,\dots,n$;

the augmented Lagrangian function then attaches each equality constraint to the objective through a linear multiplier term and a quadratic penalty term, wherein the tensors $\mathcal{T}_1$, $\mathcal{T}_2$ and $\mathcal{T}_3$ are the Lagrangian multipliers and $\mu$ is the penalty parameter.
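The building block of the augmented Lagrangian described in claim 3 — a multiplier inner product plus a quadratic penalty per equality constraint — can be sketched generically. The function name and argument names are hypothetical; the exact constraints are not recoverable from the formula images:

```python
import numpy as np

def aug_lagrangian_term(residual, multiplier, mu):
    """One augmented-Lagrangian term for an equality constraint with
    residual r: <T, r> + (mu / 2) * ||r||_F^2, where T is the Lagrangian
    multiplier tensor and mu is the penalty parameter."""
    return np.sum(multiplier * residual) + 0.5 * mu * np.sum(residual ** 2)
```

The full function is the model objective plus one such term per introduced auxiliary variable.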
4. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the first updated tensor $\mathcal{Y}$ minimizes, over $\mathcal{Y}$, the terms of the augmented Lagrangian function in which $\mathcal{Y}$ appears, wherein the superscript $k$ denotes the $k$-th iteration and $k+1$ denotes the $(k+1)$-th iteration; the sub-problem is further transformed into the transform domain; taking each frontal slice $\bar{Y}^{(t)}$ of the transformed tensor $\bar{\mathcal{Y}}$ as the variable, $t=1,\dots,n_3$, each matrix $\bar{Y}^{(t),k+1}$ is then solved in closed form from the matrix derivative; the solved matrices $\bar{Y}^{(t),k+1}$ are combined into a tensor $\bar{\mathcal{Y}}^{k+1}$ along the tube dimension; $\bar{\mathcal{Y}}^{k+1}$ is unfolded into a matrix $U$ according to the third mode, and $U$ is left-multiplied by the transpose of the transform matrix $\Phi$, whose size is $n_3\times n_3$; the inverse operation of the third-mode unfolding is then performed on $\Phi^{\mathrm{T}}U$ to obtain $\mathcal{Y}^{k+1}$.
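The third-mode unfolding, left multiplication by the $n_3\times n_3$ transform matrix $\Phi$, and the inverse folding used in claim 4 can be sketched as follows; the row/column ordering of the unfolding is an assumption, and the orthogonality of $\Phi$ (so that $\Phi^{\mathrm{T}}$ inverts it) is assumed for the round trip:

```python
import numpy as np

def mode3_unfold(X):
    """Unfold an n1 x n2 x n3 tensor along the third mode into an
    n3 x (n1 * n2) matrix (C-order over the first two indices)."""
    return X.reshape(-1, X.shape[2]).T

def mode3_fold(U, shape):
    """Inverse operation of the third-mode unfolding."""
    return U.T.reshape(shape)

def transform3(X, Phi):
    """Apply an n3 x n3 transform matrix Phi along the tube (third)
    dimension: unfold, left-multiply, fold back."""
    return mode3_fold(Phi @ mode3_unfold(X), X.shape)
```

With an orthogonal $\Phi$, applying `transform3` with `Phi` and then with `Phi.T` recovers the original tensor, which is the return trip described in the claim.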
5. The low-rank tensor completion method based on total variation regularization of claim 3, wherein the sub-problem of the second updated tensor $\mathcal{L}$ minimizes, over $\mathcal{L}$, the transform tensor Schatten-p norm term together with its quadratic penalty term; according to the definition of the transform tensor Schatten-p norm, the sub-problem separates across the frontal slices of the transformed tensor, and each slice is solved by the proximal operator:

$$\bar{L}^{(t),k+1}=\mathrm{prox}_{\lambda f}\!\left(\bar{B}^{(t),k}\right),\qquad t=1,\dots,n_3,$$

wherein $\mathrm{prox}_{\lambda f}(\cdot)$ is the proximal operator, defined as $\mathrm{prox}_{\lambda f}(B)=\arg\min_{X}\ \lambda f(X)+\frac{1}{2}\|X-B\|_F^2$, $\bar{B}^{(t),k}$ denotes the $t$-th frontal slice of the fixed quantities of the sub-problem, $k$ denotes the $k$-th iteration and $k+1$ the $(k+1)$-th iteration; the solved matrices $\bar{L}^{(t),k+1}$ are combined into a tensor $\bar{\mathcal{L}}^{k+1}$ along the tube dimension, $t=1,\dots,n_3$; $\bar{\mathcal{L}}^{k+1}$ is unfolded into a matrix according to the third mode, the matrix is left-multiplied by the transpose of the transform matrix $\Phi$, whose size is $n_3\times n_3$; and the inverse operation of the third-mode unfolding is applied to the result to obtain $\mathcal{L}^{k+1}$.
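The slice-wise shrinkage rule for the Schatten-p proximal step of claim 5 is only shown as an image; the sketch below uses the $p=1$ special case, singular value soft-thresholding, which is the exact proximal operator of the nuclear norm. For $0<p<1$ the claim's operator would shrink the singular values with a p-dependent rule instead:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: SVD the slice, shrink each
    singular value by tau (clipping at zero), and reassemble. This is
    prox of tau * nuclear norm, i.e. the Schatten p = 1 case."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```

Applied to each frontal slice in the transform domain, the shrunken slices are then stacked and transformed back as the claim describes.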
6. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the third updated tensor $\mathcal{Z}$ minimizes, over $\mathcal{Z}$, the $\ell_1$ total variation term together with its quadratic penalty term; letting the tensor $\mathcal{B}^{k}$ collect the fixed quantities of this sub-problem, the tensor $\mathcal{Z}^{k+1}$ is solved by element-wise shrinkage:

$$\mathcal{Z}^{k+1}=\mathrm{sign}(\mathcal{B}^{k})\circ\max\!\left(|\mathcal{B}^{k}|-\frac{\alpha}{\mu},\,0\right),$$

wherein $\mathrm{sign}(\cdot)$ is the sign function, $|\cdot|$ takes the absolute value of each element, and $\circ$ denotes the element-wise product; $k$ denotes the $k$-th iteration, and $k+1$ denotes the $(k+1)$-th iteration.
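The element-wise shrinkage of claim 6 is the standard $\ell_1$ proximal operator; a minimal sketch (the threshold value is passed as a parameter, since the exact value in the claim's formula image is inferred from the surrounding model rather than stated):

```python
import numpy as np

def soft_threshold(B, tau):
    """Element-wise shrinkage sign(B) * max(|B| - tau, 0):
    the proximal operator of tau * (l1 norm), applied here to the
    auxiliary tensor that carries the adjacent-element differences."""
    return np.sign(B) * np.maximum(np.abs(B) - tau, 0.0)
```

Entries smaller than the threshold in magnitude are set exactly to zero, which is what makes the total variation term promote piecewise-smooth reconstructions.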
7. The low-rank tensor completion method based on total variation regularization of claim 3, wherein the sub-problem of the fourth updated tensor $\mathcal{W}$ minimizes, over $\mathcal{W}$, the Frobenius-norm penalty terms in which $\mathcal{W}$ appears; from the properties of the Frobenius norm, setting the derivative with respect to $\mathcal{W}$ to zero yields a linear system involving $F^{*}F$, wherein $F^{*}$ denotes the adjoint operator of $F$; since the difference operators are diagonalized by the three-dimensional Fourier transform, the system is solved by FFT3 and iFFT3:

$$\mathcal{W}^{k+1}=\mathrm{iFFT3}\!\left(\frac{\mathrm{FFT3}(\mathcal{R}^{k})}{\mathcal{E}+\mathbf{1}}\right),$$

wherein $\mathcal{R}^{k}$ collects the fixed right-hand-side quantities of the linear system, $\mathcal{E}$ collects the Fourier-domain eigenvalues of $F^{*}F$, $\mathbf{1}$ denotes the tensor whose elements are all 1, and the division is element-wise; $k$ denotes the $k$-th iteration, and $k+1$ denotes the $(k+1)$-th iteration.
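The FFT3/iFFT3 solve of claim 7 relies on the difference operators being circulant, which holds under periodic boundaries (an assumption). A sketch solving the normalized system $(F^{*}F+\mathcal{I})\,\mathcal{W}=\mathcal{R}$, where each one-dimensional forward difference has Fourier eigenvalues $2-2\cos(2\pi k/n)$:

```python
import numpy as np

def solve_fft3(R):
    """Solve (F* F + I) W = R in the 3-D Fourier domain, where F stacks
    periodic forward-difference operators along the three dimensions.
    Each operator is circulant, hence diagonalized by fftn."""
    eig = np.zeros(R.shape)
    for axis, n in enumerate(R.shape):
        k = np.arange(n)
        lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / n)  # eigenvalues of F_d* F_d
        shape = [1, 1, 1]
        shape[axis] = n
        eig = eig + lam.reshape(shape)
    return np.fft.ifftn(np.fft.fftn(R) / (eig + 1.0)).real
```

Applying the operator $F^{*}F+\mathcal{I}$ back to the result (in the spatial domain, $F_d^{*}F_d$ acts as $2x_i - x_{i+1} - x_{i-1}$ per axis) recovers the right-hand side.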
8. A low-rank tensor completion method based on total variation regularization as recited in claim 3, wherein the sub-problem of the tensor to be completed $\mathcal{X}$ minimizes, over $\mathcal{X}$, the quadratic penalty terms in which $\mathcal{X}$ appears, subject to $\mathcal{P}_{\Omega}(\mathcal{X})=\mathcal{P}_{\Omega}(\mathcal{M})$; according to the properties of the Frobenius norm, the sub-problem is solved by taking the derivative, and the finally recovered tensor $\mathcal{X}^{k+1}$ is calculated, with the element values on the observation set $\Omega$ kept equal to those of the observed tensor $\mathcal{M}$;

the Lagrangian multipliers $\mathcal{T}_i$ are updated according to the ascent step

$$\mathcal{T}_i^{k+1}=\mathcal{T}_i^{k}+\mu\,r_i^{k+1},$$

wherein $r_i^{k+1}$ is the residual of the corresponding equality constraint after the $(k+1)$-th update;

each variable is updated at each iteration; the iteration stops when the maximum number of iterations is reached, and the final completed tensor $\mathcal{X}$ is output; or the iteration stops when the following condition is satisfied, and the final completed tensor $\mathcal{X}$ is output:

$$\max(\mathrm{Con1},\mathrm{Con2},\mathrm{Con3},\mathrm{Con4})<\epsilon,$$

wherein Con1, Con2, Con3 and Con4 measure the relative changes of the variables between two successive iterations, $\epsilon$ is the set threshold, $k$ denotes the $k$-th iteration, and $k+1$ denotes the $(k+1)$-th iteration.
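The stopping rule of claim 8 compares relative changes of the iterates against a threshold $\epsilon$; a minimal sketch, with the Con values taken as relative Frobenius-norm changes (an assumption, since their exact definitions are formula images):

```python
import numpy as np

def rel_change(new, old, eps=1e-12):
    """Relative Frobenius-norm change between successive iterates,
    guarded against division by zero."""
    return np.linalg.norm(new - old) / (np.linalg.norm(old) + eps)

def converged(pairs, tol=1e-4):
    """Stop when the largest relative change, max(Con1, ..., ConN),
    falls below the threshold tol; `pairs` lists (new, old) iterates."""
    return max(rel_change(n, o) for n, o in pairs) < tol
```

In the full loop this check runs once per iteration, after every variable and multiplier update, alongside the maximum-iteration cap.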
CN202310190187.5A 2023-03-02 2023-03-02 Low-rank tensor completion method based on total variation regularization Pending CN116245761A (en)

Publications (1)

Publication Number Publication Date
CN116245761A 2023-06-09


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117271988A (en) * 2023-11-23 2023-12-22 Guangdong University of Technology Tensor wheel-based high-dimensional signal recovery method and device
CN117271988B (en) * 2023-11-23 2024-02-09 Guangdong University of Technology Tensor wheel-based high-dimensional signal recovery method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination