CN114841888A - Visual data completion method based on low-rank tensor ring decomposition and factor prior - Google Patents

Visual data completion method based on low-rank tensor ring decomposition and factor prior

Info

Publication number: CN114841888A (application CN202210526890.4A; granted as CN114841888B)
Authority: CN (China)
Prior art keywords: tensor, matrix, rank, factor, transformation
Inventors: 刘欣刚, 姚佳敏, 张磊, 杨旻君, 胡晓荣, 庄晓淦
Assignee (original and current): University of Electronic Science and Technology of China
Priority date: 2022-05-16
Other languages: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G06T 5/00 Image enhancement or restoration (G PHYSICS; G06 COMPUTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06F 17/10 Complex mathematical operations (G06F ELECTRIC DIGITAL DATA PROCESSING)
    • G06F 17/15 Correlation function computation including computation of convolution operations


Abstract

The invention discloses a visual data completion method based on low-rank tensor ring decomposition and factor priors, which addresses the problem that conventional tensor-decomposition-based data completion algorithms depend on the initial rank selection, leaving the recovery results without stability or effectiveness. A hierarchical tensor decomposition model is designed that realizes tensor ring decomposition and completion simultaneously. In the first layer, the incomplete tensor is expressed through tensor ring decomposition as a series of third-order factors; in the second layer, the low-rank constraint on the factors is expressed by the transformed tensor nuclear norm, and the degrees of freedom of each factor are restricted by combining graph-regularized factor priors. By exploiting the low-rank structure and the prior information of the factor space at the same time, the model on the one hand possesses implicit rank adjustment, which improves its robustness to rank selection and reduces the burden of searching for the optimal initial rank, and on the other hand makes full use of the latent information of the tensor data, further improving completion performance.

Description

Visual data completion method based on low-rank tensor ring decomposition and factor prior
Technical Field
The invention relates to the field of visual data completion, in particular to a visual data completion method based on low-rank tensor ring decomposition and factor prior.
Background
With the rapid development of information technology, modern society has entered an era of explosive data growth, producing large amounts of multi-attribute, multi-association data. However, much of this data is incomplete, owing to occlusion, noise, local corruption, difficulty of collection, or loss during conversion. Incompleteness can significantly degrade data quality and complicate analysis. The tensor, as the high-dimensional extension of vectors and matrices, can express more complex internal data structure and is widely applied in signal processing, computer vision, data mining, neuroscience and other fields. Matrix-based completion methods destroy the spatial structure of the original multidimensional data and therefore perform poorly. Tensor completion, which has received much attention in recent years, is thus one of the important problems in tensor analysis: it recovers the values of missing elements from the observed available elements through prior information and structural properties of the data. In fact, most real-world natural data, such as color images and color video, are low-rank or approximately low-rank, so incomplete data can be recovered using low-rank priors. Following the success of low-rank matrix completion, the low-rank constraint has become a powerful tool for recovering missing entries of higher-order tensors, as it effectively estimates missing data from the tensor's global information. A basic issue in low-rank tensor completion is the definition of tensor rank. Unlike matrix rank, however, the definition of tensor rank is not unique: different types of tensor rank are defined according to different tensor decompositions.
Tensor decomposition is an important part of tensor data analysis. Through tensor decomposition, essential features can be extracted from the original tensor data, yielding a low-dimensional representation while preserving the structural information inside the data. In recent years, tensor networks have become a major tool for analyzing large-scale tensor data. Since its introduction, tensor ring decomposition has been studied across disciplines for its strong expressive power and flexibility, and a number of theoretical and practical results have demonstrated its feasibility and effectiveness in tensor completion tasks. Existing tensor-ring-based completion methods, while achieving excellent performance, usually depend on a good initial rank estimate and carry heavy computational overhead. Determining the optimal initial rank is difficult in practice, and the computational complexity of rank search grows exponentially with the dimensionality of the rank. The completion result is affected by the initial rank and may overfit. In addition, the high computational complexity of tensor-ring-based models makes existing methods inefficient and greatly limits practical application. In summary, sensitivity to the initial rank and high computational cost remain challenging problems for tensor-ring-based completion, so developing a robust and efficient data completion algorithm based on tensor ring decomposition is crucial.
Disclosure of Invention
The invention provides a visual data completion method based on low-rank tensor ring decomposition and factor priors, aimed at the problem that conventional tensor-decomposition-based data completion algorithms depend on the initial rank selection, which leaves the recovery results without stability or effectiveness.
The visual data completion method based on low-rank tensor ring decomposition and factor priors disclosed by the invention comprises the following steps:
S1: Initialize the target tensor. Represent the incomplete original visual data as the tensor to be completed $\mathcal{T}$, determine the observation index set $\Omega$, and initialize the target tensor $\mathcal{X}$ from $\mathcal{T}$ as the input of the data completion model of the invention;
S2: Build the model. Taking a simple tensor ring (TR) completion model as the basic framework, design a hierarchical tensor decomposition model: impose low-rank constraints on the TR factors through the transformed tensor nuclear norm, limit the degrees of freedom of each TR factor by combining factor prior information, and construct the visual data completion model based on low-rank tensor ring decomposition and factor priors, obtaining the objective function of the data completion model;
S3: Solve the model. Solve the objective function within the computational framework of the Alternating Direction Method of Multipliers (ADMM): construct the augmented Lagrangian form of the objective function, convert its optimization problem into several subproblems solved separately, iteratively update the intermediate variables by solving each subproblem in turn, and output the solution of the target tensor $\mathcal{X}$ after the iterations converge;
S4: Convert the solution of the target tensor $\mathcal{X}$ into the corresponding format of the original visual data to obtain the final completion result.
Step S1 comprises the following steps:
S11: Acquire the incomplete original visual data and store it in tensor form, obtaining the tensor to be completed $\mathcal{T}$; take the index positions of all known pixels in the original visual data to form the observation index set $\Omega$;
S12: Initialize the target tensor $\mathcal{X}$ from the tensor to be completed $\mathcal{T}$ so that $\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$, where $\bar{\Omega}$ is the complement of $\Omega$, $\mathcal{P}_{\Omega}(\mathcal{X})$ denotes the known entries of the target tensor $\mathcal{X}$, $\mathcal{P}_{\Omega}(\mathcal{T})$ denotes the known entries of the tensor to be completed $\mathcal{T}$, and $\mathcal{P}_{\bar{\Omega}}(\mathcal{X})$ denotes the missing entries of $\mathcal{X}$.
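For illustration only (not part of the patent text), a minimal NumPy sketch of this initialization under an assumed boolean-mask representation of $\Omega$; the names T_obs, mask and X are hypothetical:

```python
import numpy as np

def initialize_target(T_obs, mask):
    """Initialize the target tensor X from the incomplete tensor T.

    T_obs : np.ndarray, observed tensor (arbitrary values at missing entries)
    mask  : np.ndarray of bool, True at observed (Omega) positions
    """
    X = np.zeros_like(T_obs)
    X[mask] = T_obs[mask]   # P_Omega(X) = P_Omega(T)
    return X                # missing entries start at zero
```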
Step S2 comprises the following steps:
S21: By seeking a tensor ring decomposition representation of the incomplete original visual data from its known entries, and then estimating the missing entries of the original visual data with the TR factors of the obtained representation, a simple tensor ring completion model is obtained:

$$\min_{[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{P}_{\Omega}(\mathcal{T}) - \mathcal{P}_{\Omega}\big(\Psi([\mathcal{G}])\big)\right\|_F^2$$

where $[\mathcal{G}]$ denotes the set of TR factors, $\Psi([\mathcal{G}])$ is the tensor ring decomposition representation, $\mathcal{P}_{\Omega}(\cdot)$ denotes the projection operation under the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor;
s22: on the basis of a simple tensor ring completion model, each TR factor is further constrained by transforming tensor nuclear norm to utilize global low-rank characteristics of tensor data, and a basic low-rank tensor ring completion model can be obtained as follows:
Figure BDA0003644727910000031
Figure BDA0003644727910000032
wherein the object tensor
Figure BDA0003644727910000033
Figure BDA0003644727910000034
Representing the real number domain, N representing the target tensor
Figure BDA0003644727910000035
Order of (1) of n To represent
Figure BDA0003644727910000036
N-1, 2,.., N.
Figure BDA0003644727910000037
The set of TR factors is represented as a set of TR factors,
Figure BDA0003644727910000038
denotes the nth TR factor, R n-1 、I n And R n Representing real number fields
Figure BDA0003644727910000039
OfThe same dimension size. I | · | purple wind TTNN Representing the transformation tensor nuclear norm, λ > 0 is a trade-off parameter.
The basic low-rank tensor ring completion model limits the TR factor through low-rank constraint, and implicitly adjusts the TR rank in multiple iterations, so that the TR rank gradually tends to the actual rank of tensor ring decomposition, and the robustness of initial rank selection is enhanced;
s23: to further improve the completion performance of the visual data, a factor prior may be added to fully utilize the potential information of the visual data. By using the low-rank hypothesis of the TR factor and the factor prior of graph regularization, a visual data completion model based on low-rank tensor ring decomposition and the factor prior can be obtained as follows:
Figure BDA00036447279100000310
Figure BDA00036447279100000311
the first line of the above equation represents an objective function of a visual data completion model based on low-rank tensor ring decomposition and factor prior, and the second line represents a constraint condition of the objective function. Alpha ═ alpha 12 ,…,α N ]Is a graph regularization parameter vector, α n Denotes the nth element in the vector α, N being 1, 2.μ, λ are trade-off parameters and μ > 0, λ > 0. L is n Represents the nth laplace matrix and the nth laplace matrix,
Figure BDA00036447279100000312
representing the nth TR factor
Figure BDA00036447279100000313
Norm 2 of (a) expands the matrix, tr (-) denotes the trace of the matrix, and superscript T denotes the transpose of the matrix.
Step S3 comprises the following steps:
S31: To solve the objective function with ADMM, a series of auxiliary tensors $[\mathcal{M}] = \{\mathcal{M}_1, \ldots, \mathcal{M}_N\}$ is first introduced to simplify the optimization, so the optimization problem of the objective function is re-expressed as:

$$\min_{\mathcal{X},[\mathcal{G}],[\mathcal{M}]}\ \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right)\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$
$$\text{s.t. } \mathcal{M}_n = \mathcal{G}_n,\ n = 1, 2, \ldots, N, \qquad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

where the set $[\mathcal{M}]$ denotes a tensor sequence and $\mathcal{M}_n$ denotes the auxiliary tensor corresponding to the $n$th TR factor $\mathcal{G}_n$. By incorporating the additional equality constraints $\mathcal{M}_n = \mathcal{G}_n$, $n = 1, 2, \ldots, N$, of the auxiliary tensors, the augmented Lagrangian of the objective function is:

$$L\big([\mathcal{G}],[\mathcal{M}],\mathcal{X},[\mathcal{Y}]\big) = \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \langle\mathcal{Y}_n, \mathcal{M}_n - \mathcal{G}_n\rangle + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n\right\|_F^2\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$

where $[\mathcal{Y}]$ is the set of Lagrange multipliers, $\mathcal{Y}_n$ is the $n$th Lagrange multiplier, $N$ is the total number of Lagrange multipliers, $\beta > 0$ is a penalty parameter, and $\langle\cdot,\cdot\rangle$ denotes the tensor inner product.
Then, for each variable in turn, all other variables are held fixed and the corresponding optimization subproblem in S32 to S35 is solved, updating the variables alternately.
S32: Update of $[\mathcal{G}]$. The optimization subproblem with respect to the variable $\mathcal{G}_n$ simplifies to:

$$\min_{\mathcal{G}_n}\ \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n + \frac{\mathcal{Y}_n}{\beta}\right\|_F^2 + \frac{\lambda}{2}\left\|\mathbf{X}_{\langle n\rangle} - \mathbf{G}_{n(2)}\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\right\|_F^2$$

where $\mathbf{X}_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $\mathbf{G}_{\neq n}^{\langle 2\rangle}$ denotes the circular mode-2 unfolding matrix of the subchain tensor generated by merging all factors except the $n$th TR factor $\mathcal{G}_n$ through multilinear products. Solving this subproblem updates the variable $\mathcal{G}_n$;
S33: Update of $[\mathcal{M}]$. The optimization subproblem with respect to the variable $\mathcal{M}_n$ can be written as:

$$\min_{\mathcal{M}_n}\ \|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n + \frac{\mathcal{Y}_n}{\beta}\right\|_F^2$$

Solving this subproblem updates the variable $\mathcal{M}_n$;
S34: Update of $\mathcal{X}$. The optimization subproblem with respect to the variable $\mathcal{X}$ can be expressed as:

$$\min_{\mathcal{X}}\ \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2 \quad \text{s.t. } \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

Solving this subproblem updates the variable $\mathcal{X}$;
S35: Update of $[\mathcal{Y}]$. Based on the ADMM scheme, the Lagrange multipliers $\mathcal{Y}_n$ are updated as:

$$\mathcal{Y}_n \leftarrow \mathcal{Y}_n + \beta\,(\mathcal{M}_n - \mathcal{G}_n)$$

Furthermore, in each iteration the penalty parameter $\beta$ of the augmented Lagrangian of the objective function is updated by $\beta = \min(\rho\beta, \beta_{\max})$, where $1 < \rho < 1.5$ is a tuning hyperparameter, $\beta_{\max}$ denotes the set upper limit of $\beta$, and $\min(\rho\beta, \beta_{\max})$ means taking the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of $\beta$;
S36: Steps S32–S35 are repeated, alternately updating each variable over multiple iterations. Two convergence conditions are set: a maximum number of iterations maxiter and a relative error threshold tol between two successive iterations. When either convergence condition is met, i.e., the maximum number of iterations maxiter is reached or the relative error between two successive iterations falls below the threshold tol, the iteration ends and the solution of the target tensor $\mathcal{X}$ is obtained.
By simultaneously exploiting the low-rank structure and the prior information of the factor space, the invention designs a hierarchical tensor decomposition model that realizes tensor ring decomposition and completion at the same time. In the first layer, the incomplete tensor is represented through tensor ring decomposition as a series of third-order TR factors. In the second layer, the low-rank constraint on the TR factors is expressed with the transformed tensor nuclear norm, and a graph-regularized factor prior strategy is considered. The low-rank constraint on the TR factors gives the data completion model of the invention implicit rank adjustment and enhances its robustness to TR rank selection, which reduces the burden of searching for the optimal TR rank, while the factor prior makes full use of the latent information of the original visual data, helping to further improve completion performance.
Drawings
FIG. 1 is a general block diagram of an embodiment of the present invention;
FIG. 2 is a simplified flow chart of a visual data completion method based on low rank tensor ring decomposition and factor prior in the embodiment of the present invention;
FIG. 3 is a diagram of raw color image data in an embodiment of the present invention;
FIG. 4 is a diagram of raw color video data in an embodiment of the present invention;
FIG. 5 shows the completion results for a color image at different missing rates in an embodiment of the present invention;
FIG. 6 shows the completion results for color video at different missing rates in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention provides a visual data completion method based on low-rank tensor ring decomposition and factor prior, which specifically comprises the following steps:
step S1: the object tensor is initialized.
S11: obtaining incomplete original visual data (such as color images, color videos and the like), reading in an original visual data file with missing entries through Matlab software, and storing the original visual data file into a tensor form to obtain a to-be-compensated tensor
Figure BDA0003644727910000061
Taking index positions of all known pixel points of original visual data to form an observation index set omega;
s12: according to the amount of full tension to be compensated
Figure BDA0003644727910000062
Initializing a target tensor
Figure BDA0003644727910000063
So that the mapping relation satisfies
Figure BDA0003644727910000064
Wherein
Figure BDA0003644727910000065
The complement of Ω represents the missing index set.
Figure BDA0003644727910000066
Tensor representing object
Figure BDA0003644727910000067
Is known in the art, and the known item,
Figure BDA0003644727910000068
indicating the amount of tension to be compensated
Figure BDA0003644727910000069
Is known in the art, and the known item,
Figure BDA00036447279100000610
tensor representing object
Figure BDA00036447279100000611
The missing entry of (2).
Step S2: Build the model.
S21: By finding the corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data, and then estimating the missing entries of the original visual data with the TR factors of the obtained representation, the simple tensor ring completion model is obtained:

$$\min_{[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{P}_{\Omega}(\mathcal{T}) - \mathcal{P}_{\Omega}\big(\Psi([\mathcal{G}])\big)\right\|_F^2$$

where $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}_n$ denotes the $n$th TR factor, $n = 1, 2, \ldots, N$, $\Psi([\mathcal{G}])$ is the tensor ring decomposition representation, $\mathcal{P}_{\Omega}(\cdot)$ denotes the projection operation under the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor. To address the dependence of low-rank tensor completion methods on the initial rank, the simple tensor ring completion model is improved as follows;
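For illustration only (not part of the patent text), a minimal NumPy sketch of the tensor ring representation $\Psi([\mathcal{G}])$, assuming each core $\mathcal{G}_n$ is stored as an array of shape $(R_{n-1}, I_n, R_n)$ with $R_0 = R_N$; the function name tr_to_tensor is hypothetical:

```python
import numpy as np

def tr_to_tensor(cores):
    """Contract TR cores G_1..G_N (each of shape R_{n-1} x I_n x R_n,
    with R_0 = R_N) into the full tensor Psi([G])."""
    full = cores[0]                                   # (R_0, I_1, R_1)
    for G in cores[1:]:
        # contract the trailing rank index with the next core's leading one
        full = np.tensordot(full, G, axes=([-1], [0]))
    # full now has shape (R_0, I_1, ..., I_N, R_N); the trace over the
    # two remaining rank indices closes the ring
    return np.trace(full, axis1=0, axis2=full.ndim - 1)
```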
S22: First, the transformed tensor singular value decomposition and the basic tensor algebra involved are introduced.
Unitary tensor transform: for a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, suppose $\boldsymbol{\Phi} \in \mathbb{R}^{I_3 \times I_3}$ is a unitary transform matrix satisfying $\boldsymbol{\Phi}\boldsymbol{\Phi}^H = \boldsymbol{\Phi}^H\boldsymbol{\Phi} = \mathbf{I}$; the unitary transform of the tensor $\mathcal{A}$ is defined as:

$$\bar{\mathcal{A}}_{\Phi} = \mathcal{A} \times_3 \boldsymbol{\Phi}$$

where $\bar{\mathcal{A}}_{\Phi}$ denotes the unitary transform of $\mathcal{A}$ and $\mathcal{A} \times_3 \boldsymbol{\Phi}$ denotes the mode-3 product of $\mathcal{A}$ with the matrix $\boldsymbol{\Phi}$. $\mathbf{I}$ denotes the identity matrix, the superscript $H$ denotes the conjugate transpose of a matrix, $\mathbb{R}$ denotes the real field, and $I_{k'}$, $k' = 1, 2, 3$, denotes the dimension of the $k'$th mode of $\mathcal{A}$.
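A one-line NumPy sketch of this mode-3 product (illustrative; the helper name mode3_transform is an assumption reused in the later sketches):

```python
import numpy as np

def mode3_transform(A, Phi):
    """Unitary tensor transform A x_3 Phi: apply Phi along the third mode.

    (A x_3 Phi)[i, j, l] = sum_k A[i, j, k] * Phi[l, k]
    """
    return np.einsum('ijk,lk->ijl', A, Phi)
```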
Block diagonal matrix: the block diagonal matrix based on all frontal slices of $\bar{\mathcal{A}}_{\Phi}$ is defined as:

$$\bar{\mathbf{A}}_{\Phi} = \mathrm{bdiag}(\bar{\mathcal{A}}_{\Phi}) = \begin{bmatrix} \bar{\mathbf{A}}_{\Phi}^{(1)} & & \\ & \ddots & \\ & & \bar{\mathbf{A}}_{\Phi}^{(I_3)} \end{bmatrix}$$

where $\bar{\mathbf{A}}_{\Phi}^{(i)}$ is the $i$th frontal slice of $\bar{\mathcal{A}}_{\Phi}$, $i = 1, 2, \ldots, I_3$, and $\bar{\mathbf{A}}_{\Phi}$ can be converted back into a tensor by the folding operator $\mathrm{fold}(\cdot)$, i.e. $\bar{\mathcal{A}}_{\Phi} = \mathrm{fold}(\bar{\mathbf{A}}_{\Phi})$.
Tensor $\Phi$ product: the $\Phi$ product of two third-order tensors is defined through the products of their frontal slices in the unitary transform domain. For two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times I_4 \times I_3}$, the tensor $\Phi$ product is defined as:

$$\mathcal{C} = \mathcal{A} \diamond_{\Phi} \mathcal{B} = \mathrm{fold}\big(\bar{\mathbf{A}}_{\Phi}\,\bar{\mathbf{B}}_{\Phi}\big) \times_3 \boldsymbol{\Phi}^H$$

where $\diamond_{\Phi}$ denotes the tensor $\Phi$ product symbol and $\bar{\mathcal{B}}_{\Phi}$ denotes the unitary transform of $\mathcal{B}$. The tensor $\Phi$ product is a third-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times I_4 \times I_3}$, the superscript $H$ denotes the conjugate transpose, and $I_4$ denotes the dimension of the second mode of $\mathcal{B}$.
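Equivalently, the product can be computed slice-wise in the transform domain; a short illustrative NumPy sketch, reusing the mode3_transform helper above:

```python
def phi_product(A, B, Phi):
    """Tensor Phi-product: slice-wise matrix products in the transform
    domain, followed by the inverse transform x_3 Phi^H."""
    Ah = mode3_transform(A, Phi)
    Bh = mode3_transform(B, Phi)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)        # per-slice A_hat[i] @ B_hat[i]
    return mode3_transform(Ch, Phi.conj().T)      # back to the original domain
```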
Transformed tensor singular value decomposition: this is mainly used to factorize third-order tensors, with a unitary transform matrix $\boldsymbol{\Phi}$ replacing the discrete Fourier transform matrix of the conventional tensor singular value decomposition. For a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, its transformed tensor singular value decomposition can be expressed as:

$$\mathcal{A} = \mathcal{U} \diamond_{\Phi} \mathcal{S} \diamond_{\Phi} \mathcal{V}^H$$

where $\mathcal{U} \in \mathbb{R}^{I_1 \times I_1 \times I_3}$ and $\mathcal{V} \in \mathbb{R}^{I_2 \times I_2 \times I_3}$ are both unitary tensors and $\mathcal{S} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is a diagonal tensor.
Based on the transformed tensor singular value decomposition, the transformed tensor nuclear norm can be defined. For a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, suppose $\boldsymbol{\Phi} \in \mathbb{R}^{I_3 \times I_3}$ is a unitary transform matrix; the transformed tensor nuclear norm of $\mathcal{A}$ is defined as:

$$\|\mathcal{A}\|_{\mathrm{TTNN}} = \sum_{i=1}^{I_3}\big\|\bar{\mathbf{A}}_{\Phi}^{(i)}\big\|_*$$

where $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm and $\|\cdot\|_*$ denotes the matrix nuclear norm; $\|\bar{\mathbf{A}}_{\Phi}^{(i)}\|_*$ is the matrix nuclear norm of the $i$th frontal slice of $\bar{\mathcal{A}}_{\Phi}$, i.e., the sum of all its singular values.
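A direct NumPy sketch of this definition (illustrative):

```python
def ttnn(A, Phi):
    """Transformed tensor nuclear norm: the sum of the matrix nuclear
    norms of the frontal slices of A x_3 Phi."""
    Ah = mode3_transform(A, Phi)
    return sum(np.linalg.norm(Ah[:, :, i], ord='nuc') for i in range(Ah.shape[2]))
```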
Because the TR-factor ranks satisfy the relation

$$\mathrm{rank}\big(\mathbf{X}_{(n)}\big) \leq \mathrm{rank}\big(\mathbf{G}_{n(2)}\big)$$

where $\mathbf{X}_{(n)}$ denotes the standard mode-$n$ unfolding matrix of $\mathcal{X}$, $\mathbf{G}_{n(2)}$ denotes the standard mode-2 unfolding matrix of $\mathcal{G}_n$, and $\mathrm{rank}(\cdot)$ denotes the matrix rank function, the low-rank structure of the target tensor $\mathcal{X}$ is bounded to some extent by that of the corresponding TR factor $\mathcal{G}_n$. This makes it possible to exploit the global low-rank property of the tensor data by regularizing the TR factors. Furthermore, the transformed tensor nuclear norm, which can be used to approximate the sum of the transformed ranks of a tensor, is a suitable surrogate for the tensor rank. Therefore each TR factor is further constrained with the transformed tensor nuclear norm, and the basic low-rank tensor ring completion model is obtained:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \sum_{n=1}^{N}\|\mathcal{G}_n\|_{\mathrm{TTNN}} + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2 \quad \text{s.t. } \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

where the target tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, $N$ denotes the order of $\mathcal{X}$, and $I_n$ denotes the dimension of its $n$th mode. $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}_n \in \mathbb{R}^{R_{n-1} \times I_n \times R_n}$ denotes the $n$th TR factor with its three dimensions $R_{n-1}$, $I_n$ and $R_n$, $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm, and $\lambda > 0$ is a trade-off parameter. When the basic low-rank tensor ring completion model above is optimized, the transformed tensor nuclear norms of all TR factors and the fitting error of the target tensor are minimized simultaneously. In addition, the transformed tensor singular value decomposition here involves a key unitary transform matrix $\boldsymbol{\Phi}_n$. In this model, a unitary transform matrix $\boldsymbol{\Phi}_n \in \mathbb{R}^{R_n \times R_n}$ is constructed from the given third-order TR factor $\mathcal{G}_n$; since $\mathcal{G}_n$ is unknown, $\boldsymbol{\Phi}_n$ can be updated iteratively. This process can be expressed as:

$$[\mathbf{U}, \mathbf{S}, \mathbf{V}] = \mathrm{SVD}\big(\mathbf{G}_{n(3)}\big), \qquad \boldsymbol{\Phi}_n = \mathbf{U}^H$$

where $\mathbf{G}_{n(3)}$ denotes the standard mode-3 unfolding matrix of $\mathcal{G}_n$, $\mathrm{SVD}(\cdot)$ denotes the singular value decomposition of the unfolding matrix, $\mathbf{U}$ and $\mathbf{V}$ denote the left and right singular matrices respectively, and $\mathbf{S}$ denotes the diagonal matrix. $\mathbf{U}^H$ can be selected as the unitary transform matrix in the transformed tensor singular value decomposition. Suppose the rank of $\mathbf{G}_{n(3)}$ satisfies $\mathrm{rank}(\mathbf{G}_{n(3)}) = r$; then by performing the tensor unitary transform $\bar{\mathcal{G}}_{\Phi_n} = \mathcal{G}_n \times_3 \boldsymbol{\Phi}_n$, the last $R_n - r$ frontal slices of the tensor $\bar{\mathcal{G}}_{\Phi_n}$ are all zero matrices. Thus taking $\mathbf{U}^H$ as the unitary transform matrix helps to further exploit the low-rank information of the TR factor $\mathcal{G}_n$.
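A short NumPy sketch of this construction (illustrative; the mode-3 unfolding convention used here, third mode brought to the front, is one common choice and is an assumption):

```python
def update_unitary(G):
    """Phi_n = U^H from the SVD of the standard mode-3 unfolding of the
    TR factor G of shape (R_prev, I_n, R_n)."""
    G3 = G.transpose(2, 0, 1).reshape(G.shape[2], -1)   # R_n x (R_prev * I_n)
    U, s, Vt = np.linalg.svd(G3)
    return U.conj().T                                    # unitary, R_n x R_n
```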
S23: to further improve the completion performance of visual data, a factor prior may be added to fully utilize the underlying information of the data. Graph regularization is used for visual data completion, and common image priors can be encoded to facilitate image restoration. One widely used image prior is the local similarity prior, which assumes that adjacent rows and columns are highly correlated. In tensor ring decomposition, any nth TR factor
Figure BDA00036447279100000823
Respectively, represent information on the nth order of the original visual data. For example, if a color image is considered to be a third order tensor, the first two TR factors obtained after tensor ring decomposition encode the changes in the row and column spaces, respectively. Therefore, the local similarity of pixels of visual data such as color images, color videos, etc. can be described as an accurate factor prior, and the weights of a single factor graph can be defined as follows:
Figure BDA0003644727910000091
where row and column represent row and column space, respectively, and if k ═ row, then i k And j k Respectively representing any two index positions of the line space. w is a ij Is a similarity matrix
Figure BDA0003644727910000092
Of (i, j) th element of (a) is all the pair-wise distances i k -j k Average value of (a). Order to
Figure BDA0003644727910000093
For diagonal matrix, the (i, i) th element in matrix D is sigma j w ij The laplacian matrix L ═ D-W can be obtained.
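For illustration, a sketch of this construction under an assumed Gaussian similarity over index distances (the patent only specifies that the weights are built from pairwise index distances; the kernel and bandwidth below are assumptions):

```python
def graph_laplacian(n, sigma=1.0):
    """Laplacian L = D - W of a single-factor graph over n row (or column)
    indices, with an assumed Gaussian decay of the similarity w_ij."""
    idx = np.arange(n)
    W = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)          # no self-loops
    D = np.diag(W.sum(axis=1))        # degree matrix
    return D - W
```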
Using the low-rank assumption on the TR factors together with the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and factor priors is obtained:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \sum_{n=1}^{N}\left(\|\mathcal{G}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right)\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2 \quad \text{s.t. } \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

The minimized expression is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors, and the equality on the right is its constraint. $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_N]$ is the graph regularization parameter vector, $\mu, \lambda$ are trade-off parameters with $\mu > 0$, $\lambda > 0$, and $\mathrm{tr}(\cdot)$ is the matrix trace operation. The Laplacian matrix $\mathbf{L}_n$ describes the internal dependencies within the $n$th TR factor, $\mathbf{G}_{n(2)}$ denotes the standard mode-2 unfolding matrix of the $n$th TR factor $\mathcal{G}_n$, and the superscript $T$ denotes the matrix transpose.
Step S3: Solve the model.
S31: Construct the augmented Lagrangian.
To solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors with the ADMM computational framework, a series of auxiliary tensors $[\mathcal{M}]$ is first introduced to simplify the optimization, so the optimization problem of the objective function is re-expressed as:

$$\min_{\mathcal{X},[\mathcal{G}],[\mathcal{M}]}\ \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right)\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$
$$\text{s.t. } \mathcal{M}_n = \mathcal{G}_n,\ n = 1, 2, \ldots, N, \qquad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

where the set $[\mathcal{M}]$ denotes a tensor sequence and $\mathcal{M}_n$ denotes the auxiliary tensor corresponding to the $n$th TR factor $\mathcal{G}_n$. By incorporating the additional equality constraints $\mathcal{M}_n = \mathcal{G}_n$ of the auxiliary tensors, the augmented Lagrangian of the objective function is obtained:

$$L\big([\mathcal{G}],[\mathcal{M}],\mathcal{X},[\mathcal{Y}]\big) = \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \langle\mathcal{Y}_n, \mathcal{M}_n - \mathcal{G}_n\rangle + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n\right\|_F^2\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$

where $[\mathcal{Y}]$ is the set of Lagrange multipliers, $\mathcal{Y}_n$ is the $n$th Lagrange multiplier, $\beta > 0$ is a penalty parameter, and $\langle\cdot,\cdot\rangle$ denotes the tensor inner product. Then, by fixing the other variables and solving each subproblem of S32 to S35 in turn, each of the following variables is updated alternately;
S32: Update of $[\mathcal{G}]$.
The optimization subproblem with respect to the variable $\mathcal{G}_n$ simplifies to:

$$\min_{\mathcal{G}_n}\ \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n + \frac{\mathcal{Y}_n}{\beta}\right\|_F^2 + \frac{\lambda}{2}\left\|\mathbf{X}_{\langle n\rangle} - \mathbf{G}_{n(2)}\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\right\|_F^2$$

where $\mathbf{X}_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $\mathbf{G}_{\neq n}^{\langle 2\rangle}$ denotes the circular mode-2 unfolding matrix of the subchain tensor generated by merging all factors except the $n$th TR factor $\mathcal{G}_n$ through multilinear products.
By setting the derivative of the above expression with respect to $\mathbf{G}_{n(2)}$ to zero, solving this subproblem is equal to solving the following general Sylvester matrix equation:

$$\big(\alpha_n\mu\,\mathbf{L}_n\big)\,\mathbf{G}_{n(2)} + \mathbf{G}_{n(2)}\left(\lambda\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\mathbf{I}\right) = \lambda\,\mathbf{X}_{\langle n\rangle}\,\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\,\mathbf{M}_{n(2)} + \mathbf{Y}_{n(2)}$$

where $\mathbf{X}_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor, $\mathbf{M}_{n(2)}$ and $\mathbf{Y}_{n(2)}$ denote the standard mode-2 unfolding matrices of $\mathcal{M}_n$ and $\mathcal{Y}_n$ respectively, and $\mathbf{I}$ is the identity matrix. Since the matrix $-\alpha_n\mu\,\mathbf{L}_n$ and the matrix $\lambda\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\mathbf{I}$ have no common eigenvalues, the equation has a unique solution, which can be computed by calling the sylvester function in Matlab;
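An illustrative SciPy sketch of this update (scipy.linalg.solve_sylvester solves AX + XB = Q; the unfoldings are assumed to be precomputed, and the function name is hypothetical):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def update_factor_unfolding(Xn, Gneq2, M2, Y2, L, alpha_n, mu, lam, beta):
    """Solve (alpha_n*mu*L) G + G (lam*Gneq2^T Gneq2 + beta*I) = C
    for the mode-2 unfolding G of the n-th TR factor."""
    A = alpha_n * mu * L
    B = lam * Gneq2.T @ Gneq2 + beta * np.eye(Gneq2.shape[1])
    C = lam * Xn @ Gneq2 + beta * M2 + Y2
    return solve_sylvester(A, B, C)
```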
S33: Update of $[\mathcal{M}]$.
After $\mathcal{G}_n$ is updated, the $n$th unitary transform matrix $\boldsymbol{\Phi}_n$ in the transformed tensor nuclear norm is first updated according to:

$$[\mathbf{U}, \mathbf{S}, \mathbf{V}] = \mathrm{SVD}\big(\mathbf{G}_{n(3)}\big), \qquad \boldsymbol{\Phi}_n = \mathbf{U}^H$$

where $\mathbf{G}_{n(3)}$ denotes the standard mode-3 unfolding matrix of $\mathcal{G}_n$, $\mathrm{SVD}(\cdot)$ denotes the singular value decomposition of the unfolding matrix, $\mathbf{U}$ and $\mathbf{V}$ denote the left and right singular matrices respectively, and $\mathbf{S}$ denotes the diagonal matrix.
Then the optimization subproblem with respect to the variable $\mathcal{M}_n$ can be written as:

$$\min_{\mathcal{M}_n}\ \|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n + \frac{\mathcal{Y}_n}{\beta}\right\|_F^2$$

Let $\mathcal{H}_n = \mathcal{G}_n - \frac{\mathcal{Y}_n}{\beta}$ and $\tau = \frac{1}{\beta}$; the above subproblem is then equivalent to:

$$\min_{\mathcal{M}_n}\ \tau\,\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{1}{2}\left\|\mathcal{M}_n - \mathcal{H}_n\right\|_F^2$$

Furthermore, $\mathcal{H}_n$ can be expressed through the transformed tensor singular value decomposition as $\mathcal{H}_n = \mathcal{U} \diamond_{\Phi_n} \mathcal{S} \diamond_{\Phi_n} \mathcal{V}^H$, where $\diamond_{\Phi_n}$ denotes the tensor $\Phi$ product under the unitary transform matrix $\boldsymbol{\Phi}_n$, $\mathcal{U}$ and $\mathcal{V}$ are both unitary tensors, and $\mathcal{S}$ is the diagonal tensor.
The optimization subproblem of the variable $\mathcal{M}_n$ can be solved by the tensor singular value thresholding (t-SVT) operator, and the solution can be expressed as:

$$\mathcal{M}_n = \mathcal{U} \diamond_{\Phi_n} \mathcal{S}_{\tau} \diamond_{\Phi_n} \mathcal{V}^H$$

where the intermediate variable $\mathcal{S}_{\tau} = \bar{\mathcal{S}}_{\tau} \times_3 \boldsymbol{\Phi}_n^H$, with $\bar{\mathcal{S}}_{\tau}$ obtained by soft-thresholding the transform-domain singular values slice by slice, $\bar{\mathbf{S}}_{\tau}^{(i)} = \big(\bar{\mathbf{S}}_{\Phi_n}^{(i)} - \tau\big)_+$, where $(x)_+ = \max(x, 0)$ denotes taking the larger of $x$ and $0$. Concretely, first the unitary tensor transform of $\mathcal{H}_n$ is computed to obtain $\bar{\mathcal{H}}_{\Phi_n}$ and its slice-wise singular value tensor $\bar{\mathcal{S}}_{\Phi_n}$; then $\bar{\mathbf{S}}_{\tau}^{(i)} = \big(\bar{\mathbf{S}}_{\Phi_n}^{(i)} - \tau\big)_+$ yields $\bar{\mathcal{S}}_{\tau}$; finally the intermediate variable $\mathcal{S}_{\tau}$ is obtained from $\mathcal{S}_{\tau} = \bar{\mathcal{S}}_{\tau} \times_3 \boldsymbol{\Phi}_n^H$.
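An illustrative NumPy sketch of the t-SVT operator in the $\Phi_n$ transform domain, reusing the mode3_transform helper above (threshold tau = 1/beta):

```python
def t_svt(H, Phi, tau):
    """Tensor singular value thresholding: soft-threshold the singular
    values of each transformed frontal slice, then transform back."""
    Hh = mode3_transform(H, Phi)
    Mh = np.zeros_like(Hh)
    for i in range(Hh.shape[2]):
        U, s, Vt = np.linalg.svd(Hh[:, :, i], full_matrices=False)
        Mh[:, :, i] = (U * np.maximum(s - tau, 0.0)) @ Vt    # (s - tau)_+
    return mode3_transform(Mh, Phi.conj().T)
```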
S34: Update of $\mathcal{X}$.
The optimization subproblem with respect to the variable $\mathcal{X}$ is expressed as:

$$\min_{\mathcal{X}}\ \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2 \quad \text{s.t. } \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

This is a convex optimization problem with an equality constraint, and the variable $\mathcal{X}$ is updated as:

$$\mathcal{X} = \mathcal{P}_{\Omega}(\mathcal{T}) + \mathcal{P}_{\bar{\Omega}}\big(\Psi([\mathcal{G}])\big)$$

where $\mathcal{P}_{\Omega}(\cdot)$ denotes the projection operation under the observation index set $\Omega$ and $\mathcal{P}_{\bar{\Omega}}(\cdot)$ denotes the projection operation under the missing index set $\bar{\Omega}$.
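A two-line NumPy sketch of this update (illustrative; mask marks the observed entries, and tr_to_tensor is the helper sketched above):

```python
def update_target(T_obs, mask, cores):
    """X = P_Omega(T) + P_Omega_bar(Psi([G])): fill the missing entries
    with the current TR reconstruction, keep the observed ones."""
    X = tr_to_tensor(cores)   # Psi([G])
    X[mask] = T_obs[mask]
    return X
```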
S35: Update of $[\mathcal{Y}]$.
Based on the ADMM scheme, the Lagrange multipliers $\mathcal{Y}_n$ are updated as:

$$\mathcal{Y}_n \leftarrow \mathcal{Y}_n + \beta\,(\mathcal{M}_n - \mathcal{G}_n)$$

Furthermore, in each iteration the penalty parameter $\beta$ of the augmented Lagrangian of the objective function is updated by $\beta = \min(\rho\beta, \beta_{\max})$, where $1 < \rho < 1.5$ is a tuning hyperparameter, $\beta_{\max}$ denotes the set upper limit of $\beta$, and $\min(\rho\beta, \beta_{\max})$ means taking the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of $\beta$. In a specific embodiment of the present invention, $\rho$ is set to 1.01;
S36: Iterative update.
Steps S32–S35 are repeated, alternately updating each variable over multiple iterations. Two convergence conditions are set: the maximum number of iterations maxiter = 300 and the relative error threshold between two successive iterations tol = $10^{-4}$, where the relative error between two successive iterations is computed as

$$\mathrm{RelErr} = \frac{\big\|\mathcal{X}^{k} - \mathcal{X}^{k-1}\big\|_F}{\big\|\mathcal{X}^{k-1}\big\|_F}$$

with $\mathcal{X}^{k}$ denoting the current value of $\mathcal{X}$ and $\mathcal{X}^{k-1}$ its value at the previous iteration. When either convergence condition is met, i.e., the maximum number of iterations 300 is reached or the relative error between two successive iterations falls below the threshold $10^{-4}$, the iteration ends and the solution of the target tensor $\mathcal{X}$ is obtained.
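An illustrative stopping rule under the stated settings (maxiter = 300, tol = 1e-4); the update calls in the commented loop are placeholders standing in for the S32–S35 sketches above:

```python
def rel_error(X_new, X_old):
    """Relative change of the target tensor between two successive iterations."""
    return np.linalg.norm(X_new - X_old) / np.linalg.norm(X_old)

# schematic outer loop (not the patent's exact code):
# for k in range(300):
#     X_prev = X
#     ...                      # S32-S35: factor, auxiliary, target, multiplier updates
#     if rel_error(X, X_prev) < 1e-4:
#         break
```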
Step S4: Convert the obtained solution of the target tensor $\mathcal{X}$ into the corresponding format of the original visual data to obtain the final completion result for the incomplete original visual data.
Examples
In this embodiment, tests are performed on given tensor data (the color images and color videos shown in FIGS. 3 and 4; the English text above each image indicates the name of the corresponding dataset). At algorithm initialization, the penalty parameter β is set to 0.01, and the other parameters are tuned manually for the best performance. Incomplete tensor data are generated by randomly deleting part of the pixels of the visual data; several random missing rates are set (MR ∈ {60%, 70%, 80%, 90%, 95%}), and the tensor completion task is carried out with the technical scheme provided by the invention. FIGS. 5 and 6 show the completion results for the color image and color video data respectively, and the peak signal-to-noise ratio (PSNR) is used to evaluate the recovery performance of the data completion method of the invention on visual data; the value above each image is the corresponding PSNR. The higher the PSNR value, the better the quality of the restored image. Comparing the images before and after recovery shows the effectiveness of the method. The final results show that, compared with the conventional methods HaLRTC and TR-ALS, the completion results of the method of the invention not only have a better overall visual effect but also recover the local detail textures of the images better, with the recovered results closer to the original images. At the same time, the method achieves higher recovery accuracy as measured by the visual quality index PSNR. In conclusion, the method can effectively recover the main information and texture details of incomplete visual data under high missing rates, accomplishes the tensor completion task with better performance, and has good application prospects.
The embodiments described above are only a part of the embodiments of the present invention, and not all of them. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.

Claims (5)

1. A visual data completion method based on low rank tensor ring decomposition and factor prior is characterized by comprising the following steps:
step S1), initializing the target tensor, specifically comprising the following substeps:
S11) acquiring the incomplete original visual data, reading the original visual data file with missing entries into Matlab software, and storing it in tensor form to obtain the tensor to be completed $\mathcal{T}$; taking the index positions of all known pixels of the original visual data to form the observation index set $\Omega$;
S12) initializing the target tensor $\mathcal{X}$ from the tensor to be completed $\mathcal{T}$ so that the mapping relation satisfies $\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$, where $\bar{\Omega}$, the complement of $\Omega$, denotes the missing index set, $\mathcal{P}_{\Omega}(\mathcal{X})$ denotes the known entries of the target tensor $\mathcal{X}$, $\mathcal{P}_{\Omega}(\mathcal{T})$ denotes the known entries of the tensor to be completed $\mathcal{T}$, and $\mathcal{P}_{\bar{\Omega}}(\mathcal{X})$ denotes the missing entries of $\mathcal{X}$;
step S2), model building, specifically comprising the following substeps:
S21) obtaining a simple tensor ring completion model by finding the corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data, and then estimating the missing entries of the original visual data with the TR factors of the obtained representation:

$$\min_{[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{P}_{\Omega}(\mathcal{T}) - \mathcal{P}_{\Omega}\big(\Psi([\mathcal{G}])\big)\right\|_F^2$$

where $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}_n$ denotes the $n$th TR factor, $n = 1, 2, \ldots, N$, $\Psi([\mathcal{G}])$ is the tensor ring decomposition representation, $\mathcal{P}_{\Omega}(\cdot)$ denotes the projection operation under the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor;
in order to solve the problem that the low-rank tensor completion method depends on the initial rank, a simple tensor ring completion model is improved in the following way;
S22) first introducing the transformed tensor singular value decomposition and the basic tensor algebra involved:
unitary tensor transform: for a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, suppose $\boldsymbol{\Phi} \in \mathbb{R}^{I_3 \times I_3}$ is a unitary transform matrix satisfying $\boldsymbol{\Phi}\boldsymbol{\Phi}^H = \boldsymbol{\Phi}^H\boldsymbol{\Phi} = \mathbf{I}$; the unitary transform of the tensor $\mathcal{A}$ is defined as:

$$\bar{\mathcal{A}}_{\Phi} = \mathcal{A} \times_3 \boldsymbol{\Phi}$$

where $\bar{\mathcal{A}}_{\Phi}$ denotes the unitary transform of $\mathcal{A}$, $\mathcal{A} \times_3 \boldsymbol{\Phi}$ denotes the mode-3 product of $\mathcal{A}$ with the matrix $\boldsymbol{\Phi}$, $\mathbf{I}$ denotes the identity matrix, the superscript $H$ denotes the conjugate transpose of a matrix, $\mathbb{R}$ denotes the real field, and $I_{k'}$, $k' = 1, 2, 3$, denotes the dimension of the $k'$th mode of $\mathcal{A}$;
block diagonal matrix: the block diagonal matrix based on all frontal slices of $\bar{\mathcal{A}}_{\Phi}$ is defined as:

$$\bar{\mathbf{A}}_{\Phi} = \mathrm{bdiag}(\bar{\mathcal{A}}_{\Phi}) = \begin{bmatrix} \bar{\mathbf{A}}_{\Phi}^{(1)} & & \\ & \ddots & \\ & & \bar{\mathbf{A}}_{\Phi}^{(I_3)} \end{bmatrix}$$

where $\bar{\mathbf{A}}_{\Phi}^{(i)}$ is the $i$th frontal slice of $\bar{\mathcal{A}}_{\Phi}$, $i = 1, 2, \ldots, I_3$, and $\bar{\mathbf{A}}_{\Phi}$ can be converted back into a tensor by the folding operator $\mathrm{fold}(\cdot)$, i.e. $\bar{\mathcal{A}}_{\Phi} = \mathrm{fold}(\bar{\mathbf{A}}_{\Phi})$;
tensor $\Phi$ product: the $\Phi$ product of two third-order tensors is defined through the products of their frontal slices in the unitary transform domain; for two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times I_4 \times I_3}$, the tensor $\Phi$ product is defined as:

$$\mathcal{C} = \mathcal{A} \diamond_{\Phi} \mathcal{B} = \mathrm{fold}\big(\bar{\mathbf{A}}_{\Phi}\,\bar{\mathbf{B}}_{\Phi}\big) \times_3 \boldsymbol{\Phi}^H$$

where $\diamond_{\Phi}$ denotes the tensor $\Phi$ product symbol, $\bar{\mathcal{B}}_{\Phi}$ denotes the unitary transform of $\mathcal{B}$, the tensor $\Phi$ product is a third-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times I_4 \times I_3}$, the superscript $H$ denotes the conjugate transpose, and $I_4$ denotes the dimension of the second mode of $\mathcal{B}$;
transformed tensor singular value decomposition: mainly used to factorize third-order tensors, with a unitary transform matrix $\boldsymbol{\Phi}$ replacing the discrete Fourier transform matrix of the conventional tensor singular value decomposition; for a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the transformed tensor singular value decomposition is expressed as:

$$\mathcal{A} = \mathcal{U} \diamond_{\Phi} \mathcal{S} \diamond_{\Phi} \mathcal{V}^H$$

where $\mathcal{U} \in \mathbb{R}^{I_1 \times I_1 \times I_3}$ and $\mathcal{V} \in \mathbb{R}^{I_2 \times I_2 \times I_3}$ are both unitary tensors and $\mathcal{S} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is a diagonal tensor;
based on the transformed tensor singular value decomposition, the transformed tensor nuclear norm can be defined; for a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, suppose $\boldsymbol{\Phi} \in \mathbb{R}^{I_3 \times I_3}$ is a unitary transform matrix; the transformed tensor nuclear norm of $\mathcal{A}$ is defined as:

$$\|\mathcal{A}\|_{\mathrm{TTNN}} = \sum_{i=1}^{I_3}\big\|\bar{\mathbf{A}}_{\Phi}^{(i)}\big\|_*$$

where $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm, $\|\cdot\|_*$ denotes the matrix nuclear norm, and $\|\bar{\mathbf{A}}_{\Phi}^{(i)}\|_*$ is the matrix nuclear norm of the $i$th frontal slice of $\bar{\mathcal{A}}_{\Phi}$, i.e., the sum of all its singular values;
the rank of the TR factor satisfies the relation due to tensor rank
Figure FDA00036447279000000226
Wherein X (n) Tensor of representation
Figure FDA00036447279000000227
The standard mode n of (a) expands the matrix,
Figure FDA00036447279000000322
to represent
Figure FDA00036447279000000323
The standard model 2 of (1) expands a matrix, rank () represents a rank function of the matrix, each TR factor is further constrained by utilizing a transformation tensor nuclear norm, and a basic low-rank tensor ring completion model is obtained as follows:
Figure FDA0003644727900000031
Figure FDA0003644727900000032
wherein the object tensor
Figure FDA00036447279000000325
N denotes the object tensor
Figure FDA00036447279000000321
Order of (1) of n To represent
Figure FDA00036447279000000320
The size of the dimension of the nth order of (c),
Figure FDA00036447279000000318
the set of TR factors is represented as a set of TR factors,
Figure FDA00036447279000000317
denotes the nth TR factor, R n-1 、I n And R n Respectively representing three dimensions, | ·| non-woven phosphor TTNN Representing the transformation tensor nuclear norm, λ > 0 being a trade-off parameter; when the basic low-rank tensor loop completion model is optimized, the transformation tensor nuclear norm of all the TR factors and the fitting error of the target tensor are simultaneously minimized, and in the basic low-rank tensor loop completion model, the three-order TR factors are given
Figure FDA00036447279000000316
Constructs a unitary transformation matrix
Figure FDA00036447279000000314
Due to the fact that
Figure FDA00036447279000000315
Is unknown and can iteratively update phi n This process is represented as:
Figure FDA0003644727900000033
wherein,
Figure FDA00036447279000000310
to represent
Figure FDA00036447279000000311
The standard model 3 of (a) is expanded into a matrix,
Figure FDA00036447279000000312
representation expansion matrix
Figure FDA00036447279000000313
The U and V represent a left singular matrix and a right singular matrix respectively, the S represents a diagonal matrix, and the U is selected in the transformation tensor singular value decomposition H As unitary transformation momentsArray, hypothesis
Figure FDA0003644727900000038
Is satisfied with
Figure FDA0003644727900000039
And then by performing a tensor unitary transformation
Figure FDA0003644727900000036
The tensor can be obtained
Figure FDA0003644727900000037
Last R of (2) n All of the r forward slices are zero matrices, hence U H Being a unitary transformation matrix will help to further explore the TR factor
Figure FDA00036447279000000324
Low rank information of (2);
S23) to further improve the completion performance on visual data, adding a factor prior to make full use of the latent information of the data:
in the tensor ring decomposition, each $n$th TR factor $\mathcal{G}_n$ represents the information on the $n$th mode of the original visual data; the local similarity of the pixels of the original visual data is described as a precise factor prior, and the weight of a single factor graph is defined for $k \in \{\text{row}, \text{column}\}$, where row and column denote the row and column spaces respectively; if $k = \text{row}$, then $i_k$ and $j_k$ denote any two index positions of the row space; $w_{ij}$, the $(i,j)$th element of the similarity matrix $\mathbf{W} \in \mathbb{R}^{I_n \times I_n}$, is determined by the pairwise index distances $i_k - j_k$; let $\mathbf{D}$ be the diagonal matrix whose $(i,i)$th element is $\sum_j w_{ij}$, thereby obtaining the Laplacian matrix $\mathbf{L} = \mathbf{D} - \mathbf{W}$;
using the low-rank assumption on the TR factors together with the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and factor priors is obtained:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \sum_{n=1}^{N}\left(\|\mathcal{G}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right)\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2 \quad \text{s.t. } \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

where the minimized expression is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors and the equality on the right is its constraint; $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_N]$ is the graph regularization parameter vector, $\mu, \lambda$ are trade-off parameters with $\mu > 0$, $\lambda > 0$, $\mathrm{tr}(\cdot)$ is the matrix trace operation, the Laplacian matrix $\mathbf{L}_n$ describes the internal dependencies within the $n$th TR factor, $\mathbf{G}_{n(2)}$ denotes the standard mode-2 unfolding matrix of the $n$th TR factor $\mathcal{G}_n$, and the superscript $T$ denotes the matrix transpose;
step S3), solving the model, specifically comprising the following substeps:
S31) constructing the augmented Lagrangian:
to solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors with the alternating direction method of multipliers (ADMM) computational framework, first introducing a series of auxiliary tensors $[\mathcal{M}]$ to simplify the optimization, so that the optimization problem of the objective function is re-expressed as:

$$\min_{\mathcal{X},[\mathcal{G}],[\mathcal{M}]}\ \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right)\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$
$$\text{s.t. } \mathcal{M}_n = \mathcal{G}_n,\ n = 1, 2, \ldots, N, \qquad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$$

where the set $[\mathcal{M}]$ denotes a tensor sequence and $\mathcal{M}_n$ denotes the auxiliary tensor corresponding to the $n$th TR factor $\mathcal{G}_n$; by incorporating the additional equality constraints $\mathcal{M}_n = \mathcal{G}_n$ of the auxiliary tensors, the augmented Lagrangian of the objective function is obtained as:

$$L\big([\mathcal{G}],[\mathcal{M}],\mathcal{X},[\mathcal{Y}]\big) = \sum_{n=1}^{N}\left(\|\mathcal{M}_n\|_{\mathrm{TTNN}} + \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \langle\mathcal{Y}_n, \mathcal{M}_n - \mathcal{G}_n\rangle + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n\right\|_F^2\right) + \frac{\lambda}{2}\left\|\mathcal{X} - \Psi([\mathcal{G}])\right\|_F^2$$

where $[\mathcal{Y}]$ is the set of Lagrange multipliers, $\mathcal{Y}_n$ is the $n$th Lagrange multiplier, $\beta > 0$ is a penalty parameter, and $\langle\cdot,\cdot\rangle$ denotes the tensor inner product;
then, for each variable, fixing all other variables and solving in turn the optimization subproblems corresponding to each variable in steps S32) to S35), updating the variables alternately;
S32) update of $[\mathcal{G}]$:
the optimization subproblem with respect to the variable $\mathcal{G}_n$ simplifies to:

$$\min_{\mathcal{G}_n}\ \frac{\alpha_n\mu}{2}\,\mathrm{tr}\!\left(\mathbf{G}_{n(2)}^{T}\mathbf{L}_n\mathbf{G}_{n(2)}\right) + \frac{\beta}{2}\left\|\mathcal{M}_n - \mathcal{G}_n + \frac{\mathcal{Y}_n}{\beta}\right\|_F^2 + \frac{\lambda}{2}\left\|\mathbf{X}_{\langle n\rangle} - \mathbf{G}_{n(2)}\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\right\|_F^2$$

where $\mathbf{X}_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $\mathbf{G}_{\neq n}^{\langle 2\rangle}$ denotes the circular mode-2 unfolding matrix of the subchain tensor generated by merging all factors except the $n$th TR factor $\mathcal{G}_n$ through multilinear products;
by setting the derivative of the above subproblem with respect to $\mathbf{G}_{n(2)}$ to zero, the solution of the optimization subproblem of $\mathcal{G}_n$ is equal to solving the following general Sylvester matrix equation:

$$\big(\alpha_n\mu\,\mathbf{L}_n\big)\,\mathbf{G}_{n(2)} + \mathbf{G}_{n(2)}\left(\lambda\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\mathbf{I}\right) = \lambda\,\mathbf{X}_{\langle n\rangle}\,\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\,\mathbf{M}_{n(2)} + \mathbf{Y}_{n(2)}$$

where $\mathbf{X}_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor, $\mathbf{M}_{n(2)}$ and $\mathbf{Y}_{n(2)}$ denote the standard mode-2 unfolding matrices of $\mathcal{M}_n$ and $\mathcal{Y}_n$ respectively, and $\mathbf{I}$ is the identity matrix; since the matrix $-\alpha_n\mu\,\mathbf{L}_n$ and the matrix $\lambda\big(\mathbf{G}_{\neq n}^{\langle 2\rangle}\big)^{T}\mathbf{G}_{\neq n}^{\langle 2\rangle} + \beta\mathbf{I}$ have no common eigenvalues, the Sylvester matrix equation has a unique solution, realized by calling the sylvester function in Matlab;
S33)
Figure FDA00036447279000000526
update of
In that
Figure FDA00036447279000000527
After updating, the nth unitary transformation matrix Φ in the transformation tensor kernel norm is first updated according to the following formula n
Figure FDA0003644727900000055
Wherein,
Figure FDA0003644727900000059
to represent
Figure FDA0003644727900000058
The standard model 3 of (a) is expanded into a matrix,
Figure FDA0003644727900000056
representation expansion matrix
Figure FDA0003644727900000057
The U and the V respectively represent a left singular matrix and a right singular matrix, and the S represents a diagonal matrix;
then, with respect to the variables
Figure FDA00036447279000000633
The optimization sub-problem of (a) is simplified to:
Figure FDA0003644727900000061
order to
Figure FDA00036447279000000630
And
Figure FDA00036447279000000631
the above variables
Figure FDA00036447279000000632
The optimization sub-problem of (2) is equivalent to:
Figure FDA0003644727900000062
further, the air conditioner is provided with a fan,
Figure FDA00036447279000000627
can be expressed by transforming tensor singular value decomposition into
Figure FDA00036447279000000628
Wherein
Figure FDA00036447279000000629
Expressed in unitary transformation matrix phi n The product of the lower tensor phi is,
Figure FDA00036447279000000625
and
Figure FDA00036447279000000626
are both unitary tensors and are each a unitary tensor,
Figure FDA00036447279000000624
is a diagonal tensor;
variables of
Figure FDA00036447279000000623
Can be singular by tensorSolving by using a value threshold value t-SVT operator, wherein the solving result is expressed as:
Figure FDA0003644727900000063
wherein the intermediate variable
Figure FDA00036447279000000621
By first solving for
Figure FDA00036447279000000622
Is obtained by doing unitary tensor transformation
Figure FDA00036447279000000620
According to the formula
Figure FDA00036447279000000615
To obtain
Figure FDA00036447279000000616
Finally according to
Figure FDA00036447279000000617
Obtaining intermediate variables
Figure FDA00036447279000000618
Wherein
Figure FDA00036447279000000619
Express get
Figure FDA00036447279000000614
And the larger of 0;
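The whole of this step amounts to singular value thresholding on each frontal slice in the transform domain. A hedged sketch, assuming real-valued tensors with the transform applied along mode 3 as above (t_svt and its arguments are illustrative names, not the claimed notation):

    import numpy as np

    def t_svt(T, Phi, tau):
        # T: tensor of shape (n1, n2, n3); Phi: unitary (n3, n3); tau: threshold 1/beta.
        Tbar = np.einsum('ijk,lk->ijl', T, Phi)      # forward transform T x_3 Phi
        Zbar = np.empty_like(Tbar)
        for k in range(Tbar.shape[2]):               # SVT on each frontal slice
            U, s, Vt = np.linalg.svd(Tbar[:, :, k], full_matrices=False)
            Zbar[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
        return np.einsum('ijl,lk->ijk', Zbar, Phi)   # back-transform Zbar x_3 Phi^T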
S34) $\mathcal{X}$ update

The optimization sub-problem with respect to the variable $\mathcal{X}$ is expressed as:

$$\min_{\mathcal{X}}\ \frac{1}{2}\left\|\mathcal{X}-\Psi\left(\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(N)}\right)\right\|_{F}^{2}$$
$$\mathrm{s.t.}\quad P_{\Omega}(\mathcal{X})=P_{\Omega}(\mathcal{T})$$

This is a convex optimization problem with an equality constraint, and the variable $\mathcal{X}$ is updated as:

$$\mathcal{X}=P_{\Omega}(\mathcal{T})+P_{\bar{\Omega}}\left(\Psi\left(\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(N)}\right)\right)$$

wherein $\Psi(\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(N)})$ denotes the tensor reconstructed from the TR factors, $\mathcal{T}$ denotes the observed incomplete tensor, $P_{\Omega}(\cdot)$ denotes the projection operation under the observation index set $\Omega$, and $P_{\bar{\Omega}}(\cdot)$ denotes the projection operation under the missing index set $\bar{\Omega}$;
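A one-line sketch of this projection update, assuming a boolean observation mask mask over $\Omega$, a zero-filled observed tensor T_obs, and X_rec for the tensor reconstructed from the current TR factors (the reconstruction routine itself is omitted; all names are hypothetical):

    import numpy as np

    def update_target(T_obs, mask, X_rec):
        # Observed entries stay fixed; missing entries come from the TR reconstruction.
        return np.where(mask, T_obs, X_rec)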
S35) $\mathcal{Y}^{(n)}$ update

Based on the alternating direction method of multipliers (ADMM) framework, the Lagrange multiplier $\mathcal{Y}^{(n)}$ is updated as:

$$\mathcal{Y}^{(n)}=\mathcal{Y}^{(n)}+\beta\left(\mathcal{G}^{(n)}-\mathcal{Z}^{(n)}\right)$$

Furthermore, the penalty parameter $\beta$ of the augmented Lagrangian function of the objective function is updated in each iteration by $\beta=\min(\rho\beta,\ \beta_{\max})$, wherein $1<\rho<1.5$ is a tuning hyperparameter, $\beta_{\max}$ denotes the preset upper bound of $\beta$, and $\min(\rho\beta,\beta_{\max})$ denotes taking the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of $\beta$;
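The dual and penalty updates of S35) in the same sketch style, with hypothetical list names Gs, Zs, Ys for the N factors, auxiliary variables, and multipliers; the beta_max default is illustrative only:

    def update_duals(Gs, Zs, Ys, beta, rho=1.01, beta_max=1e8):
        # Ascent step on each multiplier, then capped geometric growth of the penalty.
        Ys = [Y + beta * (G - Z) for G, Z, Y in zip(Gs, Zs, Ys)]
        beta = min(rho * beta, beta_max)
        return Ys, beta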
S36) iterative update

Steps S32)-S35) are repeated, alternately updating each variable over multiple iterations, with two convergence conditions set: a maximum iteration number maximum and a relative error threshold tol between two successive iterations, wherein the relative error between two iterations is computed as

$$\mathrm{RelErr}=\frac{\left\|\mathcal{X}^{k+1}-\mathcal{X}^{k}\right\|_{F}}{\left\|\mathcal{X}^{k}\right\|_{F}}$$

wherein $\mathcal{X}^{k+1}$ denotes the current value of $\mathcal{X}$ and $\mathcal{X}^{k}$ denotes the value of $\mathcal{X}$ at the previous iteration; when either convergence condition is met, that is, when the maximum iteration number maximum is reached or the relative error between two successive iterations falls below the threshold tol, the iteration terminates and the solution of the target tensor $\mathcal{X}$ is obtained;
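The stopping test of S36), sketched with the same hypothetical naming; relerr is computed exactly as defined above, with a small epsilon guarding against division by zero:

    import numpy as np

    def stop(X_new, X_old, it, maximum=300, tol=1e-4):
        # Relative change between successive iterates of the target tensor.
        relerr = np.linalg.norm(X_new - X_old) / max(np.linalg.norm(X_old), 1e-12)
        return it + 1 >= maximum or relerr < tol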
Step S4): converting the obtained solution of the target tensor $\mathcal{X}$ into the corresponding format of the original visual data, so as to obtain the final completion result of the incomplete original visual data.
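For visual data, converting the completed tensor back to its original format is typically just a clip-and-cast; a sketch assuming 8-bit color images or video frames (the [0, 255] range is an assumption about the input data):

    import numpy as np

    def to_visual(X):
        # Round, clip to the valid dynamic range, and cast back to 8-bit pixels.
        return np.clip(np.rint(X), 0, 255).astype(np.uint8)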
2. The visual data completion method based on low-rank tensor ring decomposition and factor prior of claim 1, wherein the original visual data comprise color images and color videos.
3. The visual data completion method based on low-rank tensor ring decomposition and factor prior as claimed in claim 2, wherein the tuning hyperparameter $\rho$ is equal to 1.01.
4. The visual data completion method based on low-rank tensor ring decomposition and factor prior as claimed in claim 3, wherein the maximum iteration number maximum takes the value of 300.
5. The method of claim 4, wherein the relative error threshold tol between two iterations is $10^{-4}$.
CN202210526890.4A 2022-05-16 2022-05-16 Visual data completion method based on low-rank tensor ring decomposition and factor prior Active CN114841888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210526890.4A CN114841888B (en) 2022-05-16 2022-05-16 Visual data completion method based on low-rank tensor ring decomposition and factor prior

Publications (2)

Publication Number Publication Date
CN114841888A 2022-08-02
CN114841888B CN114841888B (en) 2023-03-28

Family

ID=82569550

Country Status (1)

Country Link
CN (1) CN114841888B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292337A (en) * 2017-06-13 2017-10-24 西北工业大学 Ultralow order tensor data filling method
CN109241491A (en) * 2018-07-28 2019-01-18 天津大学 The structural missing fill method of tensor based on joint low-rank and rarefaction representation
CN110059291A (en) * 2019-03-15 2019-07-26 上海大学 A kind of three rank low-rank tensor complementing methods based on GPU
CN110162744A (en) * 2019-05-21 2019-08-23 天津理工大学 A kind of multiple estimation new method of car networking shortage of data based on tensor
CN112116532A (en) * 2020-08-04 2020-12-22 西安交通大学 Color image completion method based on tensor block cyclic expansion
CN113222834A (en) * 2021-04-22 2021-08-06 南京航空航天大学 Visual data tensor completion method based on smooth constraint and matrix decomposition
CN113240596A (en) * 2021-05-07 2021-08-10 西南大学 Color video recovery method and system based on high-order tensor singular value decomposition
US20210338171A1 (en) * 2020-02-05 2021-11-04 The Regents Of The University Of Michigan Tensor amplification-based data processing
CN113704688A (en) * 2021-08-17 2021-11-26 南昌航空大学 Defect vibration signal recovery method based on variational Bayes parallel factor decomposition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHENG DAI et al.: "A tucker decomposition based knowledge distillation for intelligent edge applications", Applied Soft Computing *
LONGHAO YUAN et al.: "Higher-dimension Tensor Completion via Low-rank Tensor Ring Decomposition", 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) *
LI Qiong: "Recovery of missing signals based on variational Bayesian parallel factor decomposition" (in Chinese), Chinese Journal of Scientific Instrument *
MA You et al.: "Missing data prediction algorithm for satellite telemetry based on tensor decomposition" (in Chinese), Journal of Electronics & Information Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115630211A (en) * 2022-09-16 2023-01-20 山东科技大学 Traffic data tensor completion method based on space-time constraint
CN116087435A (en) * 2023-04-04 2023-05-09 石家庄学院 Air quality monitoring method, electronic equipment and storage medium
CN116087435B (en) * 2023-04-04 2023-06-16 石家庄学院 Air quality monitoring method, electronic equipment and storage medium
CN116245779A (en) * 2023-05-11 2023-06-09 四川工程职业技术学院 Image fusion method and device, storage medium and electronic equipment
CN116245779B (en) * 2023-05-11 2023-08-22 四川工程职业技术学院 Image fusion method and device, storage medium and electronic equipment
CN116912107A (en) * 2023-06-13 2023-10-20 重庆市荣冠科技有限公司 DCT-based weighted adaptive tensor data completion method
CN116912107B (en) * 2023-06-13 2024-04-16 万基泰科工集团数字城市科技有限公司 DCT-based weighted adaptive tensor data completion method
CN116450636A (en) * 2023-06-20 2023-07-18 石家庄学院 Internet of things data completion method, equipment and medium based on low-rank tensor decomposition
CN116450636B (en) * 2023-06-20 2023-08-18 石家庄学院 Internet of things data completion method, equipment and medium based on low-rank tensor decomposition
CN117745551A (en) * 2024-02-19 2024-03-22 电子科技大学 Method for recovering phase of image signal
CN117745551B (en) * 2024-02-19 2024-04-26 电子科技大学 Method for recovering phase of image signal

Also Published As

Publication number Publication date
CN114841888B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN114841888B (en) Visual data completion method based on low-rank tensor ring decomposition and factor prior
Miao et al. Low-rank quaternion tensor completion for recovering color videos and images
Yuan et al. High-order tensor completion via gradient-based optimization under tensor train format
CN102722896B (en) Adaptive compressed sensing-based non-local reconstruction method for natural image
CN109241491A (en) The structural missing fill method of tensor based on joint low-rank and rarefaction representation
CN106097278B (en) Sparse model, reconstruction method and dictionary training method of multi-dimensional signal
CN108510013B (en) Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix
CN110113607B (en) Compressed sensing video reconstruction method based on local and non-local constraints
CN113222834B (en) Visual data tensor completion method based on smoothness constraint and matrix decomposition
Cen et al. Boosting occluded image classification via subspace decomposition-based estimation of deep features
CN105631807A (en) Single-frame image super resolution reconstruction method based on sparse domain selection
Feng et al. Compressive sensing via nonlocal low-rank tensor regularization
CN113420421B (en) QoS prediction method based on time sequence regularized tensor decomposition in mobile edge calculation
CN114119426B (en) Image reconstruction method and device by non-local low-rank conversion domain and full-connection tensor decomposition
CN107609596A (en) Printenv weights more figure regularization Non-negative Matrix Factorizations and image clustering method automatically
Zhang et al. Effective tensor completion via element-wise weighted low-rank tensor train with overlapping ket augmentation
Xu et al. Factorized tensor dictionary learning for visual tensor data completion
Zhang et al. Tensor recovery based on a novel non-convex function minimax logarithmic concave penalty function
CN105931184B (en) SAR image super-resolution method based on combined optimization
Zhang et al. Randomized sampling techniques based low-tubal-rank plus sparse tensor recovery
Wen et al. The power of complementary regularizers: Image recovery via transform learning and low-rank modeling
Chen et al. Hierarchical factorization strategy for high-order tensor and application to data completion
Tu et al. Tensor recovery using the tensor nuclear norm based on nonconvex and nonlinear transformations
CN117056327A (en) Tensor weighted gamma norm-based industrial time series data complement method
CN111062888A (en) Hyperspectral image denoising method based on multi-target low-rank sparsity and spatial-spectral total variation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant