CN114841888A - Visual data completion method based on low-rank tensor ring decomposition and factor prior - Google Patents
- Publication number: CN114841888A (application CN202210526890.4A)
- Authority: CN (China)
- Prior art keywords: tensor, matrix, rank, factor, transformation
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
Abstract
The invention discloses a visual data completion method based on low-rank tensor ring decomposition and factor priors. It addresses the problem that conventional tensor-decomposition-based completion algorithms depend on the initial rank selection, which leaves their recovery results lacking stability and effectiveness. A layered tensor decomposition model is designed that realizes tensor ring decomposition and completion at the same time. For the first layer, the incomplete tensor is expressed through tensor ring decomposition as a series of third-order factors. For the second layer, the low-rank constraint on the factors is expressed with the transformation tensor nuclear norm, and the degrees of freedom of each factor are constrained with graph-regularized factor priors. By simultaneously exploiting the low-rank structure and the prior information of the factor space, the model gains implicit rank adjustment, which improves its robustness to rank selection and reduces the burden of searching for the optimal initial rank; it also fully exploits the latent information of the tensor data, further improving completion performance.
Description
Technical Field
The invention relates to the field of visual data completion, in particular to a visual data completion method based on low-rank tensor ring decomposition and factor prior.
Background
With the rapid development of information technology, modern society has entered an era of explosive data growth, generating large amounts of multi-attribute, multi-association data. However, most data are incomplete, whether due to occlusion, noise, local corruption, difficulty of collection, or loss during conversion. Missing entries can significantly degrade data quality and complicate analysis. The tensor, as the high-dimensional extension of vectors and matrices, can express more complex internal data structure and is widely applied in signal processing, computer vision, data mining, neuroscience, and other fields. Matrix-based completion methods destroy the spatial structure of the original multidimensional data and therefore perform poorly. Tensor completion, which has received much attention in recent years, is thus one of the important problems in tensor analysis: it recovers the values of missing elements from the observed available elements through prior information and structural properties of the data. In fact, most real-world natural data, such as color images and color video, are low-rank or nearly low-rank, so incomplete data can be recovered using low-rank priors. Following the success of low-rank matrix completion, the low-rank constraint has also become a powerful tool for recovering missing entries of higher-order tensors, effectively estimating missing data from the tensor's global information. A basic issue in low-rank tensor completion is the definition of tensor rank. Unlike matrix rank, the tensor rank is not uniquely defined; different tensor decompositions induce different notions of tensor rank.
Tensor decomposition is a central topic in tensor data analysis. Through tensor decomposition, essential features can be extracted from the original tensor data and a low-dimensional representation obtained, while the structural information inside the data is preserved. In recent years, tensor networks have become a major tool for analyzing large-scale tensor data. Since its introduction, tensor ring decomposition has been studied across disciplines because of its strong expressive power and flexibility, and theory and practice have demonstrated its feasibility and effectiveness for the tensor completion task. However, existing completion methods based on tensor ring decomposition, while achieving excellent performance, usually depend on a good initial rank estimate and incur heavy computational overhead. Determining the optimal initial rank is difficult in practice, and the cost of a rank search grows exponentially with the dimensionality of the rank. The completion result is affected by the initial rank and may overfit. In addition, the high computational complexity of tensor-ring-based models makes existing methods inefficient, which greatly limits practical application. In summary, sensitivity to the initial rank and high computational cost remain challenging problems for tensor-ring-based completion, so developing a robust and efficient data completion algorithm based on tensor ring decomposition is crucial.
Disclosure of Invention
The invention provides a visual data completion method based on low-rank tensor ring decomposition and factor priors, addressing the problem that conventional tensor-decomposition-based completion algorithms depend on the initial rank selection, which leaves the recovery results lacking stability and effectiveness.
The invention discloses a visual data completion method based on low-rank tensor ring decomposition and factor prior, which comprises the following steps of:
S1: initialize the target tensor. Represent the incomplete original visual data as the tensor to be completed, T, determine the observation index set Ω, and initialize the target tensor X from T as the input of the data completion model of the invention;
S2: build the model. Taking a simple Tensor Ring (TR) completion model as the basic framework, design a layered tensor decomposition model: impose a low-rank constraint on the TR factors through the transformation tensor nuclear norm, and limit the degrees of freedom of each TR factor with factor prior information, constructing a visual data completion model based on low-rank tensor ring decomposition and factor priors and obtaining the objective function of the data completion model;
S3: solve the model. Solve the objective function within the computational framework of the Alternating Direction Method of Multipliers (ADMM): construct the augmented Lagrangian form of the objective function to convert its optimization into several subproblems solved separately, iteratively update the intermediate variables by solving each subproblem in turn, and output the solution of the target tensor X after the iterations converge;
S4: convert the solution of the target tensor X into the corresponding format of the original visual data to obtain the final completion result.
Wherein, step S1 includes the following steps:
S11: obtain the incomplete original visual data and store it in tensor form to obtain the tensor to be completed, T; take the index positions of all known pixel points in the original visual data to form the observation index set Ω;
S12: initialize the target tensor X from the tensor to be completed T so that P_Ω(X) = P_Ω(T), where Ω̄ is the complement of Ω, P_Ω(X) denotes the known entries of the target tensor X, P_Ω(T) the known entries of T, and P_Ω̄(X) the missing entries of the target tensor X.
Wherein, step S2 includes the following steps:
S21: find a tensor ring decomposition representation of the incomplete original visual data from its known entries, then estimate the missing entries of the original visual data with the TR factors of the obtained representation. This yields the simple tensor ring completion model:

min_{[G], X} (1/2) ‖P_Ω(X) - P_Ω(Ψ([G]))‖_F²

where [G] denotes the set of TR factors, Ψ([G]) is the tensor represented by the tensor ring decomposition, P_Ω denotes the projection operation under the observation index set Ω, and ‖·‖_F is the Frobenius norm of a tensor;
S22: on the basis of the simple tensor ring completion model, each TR factor is further constrained through the transformation tensor nuclear norm so as to exploit the global low-rank property of the tensor data, giving the basic low-rank tensor ring completion model:

min_{[G], X} λ Σ_{n=1}^{N} ‖G^(n)‖_TTNN + (1/2) ‖X - Ψ([G])‖_F²   s.t. P_Ω(X) = P_Ω(T)

where the target tensor X ∈ R^{I_1×I_2×…×I_N}, R denotes the real number field, N is the order of the target tensor X, and I_n is the dimension size of its nth mode, n = 1, 2, …, N. [G] denotes the set of TR factors, G^(n) ∈ R^{R_{n-1}×I_n×R_n} is the nth TR factor, and R_{n-1}, I_n, R_n are its three dimension sizes. ‖·‖_TTNN denotes the transformation tensor nuclear norm, and λ > 0 is a trade-off parameter.
The basic low-rank tensor ring completion model restricts the TR factors through the low-rank constraint and implicitly adjusts the TR rank over multiple iterations, so that the TR rank gradually approaches the actual rank of the tensor ring decomposition, enhancing robustness to the initial rank selection;
S23: to further improve the completion performance on visual data, a factor prior can be added to fully exploit the latent information of the visual data. Combining the low-rank assumption on the TR factors with the graph-regularized factor prior yields the visual data completion model based on low-rank tensor ring decomposition and factor priors:

min_{[G], X} λ Σ_{n=1}^{N} ‖G^(n)‖_TTNN + (μ/2) Σ_{n=1}^{N} α_n tr((G^(n)_(2))^T L_n G^(n)_(2)) + (1/2) ‖X - Ψ([G])‖_F²
s.t. P_Ω(X) = P_Ω(T)

The first line above is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors, and the second line is its constraint. α = [α_1, α_2, …, α_N] is the graph regularization parameter vector, α_n denotes the nth element of α, n = 1, 2, …, N. μ, λ are trade-off parameters with μ > 0, λ > 0. L_n denotes the nth Laplacian matrix, G^(n)_(2) the mode-2 unfolding matrix of the nth TR factor G^(n), tr(·) the trace of a matrix, and superscript T the transpose of a matrix.
Wherein, step S3 includes the following steps:
S31: to solve the objective function using ADMM, a series of auxiliary tensors M^(n) is first introduced to simplify the optimization, so the optimization problem of the objective function can be re-expressed as:

min λ Σ_{n=1}^{N} ‖M^(n)‖_TTNN + (μ/2) Σ_{n=1}^{N} α_n tr((G^(n)_(2))^T L_n G^(n)_(2)) + (1/2) ‖X - Ψ([G])‖_F²
s.t. G^(n) = M^(n), n = 1, 2, …, N;  P_Ω(X) = P_Ω(T)

where the set {M^(n)} represents a tensor sequence and M^(n) is the auxiliary tensor corresponding to the nth TR factor G^(n). Incorporating the additional equality constraints G^(n) = M^(n), n = 1, 2, …, N, the augmented Lagrangian of the objective function is:

L_β = Σ_{n=1}^{N} [ λ ‖M^(n)‖_TTNN + <Y^(n), G^(n) - M^(n)> + (β/2) ‖G^(n) - M^(n)‖_F² + (μ α_n/2) tr((G^(n)_(2))^T L_n G^(n)_(2)) ] + (1/2) ‖X - Ψ([G])‖_F²

where {Y^(n)} is the set of Lagrange multipliers, Y^(n) is the nth Lagrange multiplier, N their total number, β > 0 is a penalty parameter, and <·,·> denotes the tensor inner product.
Then each variable is updated alternately: fixing all other variables, the optimization subproblems corresponding to S32 to S35 are solved in turn. In particular, with the other variables fixed, the subproblem for the TR factor G^(n) takes the form

min_{G^(n)} (μ α_n/2) tr((G^(n)_(2))^T L_n G^(n)_(2)) + (β/2) ‖G^(n) - M^(n) + Y^(n)/β‖_F² + (1/2) ‖X_<n> - G^(n)_(2) (G^(≠n)_<2>)^T‖_F²

where X_<n> denotes the cyclic mode-n unfolding matrix of the target tensor X, and G^(≠n)_<2> denotes the cyclic mode-2 unfolding matrix of the subchain tensor generated by merging all factors except the nth TR factor G^(n) through multilinear products. Solving this subproblem updates the variable G^(n);
Furthermore, the penalty parameter β of the augmented Lagrangian of the objective function can be updated in each iteration by β = min(ρβ, β_max), where 1 < ρ < 1.5 is a tuning hyperparameter, β_max denotes the preset upper limit of β, and min(ρβ, β_max) takes the smaller of ρβ and β_max as the current value of β;
S36: repeat steps S32-S35, updating each variable alternately over multiple iterations. Two convergence conditions are set: a maximum iteration count and a relative-error threshold tol between two successive iterations. The iteration ends once a convergence condition is met, i.e. the maximum iteration count is reached or the relative error between two iterations falls below the threshold tol, and the solution of the target tensor X is obtained.
By simultaneously exploiting the low-rank structure and the prior information of the factor space, the invention designs a layered tensor decomposition model that realizes tensor ring decomposition and completion at the same time. For the first layer, the incomplete tensor is represented through tensor ring decomposition as a series of third-order TR factors. For the second layer, the low-rank constraint on the TR factors is expressed with the transformation tensor nuclear norm and a graph-regularized factor prior strategy is considered. The low-rank constraint on the TR factors gives the data completion model implicit rank adjustment and enhances its robustness to TR rank selection, reducing the burden of searching for the optimal TR rank, while the factor prior fully exploits the latent information of the original visual data, helping to further improve completion performance.
Drawings
FIG. 1 is a general block diagram of an embodiment of the present invention;
FIG. 2 is a simplified flow chart of a visual data completion method based on low rank tensor ring decomposition and factor prior in the embodiment of the present invention;
FIG. 3 is a diagram of raw color image data in an embodiment of the present invention;
FIG. 4 is a diagram of raw color video data in an embodiment of the present invention;
FIG. 5 is a graph of the completion results for a color image at different missing rates according to an embodiment of the present invention;
FIG. 6 is a graph of the completion results for color video at different missing rates according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention provides a visual data completion method based on low-rank tensor ring decomposition and factor prior, which specifically comprises the following steps:
step S1: the object tensor is initialized.
S11: obtaining incomplete original visual data (such as color images, color videos and the like), reading in an original visual data file with missing entries through Matlab software, and storing the original visual data file into a tensor form to obtain a to-be-compensated tensorTaking index positions of all known pixel points of original visual data to form an observation index set omega;
s12: according to the amount of full tension to be compensatedInitializing a target tensorSo that the mapping relation satisfiesWhereinThe complement of Ω represents the missing index set.Tensor representing objectIs known in the art, and the known item,indicating the amount of tension to be compensatedIs known in the art, and the known item,tensor representing objectThe missing entry of (2).
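As a concrete illustration of this initialization, a minimal NumPy sketch (the function name and the zero value for missing entries are our assumptions; the patent text only fixes the observed entries):

```python
import numpy as np

def init_target(T_obs, mask):
    """Initialize the target tensor X from the incomplete tensor T.

    `mask` is a boolean array of T's shape playing the role of the
    observation index set Omega: True where a pixel is observed.
    Observed entries of X are copied from T; missing entries are set
    to zero here, one common choice.
    """
    X = np.zeros_like(T_obs, dtype=float)
    X[mask] = T_obs[mask]
    return X
```

The logical complement of `mask` plays the role of the missing index set Ω̄.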
Step S2: and (5) establishing a model.
S21: by finding the corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data and then estimating the missing entries of the original visual data by using the TR factor of the obtained tensor ring decomposition representation, a simple tensor ring completion model can be obtained as follows:
wherein,the set of TR factors is represented as a set of TR factors,denotes the nth TR factor, N being 1, 2.., N,is a tensor ring decomposition representation and,represents the projection operation under the observation index set omega, | | · | | non-woven F Frobenius norm representing tensor. In order to solve the problem that the low-rank tensor completion method depends on the initial rank, a simple tensor ring completion model is improved in the following way;
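The map Ψ([G]) from TR factors back to a full tensor can be sketched as follows (an illustrative NumPy implementation, not the patent's code): each entry of the result is the trace of the product of the corresponding lateral factor slices.

```python
import numpy as np

def tr_to_tensor(factors):
    """Reconstruct the full tensor Psi([G]) from tensor-ring (TR) factors.

    factors[n] has shape (R_{n-1}, I_n, R_n), with the ring closed
    (R_N == R_0). Entry (i_1, ..., i_N) equals the trace of the product
    G1[:, i_1, :] @ G2[:, i_2, :] @ ... @ GN[:, i_N, :].
    """
    full = factors[0]                      # shape (R_0, I_1, R_1)
    for G in factors[1:]:
        # contract the trailing rank index with the next factor's leading one
        full = np.einsum('r...s,sjt->r...jt', full, G)
    # close the ring: trace over the first and last rank indices
    return np.einsum('r...r->...', full)
```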
s22: the singular value decomposition of the transformed tensor and the basic algebraic knowledge of the involved tensor are introduced first.
Unitary tensor transform: for a third-order tensor A ∈ R^{I_1×I_2×I_3}, suppose Φ ∈ C^{I_3×I_3} is a unitary transformation matrix satisfying Φ Φ^H = Φ^H Φ = I. The unitary transform of the tensor A is defined as

Â = A ×_3 Φ

where Â denotes the unitary transform of A, and A ×_3 Φ is the mode-3 product of the tensor A with the matrix Φ. I denotes the identity matrix, superscript H the conjugate transpose of a matrix, R the real number field, and I_{k'}, k' = 1, 2, 3, the dimension size of the k'th mode of A.
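The mode-3 product used in this transform can be written as a single einsum (an illustrative sketch; the function name is ours):

```python
import numpy as np

def mode3_transform(A, Phi):
    """Unitary transform of a third-order tensor: A x_3 Phi.

    A has shape (I1, I2, I3) and Phi is an I3 x I3 matrix; slice
    (:, :, k) of the result is sum_j Phi[k, j] * A[:, :, j].
    """
    return np.einsum('kj,abj->abk', Phi, A)
```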
Block diagonal matrix: the block diagonal matrix built from all frontal slices of Â is defined as

bdiag(Â) = diag(Â^(1), Â^(2), …, Â^(I_3))

where Â^(i) is the ith frontal slice of Â, i = 1, 2, …, I_3; bdiag(Â) can be converted back into a tensor by the folding operator fold(·), i.e. Â = fold(bdiag(Â)).
Tensor Φ product: the Φ product of two third-order tensors is defined through the products of their frontal slices in the unitary transform domain. For two tensors A ∈ R^{I_1×I_2×I_3} and B ∈ R^{I_2×I_4×I_3}, the tensor Φ product is defined as

Ĉ = fold(bdiag(Â) · bdiag(B̂)),   C = A ∗_Φ B = Ĉ ×_3 Φ^H

where ∗_Φ denotes the tensor Φ product and B̂ the unitary transform of B. The result of the tensor Φ product is a third-order tensor C ∈ R^{I_1×I_4×I_3}; superscript H denotes the conjugate transpose, and I_4 is the dimension size of the second mode of B.
Transformation tensor singular value decomposition: this factorization of third-order tensors replaces the discrete Fourier transform matrix used in the traditional tensor singular value decomposition with a unitary transformation matrix Φ. For a third-order tensor A ∈ R^{I_1×I_2×I_3}, its transformation tensor singular value decomposition can be expressed as

A = U ∗_Φ S ∗_Φ V^H

where U and V are unitary tensors and S is a diagonal tensor.
Based on the transformation tensor singular value decomposition, the transformation tensor nuclear norm can be defined. For a third-order tensor A ∈ R^{I_1×I_2×I_3} with unitary transformation matrix Φ, the transformation tensor nuclear norm of A is defined as

‖A‖_TTNN = Σ_{i=1}^{I_3} ‖Â^(i)‖_*

where ‖·‖_TTNN denotes the transformation tensor nuclear norm and ‖·‖_* the matrix nuclear norm; ‖Â^(i)‖_* is the nuclear norm of the ith frontal slice of Â, i.e. the sum of all its singular values.
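Putting the definition together, a sketch of computing ‖A‖_TTNN for a given unitary matrix Φ (illustrative, not the patent's code):

```python
import numpy as np

def transform_tensor_nuclear_norm(A, Phi):
    """Transformation tensor nuclear norm of a third-order tensor A.

    Transform A along mode 3 with the unitary matrix Phi, then sum the
    matrix nuclear norms (sums of singular values) of all frontal slices
    of the transformed tensor.
    """
    A_hat = np.einsum('kj,abj->abk', Phi, A)
    return sum(np.linalg.norm(A_hat[:, :, k], 'nuc')
               for k in range(A_hat.shape[2]))
```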
Due to the properties of the tensor rank, the rank of the TR factors satisfies the relation rank(X_(n)) ≤ rank(G^(n)_(2)), where X_(n) denotes the standard mode-n unfolding matrix of the target tensor X, G^(n)_(2) the mode-2 unfolding matrix of G^(n), and rank(·) the matrix rank function. This indicates that the low-rankness of the target tensor X is to some extent governed by the corresponding TR factors, which makes it possible to exploit the global low-rank characteristics of the tensor data by regularizing the TR factors. Furthermore, the transformation tensor nuclear norm, which can approximate the sum of the transformed ranks of a tensor, is a suitable surrogate for the tensor rank. Therefore each TR factor can be further constrained with the transformation tensor nuclear norm, giving the basic low-rank tensor ring completion model:

min_{[G], X} λ Σ_{n=1}^{N} ‖G^(n)‖_TTNN + (1/2) ‖X - Ψ([G])‖_F²   s.t. P_Ω(X) = P_Ω(T)

where the target tensor X ∈ R^{I_1×I_2×…×I_N}, N denotes the order of X, and I_n the dimension size of its nth mode. [G] denotes the set of TR factors, G^(n) ∈ R^{R_{n-1}×I_n×R_n} is the nth TR factor, with R_{n-1}, I_n and R_n its three dimensions. ‖·‖_TTNN denotes the transformation tensor nuclear norm and λ > 0 is a trade-off parameter. When this basic low-rank tensor ring completion model is optimized, the transformation tensor nuclear norms of all TR factors and the fitting error to the target tensor are minimized simultaneously. In addition, the transformation tensor singular value decomposition here contains a key unitary transformation matrix Φ_n, which in this model is constructed from the given third-order TR factor G^(n). Since the optimal G^(n) is unknown, Φ_n is updated iteratively. This process can be expressed as:

[U, S, V] = SVD(G^(n)_(3)),   Φ_n = U^H

where G^(n)_(3) denotes the standard mode-3 unfolding matrix of G^(n), SVD(·) its singular value decomposition, U and V the left and right singular matrices respectively, and S the diagonal matrix. U^H can be chosen as the unitary transformation matrix in the transformation tensor singular value decomposition. Suppose G^(n) satisfies rank(G^(n)_(3)) = r; then performing the tensor unitary transform Ĝ^(n) = G^(n) ×_3 Φ_n makes the last R_n - r frontal slices of Ĝ^(n) zero matrices. Thus, taking U^H as the unitary transformation matrix helps to further exploit the low-rank information of the TR factor G^(n).
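The data-dependent construction Φ_n = U^H from the SVD of the mode-3 unfolding can be sketched as follows (the exact unfolding convention is our assumption):

```python
import numpy as np

def unitary_from_factor(G):
    """Data-dependent unitary transform Phi_n = U^H for a TR factor G.

    G has shape (R_{n-1}, I_n, R_n). Its mode-3 unfolding (rows indexed
    by the third mode) is decomposed by SVD, and the conjugate transpose
    of the left singular matrix is returned as the transform matrix.
    """
    G3 = np.moveaxis(G, 2, 0).reshape(G.shape[2], -1)  # mode-3 unfolding
    U, _, _ = np.linalg.svd(G3, full_matrices=True)
    return U.conj().T
```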
S23: to further improve the completion performance of visual data, a factor prior may be added to fully utilize the underlying information of the data. Graph regularization is used for visual data completion, and common image priors can be encoded to facilitate image restoration. One widely used image prior is the local similarity prior, which assumes that adjacent rows and columns are highly correlated. In tensor ring decomposition, any nth TR factorRespectively, represent information on the nth order of the original visual data. For example, if a color image is considered to be a third order tensor, the first two TR factors obtained after tensor ring decomposition encode the changes in the row and column spaces, respectively. Therefore, the local similarity of pixels of visual data such as color images, color videos, etc. can be described as an accurate factor prior, and the weights of a single factor graph can be defined as follows:
where row and column represent row and column space, respectively, and if k ═ row, then i k And j k Respectively representing any two index positions of the line space. w is a ij Is a similarity matrixOf (i, j) th element of (a) is all the pair-wise distances i k -j k Average value of (a). Order toFor diagonal matrix, the (i, i) th element in matrix D is sigma j w ij The laplacian matrix L ═ D-W can be obtained.
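A sketch of building such a Laplacian for one mode; the Gaussian decay of the weights with index distance is our assumption, since the extracted text does not preserve the exact weight formula:

```python
import numpy as np

def local_similarity_laplacian(size, sigma=1.0):
    """Graph Laplacian L = D - W encoding local similarity along one mode.

    The weight between index positions i and j decays with their distance
    |i - j| (a Gaussian decay here, as a stand-in for the patent's
    weighting). D is diagonal with the row sums of W.
    """
    idx = np.arange(size)
    dist = np.abs(idx[:, None] - idx[None, :])
    W = np.exp(-dist**2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)           # no self-loops
    D = np.diag(W.sum(axis=1))
    return D - W
```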
By combining the low-rank assumption on the TR factors with the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and factor priors is obtained:

min_{[G], X} λ Σ_{n=1}^{N} ‖G^(n)‖_TTNN + (μ/2) Σ_{n=1}^{N} α_n tr((G^(n)_(2))^T L_n G^(n)_(2)) + (1/2) ‖X - Ψ([G])‖_F²
s.t. P_Ω(X) = P_Ω(T)

The first line above is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors, and the second line is its constraint. α = [α_1, α_2, …, α_N] is the graph regularization parameter vector; μ, λ are trade-off parameters with μ > 0, λ > 0; tr(·) is the matrix trace operation. The Laplacian matrix L_n describes the interdependencies within the nth TR factor, G^(n)_(2) denotes the mode-2 unfolding matrix of the nth TR factor G^(n), and superscript T denotes the transpose of a matrix.
Step S3: and (6) solving the model.
S31: an augmented Lagrangian function is constructed.
In order to solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors with the ADMM computational framework, a series of auxiliary tensors M^(n) is first introduced to simplify the optimization, so the optimization problem of the objective function can be re-expressed as:

min λ Σ_{n=1}^{N} ‖M^(n)‖_TTNN + (μ/2) Σ_{n=1}^{N} α_n tr((G^(n)_(2))^T L_n G^(n)_(2)) + (1/2) ‖X - Ψ([G])‖_F²
s.t. G^(n) = M^(n), n = 1, 2, …, N;  P_Ω(X) = P_Ω(T)

where the set {M^(n)} represents a tensor sequence and M^(n) is the auxiliary tensor corresponding to the nth TR factor G^(n). Incorporating the additional equality constraints G^(n) = M^(n), the augmented Lagrangian of the objective function is obtained as:

L_β = Σ_{n=1}^{N} [ λ ‖M^(n)‖_TTNN + <Y^(n), G^(n) - M^(n)> + (β/2) ‖G^(n) - M^(n)‖_F² + (μ α_n/2) tr((G^(n)_(2))^T L_n G^(n)_(2)) ] + (1/2) ‖X - Ψ([G])‖_F²

where {Y^(n)} is the set of Lagrange multipliers, Y^(n) is the nth Lagrange multiplier, β > 0 is a penalty parameter, and <·,·> denotes the tensor inner product. Then, by fixing the other variables and solving each subproblem of S32 to S35 in turn, each of the following variables is updated alternately;
With the other variables fixed, the subproblem for the TR factor G^(n) takes the form

min_{G^(n)} (μ α_n/2) tr((G^(n)_(2))^T L_n G^(n)_(2)) + (β/2) ‖G^(n) - M^(n) + Y^(n)/β‖_F² + (1/2) ‖X_<n> - G^(n)_(2) (G^(≠n)_<2>)^T‖_F²

where X_<n> denotes the cyclic mode-n unfolding matrix of the target tensor X, and G^(≠n)_<2> denotes the cyclic mode-2 unfolding matrix of the subchain tensor generated by merging all factors except the nth TR factor G^(n) through multilinear products.
By setting the derivative of the above formula with respect to G^(n)_(2) to zero, solving this subproblem is equivalent to solving the following general Sylvester matrix equation:

μ α_n L_n G^(n)_(2) + G^(n)_(2) ((G^(≠n)_<2>)^T G^(≠n)_<2> + β I) = X_<n> G^(≠n)_<2> + β M^(n)_(2) - Y^(n)_(2)

where X_<n> denotes the cyclic mode-n unfolding matrix of the target tensor, M^(n)_(2) and Y^(n)_(2) denote the standard mode-2 unfolding matrices of M^(n) and Y^(n) respectively, and I is the identity matrix. Because the matrix -μ α_n L_n and the matrix (G^(≠n)_<2>)^T G^(≠n)_<2> + β I have no common eigenvalues, the equation has a unique solution, which can be computed by calling the sylvester function in Matlab;
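The same Sylvester solve is available outside Matlab, e.g. via SciPy's `solve_sylvester` (a substitution on our part). The matrices below are random stand-ins for the model's actual matrices:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, p = 6, 4
A = np.diag(rng.random(m))                # stands in for mu * alpha_n * L_n
B = 2.0 * np.eye(p) + rng.random((p, p))  # stands in for G^T G + beta * I
Q = rng.random((m, p))                    # stands in for the right-hand side

# Solve A X + X B = Q; the solution is unique because A and -B share
# no eigenvalues for these stand-in matrices.
G_upd = solve_sylvester(A, B, Q)
```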
After G^(n) is updated, the nth unitary transformation matrix Φ_n in the transformation tensor nuclear norm is first updated according to

[U, S, V] = SVD(G^(n)_(3)),   Φ_n = U^H

where G^(n)_(3) denotes the standard mode-3 unfolding matrix of G^(n), SVD(·) denotes the singular value decomposition of that unfolding matrix, U and V denote the left and right singular matrices respectively, and S denotes the diagonal matrix.
Furthermore, M^(n) can be expressed through the transformation tensor singular value decomposition as M^(n) = U ∗_{Φ_n} S ∗_{Φ_n} V^H, where ∗_{Φ_n} denotes the tensor Φ product under the unitary transformation matrix Φ_n, U and V are both unitary tensors, and S is the diagonal tensor.
The subproblem that optimizes the variable M^(n) can then be solved by the tensor singular value thresholding (t-SVT) operator: first form the intermediate variable C^(n) = G^(n) + Y^(n)/β; apply the unitary tensor transform Ĉ^(n) = C^(n) ×_3 Φ_n, i.e. the mode-3 product of the tensor C^(n) with the matrix Φ_n; soft-threshold the singular values of every frontal slice of Ĉ^(n) at λ/β, replacing each singular value σ by max(σ - λ/β, 0), i.e. the larger of σ - λ/β and 0; and finally transform the result back with Φ_n^H to obtain the updated variable M^(n);
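A hedged sketch of the t-SVT operator described above; the threshold `tau` plays the role of λ/β:

```python
import numpy as np

def t_svt(A, Phi, tau):
    """Tensor singular value thresholding in the Phi transform domain.

    Transform A along mode 3 with Phi, soft-threshold the singular values
    of every frontal slice at tau, then transform back with Phi^H. This is
    the proximal operator of the transformation tensor nuclear norm.
    """
    A_hat = np.einsum('kj,abj->abk', Phi, A).astype(complex)
    out = np.empty_like(A_hat)
    for k in range(A_hat.shape[2]):
        U, s, Vh = np.linalg.svd(A_hat[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # max(sigma - tau, 0)
        out[:, :, k] = (U * s) @ Vh
    back = np.einsum('kj,abj->abk', Phi.conj().T, out)
    return back.real if np.isrealobj(A) else back
```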
The target tensor X is then updated by keeping the observed entries fixed and filling the missing entries from the current tensor ring representation, X = P_Ω(T) + P_Ω̄(Ψ([G])), where P_Ω denotes the projection operation under the observation index set Ω and P_Ω̄ denotes the projection operation under the missing index set Ω̄.
The Lagrange multipliers are updated by the standard ADMM step Y^(n) = Y^(n) + β(G^(n) - M^(n)). Furthermore, the penalty parameter β of the augmented Lagrangian of the objective function can be updated in each iteration by β = min(ρβ, β_max), where 1 < ρ < 1.5 is a tuning hyperparameter, β_max denotes the preset upper limit of β, and min(ρβ, β_max) takes the smaller of ρβ and β_max as the current value of β. In a specific embodiment of the invention, ρ is set to 1.01;
S36: perform the iterative updates.
Steps S32-S35 are repeated, with each variable updated alternately over multiple iterations. Two convergence conditions are set: a maximum iteration count of 300, and a relative-error threshold tol = 10^-4 between two successive iterations. The relative error between two iterations is computed as RelErr = ‖X_current - X_last‖_F / ‖X_last‖_F, where X_current denotes the current value of the target tensor X and X_last its value from the previous iteration. The iteration ends once a convergence condition is met, i.e. the maximum iteration count of 300 is reached or the relative error between two iterations falls below the threshold 10^-4, and the solution of the target tensor X is obtained.
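The stopping rule can be sketched as follows (the names `max_iter` and `tol` are illustrative):

```python
import numpy as np

def relative_error(X_new, X_old):
    """Relative change ||X_new - X_old||_F / ||X_old||_F between iterates."""
    denom = max(np.linalg.norm(X_old), np.finfo(float).eps)
    return np.linalg.norm(X_new - X_old) / denom

def converged(X_new, X_old, it, max_iter=300, tol=1e-4):
    """Stop at the iteration cap or once the relative change drops below tol."""
    return it >= max_iter or relative_error(X_new, X_old) < tol
```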
Step S4: the obtained solution of the target tensor is converted back into the format of the original visual data, yielding the final completion result for the incomplete original visual data.
Examples
In this embodiment, tests are performed on given tensor data (the color images and color videos shown in figs. 3 and 4; the English text above each image gives the name of the corresponding dataset). At algorithm initialization, the penalty parameter β is set to 0.01, and the other parameters are tuned manually for the best performance. Incomplete tensor data are generated by randomly deleting part of the pixels of the visual data at several missing rates MR ∈ {60%, 70%, 80%, 90%, 95%}, and the tensor completion task is performed with the technical scheme provided by the invention. Figs. 5 and 6 show the completion results on the color image and color video data, respectively; the peak signal-to-noise ratio (PSNR) is used to evaluate the recovery performance of the data completion method of the invention on visual data, the value above each image giving the corresponding PSNR. The higher the PSNR value, the better the quality of the restored image, and comparing the images before and after recovery confirms the effectiveness of the method. The final results show that, compared with the conventional methods HaLRTC and TR-ALS, the completion results of the method of the invention not only have a better overall visual effect but also recover the local detail textures of the images better, the recovered results being closer to the original images. The method likewise achieves higher recovery accuracy as measured by the visual quality index PSNR. In conclusion, the method can effectively recover the main information and texture details of incomplete visual data even at high missing rates, accomplishes the tensor completion task with better performance, and has good application prospects.
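The PSNR metric used for the evaluation above can be computed as follows; the peak value of 255 assumes 8-bit visual data, which the patent does not state explicitly:

```python
import numpy as np

def psnr(x_rec, x_ref, peak=255.0):
    """PSNR in dB between a recovered array and the reference; `peak` is
    the maximum possible pixel value (255 assumed for 8-bit data)."""
    mse = np.mean((np.asarray(x_rec, float) - np.asarray(x_ref, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```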
The embodiments described above are only a part of the embodiments of the present invention, and not all of them. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Claims (5)
1. A visual data completion method based on low rank tensor ring decomposition and factor prior is characterized by comprising the following steps:
step S1), initializing the target tensor, specifically including the following sub-steps:
S11) acquiring incomplete original visual data: the original visual data file with missing entries is read in through Matlab software and stored in tensor form to obtain the tensor to be completed; the index positions of all known pixel points of the original visual data form the observation index set Ω;
S12) initializing the target tensor X according to the tensor to be completed T, so that the mapping relation satisfies P_Ω(X) = P_Ω(T), wherein Ω̄, the complement of Ω, represents the missing index set, P_Ω(X) represents the known entries of the target tensor X, P_Ω(T) represents the known entries of the tensor to be completed T, and P_Ω̄(X) represents the missing entries of the target tensor X;
step S2), model building, specifically comprising the following substeps:
s21) obtaining a simple tensor ring completion model by finding a corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data, and then estimating the missing entries of the original visual data by using the TR factor of the obtained tensor ring decomposition representation:
wherein,the set of TR factors is represented as a set of TR factors,denotes the nth TR factor, N being 1, 2.., N,is a tensor ring decomposition representation and,represents the projection operation under the observation index set omega, | | · | | non-woven F Frobenius norm representing tensor;
in order to address the problem that the low-rank tensor completion method depends on the initial rank, the simple tensor ring completion model is improved as follows;
S22) first, the transformation tensor singular value decomposition and the basic tensor algebra involved are introduced:
Unitary tensor transformation: for a third-order tensor A ∈ ℝ^{I_1×I_2×I_3}, suppose Φ is a unitary transformation matrix satisfying ΦΦ^H = Φ^H Φ = I; the unitary transform of the tensor A is defined as Ā = A ×_3 Φ,
wherein Ā denotes the unitary transform of the tensor A, A ×_3 Φ denotes the mode-3 product of the tensor A with the matrix Φ, I denotes the identity matrix, the superscript H denotes the conjugate transpose of a matrix, ℝ represents the real number field, and I_{k′}, k′ = 1, 2, 3, represent the dimension sizes of the tensor A on its k′th mode;
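The unitary tensor transform just defined is simply a mode-3 product; an illustrative NumPy version (function name assumed) is:

```python
import numpy as np

def mode3_product(A, Phi):
    """A x_3 Phi: multiply every mode-3 fibre A[i, j, :] by the matrix Phi."""
    return np.einsum('ijk,lk->ijl', A, Phi)
```

Because Φ Φ^H = I, transforming with Φ and then with Φ^H recovers the original tensor, which mirrors the defining property of the unitary transformation matrix.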
block diagonal matrix: based onThe block diagonal matrix for all forward slices of (a) is defined as:
wherein Ā^(i) is the ith frontal slice of Ā, i = 1, 2, ..., I_3, and the block diagonal matrix can be converted back into the tensor by the folding operator fold(·), i.e. Ā = fold(bdiag(Ā));
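The bdiag/fold pair is a pure bookkeeping device; a minimal sketch (names mirror the text, implementation details assumed):

```python
import numpy as np

def bdiag(Abar):
    """Arrange the frontal slices Abar[:, :, i] along a block diagonal."""
    I1, I2, I3 = Abar.shape
    M = np.zeros((I1 * I3, I2 * I3), dtype=Abar.dtype)
    for i in range(I3):
        M[i * I1:(i + 1) * I1, i * I2:(i + 1) * I2] = Abar[:, :, i]
    return M

def fold(M, shape):
    """Inverse of bdiag: read the diagonal blocks back into a tensor."""
    I1, I2, I3 = shape
    A = np.empty(shape, dtype=M.dtype)
    for i in range(I3):
        A[:, :, i] = M[i * I1:(i + 1) * I1, i * I2:(i + 1) * I2]
    return A
```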
Tensor Φ product: the Φ product between two third-order tensors is defined through the products of their frontal slices in the unitary transform domain; for two tensors A ∈ ℝ^{I_1×I_2×I_3} and B ∈ ℝ^{I_2×I_4×I_3}, the tensor Φ product is defined as C = A ∗_Φ B = fold(bdiag(Ā) bdiag(B̄)) ×_3 Φ^H,
wherein ∗_Φ denotes the tensor Φ product sign, Ā denotes the unitary transform of the tensor A, the tensor Φ product is a third-order tensor C ∈ ℝ^{I_1×I_4×I_3}, the superscript H denotes the conjugate transpose, and I_4 denotes the dimension size of the tensor B on its second mode;
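A direct NumPy sketch of the Φ product — transform along mode 3, multiply the frontal slices pairwise, transform back — might read (names assumed):

```python
import numpy as np

def phi_product(A, B, Phi):
    """Tensor Phi-product of A (I1 x I2 x I3) and B (I2 x I4 x I3)."""
    Abar = np.einsum('ijk,lk->ijl', A, Phi)              # A x_3 Phi
    Bbar = np.einsum('ijk,lk->ijl', B, Phi)              # B x_3 Phi
    Cbar = np.einsum('ipk,pjk->ijk', Abar, Bbar)         # slice-wise products
    return np.einsum('ijk,lk->ijl', Cbar, Phi.conj().T)  # back transform
```

When Φ is the identity, the Φ product reduces to independent matrix products of the frontal slices, which is an easy way to validate an implementation.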
transformation tensor singular value decomposition: mainly used for factorization of third-order tensor, a unitary transformation matrix phi is adopted to replace a discrete Fourier transformation matrix in the singular value decomposition of the traditional tensor, and for one third-order tensorThe singular value decomposition of the transformation tensor is expressed as:
based on the transformation tensor singular value decomposition, the transformation tensor nuclear norm can be defined; for a third-order tensor A, suppose Φ is a unitary transformation matrix; the transformation tensor nuclear norm of the tensor A is defined as ‖A‖_TTNN = Σ_{i=1}^{I_3} ‖Ā^(i)‖_*,
wherein ‖·‖_TTNN represents the transformation tensor nuclear norm, ‖·‖_* represents the matrix nuclear norm, Ā^(i) represents the ith frontal slice of Ā, and the matrix nuclear norm is the sum of all singular values of the matrix;
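The TTNN definition above translates almost line-for-line into NumPy; this sketch (name assumed) sums the nuclear norms of the frontal slices in the transform domain:

```python
import numpy as np

def ttnn(A, Phi):
    """Transformation tensor nuclear norm: sum of the matrix nuclear norms
    (sums of singular values) of the frontal slices of A x_3 Phi."""
    Abar = np.einsum('ijk,lk->ijl', A, Phi)
    return float(sum(np.linalg.svd(Abar[:, :, i], compute_uv=False).sum()
                     for i in range(A.shape[2])))
```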
due to the tensor rank relation, the rank of the TR factor satisfies rank(X_(n)) ≤ rank(Z_{n,(2)}), wherein X_(n) denotes the standard mode-n expansion matrix of the target tensor X, Z_{n,(2)} represents the standard mode-2 expansion matrix of the nth TR factor Z_n, and rank(·) represents the rank function of a matrix; each TR factor is further constrained by the transformation tensor nuclear norm, and the basic low-rank tensor ring completion model is obtained as:
wherein the target tensor X ∈ ℝ^{I_1×I_2×···×I_N}, N denotes the order of the target tensor X, I_n represents the dimension size of its nth mode, {Z_n} represents the set of TR factors, Z_n ∈ ℝ^{R_{n−1}×I_n×R_n} denotes the nth TR factor, R_{n−1}, I_n and R_n respectively representing its three dimension sizes, ‖·‖_TTNN represents the transformation tensor nuclear norm, and λ > 0 is a trade-off parameter; when the basic low-rank tensor ring completion model is optimized, the transformation tensor nuclear norms of all the TR factors and the fitting error of the target tensor are minimized simultaneously; in the basic low-rank tensor ring completion model, given the third-order TR factor Z_n, a unitary transformation matrix Φ_n is constructed; since Z_n is unknown, Φ_n can be updated iteratively, the process being expressed as:
wherein Z_{n,(3)} represents the standard mode-3 expansion matrix of Z_n, USV^H represents the singular value decomposition of the expansion matrix Z_{n,(3)}, U and V represent the left and right singular matrices respectively, S represents the diagonal matrix, and U^H is selected as the unitary transformation matrix in the transformation tensor singular value decomposition; suppose the mode-3 expansion matrix of Z_n has rank r; then, by performing the tensor unitary transformation, the last R_n − r frontal slices of the transformed tensor are all zero matrices, so taking U^H as the unitary transformation matrix helps to further explore the low-rank information of the TR factor Z_n;
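The Φ_n update described above — SVD of the mode-3 unfolding, then Φ_n = U^H — can be sketched as follows; the exact column ordering of the unfolding is immaterial here because U depends only on the row space, and the function name is an assumption:

```python
import numpy as np

def update_phi(Z):
    """Given a TR factor Z of shape (R_prev, I, R), unfold along mode 3
    (R x (R_prev * I)), take the SVD, and return Phi = U^H."""
    Z3 = Z.reshape(-1, Z.shape[2]).T       # mode-3 unfolding (column order
                                           # irrelevant for the left factor U)
    U, _, _ = np.linalg.svd(Z3, full_matrices=True)
    return U.conj().T                      # a unitary transformation matrix
```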
S23) to further improve the completion performance on visual data, a factor prior is added to fully utilize the latent information of the data;
in the tensor ring decomposition, each nth TR factor Z_n represents information on the nth mode of the original visual data; the local pixel similarity of the original visual data is described as a precise factor prior, and the weights of a single factor graph are defined as:
where row and column represent the row and column spaces respectively; if k equals row, then i_k and j_k represent any two index positions in the row space; w_ij is the (i, j)th element of the similarity matrix W, and the scale parameter of the weights is the average value of all pairwise distances i_k − j_k; let D be the diagonal matrix whose (i, i)th element is Σ_j w_ij; the Laplacian matrix L = D − W is thus obtained;
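A sketch of this graph construction is given below. The patent's exact weight formula is not reproduced in the text, so a Gaussian kernel on index distance, with bandwidth set to the mean pairwise distance, is assumed here; only the degree matrix and the Laplacian L = D − W follow the text directly:

```python
import numpy as np

def graph_laplacian(n):
    """Factor-prior graph over n index positions (assumed Gaussian weights)."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    sigma = dist[dist > 0].mean()          # average pairwise distance
    W = np.exp(-dist ** 2 / sigma ** 2)    # similarity matrix W (assumption)
    D = np.diag(W.sum(axis=1))             # degree matrix, D_ii = sum_j w_ij
    return D - W                           # graph Laplacian L = D - W
```

Whatever the precise kernel, any Laplacian built this way is symmetric with zero row sums, which is what the graph-regularization trace term relies on.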
by using the low-rank hypothesis of the TR factor and the factor prior of graph regularization, the visual data completion model based on the low-rank tensor ring decomposition and the factor prior is obtained as follows:
wherein the first line of the above expression represents the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor prior, and the second line represents the constraint condition of the objective function; α = [α_1, α_2, ..., α_N] is a graph regularization parameter vector; μ and λ are trade-off parameters with μ > 0 and λ > 0; tr(·) is the matrix trace operation; the Laplacian matrix L_n describes the interdependencies within the nth TR factor; Z_{n,(2)} represents the standard mode-2 expansion matrix of the nth TR factor Z_n; and the superscript T represents the transpose of a matrix;
step S3), solving the model, and specifically comprising the following sub-steps:
s31) constructing an augmented Lagrangian function
In order to solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor prior within the alternating direction method of multipliers (ADMM) calculation framework, a series of auxiliary tensors {M_n} is first introduced to simplify the optimization; the optimization problem of the objective function is thus re-expressed as:
wherein the set {M_n} represents a sequence of tensors, M_n representing the auxiliary tensor of the nth TR factor Z_n; by incorporating the additional equality constraints Z_n = M_n through the auxiliary tensors, the augmented Lagrangian function of the objective function is obtained as:
wherein {Y_n} is the set of Lagrange multipliers, Y_n is the nth Lagrange multiplier, β > 0 is a penalty parameter, and ⟨·, ·⟩ represents the tensor inner product;
then, each variable is alternately updated by fixing all the other variables and solving in turn the optimization sub-problems respectively corresponding to each variable in steps S32) to S35);
wherein X_<n> denotes the cyclic mode-n expansion matrix of the target tensor X, and the cyclic mode-2 expansion matrix of the sub-chain tensor is generated by merging, through the multi-linear product, all the TR factors except the nth TR factor Z_n;
by setting the derivative of the above objective with respect to the variable Z_n to zero, the solution of the optimization sub-problem of Z_n is equivalent to solving the following general Sylvester matrix equation:
wherein X_<n> denotes the cyclic mode-n expansion matrix of the target tensor, M_{n,(2)} and Y_{n,(2)} respectively denote the standard mode-2 expansion matrices of the auxiliary tensor M_n and the Lagrange multiplier Y_n, and I is the identity matrix; since the matrix −L_n and the other coefficient matrix have no common eigenvalues, the Sylvester matrix equation has a unique solution, which is obtained by calling the sylvester function in Matlab;
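The text invokes Matlab's sylvester; a small pure-NumPy stand-in via vectorization illustrates the same solve. The coefficient matrices below are generic placeholders, not the patent's exact terms:

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve A X + X B = C via (I (x) A + B^T (x) I) vec(X) = vec(C),
    with column-major vec; the solution is unique exactly when A and -B
    share no eigenvalues, as stated for the factor sub-problem."""
    m, n = C.shape
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.flatten(order='F'))
    return x.reshape((m, n), order='F')
```

The Kronecker construction is O((mn)^3) and only suitable for small factors; Matlab's sylvester (or scipy.linalg.solve_sylvester) uses a Schur-based method that scales far better.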
After Z_n is updated, the nth unitary transformation matrix Φ_n in the transformation tensor nuclear norm is first updated according to the following formula,
wherein Z_{n,(3)} represents the standard mode-3 expansion matrix of Z_n, USV^H represents the singular value decomposition of the expansion matrix Z_{n,(3)}, U and V respectively represent the left and right singular matrices, and S represents the diagonal matrix;
further, the tensor can be expressed by the transformation tensor singular value decomposition as U ∗_Φ S ∗_Φ V^H, wherein ∗_Φ denotes the tensor Φ product under the unitary transformation matrix Φ_n, U and V are both unitary tensors, and S is a diagonal tensor;
the optimization sub-problem of the auxiliary variable M_n can be solved by the tensor singular value thresholding t-SVT operator, the solution being expressed as:
wherein the intermediate variable is obtained as follows: the tensor is first brought into the transform domain by the unitary tensor transformation; each singular value σ of every frontal slice is then shrunk according to max(σ − τ, 0), where τ is the thresholding parameter and max(·, 0) takes the larger of its argument and 0; finally, the inverse unitary transformation is applied to obtain the intermediate variable;
wherein P_Ω(·) represents the projection operation under the observation index set Ω, and P_Ω̄(·) represents the projection operation under the missing index set Ω̄;
based on the alternating direction method of multipliers (ADMM) calculation framework, the Lagrange multiplier Y_n is updated as:
furthermore, the penalty parameter β of the augmented Lagrangian function of the objective function is updated in each iteration by β = min(ρβ, β_max), wherein 1 < ρ < 1.5 is a tuning hyperparameter, β_max denotes the preset upper limit of β, and min(ρβ, β_max) takes the smaller of ρβ and β_max as the current value of β;
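The penalty update is a one-liner; ρ = 1.01 follows the described embodiment, while the numeric cap β_max is an assumed value, since the text gives no number for it:

```python
def update_beta(beta, rho=1.01, beta_max=1e6):
    """beta <- min(rho * beta, beta_max): grow the ADMM penalty slowly,
    never past the preset upper limit beta_max (value assumed)."""
    return min(rho * beta, beta_max)
```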
s36) iterative update
Repeating steps S32) to S35), each variable is alternately updated over multiple iterations; two convergence conditions are set: the maximum iteration number maxiter and the relative error threshold tol between two iterations, the relative error between two iterations being computed as ‖X_t − X_{t−1}‖_F / ‖X_{t−1}‖_F, wherein X_t denotes the current value of the target tensor and X_{t−1} its value at the previous iteration; when either convergence condition is met, namely the maximum iteration number maxiter is reached or the relative error between two iterations is smaller than the threshold tol, the iteration ends and the solution of the target tensor X is obtained;
2. The method of visual data completion based on low rank tensor ring decomposition and factorial priors of claim 1, wherein the original visual data comprises color images and color videos.
3. The visual data completion method based on low rank tensor ring decomposition and factor prior as claimed in claim 2, wherein the tuning hyperparameter ρ is equal to 1.01.
4. The visual data completion method based on low rank tensor ring decomposition and factor prior as claimed in claim 3, wherein the maximum iteration number maxiter takes the value 300.
5. The method of claim 4, wherein the relative error threshold tol between two iterations is 10^−4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210526890.4A CN114841888B (en) | 2022-05-16 | 2022-05-16 | Visual data completion method based on low-rank tensor ring decomposition and factor prior |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114841888A true CN114841888A (en) | 2022-08-02 |
CN114841888B CN114841888B (en) | 2023-03-28 |
Family
ID=82569550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210526890.4A Active CN114841888B (en) | 2022-05-16 | 2022-05-16 | Visual data completion method based on low-rank tensor ring decomposition and factor prior |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114841888B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115630211A (en) * | 2022-09-16 | 2023-01-20 | 山东科技大学 | Traffic data tensor completion method based on space-time constraint |
CN116087435A (en) * | 2023-04-04 | 2023-05-09 | 石家庄学院 | Air quality monitoring method, electronic equipment and storage medium |
CN116245779A (en) * | 2023-05-11 | 2023-06-09 | 四川工程职业技术学院 | Image fusion method and device, storage medium and electronic equipment |
CN116450636A (en) * | 2023-06-20 | 2023-07-18 | 石家庄学院 | Internet of things data completion method, equipment and medium based on low-rank tensor decomposition |
CN116912107A (en) * | 2023-06-13 | 2023-10-20 | 重庆市荣冠科技有限公司 | DCT-based weighted adaptive tensor data completion method |
CN117745551A (en) * | 2024-02-19 | 2024-03-22 | 电子科技大学 | Method for recovering phase of image signal |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292337A (en) * | 2017-06-13 | 2017-10-24 | 西北工业大学 | Ultralow order tensor data filling method |
CN109241491A (en) * | 2018-07-28 | 2019-01-18 | 天津大学 | The structural missing fill method of tensor based on joint low-rank and rarefaction representation |
CN110059291A (en) * | 2019-03-15 | 2019-07-26 | 上海大学 | A kind of three rank low-rank tensor complementing methods based on GPU |
CN110162744A (en) * | 2019-05-21 | 2019-08-23 | 天津理工大学 | A kind of multiple estimation new method of car networking shortage of data based on tensor |
CN112116532A (en) * | 2020-08-04 | 2020-12-22 | 西安交通大学 | Color image completion method based on tensor block cyclic expansion |
CN113222834A (en) * | 2021-04-22 | 2021-08-06 | 南京航空航天大学 | Visual data tensor completion method based on smooth constraint and matrix decomposition |
CN113240596A (en) * | 2021-05-07 | 2021-08-10 | 西南大学 | Color video recovery method and system based on high-order tensor singular value decomposition |
US20210338171A1 (en) * | 2020-02-05 | 2021-11-04 | The Regents Of The University Of Michigan | Tensor amplification-based data processing |
CN113704688A (en) * | 2021-08-17 | 2021-11-26 | 南昌航空大学 | Defect vibration signal recovery method based on variational Bayes parallel factor decomposition |
2022-05-16: CN application CN202210526890.4A, granted as patent CN114841888B (status: Active)
Non-Patent Citations (4)
Title |
---|
CHENG DAI等: "A tucker decomposition based knowledge distillation for intelligent edge applications", 《APPLIED SOFT COMPUTING》 * |
LONGHAO YUAN等: "Higher-dimension Tensor Completion via Low-rank Tensor Ring Decomposition", 《2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC)》 * |
李琼: "基于变分贝叶斯平行因子分解的缺失信号的恢复", 《仪器仪表学报》 * |
马友等: "基于张量分解的卫星遥测缺失数据预测算法", 《电子与信息学报》 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115630211A (en) * | 2022-09-16 | 2023-01-20 | 山东科技大学 | Traffic data tensor completion method based on space-time constraint |
CN116087435A (en) * | 2023-04-04 | 2023-05-09 | 石家庄学院 | Air quality monitoring method, electronic equipment and storage medium |
CN116087435B (en) * | 2023-04-04 | 2023-06-16 | 石家庄学院 | Air quality monitoring method, electronic equipment and storage medium |
CN116245779A (en) * | 2023-05-11 | 2023-06-09 | 四川工程职业技术学院 | Image fusion method and device, storage medium and electronic equipment |
CN116245779B (en) * | 2023-05-11 | 2023-08-22 | 四川工程职业技术学院 | Image fusion method and device, storage medium and electronic equipment |
CN116912107A (en) * | 2023-06-13 | 2023-10-20 | 重庆市荣冠科技有限公司 | DCT-based weighted adaptive tensor data completion method |
CN116912107B (en) * | 2023-06-13 | 2024-04-16 | 万基泰科工集团数字城市科技有限公司 | DCT-based weighted adaptive tensor data completion method |
CN116450636A (en) * | 2023-06-20 | 2023-07-18 | 石家庄学院 | Internet of things data completion method, equipment and medium based on low-rank tensor decomposition |
CN116450636B (en) * | 2023-06-20 | 2023-08-18 | 石家庄学院 | Internet of things data completion method, equipment and medium based on low-rank tensor decomposition |
CN117745551A (en) * | 2024-02-19 | 2024-03-22 | 电子科技大学 | Method for recovering phase of image signal |
CN117745551B (en) * | 2024-02-19 | 2024-04-26 | 电子科技大学 | Method for recovering phase of image signal |
Also Published As
Publication number | Publication date |
---|---|
CN114841888B (en) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114841888B (en) | Visual data completion method based on low-rank tensor ring decomposition and factor prior | |
Miao et al. | Low-rank quaternion tensor completion for recovering color videos and images | |
Yuan et al. | High-order tensor completion via gradient-based optimization under tensor train format | |
CN102722896B (en) | Adaptive compressed sensing-based non-local reconstruction method for natural image | |
CN109241491A (en) | The structural missing fill method of tensor based on joint low-rank and rarefaction representation | |
CN106097278B (en) | Sparse model, reconstruction method and dictionary training method of multi-dimensional signal | |
CN108510013B (en) | Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix | |
CN110113607B (en) | Compressed sensing video reconstruction method based on local and non-local constraints | |
CN113222834B (en) | Visual data tensor completion method based on smoothness constraint and matrix decomposition | |
Cen et al. | Boosting occluded image classification via subspace decomposition-based estimation of deep features | |
CN105631807A (en) | Single-frame image super resolution reconstruction method based on sparse domain selection | |
Feng et al. | Compressive sensing via nonlocal low-rank tensor regularization | |
CN113420421B (en) | QoS prediction method based on time sequence regularized tensor decomposition in mobile edge calculation | |
CN114119426B (en) | Image reconstruction method and device by non-local low-rank conversion domain and full-connection tensor decomposition | |
CN107609596A (en) | Printenv weights more figure regularization Non-negative Matrix Factorizations and image clustering method automatically | |
Zhang et al. | Effective tensor completion via element-wise weighted low-rank tensor train with overlapping ket augmentation | |
Xu et al. | Factorized tensor dictionary learning for visual tensor data completion | |
Zhang et al. | Tensor recovery based on a novel non-convex function minimax logarithmic concave penalty function | |
CN105931184B (en) | SAR image super-resolution method based on combined optimization | |
Zhang et al. | Randomized sampling techniques based low-tubal-rank plus sparse tensor recovery | |
Wen et al. | The power of complementary regularizers: Image recovery via transform learning and low-rank modeling | |
Chen et al. | Hierarchical factorization strategy for high-order tensor and application to data completion | |
Tu et al. | Tensor recovery using the tensor nuclear norm based on nonconvex and nonlinear transformations | |
CN117056327A (en) | Tensor weighted gamma norm-based industrial time series data complement method | |
CN111062888A (en) | Hyperspectral image denoising method based on multi-target low-rank sparsity and spatial-spectral total variation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||