CN114841888A - Visual data completion method based on low-rank tensor ring decomposition and factor prior - Google Patents

Visual data completion method based on low-rank tensor ring decomposition and factor prior

Info

Publication number: CN114841888A
Application number: CN202210526890.4A
Authority: CN (China)
Prior art keywords: tensor, matrix, rank, factor, visual data
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114841888B
Inventors: 刘欣刚, 姚佳敏, 张磊, 杨旻君, 胡晓荣, 庄晓淦
Current Assignee: University of Electronic Science and Technology of China
Original Assignee: University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202210526890.4A
Publication of CN114841888A
Application granted; publication of CN114841888B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/15: Correlation function computation including computation of convolution operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual data completion method based on low-rank tensor ring decomposition and factor priors. Addressing the problem that traditional tensor-decomposition-based data completion algorithms depend on the choice of initial rank, which leaves restoration results lacking stability and effectiveness, the method designs a hierarchical tensor decomposition model that performs tensor ring decomposition and completion simultaneously. In the first layer, the incomplete tensor is represented as a series of third-order factors by tensor ring decomposition; in the second layer, a transformed tensor nuclear norm expresses the low-rank constraint on the factors, and a graph-regularized factor prior limits the degrees of freedom of each factor. By exploiting the low-rank structure and the prior information of the factor space at the same time, the model on the one hand gains implicit rank adjustment, which improves its robustness to rank selection and relieves the burden of searching for the optimal initial rank, and on the other hand makes full use of the latent information in the tensor data to further improve completion performance.

Description

Visual data completion method based on low-rank tensor ring decomposition and factor priors

Technical Field

The invention relates to the field of visual data completion, and in particular to a visual data completion method based on low-rank tensor ring decomposition and factor priors.

Background

With the rapid development of information technology, modern society is entering an era of explosive data growth, producing large amounts of multi-attribute, multi-relational data. However, much of this data is incomplete, owing to occlusion, noise, local corruption, collection difficulties, or data loss during conversion. Incomplete data can significantly degrade data quality and complicate analysis. As high-dimensional extensions of vectors and matrices, tensors can express more complex internal data structures and are widely used in signal processing, computer vision, data mining, and neuroscience. Matrix-based completion methods destroy the spatial structure of the original multi-dimensional data and therefore perform poorly. Tensor completion has consequently received much attention in recent years and is one of the important problems in tensor analysis: it recovers the values of missing elements from the observed ones by exploiting prior information and structural properties of the data. In fact, most real-world natural data, such as color images and color videos, are low-rank or approximately low-rank, so a low-rank prior can be used to restore incomplete data. Following the success of low-rank matrix completion, low-rank constraints have also become a powerful tool for recovering missing entries of higher-order tensors, as they exploit the global information of a tensor to estimate missing data effectively. A fundamental issue in low-rank tensor completion is the definition of tensor rank. Unlike matrix rank, however, the definition of tensor rank is not unique: different tensor decompositions give rise to different notions of tensor rank.

Tensor decomposition is an important part of tensor data analysis. Through tensor decomposition, the essential features of the original tensor data can be extracted and a low-dimensional representation obtained, while the structural information inside the data is preserved. In recent years, tensor networks have become a major tool for analyzing large-scale tensor data. Since the tensor ring decomposition was proposed, it has been studied across disciplines for its stronger representation ability and flexibility, and a considerable body of theory and practice has confirmed its feasibility and effectiveness for tensor completion tasks. Existing data completion methods based on tensor ring decomposition achieve excellent performance but often depend on a good initial rank estimate and incur heavy computational overhead. Determining the optimal initial rank is difficult in practice, since the computational complexity of rank search grows exponentially with the dimension of the rank, and completion results that depend on the initial rank may overfit. Moreover, models based on tensor ring decomposition have high computational complexity, making existing methods inefficient and greatly limiting their practical application. In short, sensitivity to the initial rank and high computational cost remain challenging problems for completion methods based on tensor ring decomposition, so developing robust and efficient tensor-ring-based data completion algorithms is crucial.

Summary of the Invention

Aiming at the problem that traditional tensor-decomposition-based data completion algorithms rely on initial rank selection, which leaves restoration results lacking stability and effectiveness, the present invention provides a visual data completion method based on low-rank tensor ring decomposition and factor priors.

The visual data completion method based on low-rank tensor ring decomposition and factor priors of the present invention includes the following steps:

S1: Target tensor initialization. Represent the incomplete raw visual data as the tensor to be completed $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, determine the observation index set $\Omega$, and initialize the target tensor $\mathcal{X}$ according to $\mathcal{T}$ as the input of the data completion model of the present invention;

S2: Model establishment. Taking the simple tensor ring (TR) completion model as the basic framework, a hierarchical tensor decomposition model is designed: the TR factors are given a low-rank constraint through the transformed tensor nuclear norm, and factor prior information is further combined to limit the degrees of freedom of each TR factor. The visual data completion model based on low-rank tensor ring decomposition and factor priors is thus constructed, yielding the objective function of the data completion model of the present invention;

S3: Model solution. The objective function is solved within the computational framework of the Alternating Direction Method of Multipliers (ADMM). By constructing the augmented Lagrangian of the objective function, the optimization problem is transformed into several subproblems that are solved separately; the intermediate variables are updated iteratively by solving each subproblem in turn, and after the function converges over a number of iterations the solution for the target tensor $\mathcal{X}$ is output;

S4: Convert the solution of the target tensor $\mathcal{X}$ back into the format of the original visual data to obtain the final completion result.

Step S1 includes the following sub-steps:

S11: Acquire the incomplete original visual data and store it in tensor form to obtain the tensor to be completed $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$; take the index positions of all known pixels in the original visual data to form the observation index set $\Omega$;

S12: According to the tensor to be completed $\mathcal{T}$, initialize the target tensor $\mathcal{X}$ so that $\mathcal{X}_{\Omega} = \mathcal{T}_{\Omega}$, where $\bar{\Omega}$ is the complement of $\Omega$, $\mathcal{X}_{\Omega}$ denotes the known entries of the target tensor $\mathcal{X}$, $\mathcal{T}_{\Omega}$ denotes the known entries of the tensor to be completed $\mathcal{T}$, and $\mathcal{X}_{\bar{\Omega}}$ denotes the missing entries of $\mathcal{X}$.

Step S2 includes the following sub-steps:

S21: A tensor ring decomposition representation of the incomplete original visual data is found from the known entries, and the TR factors of this representation are then used to estimate the missing entries of the original visual data. The simple tensor ring completion model is:

$$\min_{[\mathcal{G}]} \ \frac{1}{2} \left\| P_{\Omega}(\mathcal{X}) - P_{\Omega}\big(\Psi([\mathcal{G}])\big) \right\|_F^2$$

where $[\mathcal{G}] = \{\mathcal{G}^{(1)}, \mathcal{G}^{(2)}, \ldots, \mathcal{G}^{(N)}\}$ denotes the set of TR factors, $\Psi([\mathcal{G}])$ is the tensor ring decomposition representation, $P_{\Omega}(\cdot)$ denotes the projection operation onto the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor;

S22: On the basis of the simple tensor ring completion model, each TR factor is further constrained through the transformed tensor nuclear norm to exploit the global low-rank characteristics of the tensor data. The basic low-rank tensor ring completion model is:

$$\min_{[\mathcal{G}],\,\mathcal{X}} \ \sum_{n=1}^{N} \big\| \mathcal{G}^{(n)} \big\|_{TTNN} + \frac{\lambda}{2} \big\| \mathcal{X} - \Psi([\mathcal{G}]) \big\|_F^2$$

$$\text{s.t.} \ \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T})$$

where the target tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, $\mathbb{R}$ denotes the real field, $N$ is the order of $\mathcal{X}$, and $I_n$ is the dimension of the $n$-th mode of $\mathcal{X}$, $n = 1, 2, \ldots, N$. $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}^{(n)} \in \mathbb{R}^{R_{n-1} \times I_n \times R_n}$ is the $n$-th TR factor, with $R_{n-1}$, $I_n$ and $R_n$ its dimensions. $\|\cdot\|_{TTNN}$ denotes the transformed tensor nuclear norm, and $\lambda > 0$ is a trade-off parameter.

By restricting the TR factors through low-rank constraints, the above basic low-rank tensor ring completion model implicitly adjusts the TR rank over the iterations so that it gradually approaches the actual rank of the tensor ring decomposition, which enhances robustness to the initial rank selection;

S23: To further improve completion performance on visual data, a factor prior can be added to make full use of the latent information of the visual data. Combining the low-rank assumption on the TR factors with a graph-regularized factor prior yields the visual data completion model based on low-rank tensor ring decomposition and factor priors:

$$\min_{[\mathcal{G}],\,\mathcal{X}} \ \sum_{n=1}^{N} \left( \big\| \mathcal{G}^{(n)} \big\|_{TTNN} + \frac{\mu\,\alpha_n}{2}\, \mathrm{tr}\!\left( \mathbf{G}^{(n)\,\mathrm{T}}_{(2)} \mathbf{L}_n \mathbf{G}^{(n)}_{(2)} \right) \right) + \frac{\lambda}{2} \big\| \mathcal{X} - \Psi([\mathcal{G}]) \big\|_F^2$$

$$\text{s.t.} \ \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T})$$

where the first line is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors, and the second line is its constraint. $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]$ is a graph regularization parameter vector with $\alpha_n$ its $n$-th element, $n = 1, 2, \ldots, N$. $\mu$ and $\lambda$ are trade-off parameters with $\mu > 0$, $\lambda > 0$. $\mathbf{L}_n$ denotes the $n$-th Laplacian matrix, $\mathbf{G}^{(n)}_{(2)}$ denotes the standard mode-2 unfolding matrix of the $n$-th TR factor $\mathcal{G}^{(n)}$, $\mathrm{tr}(\cdot)$ denotes the trace of a matrix, and the superscript $\mathrm{T}$ denotes the matrix transpose.

Step S3 includes the following sub-steps:

S31: To solve the objective function with ADMM, a series of auxiliary tensors $[\mathcal{M}] = \{\mathcal{M}^{(1)}, \ldots, \mathcal{M}^{(N)}\}$ is first introduced to simplify the optimization, where $\mathcal{M}^{(n)}$ is the auxiliary tensor corresponding to the $n$-th TR factor $\mathcal{G}^{(n)}$. The optimization problem of the objective function can then be reformulated as:

$$\min_{[\mathcal{G}],\,[\mathcal{M}],\,\mathcal{X}} \ \sum_{n=1}^{N} \left( \big\| \mathcal{M}^{(n)} \big\|_{TTNN} + \frac{\mu\,\alpha_n}{2}\, \mathrm{tr}\!\left( \mathbf{G}^{(n)\,\mathrm{T}}_{(2)} \mathbf{L}_n \mathbf{G}^{(n)}_{(2)} \right) \right) + \frac{\lambda}{2} \big\| \mathcal{X} - \Psi([\mathcal{G}]) \big\|_F^2$$

$$\text{s.t.} \ \ \mathcal{M}^{(n)} = \mathcal{G}^{(n)}, \ n = 1, 2, \ldots, N$$

$$P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T})$$

By incorporating the additional equality constraints $\mathcal{M}^{(n)} = \mathcal{G}^{(n)}$, $n = 1, 2, \ldots, N$, for the auxiliary tensors, the augmented Lagrangian of the objective function is obtained as:

$$\mathcal{L}\big([\mathcal{G}], [\mathcal{M}], \mathcal{X}, [\mathcal{Y}]\big) = \sum_{n=1}^{N} \Big( \big\| \mathcal{M}^{(n)} \big\|_{TTNN} + \frac{\mu\,\alpha_n}{2}\, \mathrm{tr}\!\left( \mathbf{G}^{(n)\,\mathrm{T}}_{(2)} \mathbf{L}_n \mathbf{G}^{(n)}_{(2)} \right) + \big\langle \mathcal{Y}^{(n)}, \mathcal{M}^{(n)} - \mathcal{G}^{(n)} \big\rangle + \frac{\beta}{2} \big\| \mathcal{M}^{(n)} - \mathcal{G}^{(n)} \big\|_F^2 \Big) + \frac{\lambda}{2} \big\| \mathcal{X} - \Psi([\mathcal{G}]) \big\|_F^2$$

where $[\mathcal{Y}] = \{\mathcal{Y}^{(1)}, \ldots, \mathcal{Y}^{(N)}\}$ is the set of Lagrange multipliers, $\mathcal{Y}^{(n)}$ is the $n$-th Lagrange multiplier, $N$ is the total number of Lagrange multipliers, $\beta > 0$ is a penalty parameter, and $\langle \cdot, \cdot \rangle$ denotes the tensor inner product.

Then each variable is updated alternately: for each variable, the other variables are fixed and the corresponding optimization subproblem among S32 to S35 is solved in turn.

S32: Update of $\mathcal{G}^{(n)}$. The optimization subproblem with respect to $\mathcal{G}^{(n)}$ can be simplified to:

$$\min_{\mathbf{G}^{(n)}_{(2)}} \ \frac{\mu\,\alpha_n}{2}\, \mathrm{tr}\!\left( \mathbf{G}^{(n)\,\mathrm{T}}_{(2)} \mathbf{L}_n \mathbf{G}^{(n)}_{(2)} \right) + \frac{\beta}{2} \left\| \mathbf{M}^{(n)}_{(2)} - \mathbf{G}^{(n)}_{(2)} + \frac{1}{\beta}\mathbf{Y}^{(n)}_{(2)} \right\|_F^2 + \frac{\lambda}{2} \left\| \mathbf{X}_{\langle n \rangle} - \mathbf{G}^{(n)}_{(2)} \big( \mathbf{G}^{(\neq n)}_{\langle 2 \rangle} \big)^{\mathrm{T}} \right\|_F^2$$

where $\mathbf{X}_{\langle n \rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $\mathbf{G}^{(\neq n)}_{\langle 2 \rangle}$ denotes the circular mode-2 unfolding matrix of the subchain tensor generated by merging all TR factors except the $n$-th factor $\mathcal{G}^{(n)}$ via multilinear products. Solving this subproblem updates $\mathcal{G}^{(n)}$;

S33: Update of $\mathcal{M}^{(n)}$. The optimization subproblem with respect to $\mathcal{M}^{(n)}$ can be written as:

$$\min_{\mathcal{M}^{(n)}} \ \big\| \mathcal{M}^{(n)} \big\|_{TTNN} + \frac{\beta}{2} \left\| \mathcal{M}^{(n)} - \mathcal{G}^{(n)} + \frac{1}{\beta}\mathcal{Y}^{(n)} \right\|_F^2$$

Solving this subproblem updates $\mathcal{M}^{(n)}$;

S34: Update of $\mathcal{X}$. The optimization subproblem with respect to $\mathcal{X}$ can be formulated as:

$$\min_{\mathcal{X}} \ \frac{\lambda}{2} \big\| \mathcal{X} - \Psi([\mathcal{G}]) \big\|_F^2$$

$$\text{s.t.} \ \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T})$$

Solving this subproblem updates $\mathcal{X}$;

S35: Update of $\mathcal{Y}^{(n)}$. Based on the ADMM scheme, the Lagrange multipliers $\mathcal{Y}^{(n)}$ are updated as:

$$\mathcal{Y}^{(n)} \leftarrow \mathcal{Y}^{(n)} + \beta \big( \mathcal{M}^{(n)} - \mathcal{G}^{(n)} \big)$$

In addition, the penalty parameter $\beta$ of the augmented Lagrangian is updated in each iteration by $\beta = \min(\rho\beta, \beta_{\max})$, where $1 < \rho < 1.5$ is a tuning hyperparameter, $\beta_{\max}$ is a preset upper bound on $\beta$, and $\min(\rho\beta, \beta_{\max})$ takes the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of $\beta$;

S36: Repeat steps S32-S35, updating each variable alternately over multiple iterations. Two convergence conditions are set: a maximum number of iterations maxiter, and a threshold tol on the relative error between two consecutive iterations. When both conditions are satisfied, that is, the maximum number of iterations maxiter is reached and the relative error between two iterations falls below the threshold tol, the iteration ends and the solution of the target tensor $\mathcal{X}$ is obtained.
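The overall S32-S35 loop, with the penalty growth of S35 and the stopping rules of S36, can be sketched as a generic driver. The four callables stand in for the closed-form subproblem solutions; stopping as soon as either condition holds is a common variant, used here as an assumption for a runnable sketch:

```python
import numpy as np

def admm_completion(X0, update_factors, update_aux, update_x, update_duals,
                    maxiter=100, tol=1e-5, rho=1.1, beta=1.0, beta_max=100.0):
    """Control flow of the alternating scheme: iterate the four updates,
    grow the penalty beta = min(rho*beta, beta_max), and stop on maxiter
    or on a small relative change of X between iterations."""
    X = X0
    for it in range(maxiter):
        update_factors(beta)       # S32: TR cores G^(n)
        update_aux(beta)           # S33: auxiliary tensors (TTNN prox)
        X_new = update_x()         # S34: fill missing entries
        update_duals(beta)         # S35: Lagrange multipliers
        beta = min(rho * beta, beta_max)
        rel = np.linalg.norm(X_new - X) / max(np.linalg.norm(X), 1e-12)
        X = X_new
        if rel < tol:
            break
    return X, it + 1

# toy run: dummy updates that pull X halfway toward an all-twos tensor,
# so the relative change shrinks geometrically and the loop terminates
state = {'X': np.zeros(4)}
def noop(beta=None): pass
def pull_x():
    state['X'] = 0.5 * state['X'] + 1.0
    return state['X'].copy()
X_final, iters = admm_completion(np.zeros(4), noop, noop, pull_x, noop,
                                 maxiter=200, tol=1e-8)
assert np.allclose(X_final, 2.0, atol=1e-6)
```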

By simultaneously exploiting the low-rank structure and the prior information of the factor space, the present invention designs a hierarchical tensor decomposition model that achieves tensor ring decomposition and completion at the same time. In the first layer, the incomplete tensor is represented as a series of third-order TR factors through tensor ring decomposition. In the second layer, the transformed tensor nuclear norm expresses the low-rank constraint on the TR factors, and a graph-regularized factor prior strategy is considered. The low-rank constraint on the TR factors gives the data completion model of the present invention an implicit rank adjustment, enhancing its robustness to TR rank selection and thereby relieving the burden of searching for the optimal TR rank, while the factor prior makes full use of the latent information of the original visual data and helps further improve completion performance.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the overall framework of an embodiment of the present invention;

Fig. 2 is a simplified flowchart of the visual data completion method based on low-rank tensor ring decomposition and factor priors in an embodiment of the present invention;

Fig. 3 shows the original color image data in an embodiment of the present invention;

Fig. 4 shows the original color video data in an embodiment of the present invention;

Fig. 5 shows the completion results for color images at different missing rates in an embodiment of the present invention;

Fig. 6 shows the completion results for color video at different missing rates in an embodiment of the present invention.

Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings.

The present invention proposes a visual data completion method based on low-rank tensor ring decomposition and factor priors, which specifically includes the following steps:

Step S1: Target tensor initialization.

S11: Acquire the incomplete original visual data (such as color images or color video), read the original visual data file with missing entries into Matlab, and store it in tensor form to obtain the tensor to be completed $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$; take the index positions of all known pixels of the original visual data to form the observation index set $\Omega$;

S12: According to the tensor to be completed $\mathcal{T}$, initialize the target tensor $\mathcal{X}$ so that the mapping satisfies $\mathcal{X}_{\Omega} = \mathcal{T}_{\Omega}$, where $\bar{\Omega}$, the complement of $\Omega$, denotes the missing index set; $\mathcal{X}_{\Omega}$ denotes the known entries of the target tensor $\mathcal{X}$, $\mathcal{T}_{\Omega}$ denotes the known entries of the tensor to be completed $\mathcal{T}$, and $\mathcal{X}_{\bar{\Omega}}$ denotes the missing entries of $\mathcal{X}$.

Step S2: Model establishment.

S21: A tensor ring decomposition representation is found from the known entries of the incomplete original visual data, and the TR factors of this representation are then used to estimate the missing entries of the original visual data. The simple tensor ring completion model is:

$$\min_{[\mathcal{G}]} \ \frac{1}{2} \left\| P_{\Omega}(\mathcal{X}) - P_{\Omega}\big(\Psi([\mathcal{G}])\big) \right\|_F^2$$

where $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}^{(n)}$ is the $n$-th TR factor, $n = 1, 2, \ldots, N$, $\Psi([\mathcal{G}])$ is the tensor ring decomposition representation, $P_{\Omega}(\cdot)$ denotes the projection operation onto the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor. To address the dependence of this kind of low-rank tensor completion method on the initial rank, the simple tensor ring completion model is improved below;

S22: First, the transformed tensor singular value decomposition and the basic tensor algebra it involves are introduced.

Tensor unitary transform: For a third-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, suppose $\Phi \in \mathbb{C}^{I_3 \times I_3}$ is a unitary transform matrix satisfying $\Phi\Phi^{\mathrm{H}} = \Phi^{\mathrm{H}}\Phi = \mathbf{I}$. The unitary transform of the tensor $\mathcal{X}$ is defined as:

$$\mathcal{X}_{\Phi} = \mathcal{X} \times_3 \Phi$$

where $\mathcal{X}_{\Phi}$ denotes the unitary transform of $\mathcal{X}$, and $\mathcal{X} \times_3 \Phi$ denotes the mode-3 product of the tensor $\mathcal{X}$ with the matrix $\Phi$. $\mathbf{I}$ denotes the identity matrix, the superscript $\mathrm{H}$ denotes the conjugate transpose of a matrix, $\mathbb{R}$ denotes the real field, and $I_{k'}$, $k' = 1, 2, 3$, denotes the dimension of the $k'$-th mode of $\mathcal{X}$.
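The mode-3 product with a unitary matrix is a one-liner in NumPy; with a unitary $\Phi$ it is invertible by $\Phi^{\mathrm H}$ and preserves the Frobenius norm. A sketch (a random real orthogonal matrix stands in for $\Phi$; any unitary matrix such as a DFT or DCT matrix would do):

```python
import numpy as np

def mode3_transform(X, Phi):
    """Unitary transform of a third-order tensor along mode 3:
    (X x_3 Phi)[i, j, l] = sum_k X[i, j, k] * Phi[l, k]."""
    return np.einsum('ijk,lk->ijl', X, Phi)

rng = np.random.default_rng(4)
X = rng.random((2, 3, 4))
Phi, _ = np.linalg.qr(rng.random((4, 4)))    # real orthogonal => unitary
Xt = mode3_transform(X, Phi)
# invert with Phi^H and check the norm is preserved
assert np.allclose(mode3_transform(Xt, Phi.conj().T), X)
assert np.isclose(np.linalg.norm(Xt), np.linalg.norm(X))
```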

Block diagonal matrix: The block diagonal matrix formed from all frontal slices of $\mathcal{X}_{\Phi}$ is defined as:

$$\bar{\mathbf{X}}_{\Phi} = \mathrm{bdiag}(\mathcal{X}_{\Phi}) = \begin{bmatrix} \mathbf{X}_{\Phi}^{(1)} & & \\ & \ddots & \\ & & \mathbf{X}_{\Phi}^{(I_3)} \end{bmatrix}$$

where $\mathbf{X}_{\Phi}^{(i)}$ is the $i$-th frontal slice of $\mathcal{X}_{\Phi}$, $i = 1, 2, \ldots, I_3$, and $\bar{\mathbf{X}}_{\Phi}$ can be converted back into a tensor by the folding operator $\mathrm{fold}(\cdot)$, i.e. $\mathcal{X}_{\Phi} = \mathrm{fold}(\bar{\mathbf{X}}_{\Phi})$.

Tensor Φ-product: The Φ-product of two third-order tensors is defined through the products of their frontal slices in the unitary transform domain. For two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times I_4 \times I_3}$, the tensor Φ-product is defined as:

$$\mathcal{C} = \mathcal{A} *_{\Phi} \mathcal{B}, \qquad \mathbf{C}_{\Phi}^{(i)} = \mathbf{A}_{\Phi}^{(i)} \mathbf{B}_{\Phi}^{(i)}, \ i = 1, 2, \ldots, I_3$$

where $*_{\Phi}$ denotes the tensor Φ-product and $\mathcal{B}_{\Phi}$ denotes the unitary transform of $\mathcal{B}$. The result of the tensor Φ-product is a third-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times I_4 \times I_3}$; the superscript $\mathrm{H}$ denotes the conjugate transpose, and $I_4$ denotes the dimension of the second mode of $\mathcal{B}$.
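The Φ-product definition above translates directly into code: transform both tensors along mode 3, multiply matching frontal slices, and transform back. A NumPy sketch (illustrative names; with $\Phi = \mathbf{I}$ the Φ-product reduces to plain frontal-slice matrix products, which the example uses as a sanity check):

```python
import numpy as np

def phi_product(A, B, Phi):
    """Tensor Phi-product of A (I1 x I2 x I3) and B (I2 x I4 x I3):
    slice-wise matrix products in the Phi transform domain, then an
    inverse transform along mode 3."""
    At = np.einsum('ijk,lk->ijl', A, Phi)
    Bt = np.einsum('ijk,lk->ijl', B, Phi)
    Ct = np.einsum('imk,mjk->ijk', At, Bt)           # frontal-slice matmul
    return np.einsum('ijl,ml->ijm', Ct, Phi.conj().T)

rng = np.random.default_rng(5)
A = rng.random((2, 3, 4))
B = rng.random((3, 5, 4))
C = phi_product(A, B, np.eye(4))
assert C.shape == (2, 5, 4)
assert np.allclose(C[:, :, 0], A[:, :, 0] @ B[:, :, 0])
```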

Transform-based tensor singular value decomposition (transform t-SVD): mainly used to factorize third-order tensors; a unitary transform matrix Φ replaces the discrete Fourier transform matrix of the traditional tensor SVD. For a third-order tensor $\mathcal{A}$, its transform t-SVD can be expressed as:

$$\mathcal{A} = \mathcal{U} *_\Phi \mathcal{S} *_\Phi \mathcal{V}^H$$

where $\mathcal{U}$ and $\mathcal{V}$ are both unitary tensors and $\mathcal{S}$ is a diagonal tensor.
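The transform t-SVD can be computed by ordinary matrix SVDs on the frontal slices in the transform domain; a minimal sketch (our naming), verified by reconstruction:

```python
import numpy as np

def mode3(A, Phi):
    # mode-3 product A x_3 Phi
    return np.einsum('ijk,lk->ijl', A, Phi)

def transform_tsvd(A, Phi):
    # SVD of every frontal slice in the transform domain
    Ah = mode3(A, Phi)
    I1, I2, I3 = A.shape
    Uh = np.zeros((I1, I1, I3)); Sh = np.zeros((I1, I2, I3)); Vh = np.zeros((I2, I2, I3))
    for i in range(I3):
        u, s, vt = np.linalg.svd(Ah[:, :, i])
        Uh[:, :, i], Vh[:, :, i] = u, vt
        Sh[:len(s), :len(s), i] = np.diag(s)
    return Uh, Sh, Vh

Phi = np.linalg.qr(np.random.randn(4, 4))[0]
A = np.random.randn(3, 5, 4)
Uh, Sh, Vh = transform_tsvd(A, Phi)
# reconstruct slice-by-slice in the transform domain, then transform back
rec = np.stack([Uh[:, :, i] @ Sh[:, :, i] @ Vh[:, :, i] for i in range(4)], axis=2)
assert np.allclose(mode3(rec, Phi.T), A)  # A = U *_Phi S *_Phi V^H
```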

Based on the transform t-SVD, the transformed tensor nuclear norm (TTNN) can be defined. For a third-order tensor $\mathcal{A}$, suppose $\Phi$ is a unitary transform matrix; the TTNN of the tensor $\mathcal{A}$ is defined as:

$$\|\mathcal{A}\|_{TTNN} = \big\|\bar{A}_\Phi\big\|_* = \sum_{i=1}^{I_3} \big\|\hat{A}_\Phi^{(i)}\big\|_*$$

where $\|\cdot\|_{TTNN}$ denotes the transformed tensor nuclear norm and $\|\cdot\|_*$ the matrix nuclear norm; $\|\hat{A}_\Phi^{(i)}\|_*$ is the matrix nuclear norm of the $i$-th frontal slice of $\hat{\mathcal{A}}_\Phi$, i.e. the sum of all singular values of the matrix $\hat{A}_\Phi^{(i)}$.
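The TTNN is cheap to evaluate once the transform is applied; a short sketch (function name ours):

```python
import numpy as np

def ttnn(A, Phi):
    # sum of matrix nuclear norms of the frontal slices in the transform domain
    Ah = np.einsum('ijk,lk->ijl', A, Phi)
    return sum(np.linalg.svd(Ah[:, :, i], compute_uv=False).sum()
               for i in range(A.shape[2]))

A = np.random.randn(4, 5, 3)
Phi = np.linalg.qr(np.random.randn(3, 3))[0]
assert ttnn(A, Phi) >= 0
# with Phi = I the TTNN is simply the sum of slice-wise nuclear norms
val = ttnn(A, np.eye(3))
ref = sum(np.linalg.svd(A[:, :, i], compute_uv=False).sum() for i in range(3))
assert np.isclose(val, ref)
```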

Since the tensor rank and the ranks of the TR factors satisfy the relation $\mathrm{rank}(X_{(n)}) \le \mathrm{rank}\big(G_{(2)}^{(n)}\big)$, where $X_{(n)}$ denotes the standard mode-$n$ unfolding matrix of the tensor $\mathcal{X}$, $G_{(2)}^{(n)}$ denotes the standard mode-2 unfolding matrix of $\mathcal{G}^{(n)}$, and $\mathrm{rank}(\cdot)$ is the matrix rank function, the rank of the target tensor $\mathcal{X}$ is bounded to some extent by the rank of the corresponding TR factor $\mathcal{G}^{(n)}$. This makes it possible to exploit the global low-rank structure of tensor data by regularizing the TR factors. Moreover, the TTNN approximates the sum of the transformed multi-rank of a tensor and is therefore a suitable surrogate for the tensor rank. Consequently, the TTNN can be used to further constrain every TR factor, yielding the basic low-rank tensor ring completion model:

$$\min_{[\mathcal{G}],\,\mathcal{X}} \ \sum_{n=1}^{N} \big\|\mathcal{G}^{(n)}\big\|_{TTNN} + \frac{\lambda}{2}\,\big\|\mathcal{X} - \Re([\mathcal{G}])\big\|_F^2$$

$$\mathrm{s.t.}\quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T})$$

where the target tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, $N$ is the order of $\mathcal{X}$, and $I_n$ is the size of its $n$-th mode. $[\mathcal{G}]$ denotes the set of TR factors; $\mathcal{G}^{(n)} \in \mathbb{R}^{R_{n-1} \times I_n \times R_n}$ denotes the $n$-th TR factor, whose three modes have sizes $R_{n-1}$, $I_n$ and $R_n$, respectively. $\|\cdot\|_{TTNN}$ denotes the transformed tensor nuclear norm, and $\lambda > 0$ is a trade-off parameter. When this basic low-rank tensor ring completion model is optimized, the TTNN of all TR factors and the fitting error of the target tensor are minimized simultaneously. Furthermore, the transform t-SVD here involves a key unitary transform matrix $\Phi_n$. In this model, a unitary transform matrix $\Phi_n \in \mathbb{R}^{R_n \times R_n}$ is constructed from the given third-order TR factor $\mathcal{G}^{(n)}$. Since $\mathcal{G}^{(n)}$ is unknown, $\Phi_n$ can be updated iteratively. This process can be expressed as:

$$[U, S, V] = \mathrm{SVD}\big(G_{(3)}^{(n)}\big), \qquad \Phi_n = U^H$$

where $G_{(3)}^{(n)}$ denotes the standard mode-3 unfolding matrix of $\mathcal{G}^{(n)}$, $\mathrm{SVD}\big(G_{(3)}^{(n)}\big)$ denotes the singular value decomposition of that unfolding matrix, $U$ and $V$ denote the left and right singular matrices, respectively, and $S$ denotes the diagonal matrix. $U^H$ can be chosen as the unitary transform matrix in the transform t-SVD. Suppose the rank of $G_{(3)}^{(n)}$ satisfies $\mathrm{rank}\big(G_{(3)}^{(n)}\big) = r \le R_n$; then, by performing the tensor unitary transform $\hat{\mathcal{G}}^{(n)}_{\Phi_n} = \mathcal{G}^{(n)} \times_3 \Phi_n$, the last $R_n - r$ frontal slices of the tensor $\hat{\mathcal{G}}^{(n)}_{\Phi_n}$ are all zero matrices. Therefore, $U^H$ used as a unitary transform matrix helps to further exploit the low-rank information of the TR factor $\mathcal{G}^{(n)}$.
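The zero-slice property claimed above can be checked numerically; a sketch assuming real-valued data, so $U^H = U^T$:

```python
import numpy as np

I1, I2, I3, r = 4, 5, 6, 2
# build a TR-factor-like tensor whose mode-3 unfolding has rank r < I3
G3 = np.random.randn(I3, r) @ np.random.randn(r, I1 * I2)   # mode-3 unfolding
G = np.moveaxis(G3.reshape(I3, I1, I2), 0, 2)               # fold back to (I1, I2, I3)
U, s, Vt = np.linalg.svd(G3)
Phi = U.T                                                   # Phi_n = U^H (real case)
G_hat = np.einsum('ijk,lk->ijl', G, Phi)                    # G x_3 Phi_n
# the last I3 - r frontal slices of the transformed tensor vanish
assert np.allclose(G_hat[:, :, r:], 0)
```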

S23: To further improve completion performance on visual data, factor priors can be added to fully exploit the latent information in the data. Graph regularization, applied to visual data completion, can encode common image priors to aid image restoration. One widely used image prior is the local-similarity prior, which assumes that adjacent rows and columns are highly correlated. In a tensor ring decomposition, the $n$-th TR factor $\mathcal{G}^{(n)}$ represents the information along the $n$-th mode of the original visual data. For example, if a color image is viewed as a third-order tensor, the first two TR factors of its tensor ring decomposition encode the variations in row space and column space, respectively. Hence the local pixel similarity of visual data such as color images and color videos can be captured as a precise factor prior, and the weights of a single-factor graph can be defined as:

$$w_{ij} = \exp\!\Big(-\frac{(i_k - j_k)^2}{2\sigma^2}\Big), \qquad k \in \{\mathrm{row}, \mathrm{column}\}$$

where row and column denote the row space and column space, respectively; if $k = \mathrm{row}$, then $i_k$ and $j_k$ denote any two index positions in the row space. $w_{ij}$ is the $(i,j)$-th element of the similarity matrix $W$, and $\sigma$ is the mean of all pairwise distances $i_k - j_k$. Let $D$ be the diagonal matrix whose $(i,i)$-th element is $\sum_j w_{ij}$; the Laplacian matrix is then obtained as $L = D - W$.
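A small sketch of the graph Laplacian built from Gaussian similarity weights, with σ taken as the mean pairwise distance as stated (the exact weight formula in the patent is an image, so the Gaussian form here is our assumption):

```python
import numpy as np

n = 8
idx = np.arange(n, dtype=float)
dist = np.abs(idx[:, None] - idx[None, :])       # pairwise index distances
sigma = dist[dist > 0].mean()                    # mean of all pairwise distances
W = np.exp(-dist ** 2 / (2 * sigma ** 2))        # assumed Gaussian similarity weights
D = np.diag(W.sum(axis=1))
L = D - W                                        # graph Laplacian L = D - W
assert np.allclose(L.sum(axis=1), 0)             # Laplacian rows sum to zero
assert np.linalg.eigvalsh(L).min() > -1e-10      # L is positive semidefinite
```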

Using the low-rank assumption on the TR factors together with the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and factor priors is obtained as:

$$\min_{[\mathcal{G}],\,\mathcal{X}} \ \sum_{n=1}^{N} \Big( \big\|\mathcal{G}^{(n)}\big\|_{TTNN} + \frac{\mu\,\alpha_n}{2}\,\mathrm{tr}\big( G_{(2)}^{(n)\,T} L_n\, G_{(2)}^{(n)} \big) \Big) + \frac{\lambda}{2}\,\big\|\mathcal{X} - \Re([\mathcal{G}])\big\|_F^2$$

$$\mathrm{s.t.}\quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T})$$

where the first line above is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors, and the second line is its constraint. $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]$ is a vector of graph regularization parameters; $\mu$ and $\lambda$ are trade-off parameters with $\mu > 0$ and $\lambda > 0$; $\mathrm{tr}(\cdot)$ is the matrix trace operation. The Laplacian matrix $L_n$ describes the interdependence within the $n$-th TR factor, $G_{(2)}^{(n)}$ denotes the standard mode-2 unfolding matrix of the $n$-th TR factor $\mathcal{G}^{(n)}$, and the superscript $T$ denotes the matrix transpose.

Step S3: model solving.

S31: Construct the augmented Lagrangian function.

To solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors within the ADMM computational framework, a set of auxiliary tensors $[\mathcal{M}] = \{\mathcal{M}^{(n)}\}_{n=1}^{N}$ is first introduced to simplify the optimization, so the problem can be reformulated as:

$$\min_{[\mathcal{G}],\,[\mathcal{M}],\,\mathcal{X}} \ \sum_{n=1}^{N} \Big( \big\|\mathcal{M}^{(n)}\big\|_{TTNN} + \frac{\mu\alpha_n}{2}\,\mathrm{tr}\big( G_{(2)}^{(n)\,T} L_n\, G_{(2)}^{(n)} \big) \Big) + \frac{\lambda}{2}\,\big\|\mathcal{X} - \Re([\mathcal{G}])\big\|_F^2$$

$$\mathrm{s.t.}\quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T}), \qquad \mathcal{M}^{(n)} = \mathcal{G}^{(n)},\quad n = 1, 2, \ldots, N$$

where the set $[\mathcal{M}]$ denotes a tensor sequence and $\mathcal{M}^{(n)}$ denotes the auxiliary tensor corresponding to the $n$-th TR factor $\mathcal{G}^{(n)}$. By incorporating the additional equality constraints $\mathcal{M}^{(n)} = \mathcal{G}^{(n)}$, the augmented Lagrangian function of the objective is obtained as:

$$\mathcal{L}\big([\mathcal{G}], [\mathcal{M}], \mathcal{X}, [\mathcal{Y}]\big) = \sum_{n=1}^{N} \Big( \big\|\mathcal{M}^{(n)}\big\|_{TTNN} + \frac{\mu\alpha_n}{2}\,\mathrm{tr}\big( G_{(2)}^{(n)\,T} L_n G_{(2)}^{(n)} \big) + \big\langle \mathcal{Y}^{(n)},\, \mathcal{G}^{(n)} - \mathcal{M}^{(n)} \big\rangle + \frac{\beta}{2}\,\big\|\mathcal{G}^{(n)} - \mathcal{M}^{(n)}\big\|_F^2 \Big) + \frac{\lambda}{2}\,\big\|\mathcal{X} - \Re([\mathcal{G}])\big\|_F^2$$

where $[\mathcal{Y}]$ is the set of Lagrange multipliers, $\mathcal{Y}^{(n)}$ is the $n$-th Lagrange multiplier, $\beta > 0$ is a penalty parameter, and $\langle \cdot, \cdot \rangle$ denotes the tensor inner product. Each of the following variables can then be updated alternately by fixing the other variables and solving each of the sub-problems S32 to S35 in turn.

S32: Update of $\mathcal{G}^{(n)}$.

The optimization sub-problem with respect to the variable $\mathcal{G}^{(n)}$ can be simplified to:

$$\min_{\mathcal{G}^{(n)}} \ \frac{\mu\alpha_n}{2}\,\mathrm{tr}\big( G_{(2)}^{(n)\,T} L_n G_{(2)}^{(n)} \big) + \frac{\beta}{2}\,\Big\|\mathcal{G}^{(n)} - \mathcal{M}^{(n)} + \frac{1}{\beta}\mathcal{Y}^{(n)}\Big\|_F^2 + \frac{\lambda}{2}\,\Big\|X_{\langle n\rangle} - G_{(2)}^{(n)}\big(G_{\langle 2\rangle}^{(\neq n)}\big)^T\Big\|_F^2$$

where $X_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $G_{\langle 2\rangle}^{(\neq n)}$ denotes the circular mode-2 unfolding matrix of the sub-chain tensor generated by merging, via multilinear products, all TR factors except the $n$-th factor $\mathcal{G}^{(n)}$.

Setting the first-order gradient of the formula above with respect to $G_{(2)}^{(n)}$ to zero, solving this sub-problem is equivalent to solving the following general Sylvester matrix equation:

$$\mu\alpha_n L_n\, G_{(2)}^{(n)} + G_{(2)}^{(n)}\Big( \lambda\, G_{\langle 2\rangle}^{(\neq n)\,T} G_{\langle 2\rangle}^{(\neq n)} + \beta I \Big) = \lambda\, X_{\langle n\rangle}\, G_{\langle 2\rangle}^{(\neq n)} + \beta M_{(2)}^{(n)} - Y_{(2)}^{(n)}$$

where $X_{\langle n\rangle}$ denotes the circular mode-$n$ unfolding matrix of the target tensor, $M_{(2)}^{(n)}$ and $Y_{(2)}^{(n)}$ denote the standard mode-2 unfolding matrices of $\mathcal{M}^{(n)}$ and $\mathcal{Y}^{(n)}$, respectively, and $I$ is an identity matrix. Since the eigenvalues of $-\mu\alpha_n L_n$ are non-positive while those of $\lambda\, G_{\langle 2\rangle}^{(\neq n)\,T} G_{\langle 2\rangle}^{(\neq n)} + \beta I$ are at least $\beta > 0$, the two coefficient matrices share no common eigenvalue, so the equation has a unique solution, which can be computed by calling the Sylvester function in Matlab.
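This step can be reproduced with `scipy.linalg.solve_sylvester`; a hedged stand-in where `A` plays the role of the (positive semidefinite) Laplacian term and `B` the positive definite term, so that $-A$ and $B$ share no eigenvalues and the solution is unique:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, p = 6, 4
M = rng.standard_normal((n, n))
A = 0.1 * (M @ M.T)               # stand-in for mu*alpha_n*L_n (PSD)
Q = rng.standard_normal((8, p))
B = 0.5 * np.eye(p) + Q.T @ Q     # stand-in for beta*I + lambda*(subchain term), PD
C = rng.standard_normal((n, p))
X = solve_sylvester(A, B, C)      # solves A @ X + X @ B = C
assert np.allclose(A @ X + X @ B, C)
```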

S33: Update of $\mathcal{M}^{(n)}$.

After $\mathcal{G}^{(n)}$ has been updated, the $n$-th unitary transform matrix $\Phi_n$ in the transformed tensor nuclear norm is first updated according to:

$$[U, S, V] = \mathrm{SVD}\big(G_{(3)}^{(n)}\big), \qquad \Phi_n = U^H$$

where $G_{(3)}^{(n)}$ denotes the standard mode-3 unfolding matrix of $\mathcal{G}^{(n)}$, $\mathrm{SVD}\big(G_{(3)}^{(n)}\big)$ denotes the singular value decomposition of that unfolding matrix, $U$ and $V$ denote the left and right singular matrices, respectively, and $S$ denotes the diagonal matrix.

Then, the optimization sub-problem with respect to the variable $\mathcal{M}^{(n)}$ can be written as:

$$\min_{\mathcal{M}^{(n)}} \ \big\|\mathcal{M}^{(n)}\big\|_{TTNN} + \frac{\beta}{2}\,\Big\|\mathcal{M}^{(n)} - \Big(\mathcal{G}^{(n)} + \frac{1}{\beta}\mathcal{Y}^{(n)}\Big)\Big\|_F^2$$

Let $\mathcal{Z}^{(n)} = \mathcal{G}^{(n)} + \frac{1}{\beta}\mathcal{Y}^{(n)}$ and $\tau = \frac{1}{\beta}$; the sub-problem above is then equivalent to:

$$\min_{\mathcal{M}^{(n)}} \ \tau\,\big\|\mathcal{M}^{(n)}\big\|_{TTNN} + \frac{1}{2}\,\big\|\mathcal{M}^{(n)} - \mathcal{Z}^{(n)}\big\|_F^2$$

Further, $\mathcal{Z}^{(n)}$ can be expressed via the transform t-SVD as $\mathcal{Z}^{(n)} = \mathcal{U} *_{\Phi_n} \mathcal{S} *_{\Phi_n} \mathcal{V}^H$, where $*_{\Phi_n}$ denotes the tensor Φ-product under the unitary transform matrix $\Phi_n$, $\mathcal{U}$ and $\mathcal{V}$ are both unitary tensors, and $\mathcal{S}$ is a diagonal tensor.

The optimization sub-problem for the variable $\mathcal{M}^{(n)}$ can be solved with the tensor singular value thresholding (t-SVT) operator, and the solution can be expressed as:

$$\mathcal{M}^{(n)} = \mathcal{U} *_{\Phi_n} \mathcal{S}_\tau *_{\Phi_n} \mathcal{V}^H$$

where $\mathcal{Z}^{(n)} = \mathcal{G}^{(n)} + \frac{1}{\beta}\mathcal{Y}^{(n)} = \mathcal{U} *_{\Phi_n} \mathcal{S} *_{\Phi_n} \mathcal{V}^H$ with $\tau = \frac{1}{\beta}$, and the intermediate variable $\mathcal{S}_\tau$ is obtained from $\hat{\mathcal{S}}_{\Phi_n} = \mathcal{S} \times_3 \Phi_n$, the mode-3 product of the tensor $\mathcal{S}$ with the matrix $\Phi_n$, via the elementwise shrinkage $\max\big(\hat{\mathcal{S}}_{\Phi_n} - \tau,\, 0\big)$, where $\max(\cdot,\, 0)$ takes the larger of its argument and 0. Concretely: first apply the tensor unitary transform to $\mathcal{S}$ to obtain $\hat{\mathcal{S}}_{\Phi_n}$; then apply the shrinkage $\max\big(\hat{\mathcal{S}}_{\Phi_n} - \tau,\, 0\big)$; finally transform back via $\times_3\, \Phi_n^H$ to obtain the intermediate variable $\mathcal{S}_\tau$.
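A minimal t-SVT sketch (soft-thresholding the slice-wise singular values in the Φ-domain; the function name is ours), checked on the limiting cases τ = 0 and τ large:

```python
import numpy as np

def t_svt(Z, Phi, tau):
    # shrink the singular values of every frontal slice in the transform domain
    Zh = np.einsum('ijk,lk->ijl', Z, Phi)
    out = np.zeros_like(Zh)
    for i in range(Z.shape[2]):
        u, s, vt = np.linalg.svd(Zh[:, :, i], full_matrices=False)
        out[:, :, i] = (u * np.maximum(s - tau, 0)) @ vt
    return np.einsum('ijk,lk->ijl', out, Phi.conj().T)

Z = np.random.randn(4, 5, 3)
Phi = np.linalg.qr(np.random.randn(3, 3))[0]
assert np.allclose(t_svt(Z, Phi, 0.0), Z)    # tau = 0: no shrinkage, Z is recovered
assert np.allclose(t_svt(Z, Phi, 1e6), 0)    # huge tau: everything shrinks to zero
```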

S34: Update of $\mathcal{X}$.

The optimization sub-problem with respect to the variable $\mathcal{X}$ can be formulated as:

$$\min_{\mathcal{X}} \ \frac{\lambda}{2}\,\big\|\mathcal{X} - \Re([\mathcal{G}])\big\|_F^2 \qquad \mathrm{s.t.}\quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T})$$

This is a convex optimization problem with an equality constraint. The variable $\mathcal{X}$ can be updated as:

$$\mathcal{X} = P_\Omega(\mathcal{T}) + P_{\bar\Omega}\big(\Re([\mathcal{G}])\big)$$

where $P_\Omega(\cdot)$ denotes the projection operation under the observed index set $\Omega$, and $P_{\bar\Omega}(\cdot)$ denotes the projection operation under the missing index set $\bar\Omega$.
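The X-update is just a masked combination of the observed data and the current TR reconstruction; a sketch where `G_rec` stands in for $\Re([\mathcal{G}])$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((4, 4, 3))                  # tensor holding the observed entries
Omega = rng.random(T.shape) < 0.6          # boolean observed-index mask
G_rec = rng.random(T.shape)                # stand-in for the TR reconstruction Re([G])
X = np.where(Omega, T, G_rec)              # X = P_Omega(T) + P_Omegabar(Re([G]))
assert np.allclose(X[Omega], T[Omega])         # observed entries are kept
assert np.allclose(X[~Omega], G_rec[~Omega])   # missing entries filled from the model
```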

S35: Update of $\mathcal{Y}^{(n)}$.

Under the ADMM scheme, the Lagrange multipliers $\mathcal{Y}^{(n)}$ can be updated as:

$$\mathcal{Y}^{(n)} = \mathcal{Y}^{(n)} + \beta\,\big(\mathcal{G}^{(n)} - \mathcal{M}^{(n)}\big)$$

In addition, the penalty parameter β of the augmented Lagrangian function can be updated in each iteration by $\beta = \min(\rho\beta,\, \beta_{\max})$, where $1 < \rho < 1.5$ is a tuning hyper-parameter, $\beta_{\max}$ denotes a preset upper bound on β, and $\min(\rho\beta,\, \beta_{\max})$ takes the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of β. In the specific embodiment of the present invention, ρ = 1.01.
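The β schedule is a one-liner per iteration; a sketch with the embodiment's ρ = 1.01 and a hypothetical β_max:

```python
beta, rho, beta_max = 0.01, 1.01, 100.0     # beta_max is an assumed cap
for _ in range(5):
    beta = min(rho * beta, beta_max)        # beta = min(rho*beta, beta_max)
assert abs(beta - 0.01 * 1.01 ** 5) < 1e-12  # cap not yet reached after 5 steps
```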

S36: Iterative updating.

Steps S32-S35 are repeated so that each variable is updated alternately over multiple iterations. Two convergence conditions are set: a maximum number of iterations maxiter = 300, and a relative-error threshold between two successive iterations tol = 10⁻⁴. The relative error between two iterations is computed as

$$\mathrm{err} = \frac{\big\|\mathcal{X}^{t+1} - \mathcal{X}^{t}\big\|_F}{\big\|\mathcal{X}^{t}\big\|_F}$$

where $\mathcal{X}^{t+1}$ denotes the current value of $\mathcal{X}$ and $\mathcal{X}^{t}$ the value from the previous iteration. The iteration terminates when either convergence condition is met, i.e. when the maximum of 300 iterations is reached or the relative error between two iterations falls below the threshold 10⁻⁴; the solution for the target tensor $\mathcal{X}$ is then obtained.
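The stopping test can be sketched as follows (`rel_change` is our name for the relative-error formula above):

```python
import numpy as np

def rel_change(X_new, X_old):
    # relative error between two successive iterates
    return np.linalg.norm(X_new - X_old) / np.linalg.norm(X_old)

maxiter, tol = 300, 1e-4
X_old = np.ones((4, 4, 3))
X_new = X_old * (1 + 1e-5)          # a tiny update between two iterations
err = rel_change(X_new, X_old)
assert err < tol                    # this update would trigger convergence
```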

Step S4: The obtained solution of the target tensor $\mathcal{X}$ is converted back into the corresponding format of the original visual data, giving the final completion result for the incomplete original visual data.

Example

In this embodiment, the method is tested on given tensor data (the color images and color videos shown in Figs. 3 and 4; the English text above each picture gives the name of the corresponding dataset). During algorithm initialization the penalty parameter is set to β = 0.01, and the other parameters are tuned manually for best performance. Incomplete tensor data are generated by randomly removing a portion of the pixels of the visual data at several different random missing rates (MR ∈ {60%, 70%, 80%, 90%, 95%}), and the proposed technical solution is applied to the tensor completion task. Figs. 5 and 6 show the completion results on color image and color video data, respectively; the peak signal-to-noise ratio (PSNR) is used to evaluate the recovery performance of the proposed data completion method on visual data, with the value above each picture giving the corresponding PSNR. The higher the PSNR, the better the quality of the recovered image. Comparing the images before and after restoration demonstrates the effectiveness of the method. The final results show that, compared with the conventional methods HaLRTC and TRALS, the completion results of the proposed method not only look better overall but also recover local texture details more faithfully, yielding results closer to the original images. Measured by the visual-quality metric PSNR, the proposed method likewise achieves higher recovery accuracy. In summary, the proposed method can effectively recover both the main content and the texture details of incomplete visual data at high missing rates, accomplishes the tensor completion task with better performance, and has good application prospects.

The embodiments described above are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Claims (5)

1. A visual data completion method based on low-rank tensor ring decomposition and factor priors, characterized in that the method comprises the following steps:

Step S1) initializing the target tensor, specifically comprising the following sub-steps:

S11) obtaining the incomplete original visual data: reading in the original visual data file with missing entries through Matlab software and storing it in tensor form to obtain the tensor to be completed $\mathcal{T}$; taking the index positions of all known pixels of the original visual data to form the observed index set Ω;

S12) initializing the target tensor $\mathcal{X}$ from the tensor to be completed $\mathcal{T}$ such that the mapping relation satisfies $P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T})$, where $\bar\Omega$, the complement of Ω, denotes the missing index set, $P_\Omega(\mathcal{X})$ denotes the known entries of the target tensor $\mathcal{X}$, $P_\Omega(\mathcal{T})$ denotes the known entries of the tensor to be completed $\mathcal{T}$, and $P_{\bar\Omega}(\mathcal{X})$ denotes the missing entries of the target tensor $\mathcal{X}$;
Step S2) model building, specifically comprising the following sub-steps:

S21) finding the corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data, and then estimating the missing entries of the original visual data using the TR factors of the obtained tensor ring decomposition representation, giving the simple tensor ring completion model:

$$\min_{[\mathcal{G}]} \ \big\|P_\Omega(\mathcal{T}) - P_\Omega\big(\Re([\mathcal{G}])\big)\big\|_F^2$$

where $[\mathcal{G}]$ denotes the set of TR factors, $\mathcal{G}^{(n)}$ denotes the $n$-th TR factor, $n = 1, 2, \ldots, N$, $\Re([\mathcal{G}])$ is the tensor ring decomposition representation, $P_\Omega(\cdot)$ denotes the projection operation under the observed index set Ω, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor;

to address the problem that this kind of low-rank tensor completion method depends on the initial rank, the simple tensor ring completion model is improved as follows;
Figure FDA00036447279000000116
假设
Figure FDA00036447279000000117
是一个酉变换矩阵,满足ΦΦH=ΦHΦ=I,张量
Figure FDA00036447279000000118
的酉变换定义为:
Tensor Unitary Transform: For third-order tensors
Figure FDA00036447279000000116
Assumption
Figure FDA00036447279000000117
is a unitary transformation matrix, satisfying ΦΦ HH Φ=I, tensor
Figure FDA00036447279000000118
The unitary transformation of is defined as:
Figure FDA00036447279000000119
Figure FDA00036447279000000119
其中,
Figure FDA00036447279000000120
表示张量
Figure FDA00036447279000000121
的酉变换,
Figure FDA00036447279000000122
表示张量
Figure FDA00036447279000000123
与矩阵Φ的模3乘积,I表示单位矩阵,上标H表示矩阵的共轭转置,
Figure FDA00036447279000000124
表示实数域,Ik′,k′=1,2,3分别表示张量
Figure FDA00036447279000000125
的第k′阶上的维度大小;
in,
Figure FDA00036447279000000120
Represents a tensor
Figure FDA00036447279000000121
The unitary transformation of ,
Figure FDA00036447279000000122
Represents a tensor
Figure FDA00036447279000000123
The modulo 3 product with the matrix Φ, I represents the identity matrix, the superscript H represents the conjugate transpose of the matrix,
Figure FDA00036447279000000124
Represents the real number field, I k′ , k′=1, 2, 3 represent tensors respectively
Figure FDA00036447279000000125
The dimension size on the k'th order of ;
block diagonal matrix: based on $\hat{\mathcal{A}}_\Phi$, the block diagonal matrix of all its frontal slices is defined as:

$$\bar{A}_\Phi \triangleq \mathrm{bdiag}(\hat{\mathcal{A}}_\Phi) = \begin{bmatrix} \hat{A}_\Phi^{(1)} & & \\ & \ddots & \\ & & \hat{A}_\Phi^{(I_3)} \end{bmatrix}$$

where $\hat{A}_\Phi^{(i)}$ is the $i$-th frontal slice of $\hat{\mathcal{A}}_\Phi$, $i = 1, 2, \ldots, I_3$, and $\bar{A}_\Phi$ can be converted back into a tensor by the folding operator $\mathrm{fold}(\cdot)$, i.e. $\mathrm{fold}(\bar{A}_\Phi) = \hat{\mathcal{A}}_\Phi$;
tensor Φ-product: the Φ-product of two third-order tensors is defined through the products of their frontal slices in the unitary transform domain; for two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times I_4 \times I_3}$, the tensor Φ-product is defined as:

$$\mathcal{C} = \mathcal{A} *_\Phi \mathcal{B} = \mathrm{fold}\big(\mathrm{bdiag}(\hat{\mathcal{A}}_\Phi)\,\mathrm{bdiag}(\hat{\mathcal{B}}_\Phi)\big) \times_3 \Phi^H$$

where $*_\Phi$ denotes the tensor Φ-product and $\hat{\mathcal{A}}_\Phi$ denotes the unitary transform of the tensor $\mathcal{A}$; the result of the tensor Φ-product is a third-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times I_4 \times I_3}$, the superscript $H$ denotes the conjugate transpose, and $I_4$ denotes the size of the second mode of the tensor $\mathcal{B}$;
transform tensor singular value decomposition: mainly used for the factorization of third-order tensors, with a unitary transform matrix Φ replacing the discrete Fourier transform matrix of the traditional tensor singular value decomposition; for a third-order tensor $\mathcal{A}$, its transform tensor singular value decomposition is expressed as:

$$\mathcal{A} = \mathcal{U} *_\Phi \mathcal{S} *_\Phi \mathcal{V}^H$$

where $\mathcal{U}$ and $\mathcal{V}$ are both unitary tensors and $\mathcal{S}$ is a diagonal tensor;
based on the transform tensor singular value decomposition, the transformed tensor nuclear norm can be defined; for a third-order tensor $\mathcal{A}$, suppose $\Phi$ is a unitary transform matrix; the transformed tensor nuclear norm of the tensor $\mathcal{A}$ is defined as:

$$\|\mathcal{A}\|_{TTNN} = \big\|\bar{A}_\Phi\big\|_* = \sum_{i=1}^{I_3} \big\|\hat{A}_\Phi^{(i)}\big\|_*$$

where $\|\cdot\|_{TTNN}$ denotes the transformed tensor nuclear norm, $\|\cdot\|_*$ denotes the matrix nuclear norm, and $\|\hat{A}_\Phi^{(i)}\|_*$ denotes the matrix nuclear norm of the $i$-th frontal slice of $\hat{\mathcal{A}}_\Phi$, i.e. the sum of all singular values of the matrix $\hat{A}_\Phi^{(i)}$;
由于张量秩与TR因子的秩满足关系
Figure FDA00036447279000000226
其中X(n)表示张量
Figure FDA00036447279000000227
的标准模n展开矩阵,
Figure FDA00036447279000000322
表示
Figure FDA00036447279000000323
的标准模2展开矩阵,rank(·)表示矩阵的秩函数,利用变换张量核范数来进一步约束每一个TR因子,得到基本低秩张量环补全模型为:
Due to the rank-satisfying relationship between the tensor rank and the TR factor
Figure FDA00036447279000000226
where X (n) represents the tensor
Figure FDA00036447279000000227
The standard modulo n expansion matrix of ,
Figure FDA00036447279000000322
express
Figure FDA00036447279000000323
The standard modulo 2 expansion matrix of , rank( ) represents the rank function of the matrix, and uses the transformed tensor kernel norm to further constrain each TR factor, and the basic low-rank tensor ring completion model is obtained as:
min_{X,[Z]}  Σ_{n=1}^N ||Z^(n)||_TTNN + (λ/2) ||X − Ψ([Z])||_F^2
s.t.  P_Ω(X) = P_Ω(T)

where Ψ([Z]) denotes the tensor generated from the TR factors and T the observed data tensor.
where the target tensor X ∈ R^{I_1×I_2×…×I_N}; N denotes the order of the target tensor X; I_n denotes the dimension size of the n-th mode of X; [Z] = {Z^(1), …, Z^(N)} denotes the set of TR factors; Z^(n) ∈ R^{R_{n-1}×I_n×R_n} denotes the n-th TR factor, with R_{n-1}, I_n and R_n its three dimension sizes; ||·||_TTNN denotes the transformed tensor nuclear norm; and λ > 0 is a trade-off parameter. When the above basic low-rank tensor ring completion model is optimized, the transformed tensor nuclear norms of all TR factors and the fitting error of the target tensor are minimized simultaneously. In this basic model, a unitary transformation matrix Φ_n is constructed from the given third-order TR factor Z^(n); since Z^(n) is unknown, Φ_n can be updated iteratively, a process expressed as:
[U, S, V] = SVD(Z^(n)_(3)),   Φ_n = U^H
where Z^(n)_(3) denotes the standard mode-3 unfolding matrix of Z^(n), [U, S, V] = SVD(Z^(n)_(3)) denotes the singular value decomposition of the unfolding matrix, U and V denote the left and right singular matrices, respectively, and S denotes the diagonal matrix; U^H is chosen as the unitary transformation matrix in the transformed tensor singular value decomposition. Suppose the rank of Z^(n)_(3) satisfies rank(Z^(n)_(3)) = r ≤ R_n; then performing the tensor unitary transformation Z̄^(n) = Φ_n[Z^(n)] yields a tensor Z̄^(n) whose last R_n − r frontal slices are all zero matrices. Therefore, U^H as a unitary transformation matrix helps to further explore the low-rank information of the TR factor Z^(n);
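A small numerical sketch of this property (helper names hypothetical, not the patented implementation): taking Φ_n = U^H from the SVD of the mode-3 unfolding zeroes out the trailing frontal slices of the transformed factor whenever the unfolding is rank-deficient:

```python
import numpy as np

def unfold_mode3(Z):
    """Standard mode-3 unfolding of a (R1, I, R2) tensor into (R2, R1*I)."""
    return Z.transpose(2, 0, 1).reshape(Z.shape[2], -1)

def transform_matrix(Z):
    """Phi_n = U^H, with U taken from the SVD of the mode-3 unfolding."""
    U, _, _ = np.linalg.svd(unfold_mode3(Z), full_matrices=True)
    return U.conj().T

rng = np.random.default_rng(1)
R1, I, R2, r = 3, 4, 5, 2
# build a factor whose mode-3 unfolding has rank r < R2
Z = np.einsum('kj,abj->abk', rng.standard_normal((R2, r)),
              rng.standard_normal((R1, I, r)))
Phi = transform_matrix(Z)
Z_bar = np.einsum('ij,abj->abi', Phi, Z)   # tensor unitary transformation
# the last R2 - r frontal slices of the transformed factor vanish
trailing = np.abs(Z_bar[:, :, r:]).max()
```

Here `trailing` is at machine precision, mirroring the claim that the last R_n − r frontal slices become zero matrices.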
S23) To further improve the completion performance on visual data, a factor prior is added to fully exploit the latent information of the data. In the tensor ring decomposition, each n-th TR factor Z^(n) represents the information of the n-th mode of the original visual data. The local pixel similarity of the original visual data is described as an explicit factor prior, and the weights of the single-factor graph are defined as follows:
w_ij = exp( −(i_k − j_k)^2 / (2σ^2) ),   k ∈ {row, column}
where row and column denote the row space and the column space, respectively; if k = row, then i_k and j_k denote any two index positions in the row space; w_ij is the (i,j)-th element of the similarity matrix W; and σ is the mean of all pairwise distances i_k − j_k. Let D be the diagonal matrix whose (i,i)-th element is Σ_j w_ij; the Laplacian matrix is then obtained as L = D − W;
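The single-factor graph construction above can be sketched as follows (a minimal illustration; the Gaussian form of the weight is an assumption inferred from the stated use of pairwise index distances and their mean σ):

```python
import numpy as np

def factor_graph_laplacian(n):
    """Build W, D and L = D - W for an n-point index space (row or column).

    w_ij is a Gaussian similarity of the index distance |i - j|, with
    sigma taken as the mean of all pairwise distances, as in the text.
    """
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])        # pairwise |i_k - j_k|
    sigma = dist[np.triu_indices(n, k=1)].mean()      # mean pairwise distance
    W = np.exp(-dist.astype(float) ** 2 / (2.0 * sigma ** 2))
    D = np.diag(W.sum(axis=1))                        # degree matrix
    return W, D, D - W
```

The resulting L is symmetric positive semi-definite with zero row sums, the standard graph-Laplacian properties used by the trace regularizer tr(Z^T L Z).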
Using the low-rank assumption on the TR factors and the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and factor priors is obtained as:
min_{X,[Z]}  Σ_{n=1}^N ( ||Z^(n)||_TTNN + (μ/2) α_n tr( (Z^(n)_(2))^T L_n Z^(n)_(2) ) ) + (λ/2) ||X − Ψ([Z])||_F^2
s.t.  P_Ω(X) = P_Ω(T)
where the first line above is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors and the second line is its constraint; α = [α_1, α_2, …, α_N] is a vector of graph-regularization parameters; μ and λ are trade-off parameters with μ > 0 and λ > 0; tr(·) is the matrix trace operation; the Laplacian matrix L_n describes the interdependencies within the n-th TR factor; Z^(n)_(2) denotes the standard mode-2 unfolding matrix of the n-th TR factor Z^(n); and the superscript T denotes the matrix transpose;
Step S3) Model solving, which comprises the following sub-steps:
S31) Constructing the augmented Lagrangian function
To solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors within the ADMM (alternating direction method of multipliers) framework, a series of auxiliary tensors [M] = {M^(1), …, M^(N)} is first introduced to simplify the optimization, so that the optimization problem of the objective function is reformulated as:
min_{X,[Z],[M]}  Σ_{n=1}^N ( ||M^(n)||_TTNN + (μ/2) α_n tr( (Z^(n)_(2))^T L_n Z^(n)_(2) ) ) + (λ/2) ||X − Ψ([Z])||_F^2
s.t.  M^(n) = Z^(n),  n = 1, …, N
      P_Ω(X) = P_Ω(T)
where the set [M] = {M^(1), …, M^(N)} denotes a tensor sequence and M^(n) denotes the auxiliary tensor corresponding to the n-th TR factor Z^(n). By incorporating the additional equality constraints M^(n) = Z^(n) on the auxiliary tensors, the augmented Lagrangian function of the objective function is obtained as:
L(X, [Z], [M], [Y]) = Σ_{n=1}^N ( ||M^(n)||_TTNN + (μ/2) α_n tr( (Z^(n)_(2))^T L_n Z^(n)_(2) ) + <Y^(n), M^(n) − Z^(n)> + (β/2) ||M^(n) − Z^(n)||_F^2 ) + (λ/2) ||X − Ψ([Z])||_F^2
s.t.  P_Ω(X) = P_Ω(T)
where [Y] = {Y^(1), …, Y^(N)} is the set of Lagrange multipliers, Y^(n) is the n-th Lagrange multiplier, β > 0 is a penalty parameter, and <·,·> denotes the tensor inner product;
Then, each variable is updated alternately: for each variable, all other variables are fixed and the corresponding optimization sub-problem in steps S32) to S35) is solved in turn;
S32) Updating Z^(n)
The optimization sub-problem with respect to the variable Z^(n) is simplified to:
min_{Z^(n)}  (μ/2) α_n tr( (Z^(n)_(2))^T L_n Z^(n)_(2) ) + (λ/2) ||X_<n> − Z^(n)_(2) (Z^(≠n)_<2>)^T||_F^2 + (β/2) ||M^(n) − Z^(n) + Y^(n)/β||_F^2
where X_<n> denotes the circular mode-n unfolding matrix of the target tensor X, and Z^(≠n)_<2> denotes the circular mode-2 unfolding matrix of the sub-chain tensor generated by merging, via multilinear products, all TR factors except the n-th factor Z^(n);
By setting the first-order gradient of the above sub-problem with respect to Z^(n)_(2) to zero, solving the sub-problem becomes equivalent to solving the following general Sylvester matrix equation:
μ α_n L_n Z^(n)_(2) + Z^(n)_(2) ( λ (Z^(≠n)_<2>)^T Z^(≠n)_<2> + β I ) = λ X_<n> Z^(≠n)_<2> + β M^(n)_(2) + Y^(n)_(2)
where X_<n> denotes the circular mode-n unfolding matrix of the target tensor, M^(n)_(2) and Y^(n)_(2) denote the standard mode-2 unfolding matrices of M^(n) and Y^(n), respectively, and I is an identity matrix. Since the matrix −L_n and the matrix λ(Z^(≠n)_<2>)^T Z^(≠n)_<2> + βI have no common eigenvalues, this Sylvester matrix equation has a unique solution, which is computed by calling the sylvester function in Matlab;
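An equivalent sketch outside Matlab (illustrative only; the matrices below are random stand-ins, not the model's actual unfoldings) solves A X + X B = C through the Kronecker-product formulation:

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve A X + X B = C via the Kronecker-product formulation.

    vec(A X + X B) = (I kron A + B.T kron I) vec(X); the equation has a
    unique solution when A and -B share no common eigenvalues, matching
    the condition stated in the text for -L_n and the right coefficient.
    """
    m, k = C.shape
    K = np.kron(np.eye(k), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.reshape(-1, order='F'))  # column-major vec
    return x.reshape(m, k, order='F')
```

In practice one would call Matlab's sylvester or scipy.linalg.solve_sylvester rather than forming the Kronecker system, which is only tractable for small unfoldings.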
S33) Updating M^(n)
After Z^(n) is updated, the n-th unitary transformation matrix Φ_n in the transformed tensor nuclear norm is first updated according to the following formula:
[U, S, V] = SVD(Z^(n)_(3)),   Φ_n = U^H
where Z^(n)_(3) denotes the standard mode-3 unfolding matrix of Z^(n), [U, S, V] = SVD(Z^(n)_(3)) denotes the singular value decomposition of the unfolding matrix, U and V denote the left and right singular matrices, respectively, and S denotes the diagonal matrix;
Then, the optimization sub-problem with respect to the variable M^(n) is simplified to:
min_{M^(n)}  ||M^(n)||_TTNN + (β/2) ||M^(n) − Z^(n) + Y^(n)/β||_F^2
Let C^(n) = Z^(n) − Y^(n)/β and τ = 1/β; the above optimization sub-problem for the variable M^(n) is then equivalent to:

min_{M^(n)}  τ ||M^(n)||_TTNN + (1/2) ||M^(n) − C^(n)||_F^2
Further, C^(n) = Z^(n) − Y^(n)/β can be expressed through the transformed tensor singular value decomposition as C^(n) = U *_Φn S *_Φn V^H, where *_Φn denotes the tensor Φ-product under the unitary transformation matrix Φ_n, U and V are unitary tensors, and S is a diagonal tensor;
The optimization sub-problem for the variable M^(n) can be solved by the tensor singular value thresholding (t-SVT) operator, and the solution is expressed as:
M^(n) = U *_Φn S_τ *_Φn V^H
where the intermediate variable S_τ is obtained by first applying the tensor unitary transformation to S to obtain S̄ = Φ_n[S], then computing S̄_τ = max(S̄ − τ, 0) entrywise, and finally applying the inverse transformation S_τ = Φ_n^H[S̄_τ]; here max(S̄ − τ, 0) takes the larger of S̄ − τ and 0;
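The t-SVT computation above can be sketched as follows (a hypothetical helper, assuming the transform is applied along the third mode): transform, soft-threshold the singular values of each frontal slice, and transform back:

```python
import numpy as np

def t_svt(C, Phi, tau):
    """Tensor singular value thresholding under a unitary transform Phi.

    C   : (n1, n2, n3) tensor, Phi : (n3, n3) unitary matrix, tau >= 0.
    Each frontal slice of the transformed tensor has its singular values
    soft-thresholded (max(s - tau, 0)); the inverse transform follows.
    """
    C_bar = np.einsum('ij,abj->abi', Phi, C)          # tensor unitary transform
    M_bar = np.empty_like(C_bar)
    for i in range(C_bar.shape[2]):
        U, s, Vh = np.linalg.svd(C_bar[:, :, i], full_matrices=False)
        M_bar[:, :, i] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.einsum('ij,abj->abi', Phi.conj().T, M_bar)  # inverse transform
```

With tau = 0 the operator is the identity, and for large tau it shrinks the tensor toward zero, which is the behavior the proximal step of the TTNN term requires.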
S34) Updating X
The optimization sub-problem with respect to the variable X is formulated as:
min_X  (λ/2) ||X − Ψ([Z])||_F^2
s.t.  P_Ω(X) = P_Ω(T)
This is a convex optimization problem with an equality constraint, and the variable X is updated as:
X = P_Ω(T) + P_Ω̄(Ψ([Z]))
where P_Ω(·) denotes the projection operation onto the observed index set Ω, and P_Ω̄(·) denotes the projection operation onto the missing index set Ω̄;
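The closed-form update of X is a simple masked combination; a sketch (mask-based, with `approx` standing in for the TR reconstruction Ψ([Z])):

```python
import numpy as np

def update_X(T_obs, approx, mask):
    """X = P_Omega(T) + P_Omega_bar(approx).

    T_obs  : tensor holding the observed entries of the original data,
    approx : current reconstruction Psi([Z]) from the TR factors,
    mask   : boolean tensor, True on the observed index set Omega.
    Observed entries are kept; missing entries are filled from approx.
    """
    return np.where(mask, T_obs, approx)
```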
S35) Updating Y^(n)
Based on the ADMM (alternating direction method of multipliers) framework, the Lagrange multiplier Y^(n) is updated as:
Y^(n) = Y^(n) + β ( M^(n) − Z^(n) )
In addition, the penalty parameter β of the augmented Lagrangian function of the objective function is updated in each iteration by β = min(ρβ, β_max), where 1 < ρ < 1.5 is a tuning hyperparameter, β_max denotes the preset upper bound on β, and min(ρβ, β_max) takes the smaller of ρβ and β_max as the current value of β;
S36) Iterative updating
Steps S32)-S35) are repeated to update each variable alternately over multiple iterations. Two convergence conditions are set: a maximum number of iterations maxiter, and a relative-error threshold tol between two consecutive iterations, where the relative error between two iterations is computed as
err = ||X_cur − X_pre||_F / ||X_pre||_F, where X_cur denotes the current value of X and X_pre denotes the value of X from the previous iteration. When both of the above convergence conditions are satisfied, that is, when the maximum number of iterations maxiter is reached and the relative error between two iterations is less than the threshold tol, the iteration ends and the solution of the target tensor X is obtained;
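The penalty and stopping rules can be sketched as a loop skeleton (illustrative only: the per-variable updates are elided, and this sketch stops on either condition, the common practical reading of the two convergence tests):

```python
import numpy as np

def penalty_and_convergence_demo(X_seq, rho=1.01, beta0=1.0, beta_max=100.0,
                                 tol=1e-4, maxiter=300):
    """Drive the update beta = min(rho * beta, beta_max) and the
    relative-error stopping test over a given sequence of X iterates.

    X_seq stands in for the iterates produced by steps S32)-S35).
    Returns the final beta and the iteration count at termination.
    """
    beta = beta0
    X_pre = X_seq[0]
    for it, X_cur in enumerate(X_seq[1:], start=1):
        beta = min(rho * beta, beta_max)          # monotone penalty growth
        rel_err = (np.linalg.norm(X_cur - X_pre) /
                   max(np.linalg.norm(X_pre), 1e-12))
        if rel_err < tol or it >= maxiter:        # stopping test
            return beta, it
        X_pre = X_cur
    return beta, len(X_seq) - 1
```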
Step S4) The obtained solution of the target tensor X is converted into the corresponding format of the original visual data, yielding the final completion result for the incomplete original visual data.
2. The visual data completion method based on low-rank tensor ring decomposition and factor priors according to claim 1, wherein the original visual data comprises color images and color videos.
3. The visual data completion method based on low-rank tensor ring decomposition and factor priors according to claim 2, wherein the tuning hyperparameter ρ takes the value ρ = 1.01.
4. The visual data completion method based on low-rank tensor ring decomposition and factor priors according to claim 3, wherein the maximum number of iterations maxiter takes the value maxiter = 300.
5. The visual data completion method based on low-rank tensor ring decomposition and factor priors according to claim 4, wherein the relative-error threshold tol between two iterations takes the value tol = 10^-4.
CN202210526890.4A 2022-05-16 2022-05-16 Visual data completion method based on low-rank tensor ring decomposition and factor prior Active CN114841888B (en)

Publications (2)

Publication Number Publication Date
CN114841888A true CN114841888A (en) 2022-08-02
CN114841888B CN114841888B (en) 2023-03-28
