CN114841888A - Visual data completion method based on low-rank tensor ring decomposition and factor prior - Google Patents
- Publication number: CN114841888A (application CN202210526890.4A)
- Authority: CN (China)
- Prior art keywords: tensor, matrix, rank, factor, visual data
- Legal status: Granted (the listed status is an assumption and is not a legal conclusion)
Classifications
- G - PHYSICS
  - G06 - COMPUTING; CALCULATING OR COUNTING
    - G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00 - Image enhancement or restoration
      - G06T5/77 - Retouching; Inpainting; Scratch removal
    - G06F - ELECTRIC DIGITAL DATA PROCESSING
      - G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
      - G06F17/10 - Complex mathematical operations
      - G06F17/15 - Correlation function computation including computation of convolution operations
Abstract
The invention discloses a visual data completion method based on low-rank tensor ring decomposition and a factor prior. To address the problem that traditional tensor-decomposition-based data completion algorithms depend on the choice of initial rank, which makes the recovered results unstable and ineffective, a hierarchical tensor decomposition model is designed that performs tensor ring decomposition and completion simultaneously. In the first layer, the incomplete tensor is represented as a series of third-order factors by tensor ring decomposition; in the second layer, a transformed tensor nuclear norm expresses the low-rank constraint on the factors, and a graph-regularized factor prior limits the degrees of freedom of each factor. By exploiting both the low-rank structure and the prior information of the factor space, the model performs an implicit rank adjustment, which improves robustness to rank selection and relieves the burden of searching for the optimal initial rank; at the same time, the latent information of the tensor data is fully exploited, further improving completion performance.
Description
Technical field
The invention relates to the field of visual data completion, and in particular to a visual data completion method based on low-rank tensor ring decomposition and a factor prior.
Background art
With the rapid development of information technology, modern society is entering an era of explosive data growth, producing large amounts of multi-attribute, multi-relational data. However, most of this data is incomplete, owing to occlusion, noise, local corruption, difficulty of collection, or loss during transmission. Incomplete data can greatly degrade data quality and complicate analysis. As a high-dimensional extension of vectors and matrices, tensors can express more complex internal data structure and are widely used in signal processing, computer vision, data mining, and neuroscience. Matrix-based completion methods destroy the spatial structure of the original multi-dimensional data and perform poorly. Tensor completion has therefore received much attention in recent years and is one of the important problems in tensor analysis: it recovers the values of missing elements from the observed elements by exploiting prior information and structural properties of the data. In fact, most real-world natural data, such as color images and color videos, are low-rank or approximately low-rank, so a low-rank prior can be used to recover incomplete data.

Following the success of low-rank matrix completion, low-rank constraints have become a powerful tool for recovering missing entries of higher-order tensors, exploiting the global information of the tensor to estimate missing data effectively. A fundamental issue in low-rank tensor completion is the definition of tensor rank. Unlike matrix rank, however, the definition of tensor rank is not unique: different tensor decompositions define different types of tensor rank.
Tensor decomposition is an important topic in tensor data analysis. Through tensor decomposition, the essential features of the original tensor data can be extracted and a low-dimensional representation obtained, while the internal structural information of the data is preserved. In recent years, tensor networks have become a major tool for analyzing large-scale tensor data. Since its introduction, tensor ring decomposition has been studied across disciplines because of its greater representational power and flexibility, and ample theory and practice have confirmed its feasibility and effectiveness for tensor completion tasks. However, while achieving excellent performance, existing completion methods based on tensor ring decomposition often depend on a good initial rank estimate and incur heavy computational overhead. Determining the optimal initial rank is difficult in practice, since the computational complexity of rank search grows exponentially with the dimension of the rank. The completion result is affected by the initial rank and may overfit. In addition, the high computational complexity of tensor-ring-based models makes existing methods inefficient and greatly limits their practical application. In short, sensitivity to the initial rank and high computational cost remain challenging problems for tensor-ring-based completion methods, so developing robust and efficient completion algorithms based on tensor ring decomposition is of great importance.
Summary of the invention
To address the problem that traditional tensor-decomposition-based data completion algorithms depend on the choice of initial rank, which makes the recovered results unstable and ineffective, the invention provides a visual data completion method based on low-rank tensor ring decomposition and a factor prior.
The visual data completion method based on low-rank tensor ring decomposition and a factor prior comprises the following steps:
S1: Target tensor initialization. Represent the incomplete original visual data as the tensor to be completed $\mathcal{T}$, determine the observation index set $\Omega$, and initialize the target tensor $\mathcal{X}$ from $\mathcal{T}$ as the input of the data completion model;
S2: Model construction. Taking the simple tensor ring (TR) completion model as the basic framework, a hierarchical tensor decomposition model is designed: the TR factors are given a low-rank constraint via the transformed tensor nuclear norm, and factor prior information is incorporated to limit the degrees of freedom of each TR factor. This yields a visual data completion model based on low-rank tensor ring decomposition and a factor prior, and its objective function;
S3: Model solving. The objective function is solved within the computational framework of the alternating direction method of multipliers (ADMM). By constructing the augmented Lagrangian of the objective function, its optimization is decomposed into several subproblems that are solved separately; the intermediate variables are updated iteratively by solving each subproblem in turn, and after the iterations converge the solution of the target tensor $\mathcal{X}$ is output;
S4: Convert the solution of the target tensor $\mathcal{X}$ into the format of the original visual data to obtain the final completion result.
Step S1 comprises the following sub-steps:
S11: Acquire the incomplete original visual data and store it in tensor form, obtaining the tensor to be completed $\mathcal{T}$; take the index positions of all known pixels of the original visual data to form the observation index set $\Omega$;
S12: Initialize the target tensor $\mathcal{X}$ from the tensor to be completed $\mathcal{T}$ so that $P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{T})$ and $P_{\bar\Omega}(\mathcal{X}) = 0$, where $\bar\Omega$ is the complement of $\Omega$; $P_\Omega(\mathcal{X})$ denotes the known entries of the target tensor, $P_\Omega(\mathcal{T})$ the known entries of the tensor to be completed, and $P_{\bar\Omega}(\mathcal{X})$ the missing entries of the target tensor.
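A minimal numerical sketch of this initialization, with a boolean mask playing the role of $\Omega$ (array and function names here are illustrative, not from the patent):

```python
import numpy as np

def init_target(t_incomplete, mask):
    """Copy observed entries (mask == True) from the incomplete tensor T
    into the target tensor X and zero-fill the missing entries."""
    x = np.zeros_like(t_incomplete, dtype=float)
    x[mask] = t_incomplete[mask]
    return x

# tiny example: a 2x2x1 "image" with one missing pixel
t = np.array([[[1.0], [2.0]], [[3.0], [0.0]]])
omega = np.array([[[True], [True]], [[True], [False]]])
x0 = init_target(t, omega)   # x0 agrees with t on omega, is 0 elsewhere
```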
Step S2 comprises the following sub-steps:
S21: By finding a tensor ring decomposition representation of the incomplete original visual data from its known entries and then using the TR factors of that representation to estimate the missing entries, the simple tensor ring completion model is obtained:

$$\min_{[\mathcal{G}]}\ \frac{1}{2}\left\|P_\Omega(\mathcal{T})-P_\Omega(\Psi([\mathcal{G}]))\right\|_F^2$$

where $[\mathcal{G}]=\{\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(N)}\}$ denotes the set of TR factors, $\Psi([\mathcal{G}])$ is the tensor represented by the tensor ring decomposition, $P_\Omega(\cdot)$ denotes the projection onto the observation index set $\Omega$, and $\|\cdot\|_F$ denotes the Frobenius norm of a tensor;
S22: On the basis of the simple tensor ring completion model, each TR factor is further constrained via the transformed tensor nuclear norm to exploit the global low-rank structure of the tensor data, giving the basic low-rank tensor ring completion model:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\lambda\sum_{n=1}^{N}\left\|\mathcal{G}^{(n)}\right\|_{\mathrm{TTNN}}\quad\text{s.t. }P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

where the target tensor $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$, $\mathbb{R}$ denotes the real field, $N$ is the order of $\mathcal{X}$, and $I_n$ is the dimension of its $n$-th mode, $n=1,2,\ldots,N$. $[\mathcal{G}]$ denotes the set of TR factors, and $\mathcal{G}^{(n)}\in\mathbb{R}^{R_{n-1}\times I_n\times R_n}$ is the $n$-th TR factor, with $R_{n-1}$, $I_n$, and $R_n$ its three dimensions. $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm and $\lambda>0$ is a trade-off parameter.
By restricting the TR factors through low-rank constraints, this basic low-rank tensor ring completion model implicitly adjusts the TR rank over the iterations, so that the TR rank gradually approaches the actual rank of the tensor ring decomposition, enhancing robustness to the initial rank selection;
S23: To further improve completion performance on visual data, a factor prior can be added to fully exploit the latent information of the visual data. Using the low-rank assumption on the TR factors and a graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and a factor prior is obtained:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\lambda\sum_{n=1}^{N}\left\|\mathcal{G}^{(n)}\right\|_{\mathrm{TTNN}}+\frac{\mu}{2}\sum_{n=1}^{N}\alpha_n\,\mathrm{tr}\!\left(\mathbf{G}^{(n)T}_{(2)}\mathbf{L}_n\mathbf{G}^{(n)}_{(2)}\right)\quad\text{s.t. }P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

where the minimized expression is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor prior, and the side condition is its constraint. $\alpha=[\alpha_1,\alpha_2,\ldots,\alpha_N]$ is a vector of graph regularization parameters, with $\alpha_n$ its $n$-th element, $n=1,2,\ldots,N$. $\mu$ and $\lambda$ are trade-off parameters with $\mu>0$, $\lambda>0$. $\mathbf{L}_n$ denotes the $n$-th Laplacian matrix, $\mathbf{G}^{(n)}_{(2)}$ the standard mode-2 unfolding matrix of the $n$-th TR factor $\mathcal{G}^{(n)}$, $\mathrm{tr}(\cdot)$ the matrix trace, and the superscript $T$ the matrix transpose.
Step S3 comprises the following sub-steps:
S31: To solve the objective function with ADMM, a series of auxiliary tensors is first introduced to simplify the optimization, so the problem can be reformulated as:

$$\min_{\mathcal{X},[\mathcal{G}],[\mathcal{M}]}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\lambda\sum_{n=1}^{N}\left\|\mathcal{M}^{(n)}\right\|_{\mathrm{TTNN}}+\frac{\mu}{2}\sum_{n=1}^{N}\alpha_n\,\mathrm{tr}\!\left(\mathbf{G}^{(n)T}_{(2)}\mathbf{L}_n\mathbf{G}^{(n)}_{(2)}\right)\quad\text{s.t. }\mathcal{M}^{(n)}=\mathcal{G}^{(n)},\ P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

where $[\mathcal{M}]=\{\mathcal{M}^{(1)},\ldots,\mathcal{M}^{(N)}\}$ denotes a sequence of auxiliary tensors and $\mathcal{M}^{(n)}$ is the auxiliary tensor of the $n$-th TR factor $\mathcal{G}^{(n)}$. Incorporating the equality constraints $\mathcal{M}^{(n)}=\mathcal{G}^{(n)}$, $n=1,2,\ldots,N$, the augmented Lagrangian of the objective function is obtained:

$$\mathcal{L}\big(\mathcal{X},[\mathcal{G}],[\mathcal{M}],[\mathcal{Y}]\big)=\frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\sum_{n=1}^{N}\Big(\lambda\left\|\mathcal{M}^{(n)}\right\|_{\mathrm{TTNN}}+\frac{\mu\alpha_n}{2}\mathrm{tr}\!\left(\mathbf{G}^{(n)T}_{(2)}\mathbf{L}_n\mathbf{G}^{(n)}_{(2)}\right)+\left\langle\mathcal{Y}^{(n)},\mathcal{G}^{(n)}-\mathcal{M}^{(n)}\right\rangle+\frac{\beta}{2}\left\|\mathcal{G}^{(n)}-\mathcal{M}^{(n)}\right\|_F^2\Big)$$

where $[\mathcal{Y}]$ is the set of Lagrange multipliers, $\mathcal{Y}^{(n)}$ is the $n$-th Lagrange multiplier, $N$ is the total number of Lagrange multipliers, $\beta>0$ is a penalty parameter, and $\langle\cdot,\cdot\rangle$ denotes the tensor inner product.
Then, each variable is updated alternately: for each variable, the other variables are fixed and the corresponding optimization subproblem in S32 to S35 is solved in turn.
S32: Updating $[\mathcal{G}]$. The optimization subproblem for the variable $\mathcal{G}^{(n)}$ can be simplified to:

$$\min_{\mathcal{G}^{(n)}}\ \frac{1}{2}\left\|\mathbf{X}_{<n>}-\mathbf{G}^{(n)}_{(2)}\mathbf{G}^{(\neq n)T}_{<2>}\right\|_F^2+\frac{\mu\alpha_n}{2}\mathrm{tr}\!\left(\mathbf{G}^{(n)T}_{(2)}\mathbf{L}_n\mathbf{G}^{(n)}_{(2)}\right)+\frac{\beta}{2}\left\|\mathcal{G}^{(n)}-\mathcal{M}^{(n)}+\frac{1}{\beta}\mathcal{Y}^{(n)}\right\|_F^2$$

where $\mathbf{X}_{<n>}$ denotes the circular mode-$n$ unfolding matrix of the target tensor $\mathcal{X}$, and $\mathbf{G}^{(\neq n)}_{<2>}$ denotes the circular mode-2 unfolding matrix of the subchain tensor generated by merging, via multilinear products, all factors except the $n$-th TR factor $\mathcal{G}^{(n)}$. Solving this subproblem updates the variable $\mathcal{G}^{(n)}$;
S33: Updating $[\mathcal{M}]$. The optimization subproblem for the variable $\mathcal{M}^{(n)}$ can be written as:

$$\min_{\mathcal{M}^{(n)}}\ \lambda\left\|\mathcal{M}^{(n)}\right\|_{\mathrm{TTNN}}+\frac{\beta}{2}\left\|\mathcal{G}^{(n)}-\mathcal{M}^{(n)}+\frac{1}{\beta}\mathcal{Y}^{(n)}\right\|_F^2$$

Solving this subproblem updates the variable $\mathcal{M}^{(n)}$;
S34: Updating $\mathcal{X}$. The optimization subproblem for the variable $\mathcal{X}$ can be formulated as:

$$\min_{\mathcal{X}}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2\quad\text{s.t. }P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

whose solution keeps the observed entries and fills the missing ones, $\mathcal{X}=P_\Omega(\mathcal{T})+P_{\bar\Omega}(\Psi([\mathcal{G}]))$; this updates the variable $\mathcal{X}$;
S35: Updating $[\mathcal{Y}]$. Following the ADMM scheme, the Lagrange multipliers are updated as:

$$\mathcal{Y}^{(n)}\leftarrow\mathcal{Y}^{(n)}+\beta\left(\mathcal{G}^{(n)}-\mathcal{M}^{(n)}\right),\quad n=1,2,\ldots,N$$

In addition, the penalty parameter $\beta$ of the augmented Lagrangian can be updated at each iteration by $\beta=\min(\rho\beta,\beta_{\max})$, where $1<\rho<1.5$ is a tuning hyperparameter, $\beta_{\max}$ is a preset upper bound on $\beta$, and $\min(\rho\beta,\beta_{\max})$ takes the smaller of $\rho\beta$ and $\beta_{\max}$ as the current value of $\beta$;
S36: Repeat steps S32-S35, updating each variable alternately over multiple iterations. Two stopping conditions are set: a maximum number of iterations maxiter and a threshold tol on the relative error between two successive iterations. The iteration ends once the relative error between two iterations falls below the threshold tol, or the maximum number of iterations maxiter is reached, yielding the solution of the target tensor $\mathcal{X}$.
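The outer iteration of steps S32-S36, including the penalty update of S35 and the stopping rule of S36, can be sketched as a generic loop; the `one_round` callable and all names below are illustrative placeholders, not the patent's implementation:

```python
import numpy as np

def admm_outer_loop(x0, one_round, maxiter=200, tol=1e-5,
                    beta=1.0, rho=1.1, beta_max=1e3):
    """Generic ADMM-style outer loop: `one_round` performs one pass of the
    variable updates (S32-S35) and returns the new target-tensor estimate."""
    x = np.asarray(x0, dtype=float)
    for it in range(1, maxiter + 1):
        x_new = one_round(x, beta)
        # relative change between successive iterates (guard against /0)
        rel_err = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        beta = min(rho * beta, beta_max)   # penalty update: beta = min(rho*beta, beta_max)
        if rel_err < tol:                  # stop on small relative change
            break
    return x, it

# toy "update" contracting toward the fixed point 2.0
x_final, n_it = admm_outer_loop(np.array([0.0]), lambda x, b: (x + 2.0) / 2.0)
```

In practice `one_round` would carry the factors, auxiliary tensors, and multipliers along as state; only the loop skeleton and stopping logic are meant to match the description above.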
By simultaneously exploiting the low-rank structure and the prior information of the factor space, the invention designs a hierarchical tensor decomposition model that achieves tensor ring decomposition and completion at the same time. In the first layer, the incomplete tensor is represented as a series of third-order TR factors by tensor ring decomposition. In the second layer, the transformed tensor nuclear norm expresses the low-rank constraint on the TR factors, and a graph-regularized factor prior strategy is adopted. The low-rank constraint on the TR factors gives the data completion model an implicit rank adjustment, enhancing its robustness to TR rank selection and relieving the burden of searching for the optimal TR rank, while the factor prior fully exploits the latent information of the original visual data and helps further improve completion performance.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall framework of an embodiment of the invention;
Fig. 2 is a simplified flowchart of the visual data completion method based on low-rank tensor ring decomposition and factor prior in an embodiment of the invention;
Fig. 3 shows the original color image data in an embodiment of the invention;
Fig. 4 shows the original color video data in an embodiment of the invention;
Fig. 5 shows the completion results for color images under different missing rates in an embodiment of the invention;
Fig. 6 shows the completion results for color videos under different missing rates in an embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The invention proposes a visual data completion method based on low-rank tensor ring decomposition and a factor prior, which comprises the following steps:
Step S1: Target tensor initialization.
S11: Acquire the incomplete original visual data (e.g. color images or color videos), read the original visual data file with missing entries into Matlab, and store it in tensor form to obtain the tensor to be completed $\mathcal{T}$; take the index positions of all known pixels of the original visual data to form the observation index set $\Omega$;
S12: Initialize the target tensor $\mathcal{X}$ from the tensor to be completed $\mathcal{T}$ so that the mapping satisfies $P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$ and $P_{\bar\Omega}(\mathcal{X})=0$, where $\bar\Omega$, the complement of $\Omega$, denotes the missing index set; $P_\Omega(\mathcal{X})$ denotes the known entries of the target tensor, $P_\Omega(\mathcal{T})$ the known entries of the tensor to be completed, and $P_{\bar\Omega}(\mathcal{X})$ the missing entries of the target tensor.
Step S2: Model construction.
S21: By finding the corresponding tensor ring decomposition representation from the known entries of the incomplete original visual data and then using its TR factors to estimate the missing entries, the simple tensor ring completion model is obtained:

$$\min_{[\mathcal{G}]}\ \frac{1}{2}\left\|P_\Omega(\mathcal{T})-P_\Omega(\Psi([\mathcal{G}]))\right\|_F^2$$

where $[\mathcal{G}]=\{\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(N)}\}$ denotes the set of TR factors, $\mathcal{G}^{(n)}$ the $n$-th TR factor, $n=1,2,\ldots,N$, $\Psi([\mathcal{G}])$ the tensor represented by the tensor ring decomposition, $P_\Omega(\cdot)$ the projection onto the observation index set $\Omega$, and $\|\cdot\|_F$ the Frobenius norm of a tensor. To overcome the dependence of such low-rank tensor completion methods on the initial rank, the simple tensor ring completion model is improved below;
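For concreteness, the tensor represented by a set of TR factors (written $\Psi([\mathcal{G}])$ above) can be contracted entry-wise as the trace of a product of factor slices; the following direct, unoptimized sketch assumes nothing beyond the TR definition (names are illustrative):

```python
import numpy as np

def tr_to_full(factors):
    """Contract third-order TR factors G^(n) of shape (R_{n-1}, I_n, R_n)
    into the full tensor: X[i1,...,iN] = trace(G1[:,i1,:] @ ... @ GN[:,iN,:])."""
    shape = tuple(g.shape[1] for g in factors)
    full = np.empty(shape)
    for idx in np.ndindex(*shape):
        m = factors[0][:, idx[0], :]
        for n in range(1, len(factors)):
            m = m @ factors[n][:, idx[n], :]
        full[idx] = np.trace(m)
    return full

# sanity check: with all TR ranks equal to 1 the result is an outer product
g1 = np.array([[[1.0], [2.0]]])   # shape (1, 2, 1)
g2 = np.array([[[3.0], [4.0]]])   # shape (1, 2, 1)
x = tr_to_full([g1, g2])          # x[i, j] = g1[0, i, 0] * g2[0, j, 0]
```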
S22: First, the transformed tensor singular value decomposition and the necessary basics of tensor algebra are introduced.
Tensor unitary transform: for a third-order tensor $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times I_3}$, suppose $\Phi$ is a unitary transform matrix satisfying $\Phi\Phi^H=\Phi^H\Phi=\mathbf{I}$. The unitary transform of the tensor $\mathcal{A}$ is defined as:

$$\bar{\mathcal{A}}=\mathcal{A}\times_3\Phi$$

where $\bar{\mathcal{A}}$ denotes the unitary transform of the tensor $\mathcal{A}$ and $\mathcal{A}\times_3\Phi$ the mode-3 product of $\mathcal{A}$ with the matrix $\Phi$. $\mathbf{I}$ denotes the identity matrix, the superscript $H$ the conjugate transpose of a matrix, $\mathbb{R}$ the real field, and $I_{k'}$, $k'=1,2,3$, the mode-$k'$ dimensions of $\mathcal{A}$.
Block diagonal matrix: the block diagonal matrix built from all frontal slices of $\bar{\mathcal{A}}$ is defined as:

$$\bar{\mathbf{A}}=\mathrm{bdiag}(\bar{\mathcal{A}})=\begin{bmatrix}\bar{\mathbf{A}}^{(1)}&&\\&\ddots&\\&&\bar{\mathbf{A}}^{(I_3)}\end{bmatrix}$$

where $\bar{\mathbf{A}}^{(i)}$ is the $i$-th frontal slice of $\bar{\mathcal{A}}$, $i=1,2,\ldots,I_3$, and $\bar{\mathbf{A}}$ can be converted back to a tensor by the folding operator $\mathrm{fold}(\cdot)$, i.e. $\bar{\mathcal{A}}=\mathrm{fold}(\bar{\mathbf{A}})$.
Tensor $\Phi$-product: the $\Phi$-product of two third-order tensors is defined via the products of their frontal slices in the unitary transform domain. For two tensors $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times I_3}$ and $\mathcal{B}\in\mathbb{R}^{I_2\times I_4\times I_3}$, the tensor $\Phi$-product $\mathcal{C}=\mathcal{A}*_\Phi\mathcal{B}$ is defined slice-wise by:

$$\bar{\mathcal{C}}^{(i)}=\bar{\mathcal{A}}^{(i)}\,\bar{\mathcal{B}}^{(i)},\qquad\mathcal{C}=\bar{\mathcal{C}}\times_3\Phi^H$$

where $*_\Phi$ denotes the tensor $\Phi$-product and $\bar{\mathcal{A}}$ the unitary transform of $\mathcal{A}$. The result is a third-order tensor $\mathcal{C}\in\mathbb{R}^{I_1\times I_4\times I_3}$; the superscript $H$ denotes the conjugate transpose and $I_4$ is the mode-2 dimension of $\mathcal{B}$.
Transformed tensor singular value decomposition: mainly used to factorize third-order tensors, it replaces the discrete Fourier transform matrix of the traditional tensor singular value decomposition with the unitary transform matrix $\Phi$. For a third-order tensor $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times I_3}$, the transformed tensor singular value decomposition can be expressed as:

$$\mathcal{A}=\mathcal{U}*_\Phi\mathcal{S}*_\Phi\mathcal{V}^H$$

where $\mathcal{U}$ and $\mathcal{V}$ are unitary tensors and $\mathcal{S}$ is a diagonal tensor.
Based on the transformed tensor singular value decomposition, the transformed tensor nuclear norm can be defined. For a third-order tensor $\mathcal{A}\in\mathbb{R}^{I_1\times I_2\times I_3}$ and a unitary transform matrix $\Phi$, the transformed tensor nuclear norm of $\mathcal{A}$ is defined as:

$$\|\mathcal{A}\|_{\mathrm{TTNN}}=\sum_{i=1}^{I_3}\left\|\bar{\mathbf{A}}^{(i)}\right\|_*$$

where $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm and $\|\cdot\|_*$ the matrix nuclear norm; $\|\bar{\mathbf{A}}^{(i)}\|_*$ is the matrix nuclear norm of the $i$-th frontal slice of $\bar{\mathcal{A}}$, i.e. the sum of all singular values of that matrix.
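Under this definition, the transformed tensor nuclear norm can be sketched directly as a mode-3 transform followed by slice-wise nuclear norms (a straightforward illustration; names are not from the patent):

```python
import numpy as np

def ttnn(a, phi):
    """||A||_TTNN: apply the unitary transform phi along mode 3, then sum
    the nuclear norms (sums of singular values) of all frontal slices."""
    abar = np.einsum('ijk,ck->ijc', a, phi)   # mode-3 product A x_3 phi
    return sum(np.linalg.norm(abar[:, :, i], 'nuc')
               for i in range(abar.shape[2]))

# with phi = I the norm is just the sum of the slices' nuclear norms
a = np.zeros((2, 2, 2))
a[:, :, 0] = np.eye(2)              # singular values {1, 1}
a[:, :, 1] = np.diag([3.0, 0.0])    # singular values {3, 0}
val = ttnn(a, np.eye(2))            # -> 2 + 3 = 5
```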
Since the tensor rank and the ranks of the TR factors satisfy the relation $\mathrm{rank}(\mathbf{X}_{(n)})\le\mathrm{rank}(\mathbf{G}^{(n)}_{(2)})$, where $\mathbf{X}_{(n)}$ denotes the standard mode-$n$ unfolding matrix of the tensor $\mathcal{X}$, $\mathbf{G}^{(n)}_{(2)}$ the standard mode-2 unfolding matrix of $\mathcal{G}^{(n)}$, and $\mathrm{rank}(\cdot)$ the matrix rank function, the rank of the target tensor $\mathcal{X}$ is limited to some extent by the rank of the corresponding TR factor $\mathcal{G}^{(n)}$. This makes it possible to exploit the global low-rank structure of the tensor data by regularizing the TR factors. Moreover, the transformed tensor nuclear norm approximates the sum of the transformed multi-rank of a tensor and is a suitable surrogate for the tensor rank. Therefore, the transformed tensor nuclear norm can be used to further constrain each TR factor, giving the basic low-rank tensor ring completion model:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\lambda\sum_{n=1}^{N}\left\|\mathcal{G}^{(n)}\right\|_{\mathrm{TTNN}}\quad\text{s.t. }P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

where the target tensor $\mathcal{X}\in\mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$, $N$ is the order of $\mathcal{X}$, and $I_n$ is the dimension of its $n$-th mode. $[\mathcal{G}]$ denotes the set of TR factors, and $\mathcal{G}^{(n)}\in\mathbb{R}^{R_{n-1}\times I_n\times R_n}$ is the $n$-th TR factor, with $R_{n-1}$, $I_n$, and $R_n$ its three dimensions. $\|\cdot\|_{\mathrm{TTNN}}$ denotes the transformed tensor nuclear norm and $\lambda>0$ is a trade-off parameter. When this basic low-rank tensor ring completion model is optimized, the transformed tensor nuclear norms of all TR factors and the fitting error of the target tensor are minimized simultaneously. In addition, the transformed tensor singular value decomposition here involves a key unitary transform matrix $\Phi_n$. In this model, the unitary transform matrix $\Phi_n$ is constructed from the given third-order TR factor $\mathcal{G}^{(n)}$; since $\mathcal{G}^{(n)}$ is unknown, $\Phi_n$ can be updated iteratively. This process can be expressed as:

$$[\mathbf{U},\mathbf{S},\mathbf{V}]=\mathrm{SVD}\big(\mathbf{G}^{(n)}_{(3)}\big),\qquad\Phi_n=\mathbf{U}^H$$

where $\mathbf{G}^{(n)}_{(3)}$ denotes the standard mode-3 unfolding matrix of $\mathcal{G}^{(n)}$, $\mathrm{SVD}(\cdot)$ its singular value decomposition, $\mathbf{U}$ and $\mathbf{V}$ the left and right singular matrices respectively, and $\mathbf{S}$ the diagonal matrix of singular values. In the transformed tensor singular value decomposition, $\mathbf{U}^H$ is chosen as the unitary transform matrix. Suppose the rank of $\mathbf{G}^{(n)}_{(3)}$ satisfies $\mathrm{rank}(\mathbf{G}^{(n)}_{(3)})=r\le R_n$; then performing the tensor unitary transform yields a tensor whose last $R_n-r$ frontal slices are all zero matrices. Hence $\mathbf{U}^H$ as a unitary transform matrix helps to further exploit the low-rank information of the TR factor $\mathcal{G}^{(n)}$.
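The construction of $\Phi_n$ from the SVD of the mode-3 unfolding, and the resulting zero frontal slices, can be sketched as follows (array names are illustrative, not from the patent):

```python
import numpy as np

def unitary_transform_from_factor(g):
    """Build Phi_n = U^H from the SVD of the mode-3 unfolding of the TR
    factor g (shape (R_prev, I_n, R_n)), then apply the mode-3 transform."""
    r_n = g.shape[2]
    g3 = np.moveaxis(g, 2, 0).reshape(r_n, -1)     # standard mode-3 unfolding
    u, s, vt = np.linalg.svd(g3, full_matrices=True)
    phi = u.conj().T                               # Phi_n = U^H (unitary)
    gbar = np.einsum('abk,ck->abc', g, phi)        # G x_3 Phi_n
    return phi, gbar

# factor whose two mode-3 slices are linearly dependent (rank r = 1 < R_n = 2)
g = np.zeros((2, 3, 2))
g[:, :, 0] = np.arange(6, dtype=float).reshape(2, 3) + 1.0
g[:, :, 1] = 2.0 * g[:, :, 0]
phi, gbar = unitary_transform_from_factor(g)
# the last R_n - r = 1 frontal slice of the transformed factor vanishes
```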
S23: To further improve completion performance on visual data, a factor prior can be added to fully exploit the latent information of the data. Graph regularization, applied to visual data completion, can encode common image priors to aid image restoration. One widely used image prior is the local similarity prior, which assumes that adjacent rows and columns are highly correlated. In the tensor ring decomposition, the $n$-th TR factor $\mathcal{G}^{(n)}$ represents the information on the $n$-th mode of the original visual data. For example, if a color image is viewed as a third-order tensor, the first two TR factors obtained by tensor ring decomposition encode the variations in row space and column space, respectively. Therefore, the local pixel similarity of visual data such as color images and color videos can be described by an exact factor prior, and the weights of the single-factor graph can be defined as:

$$w_{ij}=\exp\!\left(-\frac{(i_k-j_k)^2}{2\sigma^2}\right),\quad k\in\{\mathrm{row},\mathrm{column}\}$$

where row and column denote the row space and column space respectively; if $k=\mathrm{row}$, then $i_k$ and $j_k$ denote any two index positions in the row space. $w_{ij}$ is the $(i,j)$-th element of the similarity matrix $\mathbf{W}$, and $\sigma$ is the average of all pairwise distances $i_k-j_k$. Let $\mathbf{D}$ be the diagonal matrix whose $(i,i)$-th element is $\sum_j w_{ij}$; the Laplacian matrix is then $\mathbf{L}=\mathbf{D}-\mathbf{W}$.
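A minimal sketch of the Laplacian construction follows. The Gaussian kernel form of the weights is an assumed concrete choice consistent with the description above (the text specifies only that $\sigma$ is the average pairwise distance); names are illustrative:

```python
import numpy as np

def local_similarity_laplacian(size):
    """Laplacian L = D - W of the local-similarity graph over row (or
    column) indices, with w_ij = exp(-(i-j)^2 / (2*sigma^2)) and sigma
    the mean pairwise index distance (assumed kernel form)."""
    idx = np.arange(size, dtype=float)
    dist = np.abs(idx[:, None] - idx[None, :])
    sigma = dist[dist > 0].mean()              # average pairwise distance
    w = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                   # no self-loops
    d = np.diag(w.sum(axis=1))                 # degree matrix D
    return d - w

L = local_similarity_laplacian(5)
```

By construction the rows of L sum to zero and L is symmetric, the two defining properties of a graph Laplacian.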
Using the low-rank assumption on the TR factors and the graph-regularized factor prior, the visual data completion model based on low-rank tensor ring decomposition and a factor prior is obtained:

$$\min_{\mathcal{X},[\mathcal{G}]}\ \frac{1}{2}\left\|\mathcal{X}-\Psi([\mathcal{G}])\right\|_F^2+\lambda\sum_{n=1}^{N}\left\|\mathcal{G}^{(n)}\right\|_{\mathrm{TTNN}}+\frac{\mu}{2}\sum_{n=1}^{N}\alpha_n\,\mathrm{tr}\!\left(\mathbf{G}^{(n)T}_{(2)}\mathbf{L}_n\mathbf{G}^{(n)}_{(2)}\right)\quad\text{s.t. }P_\Omega(\mathcal{X})=P_\Omega(\mathcal{T})$$

where the minimized expression is the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor prior, and the side condition is its constraint. $\alpha=[\alpha_1,\alpha_2,\ldots,\alpha_N]$ is a vector of graph regularization parameters, and $\mu$, $\lambda$ are trade-off parameters with $\mu>0$, $\lambda>0$. $\mathrm{tr}(\cdot)$ is the matrix trace operation. The Laplacian matrix $\mathbf{L}_n$ describes the internal interdependencies of the $n$-th TR factor, $\mathbf{G}^{(n)}_{(2)}$ denotes the standard mode-2 unfolding matrix of the $n$-th TR factor $\mathcal{G}^{(n)}$, and the superscript $T$ denotes matrix transpose.
Step S3: solve the model.
S31: construct the augmented Lagrangian function.
To solve the objective function of the visual data completion model based on low-rank tensor ring decomposition and factor priors within the ADMM computational framework, a series of auxiliary tensors is first introduced to simplify the optimization, so the optimization problem of the objective function can be reformulated as:
Here the collection denotes a sequence of tensors, the n-th of which is the auxiliary tensor corresponding to the n-th TR factor. Incorporating the additional equality constraints that couple the auxiliary tensors to the TR factors yields the augmented Lagrangian function of the objective:
Here the multipliers form a set with one Lagrange multiplier per equality constraint, β > 0 is a penalty parameter, and ⟨x, y⟩ denotes the tensor inner product. Each variable is then updated alternately by fixing the others and solving each of the subproblems S32 to S35 in turn.
S32: update of the TR factors.
The optimization subproblem for the n-th TR factor simplifies to:
Here X_&lt;n&gt; denotes the cyclic mode-n unfolding matrix of the target tensor, and the other term is the cyclic mode-2 unfolding matrix of the subchain tensor formed by merging, via multilinear products, all TR factors except the n-th.
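The cyclic mode-n unfolding used here permutes the modes cyclically so that mode n comes first and then flattens the remaining modes into the columns; a minimal sketch (0-based indexing is an implementation choice, not the patent's notation):

```python
import numpy as np

def circ_unfold(X, n):
    """Cyclic mode-n unfolding X_<n> (0-based n): cyclically permute the
    axes so mode n leads, then flatten the remaining modes."""
    N = X.ndim
    perm = tuple((n + k) % N for k in range(N))   # (n, n+1, ..., n-1)
    return np.transpose(X, perm).reshape(X.shape[n], -1)

X = np.arange(24).reshape(2, 3, 4)
```

For the 2×3×4 tensor above, `circ_unfold(X, 1)` returns a 3×8 matrix whose columns run over modes 2 and 0 in cyclic order.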
Setting the first-order gradient of the above expression with respect to the n-th TR factor to zero shows that solving the subproblem is equivalent to solving the following general Sylvester matrix equation:
Here X_&lt;n&gt; denotes the cyclic mode-n unfolding matrix of the target tensor, the remaining matrices are the standard mode-2 unfolding matrices of the corresponding auxiliary tensor and subchain tensor, and I is an identity matrix. Since the matrix −L_n and its counterpart share no common eigenvalues, the equation has a unique solution, which can be computed by calling the Sylvester function in Matlab.
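Outside Matlab, the same general Sylvester equation AG + GB = C can be solved with SciPy. The matrices below are placeholders with deliberately disjoint spectra; in the actual update, one block involves the graph Laplacian term and the other the subchain Gram matrix plus a shift (the patent's exact blocks survive only as images):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Placeholder coefficient blocks, chosen so A and -B share no eigenvalue,
# which is the uniqueness condition stated in the text.
A = np.diag([1.0, 2.0, 3.0, 4.0])
B = np.diag([10.0, 20.0, 30.0, 40.0, 50.0])
C = np.arange(20.0).reshape(4, 5)

G = solve_sylvester(A, B, C)    # solves A G + G B = C
assert np.allclose(A @ G + G @ B, C)
```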
S33: update of the auxiliary tensors.
After the TR factors are updated, the n-th unitary transformation matrix Φ_n in the transformed tensor nuclear norm is first updated according to the following formula:
Here the formula involves the standard mode-3 unfolding matrix of the corresponding tensor and its singular value decomposition, where U and V denote the left and right singular matrices respectively and S denotes the diagonal matrix of singular values.
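A sketch of this update. Taking Φ_n = Uᵀ, with U the left singular matrix of the mode-3 unfolding, is an assumption in the spirit of data-adaptive transformed tensor nuclear norms; the patent's exact update formula survives only as an image:

```python
import numpy as np

def update_phi(Z):
    """Update the unitary transform Phi_n from the mode-3 unfolding of an
    auxiliary tensor Z of shape (r1, I, r2).  Phi = U^T is an assumed
    form; the patent's exact formula is lost to extraction."""
    r1, I, r2 = Z.shape
    Z3 = np.moveaxis(Z, 2, 0).reshape(r2, r1 * I)   # standard mode-3 unfolding
    U, s, Vt = np.linalg.svd(Z3, full_matrices=True)
    return U.T                                       # unitary by construction

Phi = update_phi(np.random.default_rng(1).standard_normal((3, 8, 3)))
```

Because U is orthogonal, Φ_n satisfies Φ_nΦ_nᵀ = Φ_nᵀΦ_n = I, as the transformed tensor nuclear norm requires.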
Then the optimization subproblem for the n-th auxiliary tensor can be written as:
With the substitutions introduced above, this optimization subproblem is equivalent to:
Further, the tensor admits a transformed tensor singular value decomposition under the unitary transformation matrix Φ_n, in which the tensor Φ-product is taken, the two outer factors are unitary tensors, and the middle factor is a diagonal tensor.
This subproblem can be solved by the tensor singular value thresholding (t-SVT) operator, with the solution expressed as:
Here the intermediate variable is obtained via the mode-3 product of a tensor with the transformation matrix, and the thresholding operator keeps the larger of each shifted singular value and 0. Concretely: first apply the tensor unitary transform, then apply the thresholding formula, and finally apply the inverse transform to obtain the intermediate variable.
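The three steps just listed can be sketched as a generic transformed t-SVT. The threshold `tau` is a stand-in: the patent's exact threshold value (a function of the trade-off and penalty parameters) survives only as an image:

```python
import numpy as np

def t_svt(T, Phi, tau):
    """Transformed tensor singular value thresholding: mode-3 product
    with the unitary Phi, soft-threshold the singular values of every
    frontal slice with max(s - tau, 0), then transform back."""
    That = np.einsum('kl,ijl->ijk', Phi, T)          # T x_3 Phi
    out = np.empty_like(That)
    for k in range(That.shape[2]):                   # slice-wise matrix SVT
        U, s, Vt = np.linalg.svd(That[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.einsum('kl,ijl->ijk', Phi.T, out)      # inverse transform T x_3 Phi^T

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 6, 4))
Phi = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # a random unitary transform
Z = t_svt(T, Phi, 0.5)
```

With tau = 0 the operator is the identity, and since Φ is unitary the shrinkage never increases the Frobenius norm.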
S34: update of the target tensor.
The optimization subproblem for the target tensor can be formulated as:
This is a convex optimization problem with equality constraints, and the target tensor is updated as:
Here one operator denotes the projection onto the observed index set Ω, and the other denotes the projection onto the complementary set of missing indices.
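The projection-based update amounts to keeping observed entries fixed and filling missing ones from the model. In the sketch below, `estimate` is a stand-in for the combination of auxiliary tensors, TR-factor subchains and multipliers in the patent's closed-form update, which survives only as an image:

```python
import numpy as np

def update_target(observed, mask, estimate):
    """S34-style update: X = P_Omega(observed) + P_Omega_complement(estimate),
    i.e. observed pixels are kept and missing pixels come from the
    current model estimate."""
    return np.where(mask, observed, estimate)

obs = np.array([[1.0, 0.0], [0.0, 4.0]])
mask = np.array([[True, False], [False, True]])   # Omega: observed indices
est = np.full((2, 2), 9.0)                        # stand-in model estimate
X = update_target(obs, mask, est)
```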
S35: update of the Lagrange multipliers.
Following the ADMM scheme, the Lagrange multipliers are updated as:
In addition, the penalty parameter β of the augmented Lagrangian function is updated at each iteration as β = min(ρβ, β_max), where 1 &lt; ρ &lt; 1.5 is a tuning hyperparameter and β_max is a preset upper bound on β; min(ρβ, β_max) takes the smaller of ρβ and β_max as the current β. In the specific embodiment of the invention, ρ = 1.01.
S36: iterative updates.
Steps S32–S35 are repeated, updating each variable alternately over multiple iterations. Two stopping conditions are set: a maximum iteration count maxiter = 300 and a relative error threshold tol = 10⁻⁴ between successive iterates, where the relative error is the norm of the difference between the current and previous iterates of the target tensor divided by the norm of the previous iterate. The iteration terminates when the relative error falls below the threshold 10⁻⁴, with the cap of 300 iterations as a safeguard, yielding the solution of the target tensor.
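The stopping test can be sketched as follows. The Frobenius-norm ratio used here is the standard choice; the patent's exact relative-error formula survives only as an image:

```python
import numpy as np

def converged(X_new, X_old, it, tol=1e-4, maxiter=300):
    """S36 stopping test: relative change between successive iterates of
    the target tensor, with a maximum-iteration safeguard."""
    rel = np.linalg.norm(X_new - X_old) / max(np.linalg.norm(X_old), 1e-12)
    return rel < tol or it >= maxiter
```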
Step S4: convert the obtained solution of the target tensor into the corresponding format of the original visual data, giving the final completion result for the incomplete original visual data.
Embodiment
In this embodiment, the method is tested on given tensor data (the color images and color videos shown in Figures 3 and 4; the text above each picture gives the name of the dataset). At initialization the penalty parameter is set to β = 0.01, and the other parameters are tuned manually for best performance. Incomplete tensor data are generated by randomly deleting pixels from the visual data at several missing rates (MR ∈ {60%, 70%, 80%, 90%, 95%}), and the proposed technical scheme is applied to the tensor completion task. Figures 5 and 6 show the completion results on the color images and color videos respectively; peak signal-to-noise ratio (PSNR) is used to evaluate how well the method recovers the visual data, with the number above each picture giving the corresponding PSNR value. The higher the PSNR, the better the quality of the recovered image, and comparing the images before and after restoration confirms the effectiveness of the method. The final results show that, compared with the conventional methods HaLRTC and TRALS, the proposed method not only yields better overall visual quality but also recovers local texture details more faithfully, producing results closer to the original images. Measured by the visual quality metric PSNR, the proposed method likewise achieves higher recovery accuracy. In summary, the method effectively recovers the main content and texture details of incomplete visual data even at high missing rates, performs the tensor completion task with better performance, and has good application prospects.
The embodiments described above are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments, without creative effort, fall within the protection scope of the present invention.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210526890.4A CN114841888B (en) | 2022-05-16 | 2022-05-16 | Visual data completion method based on low-rank tensor ring decomposition and factor prior |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114841888A true CN114841888A (en) | 2022-08-02 |
CN114841888B CN114841888B (en) | 2023-03-28 |