CN107133930A - Row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation - Google Patents

Row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation Download PDF

Info

Publication number
CN107133930A
CN107133930A, CN201710298239.5A, CN201710298239A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710298239.5A
Other languages
Chinese (zh)
Inventor
杨敬钰
杨蕉如
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710298239.5A priority Critical patent/CN107133930A/en
Publication of CN107133930A publication Critical patent/CN107133930A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/77: Retouching; Inpainting; Scratch removal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/513: Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention belongs to the field of computer vision and aims to accurately fill images with missing pixel rows and columns. The technical scheme adopted by the present invention is a row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation. The steps are: introduce a low-rank prior based on low-rank matrix reconstruction theory to constrain the latent image; at the same time, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, on the basis of this joint low-rank and separable two-dimensional sparse prior, formulate the row-and-column missing image filling problem as solving a constrained optimization equation, thereby realizing the filling. The invention is mainly applied to computer vision processing.

Description

Row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation

Technical Field

The invention belongs to the field of computer vision, and in particular relates to a method for filling images with missing rows and columns based on low-rank matrix reconstruction and sparse representation.

Background Art

The problem of recovering an unknown complete matrix from a subset of its known pixels has attracted considerable attention in recent years. Such problems arise frequently in computer vision and machine learning applications, for example image inpainting, recommender systems, and background modeling.

Many methods have been proposed for the image filling problem. Because matrix completion is ill-posed, current methods generally assume that the latent matrix is low-rank or approximately low-rank and fill in the missing pixel values by low-rank matrix reconstruction; examples include singular value thresholding (SVT), the augmented Lagrange multiplier method (ALM), and the accelerated proximal gradient method (APG). These existing algorithms rely solely on the low-rank property of the image to fill missing pixel values. This works when pixels are missing at random and every row and every column of the image contains observations, but it fails when entire rows and columns are missing: a completion problem with many missing rows and columns cannot be solved under a low-rank constraint alone. In practical applications, however, such as image transmission and seismic data acquisition, the image matrix is likely to be degraded by missing rows and columns. It is therefore necessary to design a filling algorithm that can effectively recover missing rows and columns of a matrix.

To address the shortcomings of completion methods that use only the low-rank property, the research community has introduced sparse constraints on column vectors, enabling recovery of missing image rows. Owing to insufficient prior information, however, the case where rows and columns are missing simultaneously has remained unsolved. For this reason, the present invention introduces a joint low-rank and separable two-dimensional sparse prior into the model so as to fill matrices with missing rows and columns accurately.

Summary of the Invention

The present invention aims to remedy the deficiency of the prior art, namely to accurately fill images with missing pixel rows and columns. The technical scheme adopted is a row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation. The steps are: introduce a low-rank prior based on low-rank matrix reconstruction theory to constrain the latent image; at the same time, since each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, on the basis of this joint low-rank and separable two-dimensional sparse prior, formulate the row-and-column missing image filling problem as solving a constrained optimization equation, thereby realizing the filling.

Formulating the image filling problem with missing rows and columns as a constrained optimization equation is refined into the following steps:

1) The image filling problem with missing rows and columns is formulated as solving the following constrained optimization equation:
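The equation itself did not survive extraction; a reconstruction consistent with the term-by-term description that follows (the exact constraint layout is an assumption) is:

```latex
\begin{aligned}
\min_{A,\Sigma,B,C,E}\;& \operatorname{tr}(W_a\Sigma) \;+\; \gamma_B\,\lVert W_b \odot B\rVert_1 \;+\; \gamma_C\,\lVert W_c \odot C\rVert_1 \\
\text{s.t.}\;& D = A + E,\quad P_\Omega(E)=0,\quad A = \Phi_c B,\quad A^{\mathsf{T}} = \Phi_r C \qquad (1)
\end{aligned}
```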

where tr(·) is the trace of a matrix and the trace term is the low-rank prior term; ⊙ denotes the element-wise (Hadamard) product of two matrices; ||·||1 is the matrix l1 norm, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the known pixels of the observation matrix D with missing rows and columns; PΩ(·) is the projection operator giving the value of a variable projected onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ1, σ2, ..., σn]) is the diagonal matrix of the singular values of A in non-increasing order; Wa, Wb and Wc are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γB and γC are the regularization coefficients of the separable sparse terms; Φc and Φr are the trained column and row dictionaries, with corresponding coefficient matrices B and C; E represents the missing pixels of the observation matrix D;

The augmented Lagrange multiplier (ALM) method is used to transform the constrained optimization problem (1) into an unconstrained problem. The augmented Lagrangian is as follows:
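The Lagrangian did not survive extraction; a reconstruction consistent with the multipliers and penalty terms described below (the pairing of multipliers with constraints is an assumption) is:

```latex
\begin{aligned}
L(A,\Sigma,B,C,E,Y_1,Y_2,Y_3) =\;& \operatorname{tr}(W_a\Sigma) + \gamma_B\lVert W_b \odot B\rVert_1 + \gamma_C\lVert W_c \odot C\rVert_1 \\
&+ \langle Y_1,\, D - A - E\rangle + \tfrac{\mu_1}{2}\lVert D - A - E\rVert_F^2 \\
&+ \langle Y_2,\, A - \Phi_c B\rangle + \tfrac{\mu_2}{2}\lVert A - \Phi_c B\rVert_F^2 \\
&+ \langle Y_3,\, A^{\mathsf{T}} - \Phi_r C\rangle + \tfrac{\mu_3}{2}\lVert A^{\mathsf{T}} - \Phi_r C\rVert_F^2 \qquad (2)
\end{aligned}
```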

where Y1, Y2 and Y3 are Lagrange multiplier matrices, μ1, μ2 and μ3 are penalty factors, <·,·> denotes the inner product of two matrices, and ||·||F denotes the Frobenius norm of a matrix;

The solution process is: train the column and row dictionaries Φc and Φr; initialize the weight matrices Wa, Wb and Wc; then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y1, Y2 and Y3, the penalty factors μ1, μ2 and μ3, and the weight matrices Wa, Wb and Wc until the algorithm converges, at which point the iterate A(l) is the final solution A of the original problem.

Specifically, training the dictionaries Φc and Φr: the column and row dictionaries Φc and Φr are trained on a high-quality image dataset using an online learning algorithm.

Specifically, initializing the weight matrices Wa, Wb and Wc: let l be the reweighting count; when l = 0, all entries of the initial weight matrices Wa(0), Wb(0) and Wc(0) are set to 1, meaning the first iteration is not reweighted.

Specifically, the alternating direction method (ADM) is used to convert equation (2) into the following sequence of subproblems for iterative solution:
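The subproblem sequence did not survive extraction; a reconstruction in the standard ADM pattern, consistent with the variables and factors described below (the exact ordering and which iterates each subproblem uses are assumptions), is:

```latex
\begin{aligned}
B^{k+1} &= \arg\min_{B}\, L\!\left(A^{k},\Sigma^{k},B,C^{k},E^{k}\right), &
C^{k+1} &= \arg\min_{C}\, L\!\left(A^{k},\Sigma^{k},B^{k+1},C,E^{k}\right), \\
A^{k+1} &= \arg\min_{A}\, L\!\left(A,\Sigma,B^{k+1},C^{k+1},E^{k}\right), &
E^{k+1} &= \arg\min_{E}\, L\!\left(A^{k+1},\Sigma^{k+1},B^{k+1},C^{k+1},E\right), \\
Y_1^{k+1} &= Y_1^{k} + \mu_1^{k}\left(D - A^{k+1} - E^{k+1}\right), &
Y_2^{k+1} &= Y_2^{k} + \mu_2^{k}\left(A^{k+1} - \Phi_c B^{k+1}\right), \\
Y_3^{k+1} &= Y_3^{k} + \mu_3^{k}\left((A^{k+1})^{\mathsf{T}} - \Phi_r C^{k+1}\right), &
\mu_i^{k+1} &= \rho_i\,\mu_i^{k},\quad i=1,2,3 \qquad (3)
\end{aligned}
```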

In the above sequence, Bk+1, Ck+1, Ak+1 and Ek+1 denote the values of the variables B, C, A and E that minimize the corresponding objective functions; ρ1, ρ2 and ρ3 are multiplication factors, and k is the iteration count. The iteration then proceeds as follows:

1) Solve for Bk+1: Bk+1 is obtained with the accelerated proximal gradient algorithm;

Removing the terms that do not depend on B from the objective for B in (3) gives the following equation:

Using a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic function; introducing the auxiliary variable Z, the solution is finally obtained as:

where soft(·,·) is the shrinkage operator, ∇f(Zj) denotes the gradient of f, Lf is a constant, and the update rule for the variable Zj is as follows:

where tj is a sequence of constants and j is the inner iteration count;
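The shrinkage step and the momentum update of the accelerated proximal gradient method described above can be sketched in NumPy. The function names are illustrative, and the t-sequence update shown is the standard APG schedule, which is an assumption about the patent's unspecified constant sequence tj:

```python
import numpy as np

def soft(x, tau):
    # element-wise shrinkage operator: soft(x, tau) = sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def momentum_update(B_new, B_old, t_j):
    # Nesterov-style extrapolation used by accelerated proximal gradient methods:
    # t_{j+1} = (1 + sqrt(1 + 4 t_j^2)) / 2,  Z_{j+1} = B_{j+1} + ((t_j - 1)/t_{j+1}) (B_{j+1} - B_j)
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t_j ** 2)) / 2.0
    Z_next = B_new + ((t_j - 1.0) / t_next) * (B_new - B_old)
    return Z_next, t_next
```

With t_j = 1 the extrapolation weight is zero, so the first accelerated step reduces to a plain proximal step.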

2) Solve for Ck+1: Ck+1 is obtained with the accelerated proximal gradient algorithm;

Removing the terms that do not depend on C from the objective for C in (3) gives the following equation:

Using a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic function; introducing an auxiliary variable analogous to Z, the solution is finally obtained as:

where soft(·,·) is the shrinkage operator, the gradient and the constant Lf are defined analogously, and the update rule for the auxiliary variable is as follows:

where tj is a sequence of constants and j is the inner iteration count;

3) Solve for Ak+1: Ak+1 is obtained by singular value thresholding (SVT);

Removing the terms that do not depend on A from the objective for A in (3) and completing the square gives:

where Qk+1 is the matrix obtained by completing the square; applying singular value thresholding to Qk+1 yields:

where Hk+1 and Vk+1 are the left and right singular matrices of Qk+1, respectively;
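The SVT step above can be sketched in NumPy: decompose Q, shrink its singular values, and rebuild. The scalar threshold tau below is a simplification; in the patent the threshold is weighted by Wa, so this is a sketch of the mechanism, not the exact weighted form:

```python
import numpy as np

def svt(Q, tau):
    # singular value thresholding: SVD of Q, shrink singular values by tau, reassemble
    H, s, Vt = np.linalg.svd(Q, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # non-negative shrinkage of singular values
    return H @ np.diag(s_shrunk) @ Vt
```

Applied to a rank-1 matrix with singular value 3, a threshold of 1 returns the same rank-1 matrix scaled to singular value 2.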

4) Solve for Ek+1: the solution for Ek+1 consists of two parts;

Inside the observation space Ω, E is 0; outside Ω, i.e., in the complementary space, E is obtained from the first-order optimality condition. Combining the two parts gives the final solution for E:
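The closed-form expression did not survive extraction; a reconstruction from the first-order condition on the Y1/μ1 terms of (2) (an assumption, consistent with the two-part description above) is:

```latex
E^{k+1} = P_{\bar{\Omega}}\!\left( D - A^{k+1} + \frac{Y_1^{k}}{\mu_1^{k}} \right),
\qquad P_{\Omega}\!\left(E^{k+1}\right) = 0
```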

5) Repeat steps 1), 2), 3) and 4) until the algorithm converges; the iterates Ak+1, Σk+1, Bk+1, Ck+1 and Ek+1 are then the non-reweighted results A(l), Σ(l), B(l), C(l) and E(l), where l is the reweighting count;

6) Update the weight matrices Wa, Wb and Wc;

To cancel the influence of signal magnitude on the nuclear-norm and l1-norm terms, a reweighting scheme is introduced: based on the magnitudes of the currently estimated singular value matrix Σ(l) and coefficient matrices B(l) and C(l), the weight matrices Wa, Wb and Wc are updated iteratively according to an inverse proportionality principle:

where the subscripts are the position coordinates of pixels in the image, and ε is an arbitrarily small positive number.
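The inverse-proportionality rule above amounts to giving large-magnitude entries small weights and vice versa; a minimal NumPy sketch (the function name is illustrative) is:

```python
import numpy as np

def reweight(X, eps=1e-6):
    # inverse-proportional reweighting: W_ij = 1 / (|X_ij| + eps),
    # so large singular values / coefficients are penalized less on the next pass
    return 1.0 / (np.abs(X) + eps)
```

Entries near zero receive a weight close to 1/ε, pushing them further toward zero in the next reweighted iteration.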

7) Repeat steps 1)-6) until the algorithm converges; the iterate A(l) is then the final solution A of the original problem.

Technical features and effects of the present invention:

Aimed at the problem of filling images with missing rows and columns, the method of the invention solves it by introducing a separable two-dimensional sparse prior. The present invention has the following features:

1. The augmented Lagrange multiplier method (ALM), the alternating direction method (ADM), the accelerated proximal gradient algorithm and singular value thresholding are used to solve the subproblems, integrating the advantages of existing algorithms.

2. Column and row dictionaries are used to sparsely represent the columns and rows of the image, which is more efficient than a traditional patch dictionary.

3. Low-rank matrix reconstruction theory is combined with sparse representation theory: dictionary learning is introduced into the traditional low-rank reconstruction model, and a joint prior of low-rank information and separable two-dimensional sparsity is proposed, so that images with simultaneously missing rows and columns can be filled accurately.

4. The joint low-rank and sparse constraints on the damaged image improve filling performance: missing rows and columns can be filled, and randomly missing pixels can be filled more accurately.

Brief Description of the Drawings

The above advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:

Figure 1 is the flowchart of the present invention;

Figure 2 is the original ground-truth image without missing pixels;

Figure 3 shows damaged images with row-and-column and random missing pixels (black denotes missing pixels); from left to right the total missing rates are: (1) 10%; (2) 20%; (3) 30%; (4) 50%;

Figure 4 shows the filling results of the method of the present invention at the four missing rates: (1) 10% missing, PSNR = 40.79; (2) 20% missing, PSNR = 37.32; (3) 30% missing, PSNR = 35.69; (4) 50% missing, PSNR = 32.23.

Detailed Description

The row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation of the present invention is described in detail below in conjunction with the embodiments and drawings.

The present invention combines low-rank matrix reconstruction with sparse representation, introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model, and constrains the damaged image with a joint low-rank and separable two-dimensional sparse prior, thereby solving the row-and-column missing image filling problem that existing algorithms cannot handle. The method comprises the following steps:

1) Considering the low-rank property of natural images, a low-rank prior is introduced based on low-rank matrix reconstruction theory to constrain the latent image; at the same time, since each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, a separable two-dimensional sparse prior is introduced based on sparse representation theory; on the basis of this joint low-rank and separable two-dimensional sparse prior, the filling problem is formulated as solving the following constrained optimization equation:

where tr(·) is the trace of a matrix and the trace term is the low-rank prior term; ⊙ denotes the element-wise (Hadamard) product of two matrices; ||·||1 is the matrix l1 norm, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the known pixels of the observation matrix D with missing rows and columns; PΩ(·) is the projection operator giving the value of a variable projected onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ1, σ2, ..., σn]) is the diagonal matrix of the singular values of A in non-increasing order; Wa, Wb and Wc are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γB and γC are the regularization coefficients of the separable sparse terms; Φc and Φr are the trained column and row dictionaries, with corresponding coefficient matrices B and C; E represents the missing pixels of the observation matrix D;

11) The present invention uses the augmented Lagrange multiplier (ALM) method to transform the constrained optimization problem (1) into an unconstrained problem. The augmented Lagrangian is as follows:

where Y1, Y2 and Y3 are Lagrange multiplier matrices, μ1, μ2 and μ3 are penalty factors, <·,·> denotes the inner product of two matrices, and ||·||F denotes the Frobenius norm of a matrix;

12) The solution process is: train the column and row dictionaries Φc and Φr, initialize the weight matrices Wa, Wb and Wc, and alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y1, Y2 and Y3, the penalty factors μ1, μ2 and μ3, and the weight matrices Wa, Wb and Wc;

2) Train the dictionaries Φc and Φr: the column and row dictionaries Φc and Φr are trained on a high-quality image dataset using an online learning algorithm;

21) The column dictionary Φc is constructed so that the matrix A can be sparsely represented by it, i.e., A = ΦcB, where the coefficient matrix B is sparse; the row dictionary Φr is constructed so that the transpose of A can be sparsely represented by it, i.e., AT = ΦrC, where the coefficient matrix C is sparse. The present invention uses the Online Learning algorithm to train the column and row dictionaries Φc and Φr on the Kodak image set.

22) The parameters for dictionary training are set as follows: the number of rows of the matrix A to be reconstructed equals the dimension m of the atoms of the dictionary Φc, i.e., both A and Φc have m rows; the number of columns of A equals the dimension n of the atoms of Φr, i.e., the number of columns of A and the number of rows of Φr are both n. The trained dictionaries Φc and Φr are both overcomplete, i.e., each dictionary must have more columns than rows.
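The shape constraints of the separable representation can be illustrated with a NumPy sketch. Random matrices stand in for the trained dictionaries, and the atom counts kc and kr are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 100        # the patch A is m x n; atom dimensions must match (patent fixes 100)
kc, kr = 256, 256      # number of atoms; overcompleteness requires kc > m and kr > n

Phi_c = rng.standard_normal((m, kc))   # column dictionary: codes each column of A
Phi_r = rng.standard_normal((n, kr))   # row dictionary: codes each row of A (column of A^T)
B = rng.standard_normal((kc, n))       # column-coefficient matrix (sparse in the model)
C = rng.standard_normal((kr, m))       # row-coefficient matrix (sparse in the model)

A_from_cols = Phi_c @ B                # A   = Phi_c B  -> shape (m, n)
At_from_rows = Phi_r @ C               # A^T = Phi_r C  -> shape (n, m)

assert Phi_c.shape[1] > Phi_c.shape[0] and Phi_r.shape[1] > Phi_r.shape[0]  # overcomplete
assert A_from_cols.shape == (m, n) and At_from_rows.shape == (n, m)
```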

3) Initialize the weight matrices Wa, Wb and Wc;

Let l be the reweighting count; when l = 0, all entries of the initial weight matrices Wa(0), Wb(0) and Wc(0) are set to 1, meaning the first iteration is not reweighted.

4) The alternating direction method (ADM) is used to convert equation (2) into the following sequence of subproblems for iterative solution:

In the above sequence, Bk+1, Ck+1, Ak+1 and Ek+1 denote the values of the variables B, C, A and E that minimize the corresponding objective functions; ρ1, ρ2 and ρ3 are multiplication factors, and k is the iteration count. After the initial values of the parameters are set, steps 5), 6), 7) and 8) are iterated to obtain the non-reweighted result.

5) Solve for Bk+1: Bk+1 is obtained with the accelerated proximal gradient algorithm.

51) Removing the terms that do not depend on B from the objective for B in (3) gives the following equation:

Through a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic function. Introducing the auxiliary variable Z, the following function is defined:

where ∇f(Z) denotes the gradient of f(Z), and Lf is a constant chosen to guarantee that F(Z) ≤ Q(B, Z) for all Z.

52) With the above transformation, equation (4) becomes the problem of minimizing Q(B, Zj); completing the square gives the following form:

where the update rule of the variable Zj is as follows:

where tj is a sequence of constants and j is the inner iteration count. Applying the shrinkage operator gives the solution:

where soft(·,·) is the shrinkage operator.

6) Solve for Ck+1: Ck+1 is obtained with the accelerated proximal gradient algorithm.

61) Removing the terms that do not depend on C from the objective for C in (3) gives the following equation:

Using a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic function. Introducing an auxiliary variable analogous to Z, the following function is defined:

where the gradient and the constant Lf are defined analogously, with Lf chosen to guarantee the corresponding majorization inequality for all values of the variable.

62) With the above transformation, equation (9) becomes a minimization problem; completing the square gives the following form:

where the update rule of the auxiliary variable is as follows:

where tj is a sequence of constants and j is the inner iteration count. Applying the shrinkage operator gives the solution:

where soft(·,·) is the shrinkage operator.

7) Solve for Ak+1: Ak+1 is obtained by singular value thresholding (SVT).

Removing the terms that do not depend on A from the objective for A in (3) gives:

Completing the square, the above expression is rewritten as:

where Qk+1 is the matrix obtained by completing the square; applying singular value thresholding to Qk+1 yields:

where Hk+1 and Vk+1 are the left and right singular matrices of Qk+1, respectively;

8) Solve for Ek+1: the solution for Ek+1 consists of two parts.

81) Inside the observation space Ω, E is 0, i.e., PΩ(E) = 0.

82) Outside the observation space Ω, i.e., in the complementary space, the equation for Ek+1 is as follows:

Setting the first-order derivative to zero yields the value of E on the complement of Ω.

83) Combining the solutions inside and outside the domain Ω gives the final solution for E:
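The two-part combination can be sketched with a boolean mask in NumPy. The off-Ω expression D - A + Y1/μ1 is an assumption derived from the first-order condition on the Y1/μ1 terms of (2); the function name is illustrative:

```python
import numpy as np

def update_E(D, A, Y1, mu1, omega):
    # omega: boolean mask, True on observed pixels.
    # Off-Omega: first-order-optimal value E = D - A + Y1/mu1 (assumed closed form);
    # on Omega: E is forced to zero, as required by the constraint P_Omega(E) = 0.
    E = D - A + Y1 / mu1
    E[omega] = 0.0
    return E
```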

9) Repeat steps 5), 6), 7) and 8) until the algorithm converges; the iterates Ak+1, Σk+1, Bk+1, Ck+1 and Ek+1 are then the non-reweighted results A(l), Σ(l), B(l), C(l) and E(l), where l is the reweighting count.

10) Update the weight matrices Wa, Wb and Wc.

To cancel the influence of signal magnitude on the nuclear-norm and l1-norm terms, a reweighting scheme is introduced: based on the magnitudes of the currently estimated singular value matrix Σ(l) and coefficient matrices B(l) and C(l), the weight matrices Wa, Wb and Wc are updated iteratively according to an inverse proportionality principle:

where the subscripts are the position coordinates of pixels in the image, and ε is an arbitrarily small positive number.

11) Repeat steps 4) through 10) until the algorithm converges; the iterate A(l) is then the final solution A of the original problem.

The method of the present invention combines low-rank matrix reconstruction with sparse representation theory, introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model, and constrains the damaged image with a joint low-rank and separable two-dimensional sparse prior, thereby solving a problem that the prior art cannot handle, namely filling images with missing rows and columns (the experimental flow is shown in Figure 1). The detailed description in conjunction with the drawings and embodiments is as follows:

1) In the experiment, a 321×481-pixel image randomly selected from the BSDS500 dataset (Figure 2) is used as the original image, on which four damaged images with total missing rates of 10%, 20%, 30% and 50% are constructed for testing (Figure 3), containing both row-and-column and random missing pixels. The present invention uses dictionaries with a fixed atom size of 100, so the image to be filled is first divided into several 100×100 blocks by sliding a window from top to bottom and from left to right, with a window step of 90 pixels. These 100×100 blocks are filled in turn and finally combined to obtain the filled image at the original size of 321×481. When the first block is filled, it is denoted by the matrix D, and the problem of filling the current block with missing rows and columns is formulated as solving the following constrained optimization equation:
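The 100×100 window tiling with stride 90 can be sketched as follows. Clamping the last window so that it ends exactly at the image border is an assumption; the patent does not state how the final partial window is handled:

```python
def window_starts(length, win=100, stride=90):
    # top-left coordinates of sliding windows along one axis; the last window is
    # clamped to end at the image border (clamping is an assumption, not from the patent)
    starts = list(range(0, max(length - win, 0) + 1, stride))
    if starts[-1] + win < length:
        starts.append(length - win)
    return starts

rows = window_starts(321)   # vertical window positions for a 321-pixel height
cols = window_starts(481)   # horizontal window positions for a 481-pixel width
```

Under this clamping rule a 321×481 image yields 4×6 = 24 blocks of 100×100.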

$$\min_{A,B,C,E}\ \mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\quad \text{s.t.}\quad A=\Phi_c B,\ A^T=\Phi_r C,\ D=A+E,\ P_\Omega(E)=0\qquad(1)$$

where tr(·) is the trace of a matrix, so tr(W_aΣ) is the low-rank prior term; ⊙ denotes the element-wise product of two matrices; ||·||_1 is the entry-wise one-norm of a matrix, and the two one-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e. the known pixels of the observation matrix D with missing rows and columns; P_Ω(·) is the projection operator giving the values of a variable projected onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column and row dictionaries, with coefficient matrices B and C respectively; and E represents the missing pixels of the observation matrix D;

11) The present invention uses the augmented Lagrangian multiplier method (ALM) to convert the constrained optimization problem (1) into an unconstrained one. The augmented Lagrangian is:

$$\begin{aligned}
L_{\mu_1,\mu_2,\mu_3}(A,B,C,E,Y_1,Y_2,Y_3)=\ &\mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\\
&+\langle Y_1,\ A-\Phi_c B\rangle+\frac{\mu_1}{2}\|A-\Phi_c B\|_F^2\\
&+\langle Y_2,\ A^T-\Phi_r C\rangle+\frac{\mu_2}{2}\|A^T-\Phi_r C\|_F^2\\
&+\langle Y_3,\ D-A-E\rangle+\frac{\mu_3}{2}\|D-A-E\|_F^2
\end{aligned}\qquad(2)$$

where Y_1, Y_2 and Y_3 are Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ denotes the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;

12) The solution process is: train the column and row dictionaries Φ_c and Φ_r; initialize the weight matrices W_a, W_b and W_c; then alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c;

2) Train the dictionaries Φ_c and Φ_r: use an online learning algorithm on a high-quality image dataset to train the column dictionary Φ_c and the row dictionary Φ_r;

21) Construct the column dictionary Φ_c so that the matrix A can be sparsely represented by it, i.e. A = Φ_c B, where the coefficient matrix B is sparse; construct the row dictionary Φ_r so that the transpose of A can be sparsely represented by it, i.e. A^T = Φ_r C, where the coefficient matrix C is sparse. The present invention uses the Online Learning algorithm with 230,000 pixel columns of size 100×1, randomly sampled from all images of the Kodak image set, as training data to train Φ_c and Φ_r.

22) The parameters for dictionary training are set as follows: the number of rows of the reconstructed matrix A equals the dimension m of the atoms of Φ_c, i.e. A and Φ_c both have m rows, with m = 100 in the experiments. The number of columns of A equals the dimension n of the atoms of Φ_r, i.e. the number of columns of A and the number of rows of Φ_r are both n, with n = 100 in the experiments. The trained dictionaries Φ_c and Φ_r are both overcomplete, i.e. each dictionary must have more columns than rows. In the experiments both dictionaries have 400 columns, so Φ_c and Φ_r are both of size 100×400.
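The dictionary-training stage can be sketched in numpy. This is a minimal stand-in for the Online Learning algorithm, not the patent's exact method: it alternates a few ISTA sparse-coding steps with a gradient step on the dictionary while keeping atoms unit-norm. The function name `train_dictionary`, the step sizes, and the small demo dimensions (the patent uses 100-dimensional atoms, 400 of them, and 230,000 training columns) are all assumptions made for illustration.

```python
import numpy as np

def train_dictionary(X, n_atoms, n_iter=30, lam=0.1, step=0.01, seed=0):
    """Toy online-style dictionary learning: alternate ISTA sparse coding with
    a gradient step on the dictionary, keeping atoms (columns) unit-norm."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    Phi = rng.standard_normal((m, n_atoms))
    Phi /= np.linalg.norm(Phi, axis=0)
    for _ in range(n_iter):
        # sparse coding: B ~ argmin_B 0.5*||X - Phi B||_F^2 + lam*||B||_1
        L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
        B = np.zeros((n_atoms, X.shape[1]))
        for _ in range(10):
            G = Phi.T @ (Phi @ B - X)
            B = np.sign(B - G / L) * np.maximum(np.abs(B - G / L) - lam / L, 0.0)
        # dictionary update: gradient step on ||X - Phi B||_F^2, then renormalize
        Phi -= step * (Phi @ B - X) @ B.T
        Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-8)
    return Phi

# Stand-in for the Kodak pixel-column training data; smaller sizes keep the demo fast.
X = np.random.default_rng(1).standard_normal((20, 200))
Phi_c = train_dictionary(X, n_atoms=50)   # overcomplete: more atoms than rows
```

In the patent's setting the same routine would be called twice, once on image columns for Φ_c and once on columns of the transposed image for Φ_r.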

3) Initialize the weight matrices W_a, W_b and W_c;

Let the reweighting count be l. When l = 0, all entries of the weight matrices W_a^(0), W_b^(0) and W_c^(0) are assigned the value 1, meaning there is no reweighting in the first iteration.

4) The alternating direction method (ADM) converts equation (2) into the following sequence for iterative solution:

$$\left\{\begin{aligned}
B^{k+1}&=\arg\min_B\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^k,B,C^k,E^k,Y_1^k,Y_2^k,Y_3^k)\\
C^{k+1}&=\arg\min_C\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^k,B^{k+1},C,E^k,Y_1^k,Y_2^k,Y_3^k)\\
A^{k+1}&=\arg\min_A\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A,B^{k+1},C^{k+1},E^k,Y_1^k,Y_2^k,Y_3^k)\\
E^{k+1}&=\arg\min_{P_\Omega(E)=0}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^{k+1},B^{k+1},C^{k+1},E,Y_1^k,Y_2^k,Y_3^k)\\
Y_1^{k+1}&=Y_1^k+\mu_1^k(A^{k+1}-\Phi_c B^{k+1})\\
Y_2^{k+1}&=Y_2^k+\mu_2^k(A^{k+1\,T}-\Phi_r C^{k+1})\\
Y_3^{k+1}&=Y_3^k+\mu_3^k(D-A^{k+1}-E^{k+1})\\
\mu_1^{k+1}&=\rho_1\mu_1^k,\quad \mu_2^{k+1}=\rho_2\mu_2^k,\quad \mu_3^{k+1}=\rho_3\mu_3^k
\end{aligned}\right.\qquad(3)$$

where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of B, C, A and E that minimize the objective, ρ_1, ρ_2 and ρ_3 are multiplication factors, and k is the iteration index. Set the initial values of the parameters and then iterate according to steps 5), 6), 7) and 8) to obtain the result without reweighting. The initial values in the experiments are: l = 0; k = 1; ρ_1 = ρ_2 = ρ_3 = 1.1; A^1 = B^1 = C^1 = E^1 = 0.
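The multiplier and penalty updates closing one pass of sequence (3) can be sketched as follows. `dual_update` is a hypothetical helper name; the primal updates for B, C, A and E (steps 5 to 8 below) are assumed to happen before each call.

```python
import numpy as np

def dual_update(D, A, B, C, E, Y1, Y2, Y3, mu, Phi_c, Phi_r, rho=(1.1, 1.1, 1.1)):
    """Lagrange multiplier and penalty updates of sequence (3);
    mu and rho hold (mu1, mu2, mu3) and (rho1, rho2, rho3)."""
    mu1, mu2, mu3 = mu
    Y1 = Y1 + mu1 * (A - Phi_c @ B)     # enforces A = Phi_c B
    Y2 = Y2 + mu2 * (A.T - Phi_r @ C)   # enforces A^T = Phi_r C
    Y3 = Y3 + mu3 * (D - A - E)         # enforces D = A + E
    return Y1, Y2, Y3, (rho[0] * mu1, rho[1] * mu2, rho[2] * mu3)
```

Each outer iteration would call the four primal solvers and then this dual step, growing the penalties geometrically as in the experiments (ρ = 1.1).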

5) Solve for B^{k+1}: use the accelerated proximal gradient algorithm to obtain B^{k+1}.

51) Dropping the terms of the objective in (3) that do not involve B yields the following problem:

$$\min_B\ \gamma_B\|W_b\odot B\|_1+\frac{\mu_1^k}{2}\left\|A^k-\Phi_c B+\frac{Y_1^k}{\mu_1^k}\right\|_F^2\qquad(4)$$

Through a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic surrogate. Let $f(Z)=\frac{\mu_1^k}{2}\left\|A^k-\Phi_c Z+\frac{Y_1^k}{\mu_1^k}\right\|_F^2$, introduce the variable Z, and define the function

$$Q(B,Z)=f(Z)+\langle\nabla f(Z),\ B-Z\rangle+\frac{L_f}{2}\|B-Z\|_F^2+\gamma_B\|W_b\odot B\|_1$$

where $\nabla f(Z)=\mu_1^k\,\Phi_c^T\!\left(\Phi_c Z-A^k-\frac{Y_1^k}{\mu_1^k}\right)$ is the gradient of f(Z), and L_f is a constant equal to the largest eigenvalue of $\mu_1^k\,\Phi_c^T\Phi_c$, which guarantees F(B) ≤ Q(B, Z) for all Z.

52) With the above transformation, equation (4) becomes the problem of minimizing Q(B, Z_j); completing the square gives the form:

$$B_{j+1}^k=\arg\min_B\ \gamma_B\|W_b\odot B\|_1+\frac{L_f}{2}\|B-U_{j+1}\|_F^2$$

where $U_{j+1}=Z_j-\nabla f(Z_j)/L_f$. The variable Z_j is updated by the rule:

$$t_{j+1}=\frac{1+\sqrt{4t_j^2+1}}{2},\qquad Z_{j+1}=B_{j+1}^k+\frac{t_j-1}{t_{j+1}}\left(B_{j+1}^k-B_j^k\right)\qquad(6)$$

where t_j is a sequence of constants and j is the inner iteration index. The initial values are set as: j = 1; t_1 = 1; Z_1 = 0. At convergence the solution is:

$$B^{k+1}=B_{j+1}^k=\mathrm{soft}\!\left(U_{j+1},\ \frac{\gamma_B}{L_f}W_b\right)\qquad(5)$$

where soft(·,·) is the shrinkage operator.
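The shrinkage operator used in (5) has a one-line numpy form. The name `soft` follows the text; the threshold may be a scalar or an entry-wise weight matrix such as (γ_B/L_f)·W_b:

```python
import numpy as np

def soft(X, T):
    """Entry-wise soft-thresholding: soft(X, T) = sign(X) * max(|X| - T, 0).
    T may be a scalar or an array broadcastable to X's shape."""
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)
```

For example, `soft(np.array([-2.0, 0.5, 3.0]), 1.0)` shrinks every entry toward zero by 1 and clips what crosses zero.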

6) Solve for C^{k+1}: use the accelerated proximal gradient algorithm to obtain C^{k+1}.

61) Dropping the terms of the objective in (3) that do not involve C yields the following problem:

$$\min_C\ \gamma_C\|W_c\odot C\|_1+\frac{\mu_2^k}{2}\left\|A^{k\,T}-\Phi_r C+\frac{Y_2^k}{\mu_2^k}\right\|_F^2\qquad(9)$$

Using a Taylor expansion, a quadratic function is constructed to approximate the above expression, and the original equation is solved through this quadratic surrogate. Let $\tilde f(\tilde Z)=\frac{\mu_2^k}{2}\left\|A^{k\,T}-\Phi_r\tilde Z+\frac{Y_2^k}{\mu_2^k}\right\|_F^2$, introduce the variable $\tilde Z$, and define the function

$$\tilde Q(C,\tilde Z)=\tilde f(\tilde Z)+\langle\nabla\tilde f(\tilde Z),\ C-\tilde Z\rangle+\frac{L_f}{2}\|C-\tilde Z\|_F^2+\gamma_C\|W_c\odot C\|_1$$

where $\nabla\tilde f(\tilde Z)=\mu_2^k\,\Phi_r^T\!\left(\Phi_r\tilde Z-A^{k\,T}-\frac{Y_2^k}{\mu_2^k}\right)$ is the gradient of $\tilde f$, and L_f is a constant equal to the largest eigenvalue of $\mu_2^k\,\Phi_r^T\Phi_r$, which guarantees $\tilde F(C)\le\tilde Q(C,\tilde Z)$ for all $\tilde Z$.

62) With the above transformation, equation (9) becomes the problem of minimizing $\tilde Q(C,\tilde Z_j)$; completing the square gives the form:

$$C_{j+1}^k=\arg\min_C\ \gamma_C\|W_c\odot C\|_1+\frac{L_f}{2}\|C-\tilde U_{j+1}\|_F^2$$

where $\tilde U_{j+1}=\tilde Z_j-\nabla\tilde f(\tilde Z_j)/L_f$. The variable $\tilde Z_j$ is updated by the rule:

$$t_{j+1}=\frac{1+\sqrt{4t_j^2+1}}{2},\qquad \tilde Z_{j+1}=C_{j+1}^k+\frac{t_j-1}{t_{j+1}}\left(C_{j+1}^k-C_j^k\right)$$

where t_j is a sequence of constants and j is the inner iteration index. The initial values are set as: j = 1; t_1 = 1; $\tilde Z_1=0$. At convergence the solution is:

$$C^{k+1}=C_{j+1}^k=\mathrm{soft}\!\left(\tilde U_{j+1},\ \frac{\gamma_C}{L_f}W_c\right)\qquad(8)$$

where soft(·,·) is the shrinkage operator.

7) Solve for A^{k+1}: use singular value thresholding (SVT) to obtain A^{k+1}.

Dropping the terms of the objective in (3) that do not involve A gives:

$$\min_A\ \mathrm{tr}(W_a\Sigma)+\langle Y_1^k,A-\Phi_c B^{k+1}\rangle+\frac{\mu_1^k}{2}\|A-\Phi_c B^{k+1}\|_F^2+\langle Y_2^k,A^T-\Phi_r C^{k+1}\rangle+\frac{\mu_2^k}{2}\|A^T-\Phi_r C^{k+1}\|_F^2+\langle Y_3^k,D-A-E^k\rangle+\frac{\mu_3^k}{2}\|D-A-E^k\|_F^2$$

Completing the square, the above expression is rewritten as:

$$\min_A\ \mathrm{tr}(W_a\Sigma)+\frac{\mu_1^k+\mu_2^k+\mu_3^k}{2}\|A-Q^{k+1}\|_F^2$$

where $Q^{k+1}=\frac{1}{\mu_1^k+\mu_2^k+\mu_3^k}\left[\mu_1^k\left(\Phi_c B^{k+1}-\frac{Y_1^k}{\mu_1^k}\right)+\mu_2^k\left(\Phi_r C^{k+1}-\frac{Y_2^k}{\mu_2^k}\right)^T+\mu_3^k\left(D-E^k+\frac{Y_3^k}{\mu_3^k}\right)\right]$. Applying the singular value thresholding method to Q^{k+1} gives:

$$A^{k+1}=H^{k+1}\,\mathrm{soft}\!\left(\Sigma^{k+1},\ \frac{W_a}{\mu_1^k+\mu_2^k+\mu_3^k}\right)(V^{k+1})^T$$

where H^{k+1} and V^{k+1} are respectively the left and right singular matrices of Q^{k+1}, and Σ^{k+1} is its matrix of singular values;
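Singular value thresholding as used in the A-subproblem can be sketched in a few lines. `svt` is a hypothetical helper name, and the scalar or per-singular-value threshold `tau` stands in for the weighted threshold applied to the singular values of Q^{k+1}:

```python
import numpy as np

def svt(Q, tau):
    """Singular value thresholding: soft-threshold the singular values of Q by tau,
    keeping the singular vectors. tau may be a scalar or a vector of thresholds."""
    H, s, Vt = np.linalg.svd(Q, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)       # shrink the spectrum
    return H @ np.diag(s_shrunk) @ Vt
```

For instance, applying `svt` with threshold 1 to `np.diag([3.0, 1.0])` shrinks the spectrum from (3, 1) to (2, 0).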

8) Solve for E^{k+1}: the solution of E^{k+1} consists of two parts.

81) Inside the observation space Ω, the value of E is 0, i.e. P_Ω(E) = 0.

82) Outside the observation space Ω, i.e. in the complementary space $\bar\Omega$, the problem for E^{k+1} is:

$$\min_E\ \langle Y_3^k,\ D-A^{k+1}-E\rangle+\frac{\mu_3^k}{2}\|D-A^{k+1}-E\|_F^2$$

Setting the first derivative to zero gives $P_{\bar\Omega}(E^{k+1})=P_{\bar\Omega}\!\left(D-A^{k+1}+\frac{Y_3^k}{\mu_3^k}\right)$.

83) Combining the solutions inside and outside the spatial domain Ω gives the final solution for E:

$$E^{k+1}=P_{\bar\Omega}\!\left(D-A^{k+1}+\frac{Y_3^k}{\mu_3^k}\right),\qquad P_\Omega(E^{k+1})=0$$
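The two-part E-update can be written as a small numpy helper. `update_E` and the boolean mask `omega` (True on the observed set Ω) are illustrative assumptions:

```python
import numpy as np

def update_E(D, A, Y3, mu3, omega):
    """E-update: the closed-form minimizer D - A + Y3/mu3 on the complement of
    the observed set, and zero on the observed set (P_Omega(E) = 0)."""
    E = D - A + Y3 / mu3
    E[omega] = 0.0
    return E
```

Only the unobserved entries of E carry the current residual; the observed entries are pinned to zero so that D = A + E holds on Ω through A itself.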

9) Repeat steps 5), 6), 7) and 8) above until the algorithm converges; the iterates A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the results A^(l), Σ^(l), B^(l), C^(l) and E^(l) of the original problem without reweighting. Here, l is the reweighting count.

10) Update the weight matrices W_a, W_b and W_c.

To cancel the influence of the signal magnitude on the nuclear-norm and one-norm terms, a reweighting scheme is introduced: based on the magnitudes of the currently estimated singular value matrix Σ^(l) and coefficient matrices B^(l) and C^(l), the weight matrices W_a, W_b and W_c are updated iteratively by the inverse-proportion rule:

$$W_a^{(l+1)}(i,i)=\frac{1}{\sigma_i^{(l)}+\varepsilon},\qquad W_b^{(l+1)}(i,j)=\frac{1}{|B^{(l)}(i,j)|+\varepsilon},\qquad W_c^{(l+1)}(i,j)=\frac{1}{|C^{(l)}(i,j)|+\varepsilon}$$

where (i, j) are the position coordinates of pixels in the image and ε is an arbitrarily small positive number; ε = 0.001 in the experiments.
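The inverse-proportion reweighting rule above can be sketched as follows; `reweight` is a hypothetical helper name, taking the current singular values and coefficient matrices:

```python
import numpy as np

def reweight(sigma, B, C, eps=1e-3):
    """Inverse-proportion reweighting: each weight is the reciprocal of the
    current magnitude plus eps, so large components are penalized less on
    the next pass."""
    Wa = 1.0 / (sigma + eps)        # weights for the low-rank (nuclear-norm) term
    Wb = 1.0 / (np.abs(B) + eps)    # weights for ||W_b .* B||_1
    Wc = 1.0 / (np.abs(C) + eps)    # weights for ||W_c .* C||_1
    return Wa, Wb, Wc
```

With eps = 0.001 as in the experiments, a singular value or coefficient near zero gets a weight near 1000, driving it further toward zero, while strong components are barely penalized.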

11) Repeat steps 4), 5), 6), 7), 8), 9) and 10) above until the algorithm converges; the iterate A^(l) is then the final solution A of the original problem.

12) Process the remaining image blocks obtained in step 1) in sequence until all are filled, then combine these blocks into the final filled image (shown in Figure 4). When combining, pixels covered by several overlapping blocks take the mean of their multiple filled values as the final value.
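The overlapping-block recombination in this step can be sketched as follows. Clamping the last window to the image border is an assumption about how a 321×481 image is covered by 100×100 windows with stride 90; the helper name `combine_blocks` is likewise illustrative:

```python
import numpy as np

def combine_blocks(blocks, shape, block=100, stride=90):
    """Recombine filled blocks laid out top-to-bottom, left-to-right;
    multiply-filled pixels receive the mean of their fills."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    h, w = shape
    # top-left corners of the sliding window, last window clamped to the border
    ys = sorted({*range(0, h - block + 1, stride), h - block})
    xs = sorted({*range(0, w - block + 1, stride), w - block})
    k = 0
    for y in ys:
        for x in xs:
            acc[y:y + block, x:x + block] += blocks[k]
            cnt[y:y + block, x:x + block] += 1
            k += 1
    return acc / cnt
```

Accumulating a sum image and a count image, then dividing, implements the mean-over-overlaps rule without tracking each overlap region explicitly.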

Experimental results: the present invention uses PSNR (peak signal-to-noise ratio), in dB, as the measure of the image filling result:

$$\mathrm{PSNR}=10\log_{10}\frac{(2^n-1)^2\,wh}{\sum_{x,y}|I(x,y)-I_0(x,y)|^2}$$

where I is the filled image, I_0 is the true image without missing pixels, w is the width of the image, h is its height, (x, y) indexes the pixel in row x and column y, Σ denotes summation, and |·| is the absolute value. This experiment takes n = 8; the filling results for the four test images with different degrees of row/column missing are annotated in Figure 4.
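The PSNR measure above can be computed directly; `psnr` is an illustrative helper assuming n-bit images (n = 8 in the experiments, peak value 255):

```python
import numpy as np

def psnr(I, I0, n_bits=8):
    """PSNR in dB between filled image I and ground truth I0:
    10*log10(peak^2 / MSE) with peak = 2**n_bits - 1."""
    mse = np.mean((I.astype(float) - I0.astype(float)) ** 2)
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)
```

Note the formula in the text divides the summed squared error by w·h, which is exactly the mean squared error used here.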

Claims (5)

1. A row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation, characterized by the steps of: introducing a low-rank prior, based on low-rank matrix reconstruction theory, to constrain the latent image; at the same time, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introducing a separable two-dimensional sparse prior based on sparse representation theory; and, based on the above joint low-rank and separable two-dimensional sparse prior, formulating the problem of filling an image with missing rows and columns as solving a constrained optimization equation, thereby realizing row-and-column missing image filling.

2. The row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation of claim 1, characterized in that formulating the filling problem as solving a constrained optimization equation is refined into the following steps:

1) formulate the problem of filling an image with missing rows and columns as solving the constrained optimization equation

$$\min_{A,B,C,E}\ \mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\quad \text{s.t.}\quad A=\Phi_c B,\ A^T=\Phi_r C,\ D=A+E,\ P_\Omega(E)=0\qquad(1)$$

where tr(·) is the trace of a matrix, so tr(W_aΣ) is the low-rank prior term; ⊙ denotes the element-wise product of two matrices; ||·||_1 is the entry-wise one-norm of a matrix, and the two one-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e. the known pixels of the observation matrix D with missing rows and columns; P_Ω(·) is the projection operator giving the values of a variable projected onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column and row dictionaries, with coefficient matrices B and C respectively; and E represents the missing pixels of the observation matrix D;

adopt the augmented Lagrangian multiplier method (ALM) to convert the constrained optimization problem (1) into an unconstrained problem, with the augmented Lagrangian

$$\begin{aligned}
L_{\mu_1,\mu_2,\mu_3}(A,B,C,E,Y_1,Y_2,Y_3)=\ &\mathrm{tr}(W_a\Sigma)+\gamma_B\|W_b\odot B\|_1+\gamma_C\|W_c\odot C\|_1\\
&+\langle Y_1,\ A-\Phi_c B\rangle+\frac{\mu_1}{2}\|A-\Phi_c B\|_F^2\\
&+\langle Y_2,\ A^T-\Phi_r C\rangle+\frac{\mu_2}{2}\|A^T-\Phi_r C\|_F^2\\
&+\langle Y_3,\ D-A-E\rangle+\frac{\mu_3}{2}\|D-A-E\|_F^2
\end{aligned}\qquad(2)$$

where Y_1, Y_2 and Y_3 are Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ denotes the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;

the solution process is: train the column and row dictionaries Φ_c and Φ_r, initialize the weight matrices W_a, W_b and W_c, and alternately update the coefficient matrices B and C, the recovered matrix A, the missing-pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c, until the algorithm converges, whereupon the iterate A^(l) is the final solution A of the original problem.

3. The row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation of claim 2, characterized in that, specifically, the dictionaries Φ_c and Φ_r are trained using an online learning algorithm on a high-quality image dataset, yielding the column dictionary Φ_c and the row dictionary Φ_r.

4. The row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation of claim 2, characterized in that, specifically, the weight matrices W_a, W_b and W_c are initialized as follows: let the reweighting count be l; when l = 0, all entries of W_a^(0), W_b^(0) and W_c^(0) are assigned the value 1, meaning there is no reweighting in the first iteration.

5. The row-and-column missing image filling method based on low-rank matrix reconstruction and sparse representation of claim 2, characterized in that, specifically, the alternating direction method (ADM) converts equation (2) into the following sequence for iterative solution:

$$\left\{\begin{aligned}
B^{k+1}&=\arg\min_B\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^k,B,C^k,E^k,Y_1^k,Y_2^k,Y_3^k)\\
C^{k+1}&=\arg\min_C\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^k,B^{k+1},C,E^k,Y_1^k,Y_2^k,Y_3^k)\\
A^{k+1}&=\arg\min_A\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A,B^{k+1},C^{k+1},E^k,Y_1^k,Y_2^k,Y_3^k)\\
E^{k+1}&=\arg\min_{P_\Omega(E)=0}\ L_{\mu_1^k,\mu_2^k,\mu_3^k}(A^{k+1},B^{k+1},C^{k+1},E,Y_1^k,Y_2^k,Y_3^k)\\
Y_1^{k+1}&=Y_1^k+\mu_1^k(A^{k+1}-\Phi_c B^{k+1})\\
Y_2^{k+1}&=Y_2^k+\mu_2^k(A^{k+1\,T}-\Phi_r C^{k+1})\\
Y_3^{k+1}&=Y_3^k+\mu_3^k(D-A^{k+1}-E^{k+1})\\
\mu_1^{k+1}&=\rho_1\mu_1^k,\quad \mu_2^{k+1}=\rho_2\mu_2^k,\quad \mu_3^{k+1}=\rho_3\mu_3^k
\end{aligned}\right.\qquad(3)$$

where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of B, C, A and E that minimize the objective, ρ_1, ρ_2 and ρ_3 are multiplication factors, and k is the iteration index; then iterate as follows:

1) solve for B^{k+1} using the accelerated proximal gradient algorithm: drop the terms of the objective in (3) that do not involve B; using a Taylor expansion, construct a quadratic function to approximate the resulting expression and solve the original equation through this quadratic surrogate; letting f(Z) denote its smooth part and introducing the variable Z, the solution is

$$B^{k+1}=B_{j+1}^k=\mathrm{soft}\!\left(U_{j+1},\ \frac{\gamma_B}{L_f}W_b\right)\qquad(5)$$

where soft(·,·) is the shrinkage operator, $U_{j+1}=Z_j-\nabla f(Z_j)/L_f$, $\nabla f(Z)$ is the gradient of f(Z), and L_f is a constant; the variable Z_j is updated by the rule

$$t_{j+1}=\frac{1+\sqrt{4t_j^2+1}}{2},\qquad Z_{j+1}=B_{j+1}^k+\frac{t_j-1}{t_{j+1}}\left(B_{j+1}^k-B_j^k\right)\qquad(6)$$

where t_j is a sequence of constants and j is the inner iteration index;

2) solve for C^{k+1} using the accelerated proximal gradient algorithm: drop the terms of the objective in (3) that do not involve C; using a Taylor expansion, construct a quadratic function to approximate the resulting expression and solve the original equation through this quadratic surrogate; letting $\tilde f(\tilde Z)$ denote its smooth part and introducing the variable $\tilde Z$, the solution is

$$C^{k+1}=C_{j+1}^k=\mathrm{soft}\!\left(\tilde U_{j+1},\ \frac{\gamma_C}{L_f}W_c\right)\qquad(8)$$

where soft(·,·) is the shrinkage operator, $\tilde U_{j+1}=\tilde Z_j-\nabla\tilde f(\tilde Z_j)/L_f$, $\nabla\tilde f(\tilde Z)$ is the gradient of $\tilde f$, and L_f is a constant; the variable $\tilde Z_j$ is updated analogously to (6):

$$t_{j+1}=\frac{1+\sqrt{4t_j^2+1}}{2},\qquad \tilde Z_{j+1}=C_{j+1}^k+\frac{t_j-1}{t_{j+1}}\left(C_{j+1}^k-C_j^k\right)$$
<mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msqrt> <mrow> <mn>4</mn> <msubsup> <mi>t</mi> <mi>j</mi> <mn>2</mn> </msubsup> <mo>+</mo> <mn>1</mn> </mrow> </msqrt> <mo>)</mo> </mrow> <mo>/</mo> <mn>2</mn> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mover> <mi>Z</mi> <mo>~</mo> </mover> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>=</mo> <msubsup> <mi>C</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>k</mi> </msubsup> <mo>+</mo> <mfrac> <mrow> <msub> <mi>t</mi> <mi>j</mi> </msub> <mo>-</mo> <mn>1</mn> </mrow> <msub> <mi>t</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </mfrac> <mrow> <mo>(</mo> <msubsup> <mi>C</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>k</mi> </msubsup> <mo>-</mo> <msubsup> <mi>C</mi> <mi>j</mi> <mi>k</mi> </msubsup> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>9</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>t</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msqrt> <mrow> <mn>4</mn> <msubsup> <mi>t</mi> <mi>j</mi> <mn>2</mn> </msubsup> <mo>+</mo> <mn>1</mn> </mrow> </msqrt> <mo>)</mo> </mrow> <mo>/</mo> <mn>2</mn> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mover> <mi>Z</mi> <mo>~</mo> </mover> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>=</mo> <msubsup> <mi>C</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>k</mi> </msubsup> <mo>+</mo> <mfrac> <mrow> <msub> <mi>t</mi> <mi>j</mi> </msub> <mo>-</mo> <mn>1</mn> </mrow> <msub> <mi>t</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </mfrac> <mrow> <mo>(</mo> <msubsup> <mi>C</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>k</mi> </msubsup> <mo>-</mo> <msubsup> <mi>C</mi> <mi>j</mi> <mi>k</mi> </msubsup> <mo>)</mo> </mrow> </mrow> </mtd> 
</mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>9</mn> <mo>)</mo> </mrow> </mrow> 其中,tj是一组常数序列,j是变量迭代次数;Among them, t j is a set of constant sequences, and j is the number of variable iterations; 3)求解Ak+1:使用奇异值阈值法(Singular Value Thresholding)SVT求解Ak+13) Solve A k+1 : use Singular Value Thresholding (SVT) to solve A k+1 ; 去掉式子(3)中求解A的目标函数里与A无关的项,并且通过配方得到:Remove the items irrelevant to A in the objective function for solving A in formula (3), and obtain through the formula: 其中,对Qk +1使用奇异值阈值法解得:in, Using the singular value threshold method for Q k +1 to solve: <mrow> <msup> <mi>A</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>=</mo> <msup> <mi>H</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mi>s</mi> <mi>o</mi> <mi>f</mi> <mi>t</mi> <mrow> <mo>(</mo> <msup> <mi>&amp;Sigma;</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>,</mo> <mfrac> <mn>1</mn> <mrow> <msubsup> <mi>&amp;mu;</mi> <mn>1</mn> <mi>k</mi> </msubsup> <mo>+</mo> <msubsup> <mi>&amp;mu;</mi> <mn>2</mn> <mi>k</mi> </msubsup> <mo>+</mo> <msubsup> <mi>&amp;mu;</mi> <mn>3</mn> <mi>k</mi> </msubsup> </mrow> </mfrac> <msub> <mi>W</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <msup> <mi>V</mi> <mrow> <mi>k</mi> <mo>+</mo> <msup> <mn>1</mn> <mi>T</mi> </msup> </mrow> </msup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>11</mn> <mo>)</mo> </mrow> </mrow> <mrow> <msup> <mi>A</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>=</mo> <msup> <mi>H</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mi>s</mi> <mi>o</mi> <mi>f</mi> <mi>t</mi> <mrow> <mo>(</mo> <msup> <mi>&amp;Sigma;</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>,</mo> <mfrac> <mn>1</mn> <mrow> <msubsup> <mi>&amp;mu;</mi> <mn>1</mn> <mi>k</mi> </msubsup> <mo>+</mo> <msubsup> <mi>&amp;mu;</mi> <mn>2</mn> <mi>k</mi> </msubsup> <mo>+</mo> <msubsup> <mi>&amp;mu;</mi> <mn>3</mn> <mi>k</mi> </msubsup> </mrow> </mfrac> <msub> 
<mi>W</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <msup> <mi>V</mi> <mrow> <mi>k</mi> <mo>+</mo> <msup> <mn>1</mn> <mi>T</mi> </msup> </mrow> </msup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>11</mn> <mo>)</mo> </mrow> </mrow> 其中Hk+1,Vk+1分别是Qk+1的左奇异矩阵和右奇异矩阵;Among them, H k+1 and V k+1 are the left singular matrix and right singular matrix of Q k+ 1 respectively; 4)求解Ek+1:Ek+1的解由两部分组成;4) Solve E k+1 : the solution of E k+1 consists of two parts; 在观测空间Ω内,E的值为0;在观测空间Ω以外,即互补空间内,使用一阶求导来求解,将两部分合起来即为E的最终解:In the observation space Ω, the value of E is 0; outside the observation space Ω, that is, the complementary space Inside, use the first-order derivation to solve, and combine the two parts to get the final solution of E: <mrow> <msup> <mi>E</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>=</mo> <msub> <mi>P</mi> <mi>&amp;Omega;</mi> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>P</mi> <mover> <mi>&amp;Omega;</mi> <mo>&amp;OverBar;</mo> </mover> </msub> <mrow> <mo>(</mo> <mi>D</mi> <mo>-</mo> <msup> <mi>A</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>+</mo> <mfrac> <msubsup> <mi>Y</mi> <mn>3</mn> <mi>k</mi> </msubsup> <msubsup> <mi>&amp;mu;</mi> <mn>3</mn> <mi>k</mi> </msubsup> </mfrac> <mo>)</mo> </mrow> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>12</mn> <mo>)</mo> </mrow> </mrow> <mrow> <msup> <mi>E</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>=</mo> <msub> <mi>P</mi> <mi>&amp;Omega;</mi> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>P</mi> <mover> <mi>&amp;Omega;</mi> <mo>&amp;OverBar;</mo> </mover> </msub> <mrow> <mo>(</mo> <mi>D</mi> <mo>-</mo> <msup> <mi>A</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msup> <mo>+</mo> <mfrac> <msubsup> <mi>Y</mi> <mn>3</mn> <mi>k</mi> </msubsup> <msubsup> <mi>&amp;mu;</mi> <mn>3</mn> <mi>k</mi> </msubsup> </mfrac> <mo>)</mo> </mrow> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> 
<mo>(</mo> <mn>12</mn> <mo>)</mo> </mrow> </mrow> 5)重复上述步骤1)、2)、3)、4)直到算法收敛,这时迭代的结果Ak+1、Σk+1、Bk+1、Ck+1和Ek+1就是原问题没有重加权的结果A(l)、Σ(l)、B(l)、C(l)和E(l),这里,l是重加权次数;5) Repeat the above steps 1), 2), 3) and 4) until the algorithm converges, then the iteration results A k+1 , Σ k+1 , B k+1 , C k+1 and E k+1 are The original problem has no reweighted results A (l) , Σ (l) , B (l) , C (l) and E (l) , where l is the number of reweighted times; 6)更新权重矩阵Wa、Wb和Wc6) Updating weight matrices W a , W b and W c ; 为抵消信号幅值在核范数项和一范数项上的影响,引入重加权方案,根据当前估计的奇异值矩阵Σ(l)、系数矩阵Bl和Cl的幅值,采用反比例原则迭代地更新权重矩阵Wa、Wb和WcIn order to offset the influence of the signal amplitude on the nuclear norm item and the one norm item, a reweighting scheme is introduced, and the inverse proportional principle is adopted according to the amplitudes of the currently estimated singular value matrix Σ (l) and coefficient matrices B l and C l Iteratively update the weight matrices W a , W b and W c : <mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>a</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <msubsup> <mi>&amp;sigma;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>b</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mo>|</mo> <msup> <mi>B</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>|</mo> <mo>+</mo> 
<mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>c</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mo>|</mo> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>|</mo> <mo>+</mo> <mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>13</mn> <mo>)</mo> </mrow> </mrow> <mrow> <mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>a</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <msubsup> <mi>&amp;sigma;</mi> <mi>i</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msubsup> <mo>+</mo> <mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>b</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mo>|</mo> <msup> <mi>B</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>|</mo> <mo>+</mo> <mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>W</mi> <mi>c</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> 
<mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mo>|</mo> <msup> <mi>C</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mover> <mi>i</mi> <mo>^</mo> </mover> <mo>,</mo> <mover> <mi>j</mi> <mo>^</mo> </mover> <mo>)</mo> </mrow> <mo>|</mo> <mo>+</mo> <mi>&amp;epsiv;</mi> </mrow> </mfrac> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>13</mn> <mo>)</mo> </mrow> </mrow> 其中是图像中像素的位置坐标,ε是任意小的正数。in is the position coordinate of the pixel in the image, and ε is an arbitrarily small positive number. 7)重复上述步骤1)-7)直到算法收敛,这时迭代的结果A(l)就是原问题的最终解A。7) Repeat the above steps 1)-7) until the algorithm converges, then the iteration result A (l) is the final solution A of the original problem.
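The per-iteration operators in the steps above — the shrinkage operator of equations (5) and (8), the weighted singular value thresholding of equation (11), the momentum update of equations (6) and (9), and the inverse-proportion reweighting of equation (13) — can be sketched in NumPy as follows. This is an illustrative sketch, not the patented implementation; the function names and the `eps` default are assumptions.

```python
import numpy as np

def soft(X, T):
    """Shrinkage (soft-thresholding) operator soft(X, T) used in (5), (8), (11):
    shrinks each entry of X toward zero by the corresponding threshold in T."""
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

def svt_update(Q, tau, Wa):
    """Weighted singular value thresholding, as in equation (11):
    A = H soft(Sigma, tau * Wa) V^T, where Q = H Sigma V^T.
    Wa holds one weight per singular value."""
    H, s, Vt = np.linalg.svd(Q, full_matrices=False)
    s_shrunk = np.maximum(s - tau * Wa, 0.0)
    return (H * s_shrunk) @ Vt  # equivalent to H @ diag(s_shrunk) @ Vt

def apg_momentum(t_j):
    """Momentum update t_{j+1} = (1 + sqrt(4 t_j^2 + 1)) / 2 from (6) and (9)."""
    return (1.0 + np.sqrt(4.0 * t_j**2 + 1.0)) / 2.0

def reweight(M, eps=1e-6):
    """Inverse-proportion reweighting of equation (13): W = 1 / (|M| + eps)."""
    return 1.0 / (np.abs(M) + eps)
```

In the full method, `soft` would be applied with the elementwise thresholds $(\gamma_B/L_f)W_b$ and $(\gamma_C/L_f)W_c$, and `svt_update` with $\tau = 1/(\mu_1^k+\mu_2^k+\mu_3^k)$.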
CN201710298239.5A 2017-04-30 2017-04-30 Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix Pending CN107133930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710298239.5A CN107133930A (en) 2017-04-30 2017-04-30 Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix


Publications (1)

Publication Number Publication Date
CN107133930A true CN107133930A (en) 2017-09-05

Family

ID=59715788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710298239.5A Pending CN107133930A (en) 2017-04-30 2017-04-30 Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix

Country Status (1)

Country Link
CN (1) CN107133930A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171215A (en) * 2018-01-25 2018-06-15 河南大学 Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification
CN108427742A (en) * 2018-03-07 2018-08-21 中国电力科学研究院有限公司 A kind of distribution network reliability data recovery method and system based on low-rank matrix
CN108734675A (en) * 2018-05-17 2018-11-02 西安电子科技大学 Image recovery method based on mixing sparse prior model
CN109215025A (en) * 2018-09-25 2019-01-15 电子科技大学 A kind of method for detecting infrared puniness target approaching minimization based on non-convex order
CN109325442A (en) * 2018-09-19 2019-02-12 福州大学 A face recognition method with missing image pixels
CN109325446A (en) * 2018-09-19 2019-02-12 电子科技大学 An infrared weak and small target detection method based on weighted truncation kernel norm
CN109348229A (en) * 2018-10-11 2019-02-15 武汉大学 A mismatch steganalysis method for JPEG images based on heterogeneous feature subspace migration
CN109671030A (en) * 2018-12-10 2019-04-23 西安交通大学 A kind of image completion method based on the optimization of adaptive rand estination Riemann manifold
CN109754008A (en) * 2018-12-28 2019-05-14 上海理工大学 The estimation method of the symmetrical sparse network missing information of higher-dimension based on matrix decomposition
CN109978783A (en) * 2019-03-19 2019-07-05 上海交通大学 A kind of color image restorative procedure
CN111025385A (en) * 2019-11-26 2020-04-17 中国地质大学(武汉) Seismic data reconstruction method based on low rank and sparse constraint
CN111597440A (en) * 2020-05-06 2020-08-28 上海理工大学 Recommendation system information estimation method based on internal weighting matrix three-decomposition low-rank approximation
CN111881413A (en) * 2020-07-28 2020-11-03 中国人民解放军海军航空大学 Multi-source time series missing data recovery method based on matrix decomposition
CN112184571A (en) * 2020-09-14 2021-01-05 江苏信息职业技术学院 Robust principal component analysis method based on non-convex rank approximation
CN112561842A (en) * 2020-12-07 2021-03-26 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112564945A (en) * 2020-11-23 2021-03-26 南京邮电大学 IP network flow estimation method based on time sequence prior and sparse representation
CN112734763A (en) * 2021-01-29 2021-04-30 西安理工大学 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
CN113112563A (en) * 2021-04-21 2021-07-13 西北大学 Sparse angle CB-XLCT imaging method for optimizing regional knowledge prior
CN114253959A (en) * 2021-12-21 2022-03-29 大连理工大学 A Data Completion Method Based on Dynamics Principle and Time Difference
CN115508835A (en) * 2022-10-28 2022-12-23 广东工业大学 Tomography SAR three-dimensional imaging method based on blind compressed sensing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093430A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Heart magnetic resonance imaging (MRI) image deblurring method based on sparse low rank and dictionary learning
CN103679660A (en) * 2013-12-16 2014-03-26 清华大学 Method and system for restoring image
CN104867119A (en) * 2015-05-21 2015-08-26 天津大学 Structural lack image filling method based on low rank matrix reconstruction
CN104978716A (en) * 2015-06-09 2015-10-14 重庆大学 SAR image noise reduction method based on linear minimum mean square error estimation
CN105743611A (en) * 2015-12-25 2016-07-06 华中农业大学 Sparse dictionary-based wireless sensor network missing data reconstruction method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JINGYU YANG等: ""Completion of Structurally-Incomplete Matrices with Reweighted Low-Rank and Sparsity Priors"", 《2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 》 *
DU Weinan et al.: "Image super-resolution reconstruction method based on residual dictionary learning", Journal of Beijing University of Technology *
WANG Xiongliang et al.: "Image denoising based on a fast basis pursuit algorithm", Journal of Computer Applications *
WANG Bin et al.: "Anomaly detection in hyperspectral imagery based on low-rank representation and learned dictionary", Journal of Infrared and Millimeter Waves *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171215B (en) * 2018-01-25 2023-02-03 河南大学 Face Masquerade Detection and Masquerade Category Detection Method Based on Low-rank Variation Dictionary and Sparse Representation Classification
CN108171215A (en) * 2018-01-25 2018-06-15 河南大学 Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification
CN108427742A (en) * 2018-03-07 2018-08-21 中国电力科学研究院有限公司 A kind of distribution network reliability data recovery method and system based on low-rank matrix
CN108427742B (en) * 2018-03-07 2023-12-19 中国电力科学研究院有限公司 A distribution network reliability data repair method and system based on low-rank matrix
CN108734675A (en) * 2018-05-17 2018-11-02 西安电子科技大学 Image recovery method based on mixing sparse prior model
CN108734675B (en) * 2018-05-17 2021-09-28 西安电子科技大学 Image restoration method based on mixed sparse prior model
CN109325446B (en) * 2018-09-19 2021-06-22 电子科技大学 Infrared weak and small target detection method based on weighted truncation nuclear norm
CN109325442A (en) * 2018-09-19 2019-02-12 福州大学 A face recognition method with missing image pixels
CN109325446A (en) * 2018-09-19 2019-02-12 电子科技大学 An infrared weak and small target detection method based on weighted truncation kernel norm
CN109215025A (en) * 2018-09-25 2019-01-15 电子科技大学 A kind of method for detecting infrared puniness target approaching minimization based on non-convex order
CN109215025B (en) * 2018-09-25 2021-08-10 电子科技大学 Infrared weak and small target detection method based on non-convex rank approach minimization
CN109348229B (en) * 2018-10-11 2020-02-11 武汉大学 JPEG image mismatch steganalysis method based on heterogeneous feature subspace migration
CN109348229A (en) * 2018-10-11 2019-02-15 武汉大学 A mismatch steganalysis method for JPEG images based on heterogeneous feature subspace migration
CN109671030A (en) * 2018-12-10 2019-04-23 西安交通大学 A kind of image completion method based on the optimization of adaptive rand estination Riemann manifold
CN109671030B (en) * 2018-12-10 2021-04-20 西安交通大学 An Image Completion Method Based on Adaptive Rank Estimation Riemannian Manifold Optimization
CN109754008B (en) * 2018-12-28 2022-07-19 上海理工大学 High-dimensional symmetric sparse network missing information estimation method based on matrix decomposition
CN109754008A (en) * 2018-12-28 2019-05-14 上海理工大学 The estimation method of the symmetrical sparse network missing information of higher-dimension based on matrix decomposition
CN109978783A (en) * 2019-03-19 2019-07-05 上海交通大学 A kind of color image restorative procedure
CN111025385A (en) * 2019-11-26 2020-04-17 中国地质大学(武汉) Seismic data reconstruction method based on low rank and sparse constraint
CN111597440A (en) * 2020-05-06 2020-08-28 上海理工大学 Recommendation system information estimation method based on internal weighting matrix three-decomposition low-rank approximation
CN111881413A (en) * 2020-07-28 2020-11-03 中国人民解放军海军航空大学 Multi-source time series missing data recovery method based on matrix decomposition
CN111881413B (en) * 2020-07-28 2022-12-09 中国人民解放军海军航空大学 Multi-source time sequence missing data recovery method based on matrix decomposition
CN112184571A (en) * 2020-09-14 2021-01-05 江苏信息职业技术学院 Robust principal component analysis method based on non-convex rank approximation
CN112564945B (en) * 2020-11-23 2023-03-24 南京邮电大学 IP network flow estimation method based on time sequence prior and sparse representation
CN112564945A (en) * 2020-11-23 2021-03-26 南京邮电大学 IP network flow estimation method based on time sequence prior and sparse representation
CN112561842B (en) * 2020-12-07 2022-12-09 昆明理工大学 Multi-source Damaged Image Fusion and Restoration Joint Implementation Method Based on Dictionary Learning
CN112561842A (en) * 2020-12-07 2021-03-26 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112734763A (en) * 2021-01-29 2021-04-30 西安理工大学 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
CN113112563A (en) * 2021-04-21 2021-07-13 西北大学 Sparse angle CB-XLCT imaging method for optimizing regional knowledge prior
CN113112563B (en) * 2021-04-21 2023-10-27 西北大学 Sparse angle CB-XLCT imaging method for optimizing regional knowledge priori
CN114253959A (en) * 2021-12-21 2022-03-29 大连理工大学 A Data Completion Method Based on Dynamics Principle and Time Difference
CN114253959B (en) * 2021-12-21 2024-07-12 大连理工大学 Data complement method based on dynamics principle and time difference
CN115508835A (en) * 2022-10-28 2022-12-23 广东工业大学 Tomography SAR three-dimensional imaging method based on blind compressed sensing
CN115508835B (en) * 2022-10-28 2024-03-15 广东工业大学 Chromatographic SAR three-dimensional imaging method based on blind compressed sensing

Similar Documents

Publication Publication Date Title
CN107133930A (en) Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix
CN109241491A (en) The structural missing fill method of tensor based on joint low-rank and rarefaction representation
CN104867119B (en) The structural missing image fill method rebuild based on low-rank matrix
CN111369487B (en) Hyperspectral and multispectral image fusion method, system and medium
CN103810755B (en) Compressed sensing spectrum picture method for reconstructing based on documents structured Cluster rarefaction representation
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN104063886B (en) Nuclear magnetic resonance image reconstruction method based on sparse representation and non-local similarity
CN103150713B (en) Utilize the image super-resolution method that image block classification rarefaction representation is polymerized with self-adaptation
CN105825477B (en) The Remote sensed image super-resolution reconstruction method merged based on more dictionary learnings with non-local information
CN103020909B (en) Single-image super-resolution method based on multi-scale structural self-similarity and compressive sensing
CN104778659A (en) Single-frame image super-resolution reconstruction method on basis of deep learning
CN105931264B (en) A kind of sea infrared small target detection method
CN105957022A (en) Recovery method of low-rank matrix reconstruction with random value impulse noise deletion image
Wang et al. Translution-SNet: A semisupervised hyperspectral image stripe noise removal based on transformer and CNN
CN102915527A (en) Face image super-resolution reconstruction method based on morphological component analysis
CN104050653A (en) Hyperspectral image super-resolution algorithm based on non-negative structure sparse
CN106097278A (en) The sparse model of a kind of multidimensional signal, method for reconstructing and dictionary training method
CN104392243A (en) Nonlinear un-mixing method of hyperspectral images based on kernel sparse nonnegative matrix decomposition
CN104574456A (en) Graph regularization sparse coding-based magnetic resonance super-undersampled K data imaging method
Xia et al. Meta-learning-based degradation representation for blind super-resolution
CN105931181B (en) Super resolution image reconstruction method and system based on non-coupled mapping relations
CN104915935B (en) Compressed Spectral Imaging Method Based on Nonlinear Compressed Sensing and Dictionary Learning
CN104091364A (en) Single-image super-resolution reconstruction method
CN105138860B (en) A kind of EO-1 hyperion nonlinear solution mixing method based on border projection Optimal gradient
Zeiler et al. Differentiable pooling for hierarchical feature learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170905