CN107133930A - Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation - Google Patents

Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation

Info

Publication number
CN107133930A
CN107133930A
Authority
CN
China
Prior art keywords
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710298239.5A
Other languages
Chinese (zh)
Inventor
杨敬钰
杨蕉如
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710298239.5A priority Critical patent/CN107133930A/en
Publication of CN107133930A publication Critical patent/CN107133930A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/513 Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention belongs to the field of computer vision and aims to accurately fill images with missing pixel rows and columns. The technical scheme adopted is a row-column missing image filling method based on low-rank matrix reconstruction and sparse representation. The steps are: introduce a low-rank prior based on low-rank matrix reconstruction theory to constrain the latent image; meanwhile, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, based on the joint low-rank and separable two-dimensional sparse priors, formulate the image filling problem with missing rows and columns as a constrained optimization equation and solve it, thereby achieving row-column missing image filling. The invention is mainly applied to computer vision processing.

Description

Line-column missing image filling method based on low-rank matrix reconstruction and sparse representation
Technical Field
The invention belongs to the field of computer vision, and in particular relates to a row-column missing image filling method based on low-rank matrix reconstruction and sparse representation.
Background
The problem of recovering an unknown complete matrix from a portion of its known pixels has attracted considerable attention in recent years. Such problems arise in many applications of computer vision and machine learning, such as image inpainting, recommendation systems, and background modeling.
There has been extensive research on the image filling problem. Because matrix filling is ill-posed, current matrix filling methods generally assume the latent matrix is low-rank or approximately low-rank and then fill in the missing pixel values through low-rank matrix reconstruction; representative algorithms include Singular Value Thresholding (SVT), the Augmented Lagrange Multiplier method (ALM), and the Accelerated Proximal Gradient method (APG). These existing filling algorithms all rely on the low-rank property of the image, which is effective when pixels are missing at random and every row and every column of the image retains some observed values. When entire rows and entire columns of pixels are missing, however, the low-rank constraint alone cannot determine the missing entries, and the existing algorithms cannot solve the filling problem. In practical applications such as image transmission and seismic data acquisition, the image matrix frequently suffers exactly this kind of row-column missing degradation, so a filling algorithm that can effectively fill missing matrix rows and columns is needed.
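To make the failure mode concrete, the following minimal Python sketch (illustrative only, not part of the patent) shows that when an entire row is unobserved, two different low-rank matrices agree on every observed pixel, so a rank prior alone cannot decide the fill values:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((6, 2))
V = rng.standard_normal((2, 6))
X = U @ V                        # rank-2 ground-truth matrix

mask = np.ones_like(X, dtype=bool)
mask[2, :] = False               # an entire row is missing
mask[:, 4] = False               # an entire column is missing

U2 = U.copy()
U2[2] += rng.standard_normal(2)  # perturb only the factors of the unobserved row
X2 = U2 @ V                      # still rank-2

print(np.allclose(X[mask], X2[mask]))  # True: identical on all observed pixels
print(np.allclose(X, X2))              # False: the two completions disagree
```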
To address this shortcoming of matrix filling methods that use only the low-rank property, the academic community has introduced a sparse constraint on the column vectors, which recovers images with missing rows. However, because that prior condition is still insufficient, the problem of simultaneously missing matrix rows and columns remains unsolved. The present invention therefore introduces a joint low-rank and separable two-dimensional sparse prior into the model, so that a matrix with missing rows and columns can be accurately filled.
Disclosure of Invention
The invention aims to overcome the deficiencies of the prior art, namely to achieve accurate filling of images with missing pixel rows and columns. The technical scheme adopted is a row-column missing image filling method based on low-rank matrix reconstruction and sparse representation: introduce a low-rank prior based on low-rank matrix reconstruction theory to constrain the latent image; meanwhile, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introduce a separable two-dimensional sparse prior based on sparse representation theory; then, based on the joint low-rank and separable two-dimensional sparse priors, formulate the image filling problem with missing rows and columns as a constrained optimization equation and solve it, achieving row-column missing image filling.
Formulating the image filling problem with missing rows and columns as a constrained optimization equation and solving it comprises the following detailed steps:
1) the image filling problem with missing rows and columns is formulated as solving the following constrained optimization equation:

min_{A,B,C,E} tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1  s.t.  A = Φ_c B,  A^T = Φ_r C,  D = A + E,  P_Ω(E) = 0    (1)

where tr(·) is the trace of a matrix and tr(W_a Σ) is the weighted low-rank prior term; ⊙ denotes the entrywise (Hadamard) product of two matrices; ||·||_1 is the l1 norm of a matrix, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the set of known pixels of the row-column-deficient observation matrix D; P_Ω(·) is the projection operator onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C; and E contains the missing pixels of the observation matrix D;
and the constrained optimization problem (1) is converted into an unconstrained optimization problem and solved with the augmented Lagrange multiplier method (ALM); the augmented Lagrange equation is:

L_{μ_1,μ_2,μ_3}(A, B, C, E, Y_1, Y_2, Y_3) = tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1 + ⟨Y_1, A - Φ_c B⟩ + (μ_1/2)||A - Φ_c B||_F^2 + ⟨Y_2, A^T - Φ_r C⟩ + (μ_2/2)||A^T - Φ_r C||_F^2 + ⟨Y_3, D - A - E⟩ + (μ_3/2)||D - A - E||_F^2    (2)

where Y_1, Y_2 and Y_3 are the Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ is the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;
the solving process is that a column dictionary and a row dictionary phi are trainedcAnd phirInitializing the weight matrix Wa、WbAnd WcAlternately updating coefficient matrixes B and C, restoring matrix A, missing pixel matrix E and Lagrange multiplier matrix Y1、Y2And Y3Penalty factor μ1、μ2And mu3And a weight matrix Wa、WbAnd WcUntil the algorithm converges, at which point the result of the iteration A(l)Is the final solution a of the original problem.
Specifically, training the dictionaries Φ_c and Φ_r: the column and row dictionaries Φ_c and Φ_r are trained with an online learning algorithm on a high-quality image dataset.
Specifically, initializing the weight matrices W_a, W_b and W_c: with the reweighting count l = 0, the weight matrices W_a^{(0)}, W_b^{(0)} and W_c^{(0)} are all assigned initial values of 1 (all-ones matrices), meaning the first iteration is not reweighted.
Specifically, equation (2) is converted with the alternating direction method ADM into the following sequence for iterative solution:

B^{k+1} = argmin_B L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B, C^k, E^k, Y_1^k, Y_2^k, Y_3^k)
C^{k+1} = argmin_C L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B^{k+1}, C, E^k, Y_1^k, Y_2^k, Y_3^k)
A^{k+1} = argmin_A L_{μ_1^k,μ_2^k,μ_3^k}(A, B^{k+1}, C^{k+1}, E^k, Y_1^k, Y_2^k, Y_3^k)
E^{k+1} = argmin_{P_Ω(E)=0} L_{μ_1^k,μ_2^k,μ_3^k}(A^{k+1}, B^{k+1}, C^{k+1}, E, Y_1^k, Y_2^k, Y_3^k)
Y_1^{k+1} = Y_1^k + μ_1^k (A^{k+1} - Φ_c B^{k+1})
Y_2^{k+1} = Y_2^k + μ_2^k ((A^{k+1})^T - Φ_r C^{k+1})
Y_3^{k+1} = Y_3^k + μ_3^k (D - A^{k+1} - E^{k+1})
μ_1^{k+1} = ρ_1 μ_1^k,  μ_2^{k+1} = ρ_2 μ_2^k,  μ_3^{k+1} = ρ_3 μ_3^k    (3)

where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of the variables B, C, A and E that minimize the objective function, ρ_1, ρ_2 and ρ_3 are multiplicative factors, and k is the iteration count; the iterative solution then proceeds by the following steps:
1) solving B^{k+1}: B^{k+1} is found with the accelerated proximal gradient algorithm;
removing the terms irrelevant to B from the objective for B in (3) gives the equation:

min_B γ_B ||W_b ⊙ B||_1 + ⟨Y_1^k, A^k - Φ_c B⟩ + (μ_1^k/2)||A^k - Φ_c B||_F^2    (4)

a second-order function is constructed by Taylor expansion to approximate (4), and the original equation is solved through this second-order function; letting f(Z) = ⟨Y_1^k, A^k - Φ_c Z⟩ + (μ_1^k/2)||A^k - Φ_c Z||_F^2 and introducing the variable Z, the final solution is:

B^{k+1} = B_{j+1}^k = soft(U_{j+1}, (γ_B/L_f) W_b)    (5)

where soft(·,·) is the shrink operator, U_{j+1} = Z_j - ∇f(Z_j)/L_f with ∇f(Z_j) the gradient of f, and L_f is a constant equal to the Lipschitz constant μ_1^k ||Φ_c||_2^2 of ∇f; the variable Z_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z_{j+1} = B_{j+1}^k + ((t_j - 1)/t_{j+1}) (B_{j+1}^k - B_j^k)    (6)

where t_j is a sequence of constants and j is the variable's iteration count;
2) solving C^{k+1}: C^{k+1} is found with the accelerated proximal gradient algorithm;
removing the terms irrelevant to C from the objective for C in (3) gives the equation:

min_C γ_C ||W_c ⊙ C||_1 + ⟨Y_2^k, (A^k)^T - Φ_r C⟩ + (μ_2^k/2)||(A^k)^T - Φ_r C||_F^2    (7)

a second-order function is constructed by Taylor expansion to approximate (7), and the original equation is solved through this second-order function; letting f̃(Z̃) = ⟨Y_2^k, (A^k)^T - Φ_r Z̃⟩ + (μ_2^k/2)||(A^k)^T - Φ_r Z̃||_F^2 and introducing the variable Z̃, the final solution is:

C^{k+1} = C_{j+1}^k = soft(Ũ_{j+1}, (γ_C/L_f) W_c)    (8)

where soft(·,·) is the shrink operator, Ũ_{j+1} = Z̃_j - ∇f̃(Z̃_j)/L_f with ∇f̃(Z̃_j) the gradient of f̃, and L_f is a constant equal to the Lipschitz constant μ_2^k ||Φ_r||_2^2 of ∇f̃; the variable Z̃_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z̃_{j+1} = C_{j+1}^k + ((t_j - 1)/t_{j+1}) (C_{j+1}^k - C_j^k)    (9)

where t_j is a sequence of constants and j is the variable's iteration count;
3) solving A^{k+1}: A^{k+1} is found with Singular Value Thresholding (SVT);
removing the terms independent of A from the objective for A in (3) and completing the square gives:

min_A tr(W_a Σ) + ((μ_1^k + μ_2^k + μ_3^k)/2)||A - Q^{k+1}||_F^2    (10)

where Q^{k+1} = [μ_1^k Φ_c B^{k+1} + μ_2^k (Φ_r C^{k+1})^T + μ_3^k (D - E^k) - Y_1^k - (Y_2^k)^T + Y_3^k] / (μ_1^k + μ_2^k + μ_3^k); solving Q^{k+1} with the singular value thresholding method yields:

A^{k+1} = H^{k+1} soft(Σ^{k+1}, W_a/(μ_1^k + μ_2^k + μ_3^k)) (V^{k+1})^T    (11)

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1};
4) solving E^{k+1}: the solution of E^{k+1} consists of two parts;
within the observation space Ω, the value of E is 0; outside the observation space Ω, i.e., in the complementary space Ω̄, the solution follows from first-order optimality; combining the two parts gives the final solution of E:

E^{k+1} = P_Ω(0) + P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k)    (12)
5) repeating steps 1), 2), 3) and 4) until the algorithm converges; the iteration results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the results A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem without reweighting, where l is the reweighting count;
6) updating the weight matrices W_a, W_b and W_c:
To counteract the influence of signal amplitude on the nuclear norm term and the l1 norm terms, a reweighting scheme is introduced: the weight matrices W_a, W_b and W_c are updated iteratively from the amplitudes of the currently estimated singular value matrix Σ^{(l)} and coefficient matrices B^{(l)} and C^{(l)} by the inverse proportion rule:

W_a^{(l+1)}(î, î) = 1/(σ_î^{(l)} + ε),  W_b^{(l+1)}(î, ĵ) = 1/(|B^{(l)}(î, ĵ)| + ε),  W_c^{(l+1)}(î, ĵ) = 1/(|C^{(l)}(î, ĵ)| + ε)    (13)

where (î, ĵ) are the position coordinates of an entry and ε is an arbitrarily small positive number.
7) Repeating steps 1)-6) until the algorithm converges; the iteration result A^{(l)} is then the final solution A of the original problem.
The technical characteristics and effects of the invention:
Aiming at the row-column missing image filling problem, the method solves it by introducing a separable two-dimensional sparse prior. The invention has the following characteristics:
1. Sub-problems are solved with the augmented Lagrange multiplier method (ALM), the alternating direction method (ADM), the accelerated proximal gradient algorithm and the singular value thresholding method, integrating the advantages of existing algorithms.
2. Sparsely representing the columns and rows of an image with a column dictionary and a row dictionary is more efficient than using conventional block dictionaries.
3. Low-rank matrix reconstruction theory and sparse representation theory are combined: dictionary learning is introduced into the traditional low-rank matrix reconstruction model, giving a joint low-rank and separable two-dimensional sparse prior, so that images with missing rows and columns can be accurately filled.
4. The joint low-rank and sparse constraint on the damaged image improves filling performance: not only row-column missing pixels but also randomly missing pixels can be filled more accurately.
Drawings
The above advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is the original ground-truth image without missing pixels;
FIG. 3 shows the damaged images with row-column and random missing pixels, black indicating missing pixels; the total missing rates from left to right are: (1) 10%; (2) 20%; (3) 30%; (4) 50%;
FIG. 4 shows the filling results of the method of the present invention on the four missing rates: (1) 10% missing, PSNR 40.79 dB; (2) 20% missing, PSNR 37.32 dB; (3) 30% missing, PSNR 35.69 dB; (4) 50% missing, PSNR 32.23 dB.
Detailed Description
The method for filling a row-column missing image based on low-rank matrix reconstruction and sparse representation according to the present invention is described in detail below with reference to the embodiments and the accompanying drawings.
The invention combines low-rank matrix reconstruction with sparse representation, introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model, and constrains the missing image with a joint low-rank and separable two-dimensional sparse prior, thereby solving the row-column missing image filling problem that existing algorithms cannot handle. The method comprises the following steps:
1) Considering the low-rank property of natural images, a low-rank prior is introduced based on low-rank matrix reconstruction theory to constrain the latent image; meanwhile, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, a separable two-dimensional sparse prior is introduced based on sparse representation theory; accordingly, based on the joint low-rank and separable two-dimensional sparse priors, the image filling problem with missing rows and columns is formulated as solving the following constrained optimization equation:

min_{A,B,C,E} tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1  s.t.  A = Φ_c B,  A^T = Φ_r C,  D = A + E,  P_Ω(E) = 0    (1)

where tr(·) is the trace of a matrix and tr(W_a Σ) is the weighted low-rank prior term; ⊙ denotes the entrywise (Hadamard) product of two matrices; ||·||_1 is the l1 norm of a matrix, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the set of known pixels of the row-column-deficient observation matrix D; P_Ω(·) is the projection operator onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C; and E contains the missing pixels of the observation matrix D;
11) The invention converts the constrained optimization problem (1) into an unconstrained optimization problem with the augmented Lagrange multiplier method (ALM) and solves it; the augmented Lagrange equation is:

L_{μ_1,μ_2,μ_3}(A, B, C, E, Y_1, Y_2, Y_3) = tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1 + ⟨Y_1, A - Φ_c B⟩ + (μ_1/2)||A - Φ_c B||_F^2 + ⟨Y_2, A^T - Φ_r C⟩ + (μ_2/2)||A^T - Φ_r C||_F^2 + ⟨Y_3, D - A - E⟩ + (μ_3/2)||D - A - E||_F^2    (2)

where Y_1, Y_2 and Y_3 are the Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ is the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;
12) The solving process is: train the column dictionary and row dictionary Φ_c and Φ_r; initialize the weight matrices W_a, W_b and W_c; and alternately update the coefficient matrices B and C, the recovered matrix A, the missing pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c.
2) Training the dictionaries Φ_c and Φ_r: the column and row dictionaries Φ_c and Φ_r are trained with an online learning algorithm on a high-quality image dataset.
21) Construct a column dictionary Φ_c such that the matrix A can be sparsely represented by it, i.e., A = Φ_c B, where the coefficient matrix B is sparse; construct a row dictionary Φ_r such that the transpose of A can be sparsely represented by it, i.e., A^T = Φ_r C, where the coefficient matrix C is sparse. The invention trains the column and row dictionaries Φ_c and Φ_r on the Kodak image set using the Online Learning algorithm.
22) The relevant parameters of dictionary training are set as follows: the number of rows of the matrix A to be reconstructed equals the dimension m of the atoms in Φ_c, i.e., A and Φ_c both have m rows; the number of columns of A equals the dimension n of the atoms in Φ_r, i.e., A has n columns and Φ_r has n rows. The trained dictionaries Φ_c and Φ_r are overcomplete, i.e., the number of columns of each dictionary must exceed its number of rows.
3) Initializing the weight matrices W_a, W_b and W_c:
With the reweighting count l = 0, the weight matrices W_a^{(0)}, W_b^{(0)} and W_c^{(0)} are all assigned initial values of 1 (all-ones matrices), meaning the first iteration is not reweighted.
4) Equation (2) is converted with the Alternating Direction Method (ADM) into the following sequence for iterative solution:

B^{k+1} = argmin_B L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B, C^k, E^k, Y_1^k, Y_2^k, Y_3^k)
C^{k+1} = argmin_C L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B^{k+1}, C, E^k, Y_1^k, Y_2^k, Y_3^k)
A^{k+1} = argmin_A L_{μ_1^k,μ_2^k,μ_3^k}(A, B^{k+1}, C^{k+1}, E^k, Y_1^k, Y_2^k, Y_3^k)
E^{k+1} = argmin_{P_Ω(E)=0} L_{μ_1^k,μ_2^k,μ_3^k}(A^{k+1}, B^{k+1}, C^{k+1}, E, Y_1^k, Y_2^k, Y_3^k)
Y_1^{k+1} = Y_1^k + μ_1^k (A^{k+1} - Φ_c B^{k+1})
Y_2^{k+1} = Y_2^k + μ_2^k ((A^{k+1})^T - Φ_r C^{k+1})
Y_3^{k+1} = Y_3^k + μ_3^k (D - A^{k+1} - E^{k+1})
μ_1^{k+1} = ρ_1 μ_1^k,  μ_2^{k+1} = ρ_2 μ_2^k,  μ_3^{k+1} = ρ_3 μ_3^k    (3)

where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of the variables B, C, A and E that minimize the objective function, ρ_1, ρ_2 and ρ_3 are multiplicative factors, and k is the iteration count. Initial values are set for all parameters, and the iterative solution then proceeds by the methods of steps 5), 6), 7) and 8) to obtain the result without reweighting.
5) Solving B^{k+1}: B^{k+1} is found with the accelerated proximal gradient algorithm.
51) Removing the terms irrelevant to B from the objective for B in (3) gives the equation:

min_B γ_B ||W_b ⊙ B||_1 + ⟨Y_1^k, A^k - Φ_c B⟩ + (μ_1^k/2)||A^k - Φ_c B||_F^2    (4)

A second-order function is constructed by Taylor expansion to approximate the above formula, and the original equation is then solved through this second-order function. Let f(Z) = ⟨Y_1^k, A^k - Φ_c Z⟩ + (μ_1^k/2)||A^k - Φ_c Z||_F^2, introduce the variable Z, and define the function:

Q(B, Z) = f(Z) + ⟨∇f(Z), B - Z⟩ + (L_f/2)||B - Z||_F^2 + γ_B ||W_b ⊙ B||_1

where ∇f(Z) is the gradient of f(Z) and L_f is a constant, taken as the Lipschitz constant μ_1^k ||Φ_c||_2^2 of ∇f, which ensures F(B) ≤ Q(B, Z) for all B and Z, F being the objective of (4).
52) Through the above transformation, equation (4) becomes minimizing Q(B, Z_j); completing the square gives the form:

B_{j+1}^k = argmin_B (L_f/2)||B - U_{j+1}||_F^2 + γ_B ||W_b ⊙ B||_1,  U_{j+1} = Z_j - ∇f(Z_j)/L_f

The variable Z_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z_{j+1} = B_{j+1}^k + ((t_j - 1)/t_{j+1}) (B_{j+1}^k - B_j^k)    (6)

where t_j is a sequence of constants and j is the variable's iteration count. Solving with the shrink operator gives:

B^{k+1} = B_{j+1}^k = soft(U_{j+1}, (γ_B/L_f) W_b)    (5)

where soft(·,·) is the shrink operator.
6) Solving C^{k+1}: C^{k+1} is found with the accelerated proximal gradient algorithm.
61) Removing the terms irrelevant to C from the objective for C in (3) gives the equation:

min_C γ_C ||W_c ⊙ C||_1 + ⟨Y_2^k, (A^k)^T - Φ_r C⟩ + (μ_2^k/2)||(A^k)^T - Φ_r C||_F^2    (7)

A second-order function is constructed by Taylor expansion to approximate the above formula, and the original equation is then solved through this second-order function. Let f̃(Z̃) = ⟨Y_2^k, (A^k)^T - Φ_r Z̃⟩ + (μ_2^k/2)||(A^k)^T - Φ_r Z̃||_F^2, introduce the variable Z̃, and define the function:

Q̃(C, Z̃) = f̃(Z̃) + ⟨∇f̃(Z̃), C - Z̃⟩ + (L_f/2)||C - Z̃||_F^2 + γ_C ||W_c ⊙ C||_1

where ∇f̃(Z̃) is the gradient of f̃(Z̃) and L_f is a constant, taken as the Lipschitz constant μ_2^k ||Φ_r||_2^2 of ∇f̃, which ensures F̃(C) ≤ Q̃(C, Z̃) for all C and Z̃, F̃ being the objective of (7).
62) Through the above transformation, equation (7) becomes minimizing Q̃(C, Z̃_j); completing the square gives the form:

C_{j+1}^k = argmin_C (L_f/2)||C - Ũ_{j+1}||_F^2 + γ_C ||W_c ⊙ C||_1,  Ũ_{j+1} = Z̃_j - ∇f̃(Z̃_j)/L_f

The variable Z̃_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z̃_{j+1} = C_{j+1}^k + ((t_j - 1)/t_{j+1}) (C_{j+1}^k - C_j^k)    (9)

where t_j is a sequence of constants and j is the variable's iteration count. Solving with the shrink operator gives:

C^{k+1} = C_{j+1}^k = soft(Ũ_{j+1}, (γ_C/L_f) W_c)    (8)

where soft(·,·) is the shrink operator.
7) Solving A^{k+1}: A^{k+1} is found with Singular Value Thresholding (SVT).
Removing the terms independent of A from the objective for A in (3) gives:

min_A tr(W_a Σ) + ⟨Y_1^k, A - Φ_c B^{k+1}⟩ + (μ_1^k/2)||A - Φ_c B^{k+1}||_F^2 + ⟨Y_2^k, A^T - Φ_r C^{k+1}⟩ + (μ_2^k/2)||A^T - Φ_r C^{k+1}||_F^2 + ⟨Y_3^k, D - A - E^k⟩ + (μ_3^k/2)||D - A - E^k||_F^2

Completing the square rewrites the above formula as:

min_A tr(W_a Σ) + ((μ_1^k + μ_2^k + μ_3^k)/2)||A - Q^{k+1}||_F^2    (10)

where Q^{k+1} = [μ_1^k Φ_c B^{k+1} + μ_2^k (Φ_r C^{k+1})^T + μ_3^k (D - E^k) - Y_1^k - (Y_2^k)^T + Y_3^k] / (μ_1^k + μ_2^k + μ_3^k). Solving Q^{k+1} with the singular value thresholding method gives:

A^{k+1} = H^{k+1} soft(Σ^{k+1}, W_a/(μ_1^k + μ_2^k + μ_3^k)) (V^{k+1})^T    (11)

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1} and Σ^{k+1} is its matrix of singular values;
8) Solving E^{k+1}: the solution of E^{k+1} consists of two parts.
81) Within the observation space Ω, the value of E is 0, i.e., P_Ω(E) = 0.
82) Outside the observation space Ω, i.e., in the complementary space Ω̄, the sub-problem for E^{k+1} is:

min_E ⟨Y_3^k, D - A^{k+1} - E⟩ + (μ_3^k/2)||D - A^{k+1} - E||_F^2

and first-order optimality gives P_Ω̄(E^{k+1}) = P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k).
83) Combining the solutions inside and outside the spatial domain Ω gives the final solution of E:

E^{k+1} = P_Ω(0) + P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k)    (12)
9) Repeating steps 5), 6), 7) and 8) until the algorithm converges; the iteration results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the results A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem without reweighting. Here l is the reweighting count.
10) Updating the weight matrices W_a, W_b and W_c:
To counteract the influence of signal amplitude on the nuclear norm term and the l1 norm terms, a reweighting scheme is introduced: the weight matrices W_a, W_b and W_c are updated iteratively from the amplitudes of the currently estimated singular value matrix Σ^{(l)} and coefficient matrices B^{(l)} and C^{(l)} by the inverse proportion rule:

W_a^{(l+1)}(î, î) = 1/(σ_î^{(l)} + ε),  W_b^{(l+1)}(î, ĵ) = 1/(|B^{(l)}(î, ĵ)| + ε),  W_c^{(l+1)}(î, ĵ) = 1/(|C^{(l)}(î, ĵ)| + ε)    (13)

where (î, ĵ) are the position coordinates of an entry and ε is an arbitrarily small positive number.
11) Repeating steps 4)-10) until the algorithm converges; the iteration result A^{(l)} is then the final solution A of the original problem.
The method combines low-rank matrix reconstruction with sparse representation theory, introduces a dictionary learning model on the basis of the traditional low-rank matrix reconstruction model, and constrains the missing image with a joint low-rank and separable two-dimensional sparse prior, thereby solving a problem the prior art cannot handle, namely filling images with missing rows and columns (the experimental flow chart is shown in FIG. 1). The detailed description of the embodiment with reference to the drawings is as follows:
1) The experiment uses a 321 × 481-pixel image randomly selected from the BSDS500 dataset (FIG. 2) as the original image, from which four damaged images with total missing rates of 10%, 20%, 30% and 50% are constructed for testing (FIG. 3), covering both row-column and random missing pixels. Since the invention uses dictionaries with a fixed atom size of 100, the image to be filled is first divided into 100 × 100 image blocks by sliding a window from top to bottom and left to right, with a sliding-window step of 90 pixels. The 100 × 100 blocks are filled in turn and finally merged into a filling result of the original 321 × 481 size. For the first image block, represented by the matrix D, the row-column missing filling problem for the current block is formulated as solving the following constrained optimization equation:
min_{A,B,C,E} tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1  s.t.  A = Φ_c B,  A^T = Φ_r C,  D = A + E,  P_Ω(E) = 0    (1)

where tr(·) is the trace of a matrix and tr(W_a Σ) is the weighted low-rank prior term; ⊙ denotes the entrywise (Hadamard) product of two matrices; ||·||_1 is the l1 norm of a matrix, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the set of known pixels of the row-column-deficient observation matrix D; P_Ω(·) is the projection operator onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C; and E contains the missing pixels of the observation matrix D;
11) The invention converts the constrained optimization problem (1) into an unconstrained optimization problem with the augmented Lagrange multiplier method (ALM) and solves it; the augmented Lagrange equation is:

L_{μ_1,μ_2,μ_3}(A, B, C, E, Y_1, Y_2, Y_3) = tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1 + ⟨Y_1, A - Φ_c B⟩ + (μ_1/2)||A - Φ_c B||_F^2 + ⟨Y_2, A^T - Φ_r C⟩ + (μ_2/2)||A^T - Φ_r C||_F^2 + ⟨Y_3, D - A - E⟩ + (μ_3/2)||D - A - E||_F^2    (2)

where Y_1, Y_2 and Y_3 are the Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ is the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;
12) The solving process is: train the column dictionary and row dictionary Φ_c and Φ_r; initialize the weight matrices W_a, W_b and W_c; and alternately update the coefficient matrices B and C, the recovered matrix A, the missing pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c.
2) Training the dictionaries Φ_c and Φ_r: the column and row dictionaries Φ_c and Φ_r are trained with an online learning algorithm on a high-quality image dataset.
21) Construct a column dictionary Φ_c such that the matrix A can be sparsely represented by it, i.e., A = Φ_c B, where the coefficient matrix B is sparse; construct a row dictionary Φ_r such that the transpose of A can be sparsely represented by it, i.e., A^T = Φ_r C, where the coefficient matrix C is sparse. The invention randomly selects 230000 pixel columns of size 100 × 1 from all images of the Kodak image set as training data and trains the column and row dictionaries Φ_c and Φ_r with the Online Learning algorithm.
22) The relevant parameters of dictionary training are set as follows: the number of rows of the matrix A to be reconstructed equals the dimension m of the atoms in Φ_c, i.e., A and Φ_c both have m rows; m = 100 in the experiment. The number of columns of A equals the dimension n of the atoms in Φ_r, i.e., A has n columns and Φ_r has n rows; n = 100 in the experiment. The trained dictionaries Φ_c and Φ_r are overcomplete, i.e., the number of columns of each dictionary must exceed its number of rows; in the experiment both the row dictionary and the column dictionary have 400 columns, so Φ_c and Φ_r are both of size 100 × 400.
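As an illustration of step 2), the following Python sketch trains a 100 × 400 overcomplete dictionary with scikit-learn's MiniBatchDictionaryLearning, an implementation of the online dictionary learning of Mairal et al.; the patent does not prescribe a specific implementation, and the variable names and parameter choices here are assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_dictionary(samples, n_atoms=400):
    """samples: (n_samples, 100) array of 100x1 pixel columns (or image rows)."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                          random_state=0)
    learner.fit(samples)
    return learner.components_.T     # 100 x 400 overcomplete dictionary

# cols / rows would hold the 230000 sampled 100x1 columns (resp. rows) from
# the Kodak images; the names are illustrative.
# Phi_c = train_dictionary(cols)     # column dictionary, 100 x 400
# Phi_r = train_dictionary(rows)     # row dictionary,    100 x 400
```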
3) Initializing the weight matrices W_a, W_b and W_c:
With the reweighting count l = 0, the weight matrices W_a^{(0)}, W_b^{(0)} and W_c^{(0)} are all assigned initial values of 1 (all-ones matrices), meaning the first iteration is not reweighted.
4) Equation (2) is converted with the Alternating Direction Method (ADM) into the following sequence for iterative solution:

B^{k+1} = argmin_B L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B, C^k, E^k, Y_1^k, Y_2^k, Y_3^k)
C^{k+1} = argmin_C L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B^{k+1}, C, E^k, Y_1^k, Y_2^k, Y_3^k)
A^{k+1} = argmin_A L_{μ_1^k,μ_2^k,μ_3^k}(A, B^{k+1}, C^{k+1}, E^k, Y_1^k, Y_2^k, Y_3^k)
E^{k+1} = argmin_{P_Ω(E)=0} L_{μ_1^k,μ_2^k,μ_3^k}(A^{k+1}, B^{k+1}, C^{k+1}, E, Y_1^k, Y_2^k, Y_3^k)
Y_1^{k+1} = Y_1^k + μ_1^k (A^{k+1} - Φ_c B^{k+1})
Y_2^{k+1} = Y_2^k + μ_2^k ((A^{k+1})^T - Φ_r C^{k+1})
Y_3^{k+1} = Y_3^k + μ_3^k (D - A^{k+1} - E^{k+1})
μ_1^{k+1} = ρ_1 μ_1^k,  μ_2^{k+1} = ρ_2 μ_2^k,  μ_3^{k+1} = ρ_3 μ_3^k    (3)

where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of the variables B, C, A and E that minimize the objective function, ρ_1, ρ_2 and ρ_3 are multiplicative factors, and k is the iteration count. Initial values are set for all parameters, and the iterative solution then proceeds by the methods of steps 5), 6), 7) and 8) to obtain the result without reweighting. The initial values in the experiment are set as: l = 0; k = 1; ρ_1 = ρ_2 = ρ_3 = 1.1; A^1 = B^1 = C^1 = E^1 = 0.
5) Solving B^{k+1}: B^{k+1} is found with the accelerated proximal gradient algorithm.
51) Removing the terms irrelevant to B from the objective for B in (3) gives the equation:

min_B γ_B ||W_b ⊙ B||_1 + ⟨Y_1^k, A^k - Φ_c B⟩ + (μ_1^k/2)||A^k - Φ_c B||_F^2    (4)

A second-order function is constructed by Taylor expansion to approximate the above formula, and the original equation is then solved through this second-order function. Let f(Z) = ⟨Y_1^k, A^k - Φ_c Z⟩ + (μ_1^k/2)||A^k - Φ_c Z||_F^2, introduce the variable Z, and define the function:

Q(B, Z) = f(Z) + ⟨∇f(Z), B - Z⟩ + (L_f/2)||B - Z||_F^2 + γ_B ||W_b ⊙ B||_1

where ∇f(Z) is the gradient of f(Z) and L_f is a constant, taken as the Lipschitz constant μ_1^k ||Φ_c||_2^2 of ∇f, which ensures F(B) ≤ Q(B, Z) for all B and Z, F being the objective of (4).
52) Through the above transformation, equation (4) becomes minimizing Q(B, Z_j); completing the square gives the form:

B_{j+1}^k = argmin_B (L_f/2)||B - U_{j+1}||_F^2 + γ_B ||W_b ⊙ B||_1,  U_{j+1} = Z_j - ∇f(Z_j)/L_f

The variable Z_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z_{j+1} = B_{j+1}^k + ((t_j - 1)/t_{j+1}) (B_{j+1}^k - B_j^k)    (6)

where t_j is a sequence of constants and j is the variable's iteration count. After the transformation, the initial values of the parameters are set as: j = 1; t_1 = 1; Z_1 = 0. Iterating to convergence and solving with the shrink operator gives:

B^{k+1} = B_{j+1}^k = soft(U_{j+1}, (γ_B/L_f) W_b)    (5)

where soft(·,·) is the shrink operator.
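A Python sketch of this accelerated proximal gradient loop for the B sub-problem is given below; the gradient and Lipschitz constant follow the definitions above, while the function names, fixed iteration cap and inputs are assumptions:

```python
import numpy as np

def soft(X, T):
    """Entrywise shrink operator: soft(X, T) = sign(X) * max(|X| - T, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

def solve_B(A, Phi_c, Y1, mu1, gamma_B, Wb, n_iter=50):
    Lf = mu1 * np.linalg.norm(Phi_c, 2) ** 2        # Lipschitz constant of grad f
    K, n = Phi_c.shape[1], A.shape[1]
    B_prev = B = Z = np.zeros((K, n))
    t = 1.0                                         # t_1 = 1, Z_1 = 0
    for _ in range(n_iter):
        grad = -Phi_c.T @ Y1 - mu1 * Phi_c.T @ (A - Phi_c @ Z)   # grad f(Z_j)
        U = Z - grad / Lf                                        # U_{j+1}
        B_prev, B = B, soft(U, (gamma_B / Lf) * Wb)              # rule (5)
        t_next = (1.0 + np.sqrt(4.0 * t * t + 1.0)) / 2.0        # rule (6)
        Z = B + ((t - 1.0) / t_next) * (B - B_prev)
        t = t_next
    return B
```

The C update of step 6) below follows the same pattern with Φ_r, Y_2, μ_2^k, γ_C and W_c substituted, so a separate sketch is omitted.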
6) Solving C^{k+1}: C^{k+1} is found with the accelerated proximal gradient algorithm.
61) Removing the terms irrelevant to C from the objective for C in (3) gives the equation:

min_C γ_C ||W_c ⊙ C||_1 + ⟨Y_2^k, (A^k)^T - Φ_r C⟩ + (μ_2^k/2)||(A^k)^T - Φ_r C||_F^2    (7)

A second-order function is constructed by Taylor expansion to approximate the above formula, and the original equation is then solved through this second-order function. Let f̃(Z̃) = ⟨Y_2^k, (A^k)^T - Φ_r Z̃⟩ + (μ_2^k/2)||(A^k)^T - Φ_r Z̃||_F^2, introduce the variable Z̃, and define the function:

Q̃(C, Z̃) = f̃(Z̃) + ⟨∇f̃(Z̃), C - Z̃⟩ + (L_f/2)||C - Z̃||_F^2 + γ_C ||W_c ⊙ C||_1

where ∇f̃(Z̃) is the gradient of f̃(Z̃) and L_f is a constant, taken as the Lipschitz constant μ_2^k ||Φ_r||_2^2 of ∇f̃, which ensures F̃(C) ≤ Q̃(C, Z̃) for all C and Z̃, F̃ being the objective of (7).
62) Through the above transformation, equation (7) becomes minimizing Q̃(C, Z̃_j); completing the square gives the form:

C_{j+1}^k = argmin_C (L_f/2)||C - Ũ_{j+1}||_F^2 + γ_C ||W_c ⊙ C||_1,  Ũ_{j+1} = Z̃_j - ∇f̃(Z̃_j)/L_f

The variable Z̃_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z̃_{j+1} = C_{j+1}^k + ((t_j - 1)/t_{j+1}) (C_{j+1}^k - C_j^k)    (9)

where t_j is a sequence of constants and j is the variable's iteration count. After the transformation, the initial values of the parameters are set as: j = 1; t_1 = 1; Z̃_1 = 0. Iterating to convergence and solving with the shrink operator gives:

C^{k+1} = C_{j+1}^k = soft(Ũ_{j+1}, (γ_C/L_f) W_c)    (8)

where soft(·,·) is the shrink operator.
7) Solving A^{k+1}: A^{k+1} is found with Singular Value Thresholding (SVT).
Removing the terms independent of A from the objective for A in (3) gives:

min_A tr(W_a Σ) + ⟨Y_1^k, A - Φ_c B^{k+1}⟩ + (μ_1^k/2)||A - Φ_c B^{k+1}||_F^2 + ⟨Y_2^k, A^T - Φ_r C^{k+1}⟩ + (μ_2^k/2)||A^T - Φ_r C^{k+1}||_F^2 + ⟨Y_3^k, D - A - E^k⟩ + (μ_3^k/2)||D - A - E^k||_F^2

Completing the square rewrites the above formula as:

min_A tr(W_a Σ) + ((μ_1^k + μ_2^k + μ_3^k)/2)||A - Q^{k+1}||_F^2    (10)

where Q^{k+1} = [μ_1^k Φ_c B^{k+1} + μ_2^k (Φ_r C^{k+1})^T + μ_3^k (D - E^k) - Y_1^k - (Y_2^k)^T + Y_3^k] / (μ_1^k + μ_2^k + μ_3^k). Solving Q^{k+1} with the singular value thresholding method gives:

A^{k+1} = H^{k+1} soft(Σ^{k+1}, W_a/(μ_1^k + μ_2^k + μ_3^k)) (V^{k+1})^T    (11)

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1} and Σ^{k+1} is its matrix of singular values;
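A Python sketch of this weighted singular value thresholding step (equation (11)) is shown below; the assembly of Q^{k+1} follows equation (10), and the helper names are illustrative:

```python
import numpy as np

def soft(X, T):
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

def solve_A(Q, wa, mu_sum):
    """A = H soft(Sigma, wa / mu_sum) V^T for Q = H Sigma V^T; wa holds the
    diagonal of the weight matrix W_a."""
    H, sigma, Vt = np.linalg.svd(Q, full_matrices=False)
    sigma_shr = soft(sigma, wa / mu_sum)     # shrink each singular value
    return (H * sigma_shr) @ Vt, sigma_shr

# Q assembled as in equation (10), e.g.:
# mu_sum = mu1 + mu2 + mu3
# Q = (mu1 * Phi_c @ B - Y1 + mu2 * (Phi_r @ C).T - Y2.T
#      + mu3 * (D - E) + Y3) / mu_sum
# A, sigma = solve_A(Q, wa, mu_sum)
```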
8) Solving E^{k+1}: the solution of E^{k+1} consists of two parts.
81) Within the observation space Ω, the value of E is 0, i.e., P_Ω(E) = 0.
82) Outside the observation space Ω, i.e., in the complementary space Ω̄, the sub-problem for E^{k+1} is:

min_E ⟨Y_3^k, D - A^{k+1} - E⟩ + (μ_3^k/2)||D - A^{k+1} - E||_F^2

and first-order optimality gives P_Ω̄(E^{k+1}) = P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k).
83) Combining the solutions inside and outside the spatial domain Ω gives the final solution of E:

E^{k+1} = P_Ω(0) + P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k)    (12)
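In Python, the closed-form E update of equation (12) is a one-liner plus a projection; mask denotes the boolean observation set Ω and the names are illustrative:

```python
import numpy as np

def solve_E(D, A, Y3, mu3, mask):
    E = D - A + Y3 / mu3      # first-order optimality outside Omega
    E[mask] = 0.0             # P_Omega(E) = 0 on the observed pixels
    return E
```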
9) Repeating steps 5), 6), 7) and 8) until the algorithm converges; the iteration results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are then the results A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem without reweighting. Here l is the reweighting count.
10) Updating the weight matrices W_a, W_b and W_c:
To counteract the influence of signal amplitude on the nuclear norm term and the l1 norm terms, a reweighting scheme is introduced: the weight matrices W_a, W_b and W_c are updated iteratively from the amplitudes of the currently estimated singular value matrix Σ^{(l)} and coefficient matrices B^{(l)} and C^{(l)} by the inverse proportion rule:

W_a^{(l+1)}(î, î) = 1/(σ_î^{(l)} + ε),  W_b^{(l+1)}(î, ĵ) = 1/(|B^{(l)}(î, ĵ)| + ε),  W_c^{(l+1)}(î, ĵ) = 1/(|C^{(l)}(î, ĵ)| + ε)    (13)

where (î, ĵ) are the position coordinates of an entry and ε is an arbitrarily small positive number; ε = 0.001 in the experiment.
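The inverse-proportional reweighting of equation (13) is equally direct in Python; sigma is the vector of current singular values and eps = 0.001 as in the experiment:

```python
import numpy as np

def update_weights(sigma, B, C, eps=1e-3):
    Wa = 1.0 / (sigma + eps)         # diagonal weights for the singular values
    Wb = 1.0 / (np.abs(B) + eps)     # entrywise weights for coefficient matrix B
    Wc = 1.0 / (np.abs(C) + eps)     # entrywise weights for coefficient matrix C
    return Wa, Wb, Wc
```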
11) Repeating steps 4)-10) until the algorithm converges; the iteration result A^{(l)} is then the final solution A of the original problem.
12) The remaining image blocks obtained in step 1) are processed in turn until all are filled, and the blocks are then merged into the final filling result (FIG. 4). During merging, pixels that lie in overlapping windows and are filled multiple times take the average of their filled values as the final value.
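The blockwise processing of steps 1) and 12) can be sketched as follows; fill_block stands for the per-block solver described above, and border handling is simplified (the last window of a 321 × 481 image does not land exactly on the edge, so a full implementation would clamp it to the border):

```python
import numpy as np

def fill_image(img, fill_block, patch=100, stride=90):
    h, w = img.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            acc[i:i + patch, j:j + patch] += fill_block(img[i:i + patch, j:j + patch])
            cnt[i:i + patch, j:j + patch] += 1
    # average pixels filled by several overlapping windows
    return acc / np.maximum(cnt, 1)
```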
Experimental results: the invention adopts PSNR (peak signal-to-noise ratio), in dB, as the measure of the image filling result:

PSNR = 10 log_10( (2^n - 1)^2 · w · h / Σ_{x,y} |I(x,y) - I_0(x,y)|^2 )

where I is the filled image, I_0 is the true image without missing pixels, w is the width and h the height of the image, (x, y) indexes the pixel in the x-th row and y-th column, Σ denotes summation, and |·| is the absolute value. The experiment takes n = 8. The filling results of the four test images with different degrees of row-column missing are marked in FIG. 4.
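For reference, the PSNR measure with n = 8 (peak value 255) can be computed as:

```python
import numpy as np

def psnr(I, I0, n_bits=8):
    peak = 2.0 ** n_bits - 1.0
    mse = np.mean((np.asarray(I, float) - np.asarray(I0, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```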

Claims (5)

1. A row-column missing image filling method based on low-rank matrix reconstruction and sparse representation, characterized by: introducing a low-rank prior based on low-rank matrix reconstruction theory to constrain the latent image; meanwhile, considering that each column of a row-missing image can be sparsely represented by a column dictionary and each row of a column-missing image can be sparsely represented by a row dictionary, introducing a separable two-dimensional sparse prior based on sparse representation theory; and, based on the joint low-rank and separable two-dimensional sparse priors, formulating the image filling problem with missing rows and columns as a constrained optimization equation and solving it, thereby achieving row-column missing image filling.
2. The row-column missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 1, characterized in that the step of formulating the image filling problem with missing rows and columns as solving a constrained optimization equation is detailed as:
1) the image filling problem with missing rows and columns is formulated as solving the following constrained optimization equation:

min_{A,B,C,E} tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1  s.t.  A = Φ_c B,  A^T = Φ_r C,  D = A + E,  P_Ω(E) = 0    (1)

where tr(·) is the trace of a matrix and tr(W_a Σ) is the weighted low-rank prior term; ⊙ denotes the entrywise (Hadamard) product of two matrices; ||·||_1 is the l1 norm of a matrix, and the two l1-norm terms are the separable two-dimensional sparse prior terms; Ω is the observation space, i.e., the set of known pixels of the row-column-deficient observation matrix D; P_Ω(·) is the projection operator onto the spatial domain Ω; A is the filled matrix; Σ = diag([σ_1, σ_2, ..., σ_n]) is the diagonal matrix of the singular values of A in non-increasing order; W_a, W_b and W_c are the weight matrices of the weighted low-rank term and the separable two-dimensional sparse terms; γ_B and γ_C are the regularization coefficients of the separable two-dimensional sparse terms; Φ_c and Φ_r are the trained column dictionary and row dictionary, with corresponding coefficient matrices B and C; and E contains the missing pixels of the observation matrix D;
and the constrained optimization problem (1) is converted into an unconstrained optimization problem and solved with the augmented Lagrange multiplier method (ALM); the augmented Lagrange equation is:

L_{μ_1,μ_2,μ_3}(A, B, C, E, Y_1, Y_2, Y_3) = tr(W_a Σ) + γ_B ||W_b ⊙ B||_1 + γ_C ||W_c ⊙ C||_1 + ⟨Y_1, A - Φ_c B⟩ + (μ_1/2)||A - Φ_c B||_F^2 + ⟨Y_2, A^T - Φ_r C⟩ + (μ_2/2)||A^T - Φ_r C||_F^2 + ⟨Y_3, D - A - E⟩ + (μ_3/2)||D - A - E||_F^2    (2)

where Y_1, Y_2 and Y_3 are the Lagrange multiplier matrices, μ_1, μ_2 and μ_3 are penalty factors, ⟨·,·⟩ is the inner product of two matrices, and ||·||_F is the Frobenius norm of a matrix;
the solving process is: train the column dictionary and row dictionary Φ_c and Φ_r; initialize the weight matrices W_a, W_b and W_c; and alternately update the coefficient matrices B and C, the recovered matrix A, the missing pixel matrix E, the Lagrange multiplier matrices Y_1, Y_2 and Y_3, the penalty factors μ_1, μ_2 and μ_3, and the weight matrices W_a, W_b and W_c until the algorithm converges, at which point the iteration result A^{(l)} is the final solution A of the original problem.
3. The row-column missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that the dictionaries Φ_c and Φ_r are trained specifically by: training the column and row dictionaries Φ_c and Φ_r with an online learning algorithm on a high-quality image dataset.
4. The row-column missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that the weight matrices W_a, W_b and W_c are initialized specifically by: with the reweighting count l = 0, assigning the weight matrices W_a^{(0)}, W_b^{(0)} and W_c^{(0)} initial values of 1 (all-ones matrices), meaning the first iteration is not reweighted.
5. The row-column missing image filling method based on low-rank matrix reconstruction and sparse representation as claimed in claim 2, characterized in that equation (2) is converted with the alternating direction method ADM into the following sequence for iterative solution:
B^{k+1} = argmin_B L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B, C^k, E^k, Y_1^k, Y_2^k, Y_3^k)
C^{k+1} = argmin_C L_{μ_1^k,μ_2^k,μ_3^k}(A^k, B^{k+1}, C, E^k, Y_1^k, Y_2^k, Y_3^k)
A^{k+1} = argmin_A L_{μ_1^k,μ_2^k,μ_3^k}(A, B^{k+1}, C^{k+1}, E^k, Y_1^k, Y_2^k, Y_3^k)
E^{k+1} = argmin_{P_Ω(E)=0} L_{μ_1^k,μ_2^k,μ_3^k}(A^{k+1}, B^{k+1}, C^{k+1}, E, Y_1^k, Y_2^k, Y_3^k)
Y_1^{k+1} = Y_1^k + μ_1^k (A^{k+1} - Φ_c B^{k+1})
Y_2^{k+1} = Y_2^k + μ_2^k ((A^{k+1})^T - Φ_r C^{k+1})
Y_3^{k+1} = Y_3^k + μ_3^k (D - A^{k+1} - E^{k+1})
μ_1^{k+1} = ρ_1 μ_1^k,  μ_2^{k+1} = ρ_2 μ_2^k,  μ_3^{k+1} = ρ_3 μ_3^k    (3)
where B^{k+1}, C^{k+1}, A^{k+1} and E^{k+1} denote the values of the variables B, C, A and E that minimize the objective function, ρ_1, ρ_2 and ρ_3 are multiplicative factors, and k is the iteration count; the iterative solution then proceeds by the following steps:
1) solving B^{k+1}: B^{k+1} is found with the accelerated proximal gradient algorithm;
removing the terms irrelevant to B from the objective for B in (3) gives the equation:

min_B γ_B ||W_b ⊙ B||_1 + ⟨Y_1^k, A^k - Φ_c B⟩ + (μ_1^k/2)||A^k - Φ_c B||_F^2    (4)

a second-order function is constructed by Taylor expansion to approximate (4), and the original equation is solved through this second-order function; letting f(Z) = ⟨Y_1^k, A^k - Φ_c Z⟩ + (μ_1^k/2)||A^k - Φ_c Z||_F^2 and introducing the variable Z, the final solution is:

B^{k+1} = B_{j+1}^k = soft(U_{j+1}, (γ_B/L_f) W_b)    (5)

where soft(·,·) is the shrink operator, U_{j+1} = Z_j - ∇f(Z_j)/L_f with ∇f(Z_j) the gradient of f, and L_f is a constant equal to the Lipschitz constant μ_1^k ||Φ_c||_2^2 of ∇f; the variable Z_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z_{j+1} = B_{j+1}^k + ((t_j - 1)/t_{j+1}) (B_{j+1}^k - B_j^k)    (6)

where t_j is a sequence of constants and j is the variable's iteration count;
2) solving C^{k+1}: C^{k+1} is found with the accelerated proximal gradient algorithm;
removing the terms irrelevant to C from the objective for C in (3) gives the equation:

min_C γ_C ||W_c ⊙ C||_1 + ⟨Y_2^k, (A^k)^T - Φ_r C⟩ + (μ_2^k/2)||(A^k)^T - Φ_r C||_F^2    (7)

a second-order function is constructed by Taylor expansion to approximate (7), and the original equation is solved through this second-order function; letting f̃(Z̃) = ⟨Y_2^k, (A^k)^T - Φ_r Z̃⟩ + (μ_2^k/2)||(A^k)^T - Φ_r Z̃||_F^2 and introducing the variable Z̃, the final solution is:

C^{k+1} = C_{j+1}^k = soft(Ũ_{j+1}, (γ_C/L_f) W_c)    (8)

where soft(·,·) is the shrink operator, Ũ_{j+1} = Z̃_j - ∇f̃(Z̃_j)/L_f with ∇f̃(Z̃_j) the gradient of f̃, and L_f is a constant equal to the Lipschitz constant μ_2^k ||Φ_r||_2^2 of ∇f̃; the variable Z̃_j is updated by:

t_{j+1} = (1 + sqrt(4 t_j^2 + 1))/2,  Z̃_{j+1} = C_{j+1}^k + ((t_j - 1)/t_{j+1}) (C_{j+1}^k - C_j^k)    (9)

where t_j is a sequence of constants and j is the variable's iteration count;
3) solving A^{k+1}: A^{k+1} is found with Singular Value Thresholding (SVT);
removing the terms independent of A from the objective for A in (3) and completing the square gives:

min_A tr(W_a Σ) + ((μ_1^k + μ_2^k + μ_3^k)/2)||A - Q^{k+1}||_F^2    (10)

where Q^{k+1} = [μ_1^k Φ_c B^{k+1} + μ_2^k (Φ_r C^{k+1})^T + μ_3^k (D - E^k) - Y_1^k - (Y_2^k)^T + Y_3^k] / (μ_1^k + μ_2^k + μ_3^k); solving Q^{k+1} with the singular value thresholding method yields:

A^{k+1} = H^{k+1} soft(Σ^{k+1}, W_a/(μ_1^k + μ_2^k + μ_3^k)) (V^{k+1})^T    (11)

where H^{k+1} and V^{k+1} are the left and right singular matrices of Q^{k+1};
4) solving E^{k+1}: the solution of E^{k+1} consists of two parts;
within the observation space Ω, the value of E is 0; outside the observation space Ω, i.e., in the complementary space Ω̄, the solution follows from first-order optimality; combining the two parts gives the final solution of E:

E^{k+1} = P_Ω(0) + P_Ω̄(D - A^{k+1} + Y_3^k/μ_3^k)    (12)
5) repeating steps 1), 2), 3) and 4) until the algorithm converges, whereupon the iteration results A^{k+1}, Σ^{k+1}, B^{k+1}, C^{k+1} and E^{k+1} are the results A^{(l)}, Σ^{(l)}, B^{(l)}, C^{(l)} and E^{(l)} of the original problem without reweighting, where l is the reweighting count;
6) updating the weight matrices W_a, W_b and W_c:
to counteract the influence of signal amplitude on the nuclear norm term and the l1 norm terms, a reweighting scheme is introduced: the weight matrices W_a, W_b and W_c are updated iteratively from the amplitudes of the currently estimated singular value matrix Σ^{(l)} and coefficient matrices B^{(l)} and C^{(l)} by the inverse proportion rule:

W_a^{(l+1)}(î, î) = 1/(σ_î^{(l)} + ε),  W_b^{(l+1)}(î, ĵ) = 1/(|B^{(l)}(î, ĵ)| + ε),  W_c^{(l+1)}(î, ĵ) = 1/(|C^{(l)}(î, ĵ)| + ε)    (13)

where (î, ĵ) are the position coordinates of an entry and ε is an arbitrarily small positive number;
7) repeating steps 1)-6) until the algorithm converges, whereupon the iteration result A^{(l)} is the final solution A of the original problem.
CN201710298239.5A 2017-04-30 2017-04-30 Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation Pending CN107133930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710298239.5A CN107133930A (en) 2017-04-30 2017-04-30 Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710298239.5A CN107133930A (en) 2017-04-30 2017-04-30 Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation

Publications (1)

Publication Number Publication Date
CN107133930A true CN107133930A (en) 2017-09-05

Family

ID=59715788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710298239.5A Pending CN107133930A (en) Row-column missing image filling method based on low-rank matrix reconstruction and sparse representation

Country Status (1)

Country Link
CN (1) CN107133930A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093430A (en) * 2013-01-25 2013-05-08 西安电子科技大学 Heart magnetic resonance imaging (MRI) image deblurring method based on sparse low rank and dictionary learning
CN103679660A (en) * 2013-12-16 2014-03-26 清华大学 Method and system for restoring image
CN104867119A (en) * 2015-05-21 2015-08-26 天津大学 Structural lack image filling method based on low rank matrix reconstruction
CN104978716A (en) * 2015-06-09 2015-10-14 重庆大学 SAR image noise reduction method based on linear minimum mean square error estimation
CN105743611A (en) * 2015-12-25 2016-07-06 华中农业大学 Sparse dictionary-based wireless sensor network missing data reconstruction method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JINGYU YANG et al.: "Completion of Structurally-Incomplete Matrices with Reweighted Low-Rank and Sparsity Priors", 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *
DU Weinan et al.: "Image super-resolution reconstruction method based on residual dictionary learning", Journal of Beijing University of Technology *
WANG Xiongliang et al.: "Image denoising based on fast basis pursuit algorithm", Journal of Computer Applications *
WANG Bin et al.: "Hyperspectral image anomaly detection based on low-rank representation and learned dictionary", Journal of Infrared and Millimeter Waves *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171215B (en) * 2018-01-25 2023-02-03 河南大学 Face camouflage detection and camouflage type detection method based on low-rank variation dictionary and sparse representation classification
CN108171215A (en) * 2018-01-25 2018-06-15 河南大学 Face Pseudo-median filter and camouflage category detection method based on low-rank variation dictionary and rarefaction representation classification
CN108427742A (en) * 2018-03-07 2018-08-21 中国电力科学研究院有限公司 A kind of distribution network reliability data recovery method and system based on low-rank matrix
CN108427742B (en) * 2018-03-07 2023-12-19 中国电力科学研究院有限公司 Power distribution network reliability data restoration method and system based on low-rank matrix
CN108734675A (en) * 2018-05-17 2018-11-02 西安电子科技大学 Image recovery method based on mixing sparse prior model
CN108734675B (en) * 2018-05-17 2021-09-28 西安电子科技大学 Image restoration method based on mixed sparse prior model
CN109325446B (en) * 2018-09-19 2021-06-22 电子科技大学 Infrared weak and small target detection method based on weighted truncation nuclear norm
CN109325446A (en) * 2018-09-19 2019-02-12 电子科技大学 A kind of method for detecting infrared puniness target based on weighting truncation nuclear norm
CN109325442A (en) * 2018-09-19 2019-02-12 福州大学 A kind of face identification method of image pixel missing
CN109215025A (en) * 2018-09-25 2019-01-15 电子科技大学 A kind of method for detecting infrared puniness target approaching minimization based on non-convex order
CN109215025B (en) * 2018-09-25 2021-08-10 电子科技大学 Infrared weak and small target detection method based on non-convex rank approach minimization
CN109348229B (en) * 2018-10-11 2020-02-11 武汉大学 JPEG image mismatch steganalysis method based on heterogeneous feature subspace migration
CN109348229A (en) * 2018-10-11 2019-02-15 武汉大学 Jpeg image mismatch steganalysis method based on the migration of heterogeneous characteristic subspace
CN109671030A (en) * 2018-12-10 2019-04-23 西安交通大学 A kind of image completion method based on the optimization of adaptive rand estination Riemann manifold
CN109671030B (en) * 2018-12-10 2021-04-20 西安交通大学 Image completion method based on adaptive rank estimation Riemann manifold optimization
CN109754008B (en) * 2018-12-28 2022-07-19 上海理工大学 High-dimensional symmetric sparse network missing information estimation method based on matrix decomposition
CN109754008A (en) * 2018-12-28 2019-05-14 上海理工大学 The estimation method of the symmetrical sparse network missing information of higher-dimension based on matrix decomposition
CN109978783A (en) * 2019-03-19 2019-07-05 上海交通大学 A kind of color image restorative procedure
CN111025385A (en) * 2019-11-26 2020-04-17 中国地质大学(武汉) Seismic data reconstruction method based on low rank and sparse constraint
CN111597440A (en) * 2020-05-06 2020-08-28 上海理工大学 Recommendation system information estimation method based on internal weighting matrix three-decomposition low-rank approximation
CN111881413B (en) * 2020-07-28 2022-12-09 中国人民解放军海军航空大学 Multi-source time sequence missing data recovery method based on matrix decomposition
CN111881413A (en) * 2020-07-28 2020-11-03 中国人民解放军海军航空大学 Multi-source time sequence missing data recovery method based on matrix decomposition
CN112184571A (en) * 2020-09-14 2021-01-05 江苏信息职业技术学院 Robust principal component analysis method based on non-convex rank approximation
CN112564945A (en) * 2020-11-23 2021-03-26 南京邮电大学 IP network flow estimation method based on time sequence prior and sparse representation
CN112564945B (en) * 2020-11-23 2023-03-24 南京邮电大学 IP network flow estimation method based on time sequence prior and sparse representation
CN112561842A (en) * 2020-12-07 2021-03-26 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112561842B (en) * 2020-12-07 2022-12-09 昆明理工大学 Multi-source damaged image fusion and recovery combined implementation method based on dictionary learning
CN112734763A (en) * 2021-01-29 2021-04-30 西安理工大学 Image decomposition method based on convolution and K-SVD dictionary joint sparse coding
CN113112563B (en) * 2021-04-21 2023-10-27 西北大学 Sparse angle CB-XLCT imaging method for optimizing regional knowledge priori
CN113112563A (en) * 2021-04-21 2021-07-13 西北大学 Sparse angle CB-XLCT imaging method for optimizing regional knowledge prior
CN114253959A (en) * 2021-12-21 2022-03-29 大连理工大学 Data completion method based on dynamics principle and time difference
CN114253959B (en) * 2021-12-21 2024-07-12 大连理工大学 Data complement method based on dynamics principle and time difference
CN115508835A (en) * 2022-10-28 2022-12-23 广东工业大学 Tomography SAR three-dimensional imaging method based on blind compressed sensing
CN115508835B (en) * 2022-10-28 2024-03-15 广东工业大学 Chromatographic SAR three-dimensional imaging method based on blind compressed sensing

Similar Documents

Publication Publication Date Title
CN107133930A (en) Ranks missing image fill method with rarefaction representation is rebuild based on low-rank matrix
CN109241491A (en) The structural missing fill method of tensor based on joint low-rank and rarefaction representation
CN104867119B (en) The structural missing image fill method rebuild based on low-rank matrix
CN109741256B (en) Image super-resolution reconstruction method based on sparse representation and deep learning
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN111369487B (en) Hyperspectral and multispectral image fusion method, system and medium
CN102142137B (en) High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN103400402B (en) Based on the sparse compressed sensing MRI image rebuilding method of low-rank structure
CN104008538B (en) Based on single image super-resolution method
CN107154064B (en) Natural image compressed sensing method for reconstructing based on depth sparse coding
CN104050653B (en) Hyperspectral image super-resolution method based on non-negative structure sparse
CN105931264B (en) A kind of sea infrared small target detection method
CN105513026A (en) Compressed sensing reconstruction method based on image nonlocal similarity
CN107730482B (en) Sparse fusion method based on regional energy and variance
CN110139046B (en) Tensor-based video frame synthesis method
CN105513033B (en) A kind of super resolution ratio reconstruction method that non local joint sparse indicates
CN105957022A (en) Recovery method of low-rank matrix reconstruction with random value impulse noise deletion image
CN105631807A (en) Single-frame image super resolution reconstruction method based on sparse domain selection
CN105825477A (en) Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
CN106952317A (en) Based on the high spectrum image method for reconstructing that structure is sparse
Chen et al. Single-image super-resolution using multihypothesis prediction
CN102609920B (en) Colorful digital image repairing method based on compressed sensing
CN113869503B (en) Data processing method and storage medium based on depth matrix decomposition completion
CN116797456A (en) Image super-resolution reconstruction method, system, device and storage medium
CN105957025A (en) Inconsistent image blind restoration method based on sparse representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170905