CN109446473B - Robust tensor principal component analysis method based on blocks

Robust tensor principal component analysis method based on blocks

Info

Publication number
CN109446473B
CN109446473B (application CN201810704642.8A)
Authority
CN
China
Prior art keywords
tensor
block
component
sparse
epsilon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810704642.8A
Other languages
Chinese (zh)
Other versions
CN109446473A (en)
Inventor
刘翼鹏
冯兰兰
陈龙喜
曾思行
朱策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201810704642.8A
Publication of CN109446473A
Application granted
Publication of CN109446473B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F 17/14: Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F 17/17: Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • G06F 17/175: Function evaluation by approximation methods, of multidimensional data
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robust tensor principal component analysis method based on blocking. By introducing a cascade operation, the whole tensor is divided into a cascade of several block tensors of the same size, so that the image denoising experiment is carried out on tensors of a more suitable size. The alternating direction method of multipliers splits the optimization model into two subproblems, low-rank component approximation and sparse component approximation, which are solved by an iterative tensor singular value soft-threshold operator and an iterative soft-threshold operator, respectively. The resulting low-rank component is the denoised image, and the sparse component is the noise. The method is used to extract the low-rank and sparse components of multi-channel data; by introducing the blocking idea and adding a sparse constraint, low-rank components are extracted from smaller block tensors, so that more accurate and clearer details can be obtained.

Description

Robust tensor principal component analysis method based on blocks
Technical Field
The invention relates to the field of image processing, in particular to a tensor low-rank decomposition method based on blocking.
Background
Tensors are multidimensional arrays and a higher-order generalization of vectors and matrices. Signal processing based on tensor data plays an important role in a wide range of applications, such as recommendation systems, data mining, and image/video denoising and inpainting. However, many data processing methods are developed only for two-dimensional data, so extending these efficient methods to the tensor domain has become increasingly important.
Principal Component Analysis (PCA), one of the most common statistical tools for two-dimensional data analysis, can extract the latent low-rank structure in data. However, PCA is very sensitive to gross noise or outliers, and its estimates can be arbitrarily far from the true values. Robust Principal Component Analysis (RPCA) was therefore proposed. The main drawback of RPCA is that it can only process two-dimensional data, whereas real-world data, such as color images, video, and magnetic resonance images, are often multidimensional. When RPCA is applied to multi-way data, the data must be matricized or vectorized; during matricization or vectorization, information loss is inevitable and the structural characteristics of the data cannot be fully exploited.
To exploit the multidimensional structural information in tensor data, Robust Tensor Principal Component Analysis (RTPCA) was proposed. Given an observed tensor $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times N_3}$, where $\mathbb{R}$ denotes the real field and the superscript gives the dimension information, i.e. $N_1$, $N_2$, $N_3$ are the first, second and third dimensions of the tensor, respectively, $\mathcal{X}$ can be decomposed into low-rank and sparse components:

$$\mathcal{X} = \mathcal{L} + \mathcal{E} \tag{1}$$

where $\mathcal{L}$ denotes the low-rank component of the tensor and $\mathcal{E}$ denotes the sparse component of the tensor. The element values of both components may be arbitrarily large.
Problem (1) can be transformed into the following convex optimization problem:

$$\min_{\mathcal{L},\,\mathcal{E}}\; \|\mathcal{L}\|_* + \lambda\|\mathcal{E}\|_1 \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L} + \mathcal{E} \tag{2}$$

where $\|\mathcal{L}\|_*$ denotes the tensor nuclear norm of $\mathcal{L}$, $\|\mathcal{E}\|_1$ denotes the $\ell_1$ norm of $\mathcal{E}$, and $\lambda$ is a positive number serving as the weighting factor between the low-rank and sparse components.
Current approaches to the RTPCA problem usually process the whole tensor directly, and the result is usually coarse, which blurs details in some applications, for example color image denoising based on RTPCA. Thus, an RTPCA method based on the blocking concept was proposed, named IBTSVT. Given a tensor $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times N_3}$, it can be decomposed into a cascade of several block tensors,

$$\mathcal{X} = \mathcal{C}(\mathcal{X}_1, \dots, \mathcal{X}_P)$$

where $\mathcal{C}(\cdot)$ denotes the cascade operation. Each block tensor $\mathcal{X}_p$ can in turn be decomposed into low-rank and sparse components:

$$\mathcal{X}_p = \mathcal{L}_p + \mathcal{E}_p \tag{3}$$

where $\mathcal{L}_p$ denotes the low-rank component of the block tensor and $\mathcal{E}_p$ denotes its sparse component.
To extract the low-rank component of each block tensor, the IBTSVT method applies a soft threshold to the singular values of each block tensor in the Fourier domain, i.e. a singular value soft threshold is used to extract the low-rank component of the block tensor. The soft-threshold operator is:

$$(\sigma - \tau)_+ = \max(\sigma - \tau,\, 0)$$

where "$(\cdot)_+$" means that only the positive part is retained.
The low-rank component of the final overall tensor is expressed as $\mathcal{L} = \mathcal{C}(\mathcal{L}_1, \dots, \mathcal{L}_P)$, and the sparse component is expressed as $\mathcal{E} = \mathcal{X} - \mathcal{L}$. The IBTSVT method processes the sparse component too coarsely, so large noise points or outliers may remain in the result.
Disclosure of Invention
The invention aims to: in view of the above problems, provide a tensor decomposition method capable of restoring details. The invention introduces a blocking idea, adds a sparse constraint, and provides a robust block tensor principal component analysis (RBTPCA) method, so that low-rank components can be extracted from smaller block tensors to obtain more accurate and clearer details.
The invention discloses a robust tensor principal component analysis method based on blocks, which comprises the following steps.

Input a tensor to be processed $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times N_3}$. Initialize the low-rank component $\mathcal{L}$, the sparse component $\mathcal{E}$, the weighting factor $\lambda$ between $\mathcal{L}$ and $\mathcal{E}$, the dual variable $\mathcal{Y}$, the Lagrangian penalty operator $\rho$ and its upper bound $\rho_{\max}$, the convergence threshold $\epsilon$, the iteration coefficient $k$, an adjustment factor $\phi$ with a value greater than zero, and a constant $\mu$ with a value greater than 1.

Compute the low-rank component $\mathcal{L}^k$ and the sparse component $\mathcal{E}^k$ of the $k$-th iteration:

Perform the block-partition operation on the tensor $\mathcal{X} - \mathcal{E}^{k-1} + \mathcal{Y}^{k-1}/\rho$, dividing it into a cascade of several block tensors of the same size and obtaining each block tensor $\mathcal{X}_p$, where $p$ is the block identifier; for example, if the number of block tensors obtained by the partition is denoted $P$, then $p = 1, \dots, P$.

Update each block tensor: first apply the fast Fourier transform to $\mathcal{X}_p$ to obtain the tensor $\bar{\mathcal{X}}_p$; then perform the matrix singular value decomposition of each frontal slice of $\bar{\mathcal{X}}_p$, obtaining two unitary matrices and a positive definite diagonal matrix per slice; then compute the soft-threshold operator of each positive definite diagonal matrix, the threshold parameter being $\tau = 1/\rho$; finally, perform the inverse Fourier transform on the basis of the soft-thresholded positive definite diagonal matrices and the two unitary matrices of all frontal slices to obtain the updated block tensor.

Compute the low-rank component of the current iteration, $\mathcal{L}^k = \mathcal{C}(\mathcal{L}_1^k, \dots, \mathcal{L}_P^k)$, where the symbol $\mathcal{C}(\cdot)$ denotes the cascade operation.

With the threshold parameter $\tau = \lambda/\rho$, compute the soft threshold of the tensor $\mathcal{X} - \mathcal{L}^k + \mathcal{Y}^{k-1}/\rho$ and take the result as the sparse component $\mathcal{E}^k$ of the current iteration.

Update $\mathcal{Y}^k = \mathcal{Y}^{k-1} + \rho(\mathcal{L}^k + \mathcal{E}^k - \mathcal{X})$, and update $\rho = \min(\mu\rho,\, \rho_{\max})$.

Judge whether the iteration convergence condition is satisfied. If so, set $\mathcal{L} = \mathcal{L}^k$ and $\mathcal{E} = \mathcal{E}^k$, and output the low-rank component $\mathcal{L}$ and the sparse component $\mathcal{E}$ of the tensor to be processed; otherwise, set $\mathcal{L} = \mathcal{L}^k$ and $\mathcal{E} = \mathcal{E}^k$, update $k = k + 1$, and continue computing the low-rank component $\mathcal{L}^k$ and the sparse component $\mathcal{E}^k$.

The iteration convergence condition is that

$$\|\mathcal{L}^k - \mathcal{L}\|_\infty \le \epsilon, \quad \|\mathcal{E}^k - \mathcal{E}\|_\infty \le \epsilon \quad \text{and} \quad \|\mathcal{L}^k + \mathcal{E}^k - \mathcal{X}\|_\infty \le \epsilon$$

are satisfied simultaneously, where $\mathcal{L}$ and $\mathcal{E}$ on the left-hand sides take their values from the previous iteration.
In summary, due to the adoption of the above technical scheme, the invention has the following beneficial effects. For a tensor to be processed $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times N_3}$, existing methods that operate directly on the whole large tensor must set their parameters according to $N_1$ and $N_2$; by contrast, the present method analyzes block tensors whose first and second dimensions are equal and much smaller, so its parameters are easier to set and the method is more robust than existing methods. By introducing the blocking idea and adding a sparse constraint, the invention extracts low-rank components from smaller block tensors and can obtain more accurate and clearer details.
Drawings
FIG. 1 is a schematic representation of t-SVD;
FIG. 2 is a schematic diagram of the cascaded operation of the present invention;
FIG. 3 is a schematic diagram of the robust block tensor principal component analysis model;
FIG. 4 is a schematic representation of the color image denoising results of the method of the present invention and of prior-art methods, wherein column (a) shows the original images; column (b) shows the noisy images; and columns (c), (d) and (e) show the recovery results of the existing RTPCA, the existing IBTSVT, and the RBTPCA of the present invention, respectively;
FIG. 5 is a comparison of RSE and PSNR values of the denoising experiment of 50 color images by the method of the present invention and the prior art method, wherein FIG. 5-a is a comparison of RSE values; FIG. 5-b is a comparison of PSNR values.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention provides a Robust Block Tensor Principal Component Analysis (RBTPCA) method based on a blocking thought, which is used for extracting low-rank components and sparse components of data.
For ease of understanding, the notation and the associated tensor definitions are given as follows.
(1) Notation
$a$, $A$ and $\mathcal{A}$ denote a vector, a matrix and a tensor, respectively. $a_{i,j,k}$ denotes the $(i,j,k)$-th element of a tensor $\mathcal{A}\in\mathbb{R}^{N_1\times N_2\times N_3}$. $\mathcal{A}(i,j,:)$ denotes the $(i,j)$-th tube fiber. $\mathcal{A}(i,:,:)$, $\mathcal{A}(:,j,:)$ and $\mathcal{A}(:,:,k)$ denote the $i$-th horizontal, the $j$-th lateral and the $k$-th frontal slice, respectively. The $k$-th frontal slice can also be denoted $A^{(k)}$.
Some commonly used norms are defined as $\|\mathcal{A}\|_1 = \sum_{i,j,k} |a_{i,j,k}|$, $\|\mathcal{A}\|_F = \big(\sum_{i,j,k} |a_{i,j,k}|^2\big)^{1/2}$ and $\|\mathcal{A}\|_\infty = \max_{i,j,k} |a_{i,j,k}|$.
In this embodiment, $\bar{\mathcal{A}} = \mathrm{fft}(\mathcal{A}, [\,], 3)$ denotes the fast Fourier transform (FFT) along the third dimension of the tensor $\mathcal{A}$. In the same way, $\mathcal{A}$ can be computed from $\bar{\mathcal{A}}$ by the inverse fast Fourier transform (IFFT), $\mathcal{A} = \mathrm{ifft}(\bar{\mathcal{A}}, [\,], 3)$.
(2) Definitions:
Definition 1 (t-product): given $\mathcal{A}\in\mathbb{R}^{N_1\times N_2\times N_3}$ and $\mathcal{B}\in\mathbb{R}^{N_2\times N_4\times N_3}$, the t-product $\mathcal{C} = \mathcal{A} * \mathcal{B}$ is the $N_1\times N_4\times N_3$ tensor whose tube fibers are

$$\mathcal{C}(i,l,:) = \sum_{j=1}^{N_2} \mathcal{A}(i,j,:) \cdot \mathcal{B}(j,l,:)$$

where $\cdot$ denotes the circular convolution between two tube fibers.
Definition 2 (conjugate transpose): given a tensor $\mathcal{A}$ of size $N_1\times N_2\times N_3$, the conjugate transpose $\mathcal{A}^T$ is obtained by conjugate-transposing each frontal slice of $\mathcal{A}$ and then reversing the order of the 2nd through $N_3$-th frontal slices. The result is a tensor of size $N_2\times N_1\times N_3$.
Definition 3 (orthogonal tensor): a tensor $\mathcal{Q}\in\mathbb{R}^{N\times N\times N_3}$ is orthogonal if and only if it satisfies

$$\mathcal{Q}^T * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^T = \mathcal{I}$$

where $\mathcal{I}$ denotes the identity tensor, whose first frontal slice is the identity matrix and whose other frontal slices are zero matrices, and the symbol $(\cdot)^T$ denotes the transpose.
Definition 4 (f-diagonal tensor): when every frontal slice of a tensor $\mathcal{S}$ is a diagonal matrix, the tensor $\mathcal{S}$ is called an f-diagonal tensor.
Definition 5 (t-SVD): for a tensor $\mathcal{A}\in\mathbb{R}^{N_1\times N_2\times N_3}$, the tensor singular value decomposition (t-SVD) is

$$\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$$

where $\mathcal{U}\in\mathbb{R}^{N_1\times N_1\times N_3}$ and $\mathcal{V}\in\mathbb{R}^{N_2\times N_2\times N_3}$ are orthogonal tensors of sizes $N_1\times N_1\times N_3$ and $N_2\times N_2\times N_3$, respectively, and $\mathcal{S}$ is an f-diagonal tensor of size $N_1\times N_2\times N_3$.
That is, when performing the singular value decomposition of the tensor $\mathcal{A}$, the matrix singular value decomposition (SVD) in the Fourier domain is used. Referring to FIG. 1, the specific decomposition procedure is as follows:

Step S11: apply the fast Fourier transform along the third dimension of the tensor $\mathcal{A}$ to be decomposed, and record the result as the tensor $\bar{\mathcal{A}} = \mathrm{fft}(\mathcal{A}, [\,], 3)$.

Step S12: initialize the parameter $n_3 = 1$.

Step S13: compute the SVD of the $n_3$-th frontal slice $\bar{A}^{(n_3)}$ of the tensor $\bar{\mathcal{A}}$, obtaining the decomposition result $[U, S, V]$, i.e. $\bar{A}^{(n_3)} = U S V^*$, where $U$ and $V$ are the corresponding unitary matrices, $S$ is a positive definite diagonal matrix, and the symbol $(\cdot)^*$ denotes the conjugate transpose; and set $\bar{\mathcal{U}}^{(n_3)} = U$, $\bar{\mathcal{S}}^{(n_3)} = S$, $\bar{\mathcal{V}}^{(n_3)} = V$.

Step S14: judge whether $n_3$ equals $N_3$; if not, increment $n_3$ by 1 and continue with step S13; if so, go to step S15.

Step S15: assemble $\bar{\mathcal{U}}$, $\bar{\mathcal{S}}$ and $\bar{\mathcal{V}}$ from their frontal slices, then perform the inverse Fourier transform on each of them to obtain $\mathcal{U} = \mathrm{ifft}(\bar{\mathcal{U}}, [\,], 3)$, $\mathcal{S} = \mathrm{ifft}(\bar{\mathcal{S}}, [\,], 3)$ and $\mathcal{V} = \mathrm{ifft}(\bar{\mathcal{V}}, [\,], 3)$.
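As a concrete illustration of steps S11 through S15, the t-SVD can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation; the helper names (`t_svd`, `t_product`, `t_transpose`) are our own, and the conjugate symmetry of the FFT of a real tensor is enforced explicitly so that the returned factors come out real.

```python
import numpy as np

def t_svd(A):
    """t-SVD of a real 3-way array A (N1 x N2 x N3), following steps S11-S15:
    FFT along the third dimension, matrix SVD of each frontal slice, inverse FFT."""
    N1, N2, N3 = A.shape
    Abar = np.fft.fft(A, axis=2)                       # step S11
    Ubar = np.zeros((N1, N1, N3), dtype=complex)
    Sbar = np.zeros((N1, N2, N3), dtype=complex)
    Vbar = np.zeros((N2, N2, N3), dtype=complex)
    for k in range(N3 // 2 + 1):                       # steps S12-S14
        M = Abar[:, :, k]
        if k == 0 or 2 * k == N3:                      # these slices are real for real A
            M = M.real
        U, s, Vh = np.linalg.svd(M)
        Ubar[:, :, k] = U
        Sbar[:len(s), :len(s), k] = np.diag(s)
        Vbar[:, :, k] = Vh.conj().T
    for k in range(N3 // 2 + 1, N3):                   # conjugate-symmetric slices
        Ubar[:, :, k] = Ubar[:, :, N3 - k].conj()
        Sbar[:, :, k] = Sbar[:, :, N3 - k].conj()
        Vbar[:, :, k] = Vbar[:, :, N3 - k].conj()
    # step S15: inverse FFT back to the original domain
    return (np.fft.ifft(Ubar, axis=2).real,
            np.fft.ifft(Sbar, axis=2).real,
            np.fft.ifft(Vbar, axis=2).real)

def t_product(A, B):
    """t-product (definition 1): slice-wise matrix product in the Fourier domain."""
    Cbar = np.einsum('ijk,jlk->ilk', np.fft.fft(A, axis=2), np.fft.fft(B, axis=2))
    return np.fft.ifft(Cbar, axis=2).real

def t_transpose(A):
    """Conjugate transpose (definition 2): transpose each frontal slice and
    reverse the order of slices 2..N3."""
    At = np.transpose(A, (1, 0, 2)).copy()
    At[:, :, 1:] = At[:, :, 1:][:, :, ::-1]
    return At
```

A quick check of the factorization is `t_product(t_product(U, S), t_transpose(V))`, which should reproduce the input tensor.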
Definition 6 (tensor nuclear norm, TNN): for a tensor $\mathcal{A}\in\mathbb{R}^{N_1\times N_2\times N_3}$, the tensor nuclear norm $\|\mathcal{A}\|_*$ is equivalent to the matrix nuclear norm of $\mathrm{bdiag}(\bar{\mathcal{A}})$, where $\mathrm{bdiag}(\bar{\mathcal{A}})$ is the block-diagonal matrix formed from the frontal slices of $\bar{\mathcal{A}}$:

$$\mathrm{bdiag}(\bar{\mathcal{A}}) = \begin{bmatrix} \bar{A}^{(1)} & & \\ & \ddots & \\ & & \bar{A}^{(N_3)} \end{bmatrix}$$

where $\bar{A}^{(i)}$ is the $i$-th frontal slice of $\bar{\mathcal{A}}$, $i = 1, 2, \dots, N_3$.
Based on the above notation and definitions, the principle of the RBTPCA method of the invention is as follows.

First, the invention can be expressed as the following optimization model:

$$\min_{\{\mathcal{L}_p\},\,\mathcal{E}}\; \sum_{p=1}^{P} \|\mathcal{L}_p\|_* + \lambda\|\mathcal{E}\|_1 \quad \text{s.t.} \quad \mathcal{X} = \mathcal{C}(\mathcal{L}_1, \dots, \mathcal{L}_P) + \mathcal{E} \tag{9}$$

where $\mathcal{C}(\cdot)$ denotes the cascade operation, illustrated in FIG. 2, which comprises an up-down direction and a left-right direction: for two block tensors cascaded up and down, the second and third dimensions must be equal; for two block tensors cascaded left and right, the first and third dimensions must be equal.
The RBTPCA model is shown in FIG. 3, where the black dots represent sparse components.
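The cascade operation and its inverse, the block partition, can be sketched as follows. This is an illustrative sketch under the assumption that the spatial dimensions divide evenly into the chosen block size; the function names are our own, not the patent's.

```python
import numpy as np

def partition_blocks(X, b1, b2):
    """Inverse cascade: split X (N1 x N2 x N3) into a row-major list of
    b1 x b2 x N3 block tensors (N1, N2 assumed divisible by b1, b2)."""
    N1, N2, N3 = X.shape
    return [X[i:i + b1, j:j + b2, :]
            for i in range(0, N1, b1)
            for j in range(0, N2, b2)]

def cascade_blocks(blocks, N1, N2):
    """Cascade operation: reassemble the blocks (in the same row-major order)
    into the full N1 x N2 x N3 tensor by up-down and left-right concatenation."""
    b1, b2, N3 = blocks[0].shape
    X = np.empty((N1, N2, N3))
    idx = 0
    for i in range(0, N1, b1):
        for j in range(0, N2, b2):
            X[i:i + b1, j:j + b2, :] = blocks[idx]
            idx += 1
    return X
```

The two functions are exact inverses of each other, matching the constraint that cascaded blocks agree in the non-concatenated dimensions.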
Define the low-rank component as $\mathcal{L} = \mathcal{C}(\mathcal{L}_1, \dots, \mathcal{L}_P)$, and denote the block-partition (inverse cascade) operation by $\mathcal{C}^{-1}(\cdot)$, i.e. $\{\mathcal{L}_1, \dots, \mathcal{L}_P\} = \mathcal{C}^{-1}(\mathcal{L})$.
The RBTPCA problem shown in equation (9) can be solved by the Alternating Direction Method of Multipliers (ADMM), which transforms it into the following iterative subproblems:

$$\mathcal{L}_p^{k+1} = \arg\min_{\mathcal{L}_p}\; \|\mathcal{L}_p\|_* + \frac{\rho}{2}\Big\|\mathcal{L}_p - \Big(\mathcal{X} - \mathcal{E}^k + \frac{\mathcal{Y}^k}{\rho}\Big)_p\Big\|_F^2 \tag{10}$$

$$\mathcal{E}^{k+1} = \arg\min_{\mathcal{E}}\; \lambda\|\mathcal{E}\|_1 + \frac{\rho}{2}\Big\|\mathcal{E} - \Big(\mathcal{X} - \mathcal{L}^{k+1} + \frac{\mathcal{Y}^k}{\rho}\Big)\Big\|_F^2 \tag{11}$$

$$\mathcal{Y}^{k+1} = \mathcal{Y}^k + \rho\big(\mathcal{L}^{k+1} + \mathcal{E}^{k+1} - \mathcal{X}\big) \tag{12}$$

where $\rho > 0$ is the Lagrangian penalty operator, $\mathcal{Y}$ is the dual variable, $k$ is the iteration coefficient, $\mathcal{Y}^k$ and $\mathcal{E}^k$ denote the dual variable and the sparse component at the $k$-th iteration, and $\mathcal{Y}^{k+1}$, $\mathcal{L}^{k+1}$ and $\mathcal{E}^{k+1}$ denote the dual variable, low-rank component and sparse component at the $(k+1)$-th iteration.
The low-rank component approximation problem shown in equation (10) can be converted to equation (13):

$$\mathcal{L}_p^{k+1} = D_{1/\rho}\!\left(\Big(\mathcal{X} - \mathcal{E}^k + \frac{\mathcal{Y}^k}{\rho}\Big)_p\right) \tag{13}$$

where the tensor singular value thresholding operator is

$$D_\tau(\mathcal{A}) = \mathcal{U} * \mathrm{ifft}\big((\bar{\mathcal{S}} - \tau)_+, [\,], 3\big) * \mathcal{V}^T \tag{14}$$

in which $\mathcal{U}$ and $\mathcal{V}$ are the two orthogonal tensors obtained by performing the t-SVD of $\mathcal{A}$, $\bar{\mathcal{S}}$ is the Fourier-domain f-diagonal tensor obtained in that t-SVD, $\mathrm{ifft}(\cdot)$ denotes the inverse Fourier transform, and $(\bar{\mathcal{S}} - \tau)_+$ is the soft threshold of the singular values.

Equation (13) is then solved by iterating the singular value threshold operator over the block tensors, where the block tensors satisfy the cascade condition:

$$\mathcal{L}^{k+1} = \mathcal{C}\big(\mathcal{L}_1^{k+1}, \dots, \mathcal{L}_P^{k+1}\big) \tag{15}$$
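The operator $D_\tau$ of equation (14) can be sketched directly: soft-threshold the singular values of each frontal slice in the Fourier domain and transform back. A minimal NumPy sketch (the function name `t_svt` is our own):

```python
import numpy as np

def t_svt(A, tau):
    """Tensor singular value thresholding D_tau (equation (14)): soft-threshold
    the singular values of each frontal slice in the Fourier domain."""
    Abar = np.fft.fft(A, axis=2)
    Lbar = np.empty_like(Abar)
    for k in range(A.shape[2]):
        U, s, Vh = np.linalg.svd(Abar[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # keep the positive part (s - tau)_+
        Lbar[:, :, k] = (U * s) @ Vh          # rebuild the thresholded slice
    return np.fft.ifft(Lbar, axis=2).real
```

With $\tau = 0$ the operator is the identity; for very large $\tau$ every singular value is suppressed and the result is the zero tensor.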
as for the sparse component ∈, it can be solved by equation (16):
Figure RE-GDA00019464690900000711
wherein, sthτ(X) and
Figure RE-GDA00019464690900000712
representing matrix X and tensor, respectively
Figure RE-GDA00019464690900000713
Which means that for any element x in the matrix or tensor:
sthτ(x)=sign(x)·max(|x|-τ) (17)
where the sign function sign () is used to return the sign of the parameter.
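The element-wise operator of equation (17) is one line in NumPy; a minimal sketch:

```python
import numpy as np

def sth(x, tau):
    """Element-wise soft-threshold operator (equation (17)):
    sth_tau(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

It shrinks every entry toward zero by $\tau$ and sets entries with magnitude below $\tau$ exactly to zero, which is what produces the sparsity of $\mathcal{E}$.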
For a given tensor to be analyzed $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times N_3}$, the RBTPCA method of the invention is realized by the following concrete steps.

Step S21: initialize the low-rank component $\mathcal{L}$, the sparse component $\mathcal{E}$, the dual variable $\mathcal{Y}$, the Lagrangian penalty operator $\rho$, the adjustment factor $\phi$ ($\phi > 0$), the constant $\mu$ ($\mu > 1$), the Lagrangian penalty operator upper bound $\rho_{\max}$, the convergence threshold $\epsilon$, and the iteration coefficient $k$; where $\rho_{\max}$ usually takes a value in the range $10^3 < \rho_{\max} < 10^5$, and $\epsilon$ usually in the range $10^{-5} < \epsilon < 10^{-3}$.

In this embodiment, the preferred parameter values are: $\mathcal{L} = 0$, $\mathcal{E} = 0$, $\mathcal{Y} = 0$, $\phi = 1.1$, $\rho = 0.7$, $\mu = 1.2$, $\rho_{\max} = 10^5$, $\epsilon = 10^{-5}$, $k = 1$.
Step S22: compute the block tensors by the block-partition (inverse cascade) operation, $\{\mathcal{X}_p^k\}_{p=1}^{P} = \mathcal{C}^{-1}\big(\mathcal{X} - \mathcal{E}^{k-1} + \mathcal{Y}^{k-1}/\rho\big)$, where $\mathcal{E}^0 = 0$ is the initial value of $\mathcal{E}$; then update each block tensor $\mathcal{X}_p^k$ as follows.

First, apply the fast Fourier transform to each block tensor: $\bar{\mathcal{X}}_p^k = \mathrm{fft}(\mathcal{X}_p^k, [\,], 3)$.

Then, perform the matrix singular value decomposition of the $n_3$-th frontal slice of each block tensor:

$$\bar{X}_p^{(n_3)} = U_p^{(n_3)}\, S_p^{(n_3)}\, \big(V_p^{(n_3)}\big)^*$$

where $U_p^{(n_3)}$ and $V_p^{(n_3)}$ are the corresponding unitary matrices and $S_p^{(n_3)}$ is a positive definite diagonal matrix, with block identifier $p = 1, \dots, P$ and slice identifier $n_3 = 1, 2, \dots, N_3$.

Then, compute the soft-threshold operator of each of the $N_3$ positive definite diagonal matrices $S_p^{(n_3)}$, obtaining $\big(S_p^{(n_3)} - \tau\big)_+$ with threshold $\tau = 1/\rho$.

Finally, assemble $\bar{\mathcal{U}}_p$, $\bar{\mathcal{S}}_p$ and $\bar{\mathcal{V}}_p$ from the $N_3$ slices, perform the inverse Fourier transform on each of them to obtain $\mathcal{U}_p$, $\mathcal{S}_p$ and $\mathcal{V}_p$, and form the updated block tensor

$$\mathcal{L}_p^k = \mathcal{U}_p * \mathcal{S}_p * \mathcal{V}_p^T$$

where "$*$" denotes the t-product (see definition 1); this update is abbreviated $\mathcal{L}_p^k = D_{1/\rho}(\mathcal{X}_p^k)$.
Step S23: computing low rank components for a current iteration
Figure RE-GDA00019464690900000817
Computing sparse components for a current iteration
Figure RE-GDA00019464690900000818
Wherein sthτ(. cndot.) Soft threshold operator representing the tensor in brackets, i.e.
Figure RE-GDA00019464690900000819
Wherein
Figure RE-GDA00019464690900000820
Updating
Figure RE-GDA00019464690900000821
And updating ρ ═ min (μ ρ, ρ)max);
Step S24: judging whether the iterative convergence condition is satisfied, if so, updating
Figure RE-GDA00019464690900000822
ε=εkOutput tensor
Figure RE-GDA00019464690900000823
Low rank component of
Figure RE-GDA00019464690900000824
And a sparse component epsilon; otherwise, if not, updating k to k +1, and continuing to execute step S22;
the iteration convergence condition is as follows:
Figure RE-GDA00019464690900000825
‖εkk-1e.g or less and
Figure RE-GDA00019464690900000826
at the same time satisfy, wherein
Figure RE-GDA00019464690900000827
ε0Are respectively as
Figure RE-GDA00019464690900000828
The initial value of epsilon.
Examples
In order to verify the effectiveness of the invention, a color image denoising experiment was carried out. The running platform is MATLAB R2016a on a Windows 10 notebook with an Intel 2.60 GHz i5-3230M processor and 8 GB of RAM.
An RGB color image of spatial size $N_1\times N_2$ is essentially a 3-dimensional tensor $\mathcal{X}\in\mathbb{R}^{N_1\times N_2\times 3}$, and each channel of the color image can be viewed as one frontal slice of $\mathcal{X}$. Each channel can be approximately reconstructed by a low-rank matrix. Considering that the t-SVD is a multilinear extension of the matrix SVD, in this embodiment a low tubal-rank tensor is used to approximate the color image.
The RBTPCA method of the invention is applied to color image denoising, and its performance is compared with the existing IBTSVT and RTPCA methods. 50 color images were randomly drawn for testing. For each color image, 10% of the pixels were randomly selected and assigned random values in the interval [0, 255]. For the RBTPCA method of the invention, a block size of 24 × 24 × 3 was selected and the weighting factor λ set for that block size; for the existing IBTSVT algorithm, a block size of 24 × 24 × 3 was selected with its parameter set for the same block size; for RTPCA, which has no blocking operation, its parameter was set for the full tensor size.
In the present embodiment, the quality of the restored image is evaluated using the relative squared error (RSE) and the peak signal-to-noise ratio (PSNR). The RSE between the recovered image $\hat{\mathcal{X}}$ and the original noise-free image $\mathcal{X}$ is defined as:

$$\mathrm{RSE} = \frac{\|\hat{\mathcal{X}} - \mathcal{X}\|_F}{\|\mathcal{X}\|_F}$$

and the PSNR is defined as:

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{N_1 N_2 N_3\,\|\mathcal{X}\|_\infty^2}{\|\hat{\mathcal{X}} - \mathcal{X}\|_F^2}\right)$$
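Under the RSE and PSNR definitions above, the two metrics can be sketched as follows (the peak in the PSNR is taken as $\|\mathcal{X}\|_\infty$, the largest magnitude of the noise-free tensor, as in the formula above):

```python
import numpy as np

def rse(X_hat, X):
    """Relative squared error between the recovered and original tensors."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)

def psnr(X_hat, X):
    """Peak signal-to-noise ratio with peak value max|X| over the clean tensor."""
    mse = np.sum((X_hat - X) ** 2)
    peak = np.max(np.abs(X))
    return 10.0 * np.log10(X.size * peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a tensor of ones gives an RSE of 0.1 and a PSNR of 20 dB.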
Higher PSNR values and lower RSE values mean better performance. FIG. 5 shows the RSE and PSNR values for the 50 color images; as can be seen, the RBTPCA method provided by the invention gives the best denoising performance.
FIG. 4 gives some examples containing large amounts of texture information: images of birds, people, insects, boats, houses, and horses. The results show that the color images recovered by RTPCA are rather blurred; the color images recovered by the IBTSVT method retain the detail of the originals but still contain many large noise points; and the results of the RBTPCA method of the invention are not only very clear in detail but also successfully remove the noise. For example, the outlines of the bird's beak and of the boat are very distinct and clear.
In conclusion, the invention extracts the low-rank component in the smaller block tensor by introducing the blocking idea and adding the sparse constraint, and can obtain more accurate and clear details.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (7)

1. The robust tensor principal component analysis method based on the block is characterized by comprising the following steps of:
inputting a to-be-processed tensor
Figure FDA0002928332150000011
The tensor to be processed
Figure FDA0002928332150000012
As dimension information being N1×N2×N3Image data of, N1×N2Representing the spatial size of the image data, N3Indicating the number of channels of the image data;
initializing low rank components of image data
Figure FDA0002928332150000013
And a sparse component epsilon, and
Figure FDA0002928332150000014
a weighting factor lambda with epsilon and initializing a dual variable
Figure FDA0002928332150000015
a Lagrange penalty operator ρ and a Lagrange penalty operator upper limit ρmax, a convergence threshold ∈, an iteration coefficient k, an adjustment factor φ and a constant μ; wherein the adjustment factor φ takes a value greater than zero and the constant μ takes a value greater than 1;
computing the low-rank component (formula FDA0002928332150000016) and the sparse component ε^k of the k-th iteration:
performing a tensor decomposition operation on the tensor (formula FDA0002928332150000017), dividing it into a cascade of several block tensors of the same size, and obtaining each block tensor (formula FDA0002928332150000018), wherein p is the block identifier;
updating each block tensor: first performing a fast Fourier transform on (formula FDA0002928332150000019) to obtain the tensor (formula FDA00029283321500000110); then performing a singular value decomposition on the matrix of each frontal slice of the tensor (formula FDA00029283321500000111) to obtain two unitary matrices and a positive definite diagonal matrix; then computing the soft threshold operator of the positive definite diagonal matrix obtained from the decomposition of each frontal slice, the parameter used in computing the soft threshold operator being (formula FDA00029283321500000112); and performing an inverse Fourier transform on the soft threshold operators of the positive definite diagonal matrices of all frontal slices together with the two unitary matrices to obtain the updated block tensor;
cascading the updated block tensors to obtain the low-rank component of the current iteration (formula FDA00029283321500000113);
computing the tensor (formula FDA00029283321500000115) with the parameter (formula FDA00029283321500000114), and taking the computation result as the sparse component ε^k of the current iteration;
updating (formula FDA00029283321500000116), and updating ρ = min(μρ, ρmax);
judging whether the iteration convergence condition is satisfied: if so, updating (formula FDA00029283321500000117) and ε = ε^k, and outputting the low-rank component (formula FDA00029283321500000119) and the sparse component ε of the tensor to be processed (formula FDA00029283321500000118); otherwise, updating (formula FDA00029283321500000120) and ε = ε^k, then updating k = k + 1 and continuing with the computation of the low-rank component (formula FDA00029283321500000121) and the sparse component ε^k;
the iteration convergence condition being that (formula FDA00029283321500000122), ‖ε^k − ε‖ ≤ ∈ and (formula FDA00029283321500000123) are satisfied simultaneously.
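The block update described above — a fast Fourier transform along the third mode, a singular value decomposition of every frontal slice in the Fourier domain, soft-thresholding of the resulting singular values, and an inverse transform — is the tensor singular value thresholding (t-SVT) operator from the t-SVD literature. A minimal NumPy sketch for a real third-order block tensor; the claim gives the threshold only as a formula image, so `tau` is left as an explicit argument rather than the patent's exact expression:

```python
import numpy as np

def tsvt(block, tau):
    """Tensor singular value thresholding of one third-order block tensor.

    Mirrors the per-block update: FFT along mode 3, an SVD of each frontal
    slice in the Fourier domain, a soft threshold of the singular values,
    then the inverse FFT back to the original domain.
    """
    F = np.fft.fft(block, axis=2)          # forward transform along mode 3
    out = np.empty_like(F)
    for i in range(F.shape[2]):
        # two unitary matrices U, Vh and the (diagonal) singular values s
        U, s, Vh = np.linalg.svd(F[:, :, i], full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)  # soft threshold operator
        out[:, :, i] = (U * s_shrunk) @ Vh   # recombine the thresholded slice
    return np.real(np.fft.ifft(out, axis=2))
```

With `tau = 0` the operator is the identity; larger values shrink or zero out singular values in every Fourier-domain slice, lowering the tubal rank of the block.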
2. The method of claim 1, wherein the Lagrange penalty operator upper limit ρmax has a value in the range 10^3 < ρmax < 10^5.
3. The method of claim 1, wherein the convergence threshold ∈ has a value in the range 10^-5 < ∈ < 10^-3.
4. The method of claim 1, wherein the adjustment factor φ has a value of 1.1.
5. The method of claim 1, wherein the constant μ has a value of 1.2.
6. The method of claim 1, wherein the initial value of the Lagrange penalty operator ρ is 0.7.
7. The method of claim 1, wherein the initial values of the low-rank component (formula FDA0002928332150000021) and the sparse component ε are both 0.
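Taken together, claim 1 describes an ADMM loop (block-wise t-SVT for the low-rank component, element-wise soft-thresholding for the sparse component, a multiplier update, and ρ ← min(μρ, ρmax)), while claims 2–7 pin down the constants (ρ = 0.7, μ = 1.2, φ = 1.1, zero initial components). A self-contained sketch under those settings; the exact soft-threshold parameters are only given as formula images in the claims, so the standard tensor-RPCA choices — 1/ρ for the low-rank step and λ/ρ with an assumed weight λ = φ/√(max(b1, b2)·n3) for the sparse step — are substituted here, and the block sizes are assumed to divide the tensor dimensions evenly:

```python
import numpy as np

def tsvt(T, tau):
    """t-SVT: FFT along mode 3, per-slice SVD, soft threshold, inverse FFT."""
    F = np.fft.fft(T, axis=2)
    for i in range(F.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, i], full_matrices=False)
        F[:, :, i] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(F, axis=2))

def soft(X, tau):
    """Element-wise soft threshold for the sparse component."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def block_trpca(D, bs, rho=0.7, mu=1.2, rho_max=1e4, phi=1.1,
                eps=1e-4, max_iter=300):
    """Block-based robust tensor PCA sketch (claim 1 with claims 2-7 constants).

    D is split along its first two modes into blocks of size bs = (b1, b2);
    each block gets its own t-SVT update, the blocks are concatenated back,
    and ADMM enforces the decomposition D = L + E.
    """
    n1, n2, n3 = D.shape
    b1, b2 = bs                              # assumed: b1 | n1 and b2 | n2
    lam = phi / np.sqrt(max(b1, b2) * n3)    # assumed sparse weight
    L = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(max_iter):
        # low-rank step: block-wise tensor singular value thresholding
        T = D - E + Y / rho
        L_new = np.empty_like(D)
        for i in range(0, n1, b1):
            for j in range(0, n2, b2):
                L_new[i:i+b1, j:j+b2, :] = tsvt(T[i:i+b1, j:j+b2, :], 1.0 / rho)
        # sparse step: element-wise soft threshold
        E_new = soft(D - L_new + Y / rho, lam / rho)
        # multiplier and penalty updates
        Y = Y + rho * (D - L_new - E_new)
        rho = min(mu * rho, rho_max)
        # convergence: all three conditions must hold simultaneously
        done = (np.linalg.norm(L_new - L) <= eps
                and np.linalg.norm(E_new - E) <= eps
                and np.linalg.norm(D - L_new - E_new) <= eps)
        L, E = L_new, E_new
        if done:
            break
    return L, E
```

On a synthetic tensor built as a low-rank part plus a few large outliers, the loop drives the residual D − L − E toward zero as ρ grows, which is the behaviour the convergence condition in claim 1 tests for.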
CN201810704642.8A 2018-07-02 2018-07-02 Robust tensor principal component analysis method based on blocks Active CN109446473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810704642.8A CN109446473B (en) 2018-07-02 2018-07-02 Robust tensor principal component analysis method based on blocks

Publications (2)

Publication Number Publication Date
CN109446473A CN109446473A (en) 2019-03-08
CN109446473B true CN109446473B (en) 2021-04-30

Family

ID=65532652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810704642.8A Active CN109446473B (en) 2018-07-02 2018-07-02 Robust tensor principal component analysis method based on blocks

Country Status (1)

Country Link
CN (1) CN109446473B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369457B (en) * 2020-02-28 2022-05-17 西南电子技术研究所(中国电子科技集团公司第十研究所) Remote sensing image denoising method for sparse discrimination tensor robustness PCA
US11553139B2 (en) 2020-09-29 2023-01-10 International Business Machines Corporation Video frame synthesis using tensor neural networks
CN113349795B (en) * 2021-06-15 2022-04-08 杭州电子科技大学 Depression electroencephalogram analysis method based on sparse low-rank tensor decomposition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938072A (en) * 2012-10-20 2013-02-20 复旦大学 Dimensionality reduction and classification method for hyperspectral imagery based on block low-rank tensor analysis
CN105160623A (en) * 2015-08-17 2015-12-16 河南科技学院 Unsupervised hyperspectral data dimension reduction method based on block low-rank tensor model
CN107609580A (en) * 2017-08-29 2018-01-19 天津大学 A transductive low-rank tensor discriminant analysis method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104644205A (en) * 2015-03-02 2015-05-27 上海联影医疗科技有限公司 Method and system for positioning patient during diagnostic imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Robust Principal Component Analysis?; Emmanuel J. Candès et al.; Journal of the ACM; 2011-03-31; Vol. 58, No. 3; full text *

Also Published As

Publication number Publication date
CN109446473A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
Zhou et al. Tensor factorization for low-rank tensor completion
Bobin et al. Sparsity and morphological diversity in blind source separation
Tarzanagh et al. Fast randomized algorithms for t-product based tensor operations and decompositions with applications to imaging data
Abolghasemi et al. Fast and incoherent dictionary learning algorithms with application to fMRI
Fadili et al. Image decomposition and separation using sparse representations: An overview
Xu et al. A fast patch-dictionary method for whole image recovery
Le Magoarou et al. Flexible multilayer sparse approximations of matrices and applications
CN109446473B (en) Robust tensor principal component analysis method based on blocks
CN109671029B (en) Image denoising method based on gamma norm minimization
CN108510013B (en) Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix
Yu et al. Quaternion-based weighted nuclear norm minimization for color image denoising
Li et al. A fast algorithm for learning overcomplete dictionary for sparse representation based on proximal operators
Feng et al. Robust block tensor principal component analysis
CN106972862B (en) Group sparse compressed sensing image reconstruction method based on truncation kernel norm minimization
CN108765313B (en) Hyperspectral image denoising method based on intra-class low-rank structure representation
Liu et al. Mixed noise removal via robust constrained sparse representation
Khmag Digital image noise removal based on collaborative filtering approach and singular value decomposition
Wen et al. The power of complementary regularizers: Image recovery via transform learning and low-rank modeling
KR100633555B1 (en) Signal processing
Yufeng et al. Research on SAR image change detection algorithm based on hybrid genetic FCM and image registration
Fei et al. Adaptive PCA transforms with geometric morphological grouping for image noise removal
Wang et al. Minimum unbiased risk estimate based 2DPCA for color image denoising
Li et al. Alternating direction method of multipliers for solving dictionary learning models
Stott et al. A class of multidimensional NIPALS algorithms for quaternion and tensor partial least squares regression
Ting et al. A Novel Approach for Arbitrary-Shape ROI Compression of Medical Images Using Principal Component Analysis (PCA)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant