CN109921799B - Tensor compression method based on energy-gathering dictionary learning - Google Patents

Tensor compression method based on energy-gathering dictionary learning

Info

Publication number
CN109921799B
Authority
CN
China
Prior art keywords
tensor
dictionary
matrix
representing
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910126833.5A
Other languages
Chinese (zh)
Other versions
CN109921799A (en)
Inventor
张祖凡
毛军伟
甘臣权
孙韶辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910126833.5A priority Critical patent/CN109921799B/en
Publication of CN109921799A publication Critical patent/CN109921799A/en
Application granted granted Critical
Publication of CN109921799B publication Critical patent/CN109921799B/en

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a tensor compression method based on energy-gathering dictionary learning, belonging to the field of signal processing. The method comprises the following steps: 1. performing Tucker decomposition and sparse representation on the tensor to obtain a dictionary, a sparse coefficient tensor and a core tensor; 2. obtaining a new sparse representation of the tensor through the relationship between the sparse coefficient tensor and the core tensor; 3. reducing the dimension of the dictionary in the mapping matrix with an energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor. The proposed tensor compression algorithm based on energy-gathering dictionary learning achieves effective compression of the tensor and retains the information of the original tensor more effectively than other compression algorithms, thereby achieving a better denoising effect.

Description

Tensor compression method based on energy-gathering dictionary learning
Technical Field
The invention belongs to the field of signal processing, and particularly relates to a tensor signal compression algorithm based on energy-gathering dictionary learning, which can realize effective compression of tensor signals.
Background
With the development of information technology, multidimensional signals play an increasingly important role in the field of signal processing. At the same time, multi-dimensional (MD) signals place a heavy burden on transmission and storage. To address the challenges of multidimensional signal processing, attention has turned to tensor representations: representing a multidimensional signal as a tensor and processing it in that form greatly simplifies its handling. Compression of a multidimensional signal is therefore essentially effective compression of a tensor, so tensor compression algorithms play an increasingly important role in the field of multidimensional signal compression and are a hot spot of current research.
In recent years, researchers have proposed many effective compression algorithms for the tensor compression problem based on CP decomposition and Tucker decomposition; one class of approaches directly vectorizes the tensor data on the basis of tensor decomposition. However, according to the well-known Ugly Duckling theorem, there is no optimal representation of patterns without any prior knowledge; in other words, vectorization of tensor data is not always efficient. Specifically, it can cause the following problems. First, it destroys the inherent high-order structure and correlations in the original data, losing information or masking redundant information and the high-order dependencies of the original data, so that a more meaningful model representation cannot be obtained from the original data. Second, vectorization produces high-dimensional vectors, leading to overfitting, the curse of dimensionality, and small-sample problems.
On the other hand, sparse representations of tensors have been applied to tensor compression. Because of the equivalence between the tensor Tucker model and the Kronecker representation, a tensor can be defined as a representation over a given Kronecker dictionary with a certain sparsity structure, such as multidirectional sparsity or block sparsity. With the emergence of tensor sparsity, sparse coding methods such as Kronecker-OMP and N-way Block OMP have appeared and have brought great convenience to tensor compression; correspondingly, many dictionary learning algorithms have been developed, performing sparse coding and dictionary learning on each dimension of the tensor to achieve sparse representation. However, these sparse-representation-based tensor processing algorithms often introduce new noise, which affects the accuracy of the data. In addition, determining the sparsity level during sparse representation also poses a challenge to tensor processing.
Therefore, when processing a tensor containing a large amount of data, it is necessary to compress the tensor effectively, extract the useful information, and reduce transmission and storage costs.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. It provides a tensor compression method based on energy-gathering dictionary learning that improves the retention of the original data and enhances the denoising capability. The technical scheme of the invention is as follows:
A tensor compression method based on energy-gathering dictionary learning comprises the following steps:
Step 1): acquiring a multi-dimensional signal, expressing it as a tensor, taking the tensor as input, and performing sparse representation and Tucker decomposition on it;
Step 2): obtaining a new sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor from the sparse representation and the core tensor obtained by the Tucker decomposition;
Step 3): converting the new tensor sparse representation of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors;
Step 4): reducing the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor.
Further, step 1) obtains, through the Tucker decomposition, the form of the product of the core tensor and a mapping matrix in each dimension. For a third-order tensor the Tucker decomposition gives the expression

χ = Z ×_1 A ×_2 B ×_3 C

where A ∈ R^(I×P), B ∈ R^(J×Q), C ∈ R^(K×R) are orthogonal matrices, also called factor matrices, reflecting the principal components in each dimension, and Z ∈ R^(P×Q×R) is the core tensor, reflecting the correlations among the dimensions. P, Q and R are the numbers of columns of the factor matrices A, B and C, and I, J and K are the sizes of the dimensions of the original tensor; if P, Q and R are smaller than I, J and K, the core tensor can be regarded as a compression of the original tensor.

For an N-order tensor, the Tucker decomposition takes the form

χ = Z ×_1 A_1 ×_2 A_2 … ×_N A_N

where χ denotes the input tensor signal, Z denotes the core tensor, and A_i denotes the decomposition matrix in dimension i, which is an orthogonal matrix.
Further, the sparse representation of the tensor in step 1) takes the form

χ̂ = S ×_1 D_1 ×_2 D_2 … ×_N D_N

where χ̂ denotes the signal after sparse representation, S denotes the sparse coefficient tensor, N denotes the order of the tensor, and D_i denotes the dictionary in dimension i.
Further, in step 2), observing the sparse representation of the tensor and the Tucker decomposition shows that the two expressions are similar. The core tensor obtained from the Tucker decomposition can be written as

Z = χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T

Using the approximate relation between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the sparse representation of the tensor gives

χ̂ ≈ χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T ×_1 D_1 ×_2 D_2 … ×_N D_N
Further, step 3) converts the new tensor sparse representation of step 2) into a mapping-matrix form involving the dictionaries, using the following operational properties of tensors.

In tensor operations, when m = n,

Ψ ×_m A ×_n B = Ψ ×_n (BA)

where Ψ denotes an N-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix; when m ≠ n,

Ψ ×_m A ×_n B = Ψ ×_n B ×_m A

Applying these two properties to the new sparse representation gives

χ̂ ≈ χ ×_1 (D_1 A_1^T) ×_2 (D_2 A_2^T) … ×_N (D_N A_N^T)
Further, in step 4) the dictionaries in the mapping matrices are dimension-reduced by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor. The specific steps are:

Input: a tensor χ formed from T training samples, the sparsity k, the maximum number of iterations Itermax, and the termination threshold ε.
1. Initialize each dictionary D_i as a Gaussian random matrix and normalize the columns of every dictionary.
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 … ×_N A_N.
3. Pre-process the initialized dictionaries with the γ(D) procedure of the energy-gathering dictionary learning algorithm.
4. With the update count i = 0, compute S from the approximation S ≈ Z = χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T, and compute the absolute error E_0 before any iterative update.
5. i-th round of dictionary updates:
For k = 1 : N
compute the updated value of the k-th dictionary with the least-squares update D_k = Y_(k)(S_(k)Φ^T)^†, where Y_(k) and S_(k) are the k-mode unfoldings of the training tensor and of the coefficient tensor and Φ is the Kronecker product of the remaining dictionaries with I_T (detailed below);
End
6. Normalize the N dictionaries obtained in the previous step.
7. Update the sparse coefficient tensor S_i with the dictionaries of the i-th update.
8. Compute the absolute error E_i and the relative error E_r after the i-th update.
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum limit, terminate the loop; otherwise return to step 5.
10. Obtain the learned dictionaries D_1, …, D_N.
11. Compute the singular value decomposition of each learned dictionary to obtain U_d, and form the reduced dictionary P = U_d^T D.
12. Substitute the reduced dictionary P into the mapping matrix T_i = P_i A_i^T.
Output: the compressed tensor Y = χ ×_1 T_1 ×_2 T_2 … ×_N T_N.
Further, in order that the dictionary P after dimension reduction retains the principal components of the original dictionary D, the dictionary D needs to be preprocessed. The preprocessing proceeds as follows. First,

D^T D = uΛv^T

where u is the left singular matrix of the singular value decomposition, v is the right singular matrix and Λ is the singular value matrix,

Λ = diag(λ_1, λ_2, …, λ_k)

Second, the singular values are updated: with k denoting the number of dictionary columns and t_d denoting a principal component threshold, the first d singular values, namely those carrying the dominant energy as determined by t_d, are retained as the updated values λ̂_1, …, λ̂_d, and the last r singular values are attenuated to the updated values λ̂_{d+1}, …, λ̂_{d+r}.

The new singular values thereby obtained form

Λ̂ = diag(λ̂_1, …, λ̂_d, λ̂_{d+1}, …, λ̂_{d+r})

which, together with the initial left and right singular matrices, forms the new dictionary, i.e.

uΛ̂v^T
Further, after the preprocessing, the dictionary needs to be updated. A tensor-based multidimensional dictionary learning algorithm, TKSVD, is adopted in the updating process to complete the dictionary update for the high-dimensional tensor signal. The TKSVD algorithm differs from the K-SVD dictionary update algorithm in the following two respects:

(1) When learning tensor dictionaries, unlike in the two-dimensional signal case, the definition of the tensor norm gives the objective

min ‖Y_(i) - D_i S_(i) Φ^T‖_F^2

where Φ denotes the Kronecker product of the T-order identity matrix I_T with the dictionaries of all dimensions other than the i-th, D_N denoting the N-th-dimension dictionary and D_1 the 1st-dimension dictionary. Solving the above formula by the least-squares method, the updated value of the dictionary D_i is

D_i = Y_(i) (S_(i) Φ^T)^†

where Y_(i) denotes the i-mode unfolding matrix of the tensor and (·)^† denotes the pseudo-inverse; for a matrix M of full row rank,

M^† = M^T (M M^T)^{-1}

with M^T denoting the transpose of the matrix M.

(2) On the other hand, after one iteration is finished, the absolute error and the relative error between the data recoverable from the current dictionaries and sparse coefficients and the original training data are calculated. The absolute error after the i-th iteration is still defined through the Frobenius norm of the tensor,

E_i = ‖χ - S ×_1 D_1 ×_2 D_2 … ×_N D_N‖_F

where S denotes the coefficient tensor and the residual represents the error between the true signal and the approximated signal after removing G atoms.

After the dictionary has been updated, it is dimension-reduced. The singular value decomposition of the updated dictionary is written as

D = U_d Θ_d V_d^T + U_r Θ_r V_r^T

where U_d denotes the first d columns and U_r the last r columns of the left singular matrix, Θ_d denotes the first d singular values and Θ_r the last r singular values of the singular value matrix, and V_d denotes the first d columns and V_r the last r columns of the right singular matrix. The reduced dictionary is represented as

P = U_d^T D

Substituting the reduced dictionary P into the mapping matrix T completes the tensor compression.
The invention has the following advantages and beneficial effects:
the invention provides a tensor compression algorithm based on energy-gathering dictionary learning. The specific innovative steps comprise: 1) The sparse representation of the tensor is applied to tensor compression, and the denoising capability is superior to other algorithms; 2) The compression of the tensor is completed through the dimensionality reduction of the mapping matrix, so that the damage of vectorization operation to the internal data structure of the tensor is avoided; 3) Dimension reduction is performed on the dictionary of the tensor by adopting an energy-gathered dictionary learning algorithm, so that a data structure between data in each dimension can be reserved, and the data retention capacity is improved.
Drawings
FIG. 1 is a diagram of the Tucker decomposition of a third-order tensor as used in the preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the tensor compression algorithm based on energy-gathering dictionary learning;
FIG. 3 is a flow diagram of the tensor compression algorithm based on energy-gathering dictionary learning.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the method mainly solves the problems that in the traditional tensor compression algorithm, a data structure is damaged, information is lost, and new noise is introduced. The method mainly includes the steps that a dictionary, a sparse coefficient tensor and a nuclear tensor are obtained through Tak decomposition and sparse representation, then a new sparse representation form is formed through the approximate relation of the sparse coefficient tensor and the nuclear tensor, and finally the dictionary in the sparse representation is subjected to dimensionality reduction through an energy-gathering dictionary learning algorithm, so that tensor compression is achieved.
FIG. 2 is a general flow chart of the present invention, which is described below with reference to the accompanying drawings, and includes the following steps:
the method comprises the following steps: simultaneously carrying out sparse representation and tach decomposition on the input tensor, and obtaining the following representation in the sparse representation process
Figure BDA0001973893910000071
After the tensor is subjected to sparse representation, a product form of the sparse coefficient tensor in each dimension is obtained, in other words, in the sparse representation, the sparse coefficient tensor takes the dictionary D as a mapping matrix and is projected in each dimension, and therefore the sparse tensor is obtained.
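By way of illustration, a minimal Python/NumPy sketch of the mode-n product used in this representation is given below; the helper name mode_n_product and all shapes are assumptions for illustration and are not part of the patent.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode` (0-indexed), i.e. the mode-n product."""
    t = np.moveaxis(tensor, mode, 0)
    unfolded = t.reshape(t.shape[0], -1)            # mode-n unfolding of the tensor
    result = matrix @ unfolded                      # multiply every mode-n fibre by `matrix`
    new_shape = (matrix.shape[0],) + t.shape[1:]
    return np.moveaxis(result.reshape(new_shape), 0, mode)

# Sparse representation chi_hat = S x_1 D_1 x_2 D_2 x_3 D_3 for a third-order example
S = np.random.randn(5, 5, 5)                        # sparse coefficient tensor (dense here, for brevity)
D = [np.random.randn(8, 5) for _ in range(3)]       # one dictionary per dimension
chi_hat = S
for i, D_i in enumerate(D):
    chi_hat = mode_n_product(chi_hat, D_i, i)
print(chi_hat.shape)                                # (8, 8, 8)
```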
Given a third-order tensor χ ∈ R^(I×J×K), the Tucker decomposition process is shown in FIG. 1. The Tucker decomposition yields the expression

χ = Z ×_1 A ×_2 B ×_3 C

where A ∈ R^(I×P), B ∈ R^(J×Q), C ∈ R^(K×R) are orthogonal matrices, also called factor matrices, reflecting the principal components in each dimension, and Z ∈ R^(P×Q×R) is the core tensor, reflecting the correlations among the dimensions. P, Q and R are the numbers of columns of the factor matrices A, B and C, and I, J and K are the sizes of the dimensions of the original tensor; if P, Q and R are smaller than I, J and K, the core tensor can be regarded as a compression of the original tensor.

More generally, for an N-order tensor the Tucker decomposition takes the form

χ = Z ×_1 A_1 ×_2 A_2 … ×_N A_N
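For concreteness, a truncated Tucker decomposition can be computed with the classical HOSVD procedure; the sketch below reuses mode_n_product from the previous sketch, and the ranks and variable names are illustrative assumptions rather than values prescribed by the patent.

```python
import numpy as np
# assumes mode_n_product from the previous sketch

def hosvd(chi, ranks):
    """Truncated HOSVD: returns the core tensor Z and orthogonal factor matrices A_i."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolded = np.moveaxis(chi, mode, 0).reshape(chi.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])                     # orthogonal factor matrix A_i (I_i x r)
    Z = chi
    for mode, A in enumerate(factors):
        Z = mode_n_product(Z, A.T, mode)             # core tensor Z = chi x_1 A_1^T ... x_N A_N^T
    return Z, factors

chi = np.random.randn(10, 12, 14)                    # third-order input tensor
Z, (A, B, C) = hosvd(chi, ranks=(4, 5, 6))
print(Z.shape)                                       # (4, 5, 6): smaller core, i.e. a compression of chi
```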
Step two: obtain a new sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor of the sparse representation and the core tensor of the Tucker decomposition.

Observing the sparse representation of the tensor and the Tucker decomposition shows that the two expressions are similar. The core tensor obtained from the Tucker decomposition can be written as

Z = χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T

Using the approximate relation between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the sparse representation of the tensor gives

χ̂ ≈ χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T ×_1 D_1 ×_2 D_2 … ×_N D_N
Step three: convert the new tensor sparse representation into a mapping-matrix form involving the dictionaries, using the following operational properties of tensors.

In tensor operations, when m = n,

Ψ ×_m A ×_n B = Ψ ×_n (BA)

where Ψ denotes an N-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix; when m ≠ n,

Ψ ×_m A ×_n B = Ψ ×_n B ×_m A

Applying these two properties to the new sparse representation gives

χ̂ ≈ χ ×_1 (D_1 A_1^T) ×_2 (D_2 A_2^T) … ×_N (D_N A_N^T)

Letting T_i = D_i A_i^T and substituting T_i into the above formula gives

χ̂ ≈ χ ×_1 T_1 ×_2 T_2 … ×_N T_N

From this expression, the sparse representation takes the form of a projection of the original tensor in each dimension. In the MPCA algorithm, a high-dimensional tensor is compressed by processing the projection matrices, i.e. the principal components of each projection matrix are retained so as to compress it, thereby compressing the tensor. Inspired by the MPCA idea, the matrices T_i are here dimension-reduced in order to compress the original tensor χ. To carry out the dimension reduction of T_i, the dictionary D_i contained in T_i is dimension-reduced, which in turn reduces the dimension of the mapping matrix T_i; a small numerical check of the identity behind T_i is sketched below.
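The following check is illustrative only; it reuses the mode_n_product helper from the earlier sketch, and the matrices here are random stand-ins for A_1 and D_1.

```python
import numpy as np
# assumes mode_n_product from the earlier sketch

chi = np.random.randn(10, 12, 14)
A1 = np.random.randn(10, 4)                  # stand-in for the factor matrix A_1 (I x P)
D1 = np.random.randn(10, 4)                  # dictionary for dimension 1 (I x P)

# chi x_1 A_1^T x_1 D_1  ==  chi x_1 (D_1 @ A_1^T)  =  chi x_1 T_1
lhs = mode_n_product(mode_n_product(chi, A1.T, 0), D1, 0)
T1 = D1 @ A1.T                               # mapping matrix T_1 = D_1 A_1^T
rhs = mode_n_product(chi, T1, 0)
print(np.allclose(lhs, rhs))                 # True
```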
Step four: and (3) realizing the compression of the tensor by using the thought of dictionary dimension reduction in the energy-gathered dictionary learning algorithm. In the algorithm, in order to enable the dictionary P after dimensionality reduction to keep the principal components in the original dictionary D, the dictionary D needs to be preprocessed, and the processing process is as follows
D T D=uΛv T
u is left singular matrix of singular value decomposition, v is right singular matrix, Λ is singular value matrix, and the expression form is
Figure BDA0001973893910000091
Secondly, updating singular values
Figure BDA0001973893910000092
Where k denotes the number of columns in the dictionary, t d Which is indicative of the threshold value of the principal component,
Figure BDA0001973893910000093
representing the first d singular values after update>
Figure BDA0001973893910000094
Representing the updated last r singular values.
Thereby obtaining new singular values
Figure BDA0001973893910000095
Figure BDA0001973893910000096
Form a new dictionary with the initial left and right singular matrices, i.e.
Figure BDA0001973893910000097
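A possible realization of this γ(D) preprocessing is sketched below. The exact attenuation rule for the trailing singular values is not spelled out in readable form above, so the cumulative-energy criterion and the shrink factor used here are assumptions, and the sketch applies the SVD to D directly rather than to D^T D.

```python
import numpy as np

def energy_concentrate(D, t_d=0.95, shrink=0.1):
    """Keep the singular values of D carrying a fraction t_d of the energy, attenuate the rest.
    The cumulative-energy criterion and the attenuation factor are illustrative assumptions."""
    u, lam, vT = np.linalg.svd(D, full_matrices=False)
    energy = np.cumsum(lam ** 2) / np.sum(lam ** 2)
    d = int(np.searchsorted(energy, t_d)) + 1        # number of principal components retained
    lam_new = lam.copy()
    lam_new[d:] *= shrink                            # attenuate the trailing r = k - d singular values
    return u @ np.diag(lam_new) @ vT, d

D = np.random.randn(64, 32)                          # a dictionary with 32 atoms
D_gamma, d = energy_concentrate(D)
print(d, np.linalg.norm(D - D_gamma))                # components kept, and how much D changed
```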
After preprocessing the dictionaries, they need to be updated. A tensor-based multidimensional dictionary learning algorithm (TKSVD) is adopted in the updating process to complete the dictionary update for the high-dimensional tensor signals. Two points distinguish the TKSVD algorithm from the K-SVD dictionary update algorithm:

(1) When learning tensor dictionaries, unlike in the two-dimensional signal case, the definition of the tensor norm gives the objective

min ‖Y_(i) - D_i S_(i) Φ^T‖_F^2

where Φ denotes the Kronecker product of the T-order identity matrix I_T with the dictionaries of all dimensions other than the i-th, D_N denoting the N-th-dimension dictionary and D_1 the 1st-dimension dictionary. This can clearly be solved by the least-squares method, giving the updated value of the dictionary D_i as

D_i = Y_(i) (S_(i) Φ^T)^†

where Y_(i) denotes the i-mode unfolding matrix of the tensor and (·)^† denotes the pseudo-inverse; for a matrix M of full row rank,

M^† = M^T (M M^T)^{-1}

with M^T denoting the transpose of the matrix M.

(2) On the other hand, after one iteration is finished, the absolute error and the relative error between the data recoverable from the current dictionaries and sparse coefficients and the original training data are calculated. The absolute error after the i-th iteration is still defined through the Frobenius norm of the tensor,

E_i = ‖χ - S ×_1 D_1 ×_2 D_2 … ×_N D_N‖_F

where S denotes the coefficient tensor and the residual represents the error between the true signal and the approximated signal after removing G atoms.
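A minimal sketch of this least-squares update for one mode is given below. It uses a C-order unfolding, so the Kronecker factors appear in increasing mode order, which may differ from the ordering of the formula above; the helper names and the handling of the training-sample mode (an identity matrix passed as its "dictionary") are assumptions for illustration.

```python
import numpy as np
from functools import reduce
# assumes mode_n_product from the earlier sketch

def unfold(tensor, mode):
    """C-order mode-n unfolding."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tksvd_update(chi, S, dicts, i):
    """Least-squares update of the i-th dictionary: D_i = Y_(i) (S_(i) Phi^T)^+ ."""
    others = [D for j, D in enumerate(dicts) if j != i]
    Phi = reduce(np.kron, others)                    # Kronecker product of the other dictionaries
    Y_i, S_i = unfold(chi, i), unfold(S, i)
    return Y_i @ np.linalg.pinv(S_i @ Phi.T)

def absolute_error(chi, S, dicts):
    """Frobenius-norm error between chi and its reconstruction from S and the dictionaries."""
    approx = S
    for mode, D in enumerate(dicts):
        approx = mode_n_product(approx, D, mode)
    return np.linalg.norm(chi - approx)
```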
After the dictionary is updated, it is dimension-reduced. The singular value decomposition of the updated dictionary is written as

D = U_d Θ_d V_d^T + U_r Θ_r V_r^T

where U_d denotes the first d columns and U_r the last r columns of the left singular matrix, Θ_d denotes the first d singular values and Θ_r the last r singular values of the singular value matrix, and V_d denotes the first d columns and V_r the last r columns of the right singular matrix. The reduced dictionary is represented as

P = U_d^T D

Substituting the reduced dictionary P into the mapping matrix T completes the tensor compression.
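Under the same assumptions as the earlier sketches, the final dimension-reduction and compression step can be written as follows; the target dimension d and the relation P = U_d^T D follow the description above, while everything else is illustrative.

```python
import numpy as np
# assumes mode_n_product from the earlier sketch

def reduce_dictionary(D, d):
    """Project the dictionary onto its first d left singular vectors: P = U_d^T D."""
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :d].T @ D                            # reduced dictionary P (d x k)

def compress(chi, dicts, factors, d):
    """Compress chi with the reduced mapping matrices T_i = P_i A_i^T."""
    Y = chi
    for mode, (D_i, A_i) in enumerate(zip(dicts, factors)):
        P_i = reduce_dictionary(D_i, d)
        T_i = P_i @ A_i.T                            # reduced mapping matrix (d x I_i)
        Y = mode_n_product(Y, T_i, mode)
    return Y                                         # compressed tensor, size d in every dimension
```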
The tensor compression algorithm based on energy-gathering dictionary learning thus comprises the following specific steps:

Input: a tensor χ formed from T training samples, the sparsity k, the maximum number of iterations Itermax, and the termination threshold ε.
1. Initialize each dictionary D_i as a Gaussian random matrix and normalize the columns of every dictionary.
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 … ×_N A_N.
3. Pre-process the initialized dictionaries with the γ(D) procedure of the energy-gathering dictionary learning algorithm.
4. With the update count i = 0, compute S from the approximation S ≈ Z = χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T, and compute the absolute error E_0 before any iterative update.
5. i-th round of dictionary updates:
For k = 1 : N
compute the updated value of the k-th dictionary with the least-squares update D_k = Y_(k)(S_(k)Φ^T)^† described above;
End
6. Normalize the N dictionaries obtained in the previous step.
7. Update the sparse coefficient tensor S_i with the dictionaries of the i-th update.
8. Compute the absolute error E_i and the relative error E_r after the i-th update.
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum limit, terminate the loop; otherwise return to step 5.
10. Obtain the learned dictionaries D_1, …, D_N.
11. Compute the singular value decomposition of each learned dictionary to obtain U_d, and form the reduced dictionary P = U_d^T D.
12. Substitute the reduced dictionary P into the mapping matrix T_i = P_i A_i^T.
Output: the compressed tensor Y = χ ×_1 T_1 ×_2 T_2 … ×_N T_N.
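Tying the pieces together, an end-to-end sketch of the loop above is given below. It reuses the helpers from the earlier sketches (mode_n_product, hosvd, energy_concentrate, tksvd_update, absolute_error, compress); the initialization, the use of S ≈ Z in place of an explicit sparse coding step, and the termination test are simplifications, not the patent's exact procedure.

```python
import numpy as np
# assumes mode_n_product, hosvd, energy_concentrate, tksvd_update, absolute_error, compress from above

def tensor_compress(chi, ranks, d, iter_max=20, eps=1e-4):
    Z, factors = hosvd(chi, ranks)                            # step 2: Tucker decomposition
    dicts = []
    for I, r in zip(chi.shape, ranks):                        # step 1: Gaussian dictionaries, normalized columns
        G = np.random.randn(I, r)
        dicts.append(G / np.linalg.norm(G, axis=0))
    dicts = [energy_concentrate(D)[0] for D in dicts]         # step 3: gamma(D) pre-processing
    S = Z                                                     # step 4: coefficient tensor approximated by the core
    E_prev = absolute_error(chi, S, dicts)
    for _ in range(iter_max):                                 # steps 5-9: alternate dictionary updates
        for i in range(len(dicts)):
            D_new = tksvd_update(chi, S, dicts, i)
            dicts[i] = D_new / np.linalg.norm(D_new, axis=0)  # step 6: column normalization
        E = absolute_error(chi, S, dicts)
        if abs(E_prev - E) / max(E_prev, 1e-12) < eps:        # relative-error termination test
            break
        E_prev = E
    return compress(chi, dicts, factors, d)                   # steps 10-12: reduce dictionaries and compress

chi = np.random.randn(10, 12, 14)
Y = tensor_compress(chi, ranks=(4, 5, 6), d=3)
print(Y.shape)                                                # (3, 3, 3)
```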
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure in any way whatsoever. After reading the description of the present invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (4)

1. A tensor compression method based on energy-gathering dictionary learning, characterized by comprising the following steps:
step 1): acquiring a multi-dimensional signal, expressing it as a tensor, taking the tensor as input, and performing sparse representation and Tucker decomposition on it;
step 2): obtaining a new sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor from the sparse representation and the core tensor obtained by the Tucker decomposition;
step 3): converting the new tensor sparse representation of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors;
step 4): reducing the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor;
the step 1) obtains the form of the product of the core tensor and the mapping matrix in each dimension through the Tack decomposition, and the following expression is obtained in the Tack decomposition process
Figure FDA0004045267790000011
A∈R I×P ,B∈R J×Q ,C∈R K×R The orthogonal matrix is also called a factor matrix, reflects principal components in each dimension, and Z belongs to R P×Q×R The method is a core tensor and reflects the relevant conditions of all dimensions, P, Q and R respectively correspond to the column numbers of factor matrixes A, B and C, I, J and K represent the original factorsThe size of each dimension of the initial tensor is determined, and if P, Q and R are smaller than I, J and K, the core tensor can be regarded as the compression of the original tensor;
for an N-order tensor, the Tak decomposition form is
χ=Z× 1 A 1 × 2 A 2 × N A N
χ denotes an input tensor signal, Z denotes a core tensor, A i Representing a decomposition matrix on each dimension, namely an orthogonal matrix;
the expression form of the sparse representation of the tensor of the step 1) is
Figure FDA0004045267790000012
Figure FDA0004045267790000013
Representing the signal after sparse representation, S representing the sparse coefficient tensor, N representing the order of the self-tensor, D i Representing dictionaries in various dimensions;
in the step 2), by observing the sparse representation of the tensor and the Tack decomposition form, the two expressions are found to be similar, and the expression of the nuclear tensor obtained by the Tack decomposition is shown as
Z=χ× 1 A 1 T × 2 A 2 T ...× N A N T
Substituting the expression of the nuclear tensor into tensor sparse representation by using the approximate relation between the sparse coefficient tensor and the nuclear tensor to obtain
Figure FDA0004045267790000021
the step 3) converts the new tensor sparse representation of step 2) into a mapping-matrix form involving the dictionaries according to the following operational properties of tensors:

in tensor operations, when m = n,

Ψ ×_m A ×_n B = Ψ ×_n (BA)

where Ψ denotes an N-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix; when m ≠ n,

Ψ ×_m A ×_n B = Ψ ×_n B ×_m A;

applying the above two properties to the new sparse representation gives

χ̂ ≈ χ ×_1 (D_1 A_1^T) ×_2 (D_2 A_2^T) … ×_N (D_N A_N^T).
2. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 1, wherein the step 4) reduces the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing the tensor compression, and the specific steps include:

Input: a tensor χ formed from T training samples, the sparsity k, the maximum number of iterations Itermax, and the termination threshold ε.
1. Initialize each dictionary D_i as a Gaussian random matrix and normalize the columns of every dictionary.
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 … ×_N A_N.
3. Pre-process the initialized dictionaries with the γ(D) procedure of the energy-gathering dictionary learning algorithm.
4. With the update count i = 0, compute S from the approximation S ≈ Z = χ ×_1 A_1^T ×_2 A_2^T … ×_N A_N^T, and compute the absolute error E_0 before any iterative update.
5. i-th round of dictionary updates:
For k = 1 : N
compute the updated value of the k-th dictionary with the least-squares update D_k = Y_(k)(S_(k)Φ^T)^†, where Y_(k) and S_(k) are the k-mode unfoldings of the training tensor and of the coefficient tensor and Φ is the Kronecker product of the remaining dictionaries with I_T;
End
6. Normalize the N dictionaries obtained in the previous step.
7. Update the sparse coefficient tensor S_i with the dictionaries of the i-th update.
8. Compute the absolute error E_i and the relative error E_r after the i-th update.
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum limit, terminate the loop; otherwise return to step 5.
10. Obtain the learned dictionaries D_1, …, D_N.
11. Compute the singular value decomposition of each learned dictionary to obtain U_d, and form the reduced dictionary P = U_d^T D.
12. Substitute the reduced dictionary P into the mapping matrix T_i = P_i A_i^T.
Output: the compressed tensor Y = χ ×_1 T_1 ×_2 T_2 … ×_N T_N.
3. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 2, wherein, in order for the dictionary P after dimension reduction to retain the principal components of the original dictionary D, the dictionary D is preprocessed as follows:

first,

D^T D = uΛv^T

where u is the left singular matrix of the singular value decomposition, v is the right singular matrix and Λ is the singular value matrix,

Λ = diag(λ_1, λ_2, …, λ_k);

second, the singular values are updated: with k denoting the number of dictionary columns and t_d denoting a principal component threshold, the first d singular values, namely those carrying the dominant energy as determined by t_d, are retained as the updated values λ̂_1, …, λ̂_d, and the last r singular values are attenuated to the updated values λ̂_{d+1}, …, λ̂_{d+r};

the new singular values thereby obtained form

Λ̂ = diag(λ̂_1, …, λ̂_d, λ̂_{d+1}, …, λ̂_{d+r})

which, together with the initial left and right singular matrices, forms the new dictionary, i.e.

uΛ̂v^T.
4. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 3, wherein, after the dictionary is preprocessed, the dictionary needs to be updated; a tensor-based multidimensional dictionary learning algorithm TKSVD is adopted in the updating process to complete the dictionary update of the high-dimensional tensor signal; differing from the K-SVD dictionary update algorithm, the TKSVD algorithm specifically comprises:

(1) when learning tensor dictionaries, unlike in the two-dimensional signal case, the definition of the tensor norm gives the objective

min ‖Y_(i) - D_i S_(i) Φ^T‖_F^2

where Φ denotes the Kronecker product of the T-order identity matrix I_T with the dictionaries of all dimensions other than the i-th, D_N denoting the N-th-dimension dictionary and D_1 the 1st-dimension dictionary; solving the above formula by the least-squares method, the updated value of the dictionary D_i is

D_i = Y_(i) (S_(i) Φ^T)^†

where Y_(i) denotes the i-mode unfolding matrix of the tensor and (·)^† denotes the pseudo-inverse, i.e., for a matrix M of full row rank,

M^† = M^T (M M^T)^{-1}

with M^T denoting the transpose of the matrix M;

(2) on the other hand, after one iteration is finished, the absolute error and the relative error between the data recoverable from the current dictionaries and sparse coefficients and the original training data are calculated; the absolute error after the i-th iteration is still defined through the Frobenius norm of the tensor,

E_i = ‖χ - S ×_1 D_1 ×_2 D_2 … ×_N D_N‖_F

where S denotes the coefficient tensor and the residual represents the error between the true signal and the approximated signal after removing G atoms;

after the dictionary is updated, the dictionary is dimension-reduced; the singular value decomposition of the updated dictionary is written as

D = U_d Θ_d V_d^T + U_r Θ_r V_r^T

where U_d denotes the first d columns and U_r the last r columns of the left singular matrix, Θ_d denotes the first d singular values and Θ_r the last r singular values of the singular value matrix, and V_d denotes the first d columns and V_r the last r columns of the right singular matrix; the reduced dictionary is represented as

P = U_d^T D

and substituting the reduced dictionary P into the mapping matrix T completes the tensor compression.
CN201910126833.5A 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning Active CN109921799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126833.5A CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910126833.5A CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Publications (2)

Publication Number Publication Date
CN109921799A CN109921799A (en) 2019-06-21
CN109921799B true CN109921799B (en) 2023-03-31

Family

ID=66961845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126833.5A Active CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Country Status (1)

Country Link
CN (1) CN109921799B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110579967B (en) * 2019-09-23 2020-06-30 中南大学 Process monitoring method based on simultaneous dimensionality reduction and dictionary learning
CN111241076B (en) * 2020-01-02 2023-10-31 西安邮电大学 Stream data increment processing method and device based on tensor chain decomposition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014096118A (en) * 2012-11-12 2014-05-22 Nippon Telegr & Teleph Corp <Ntt> Device, method, and program for missing value prediction and device, method, and program for commodity recommendation
CN107561576A (en) * 2017-08-31 2018-01-09 电子科技大学 Seismic signal method based on dictionary learning regularization rarefaction representation
CN108305297A (en) * 2017-12-22 2018-07-20 上海交通大学 A kind of image processing method based on multidimensional tensor dictionary learning algorithm

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5861827A (en) * 1996-07-24 1999-01-19 Unisys Corporation Data compression and decompression system with immediate dictionary updating interleaved with string search
CN1297822C (en) * 2003-02-21 2007-01-31 重庆邮电学院 Estimation method for radio orientation incoming wave direction based on TD-SCMA
US7490071B2 (en) * 2003-08-29 2009-02-10 Oracle Corporation Support vector machines processing system
US20100246920A1 (en) * 2009-03-31 2010-09-30 Iowa State University Research Foundation, Inc. Recursive sparse reconstruction
CN102130862B (en) * 2011-04-26 2013-07-17 重庆邮电大学 Method for reducing overhead caused by channel estimation of communication system
US8935308B2 (en) * 2012-01-20 2015-01-13 Mitsubishi Electric Research Laboratories, Inc. Method for recovering low-rank matrices and subspaces from data in high-dimensional matrices
WO2015042873A1 (en) * 2013-09-27 2015-04-02 Google Inc. Decomposition techniques for multi-dimensional data
CN105099460B (en) * 2014-05-07 2018-05-04 瑞昱半导体股份有限公司 Dictionary compression method, dictionary decompression method and dictionary constructing method
US9870519B2 (en) * 2014-07-08 2018-01-16 Nec Corporation Hierarchical sparse dictionary learning (HiSDL) for heterogeneous high-dimensional time series
CN104318064B (en) * 2014-09-26 2018-01-30 大连理工大学 Head coherent pulse response three-dimensional data compression method based on model's Multidimensional decomposition technique
CN104683074B (en) * 2015-03-13 2018-09-11 重庆邮电大学 Extensive mimo system limited feedback method based on compressed sensing
CN104933684B (en) * 2015-06-12 2017-11-21 北京工业大学 A kind of light field method for reconstructing
US10235600B2 (en) * 2015-06-22 2019-03-19 The Johns Hopkins University System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing
CN106506007A (en) * 2015-09-08 2017-03-15 联发科技(新加坡)私人有限公司 A kind of lossless data compression and decompressing device and its method
CN106023098B (en) * 2016-05-12 2018-11-16 西安电子科技大学 Image mending method based on the more dictionary learnings of tensor structure and sparse coding
CN106097278B (en) * 2016-06-24 2021-11-30 北京工业大学 Sparse model, reconstruction method and dictionary training method of multi-dimensional signal
CN106897685A (en) * 2017-02-17 2017-06-27 深圳大学 Face identification method and system that dictionary learning and sparse features based on core Non-negative Matrix Factorization are represented
US10436871B2 (en) * 2017-04-24 2019-10-08 Cedars-Sinai Medical Center Low-rank tensor imaging for multidimensional cardiovascular MRI
CN107516129B (en) * 2017-08-01 2020-06-02 北京大学 Dimension self-adaptive Tucker decomposition-based deep network compression method
CN107507253B (en) * 2017-08-15 2020-09-01 电子科技大学 Multi-attribute body data compression method based on high-order tensor approximation
CN108521586B (en) * 2018-03-20 2020-01-14 西北大学 IPTV television program personalized recommendation method giving consideration to time context and implicit feedback
CN108510013B (en) * 2018-07-02 2020-05-12 电子科技大学 Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix
CN109241491A (en) * 2018-07-28 2019-01-18 天津大学 The structural missing fill method of tensor based on joint low-rank and rarefaction representation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014096118A (en) * 2012-11-12 2014-05-22 Nippon Telegr & Teleph Corp <Ntt> Device, method, and program for missing value prediction and device, method, and program for commodity recommendation
CN107561576A (en) * 2017-08-31 2018-01-09 电子科技大学 Seismic signal method based on dictionary learning regularization rarefaction representation
CN108305297A (en) * 2017-12-22 2018-07-20 上海交通大学 A kind of image processing method based on multidimensional tensor dictionary learning algorithm

Also Published As

Publication number Publication date
CN109921799A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
Zhang et al. Robust low-rank kernel multi-view subspace clustering based on the schatten p-norm and correntropy
Rathi et al. Statistical shape analysis using kernel PCA
CN108198147B (en) Multi-source image fusion denoising method based on discriminant dictionary learning
Majumdar et al. Robust greedy deep dictionary learning for ECG arrhythmia classification
Duan et al. K-CPD: Learning of overcomplete dictionaries for tensor sparse coding
WO2020010602A1 (en) Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
CN109921799B (en) Tensor compression method based on energy-gathering dictionary learning
CN110717519B (en) Training, feature extraction and classification method, device and storage medium
JP2012507793A (en) Complexity normalization pattern representation, search, and compression
Jin et al. Multiple graph regularized sparse coding and multiple hypergraph regularized sparse coding for image representation
Zhang et al. Two-dimensional non-negative matrix factorization for face representation and recognition
Wang et al. Efficient and robust discriminant dictionary pair learning for pattern classification
Qi et al. Two dimensional synthesis sparse model
CN111325275A (en) Robust image classification method and device based on low-rank two-dimensional local discriminant map embedding
CN113221660B (en) Cross-age face recognition method based on feature fusion
Chen et al. Augmented sparse representation for incomplete multiview clustering
CN108460412B (en) Image classification method based on subspace joint sparse low-rank structure learning
CN111798531B (en) Image depth convolution compressed sensing reconstruction method applied to plant monitoring
CN113920210A (en) Image low-rank reconstruction method based on adaptive graph learning principal component analysis method
CN112001865A (en) Face recognition method, device and equipment
Kärkkäinen et al. A Douglas–Rachford method for sparse extreme learning machine
CN110266318B (en) Measurement matrix optimization method based on gradient projection algorithm in compressed sensing signal reconstruction
CN112417234B (en) Data clustering method and device and computer readable storage medium
CN112149053A (en) Multi-view image characterization method based on low-rank correlation analysis
CN111475768A (en) Observation matrix construction method based on low coherence unit norm tight frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant