CN109921799A - Tensor compression method based on energy-gathering dictionary learning - Google Patents
Tensor compression method based on energy-gathering dictionary learning
- Publication number
- CN109921799A (application number CN201910126833.5A)
- Authority
- CN
- China
- Prior art keywords
- tensor
- dictionary
- matrix
- representing
- singular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
A tensor compression method based on energy-gathering dictionary learning is claimed in the present invention, belonging to the field of signal processing. The method comprises the following steps: 1, performing Tucker decomposition and sparse representation on the tensor, respectively, to obtain the dictionaries, the sparse coefficient tensor and the core tensor; 2, using the relationship between the sparse coefficient tensor and the core tensor, obtaining a new sparse representation form of the tensor; 3, reducing the dimension of the dictionaries in the mapping matrices with the energy-gathering dictionary learning algorithm, thereby realizing compression of the tensor. Compared with other compression algorithms, the proposed tensor compression algorithm based on energy-gathering dictionary learning achieves effective compression of the tensor, retains the information of the original tensor more effectively, and achieves a better denoising effect.
Description
Technical Field
The invention belongs to the field of signal processing, and particularly relates to a tensor signal compression algorithm based on energy-gathering dictionary learning, which can realize effective compression of tensor signals.
Background
With the development of information technology, multidimensional signals play an increasingly important role in the field of signal processing. At the same time, multidimensional (MD) signals impose a heavy burden on transmission and storage. To deal with the challenges of multidimensional signal processing, attention has turned to tensor representations: representing multidimensional signals as tensors and processing them in that form brings great convenience. Compression of a multidimensional signal is therefore essentially effective compression of a tensor, so tensor compression algorithms play an increasingly important role in multidimensional signal compression and are a hot spot of current research.
In recent years, researchers have proposed many effective compression algorithms for the tensor compression problem based on the CP decomposition and the Tucker decomposition. One class of methods applies a vectorization operation directly to the tensor data itself on the basis of tensor decomposition. However, according to the well-known Ugly Duckling theorem, there is no optimal representation of patterns without any a priori knowledge; in other words, vectorization of tensor data is not always efficient. Specifically, it may cause the following problems: first, the inherent high-order structure and the correlations in the original data are destroyed, which loses or masks the redundant information and the high-order dependencies of the original data, so that a more meaningful model representation may not be obtainable from the original data; second, vectorization produces high-dimensional vectors, leading to overfitting, the "curse of dimensionality", and small-sample problems.
On the other hand, the sparse representation of tensors has been applied to tensor compression. Owing to the equivalence between the tensor Tucker model and the Kronecker representation, a tensor can be defined as a representation over a given Kronecker dictionary with a certain sparsity pattern, such as multi-way sparsity and block sparsity. With the emergence of tensor sparsity, sparse coding schemes such as Kronecker-OMP and N-way Block OMP have appeared, which bring great convenience to tensor compression; correspondingly, a number of dictionary learning algorithms have been developed for these sparse coding algorithms, performing sparse coding and dictionary learning on the tensor in each dimension so as to achieve a sparse representation. However, tensor processing algorithms based on sparse representation often introduce new noise, which affects the accuracy of the data. In addition, determining the sparsity level in the sparse representation process also poses challenges for tensor processing.
Therefore, when processing tensors that contain large amounts of data, effectively compressing the tensor, extracting the useful information, and reducing transmission and storage costs remain pressing challenges.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The proposed tensor compression method based on energy-gathering dictionary learning improves the retention of the original data and enhances the denoising capability. The technical scheme of the invention is as follows:
A tensor compression method based on energy-gathering dictionary learning comprises the following steps:
Step 1): acquiring a multi-dimensional signal, expressing the multi-dimensional signal as a tensor, taking the tensor as input, and performing sparse representation and Tucker decomposition;
Step 2): obtaining a new sparse representation form of the original tensor by using the approximate relationship between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition;
Step 3): converting the new tensor sparse representation form of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors;
Step 4): reducing the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing compression of the tensor.
Further, step 1) obtains, through the Tucker decomposition, the product form of the core tensor and a mapping matrix in each dimension. For a third-order tensor, the Tucker decomposition yields the expression
χ = Z ×_1 A ×_2 B ×_3 C
where A ∈ R^{I×P}, B ∈ R^{J×Q} and C ∈ R^{K×R} are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and Z ∈ R^{P×Q×R} is the core tensor, which reflects the correlation of each dimension. P, Q and R correspond to the numbers of columns of the factor matrices A, B and C, respectively, and I, J and K are the sizes of the dimensions of the original tensor; if P, Q, R are smaller than I, J, K, the core tensor can be regarded as a compression of the original tensor.
For an N-th-order tensor, the Tucker decomposition takes the form
χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N
where χ denotes the input tensor signal, Z denotes the core tensor, and A_i denotes the orthogonal decomposition (factor) matrix in the i-th dimension.
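As an illustrative aid (not part of the claimed method), the n-mode product and a HOSVD-style Tucker decomposition of the kind described above can be sketched in NumPy; the helper names and the construction of the factor matrices from leading singular vectors are assumptions made for the example, not the patent's own implementation.

```python
import numpy as np

def mode_n_unfold(t, mode):
    """Matricize tensor t along the given mode (mode-n unfolding)."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_n_product(t, m, mode):
    """n-mode product of tensor t with matrix m along `mode`."""
    # Contract t's `mode` axis with the columns of m, then restore the axis order.
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

def tucker_hosvd(x, ranks):
    """HOSVD-style Tucker decomposition: x ~ Z x_1 A_1 x_2 A_2 ... x_N A_N."""
    factors = []
    for mode, r in enumerate(ranks):
        # Factor matrix A_i: leading left singular vectors of the mode-i unfolding.
        u, _, _ = np.linalg.svd(mode_n_unfold(x, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = x
    for mode, a in enumerate(factors):
        core = mode_n_product(core, a.T, mode)   # Z = x x_1 A_1^T ... x_N A_N^T
    return core, factors

x = np.random.default_rng(0).standard_normal((8, 9, 10))
core, factors = tucker_hosvd(x, ranks=x.shape)   # full ranks -> exact reconstruction
recon = core
for mode, a in enumerate(factors):
    recon = mode_n_product(recon, a, mode)
print(np.allclose(recon, x))                     # True
print(core.shape)                                # (8, 9, 10); smaller ranks would compress
```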
Further, the sparse representation of the tensor in step 1) takes the form
χ ≈ S ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
where the left-hand side is the signal after sparse representation, S denotes the sparse coefficient tensor, N denotes the order of the tensor, and D_i denotes the dictionary in the i-th dimension.
Further, step 2) observes that the sparse representation of the tensor and the Tucker decomposition have similar forms. The core tensor obtained by the Tucker decomposition can be written as
Z = χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T
Using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
χ ≈ χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
Further, step 3) converts the new tensor sparse representation form of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors. Specifically, in tensor operations, when m = n,
Ψ ×_m A ×_n B = Ψ ×_n (BA)
where Ψ denotes an N-th-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix.
When m ≠ n,
Ψ ×_m A ×_n B = Ψ ×_n B ×_m A
Applying these two properties to the new sparse representation gives
χ ≈ χ ×_1 (D_1A_1^T) ×_2 (D_2A_2^T) ⋯ ×_N (D_NA_N^T)
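The two n-mode product properties and the grouping into mapping matrices T_i = D_i A_i^T can be checked numerically with a small sketch; all shapes and matrices below are illustrative assumptions.

```python
import numpy as np

def mode_n_product(t, m, mode):
    # n-mode product: contract t's `mode` axis with the columns of m.
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(1)
psi = rng.standard_normal((4, 5, 6))
A = rng.standard_normal((7, 5))          # acts on mode 1
B = rng.standard_normal((3, 7))          # also acts on mode 1

# Same-mode property: Psi x_n A x_n B = Psi x_n (B A).
same_mode = mode_n_product(mode_n_product(psi, A, 1), B, 1)
print(np.allclose(same_mode, mode_n_product(psi, B @ A, 1)))    # True

# Different-mode property (m != n): the order of the two products can be swapped.
C = rng.standard_normal((2, 4))          # acts on mode 0
lhs = mode_n_product(mode_n_product(psi, C, 0), A, 1)
rhs = mode_n_product(mode_n_product(psi, A, 1), C, 0)
print(np.allclose(lhs, rhs))                                     # True

# Grouping the dictionary and factor matrix of one mode into a single mapping matrix.
D1 = rng.standard_normal((4, 3))         # mode-0 dictionary (I x P, illustrative sizes)
A1 = rng.standard_normal((4, 3))         # mode-0 Tucker factor (I x P)
T1 = D1 @ A1.T                           # T_1 = D_1 A_1^T, an I x I mapping matrix
print(np.allclose(mode_n_product(mode_n_product(psi, A1.T, 0), D1, 0),
                  mode_n_product(psi, T1, 0)))                   # True
```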
Further, in step 4), the dictionaries in the mapping matrices are dimension-reduced by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing compression of the tensor. The specific steps are:
Input: a tensor χ composed of T training samples, sparsity k, maximum number of iterations Itermax, termination threshold ε.
1. Initialize the dictionaries {D_i} as Gaussian random matrices and normalize the columns of each dictionary;
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N;
3. Initialize the dictionaries through the Γ(D) process of the energy-gathering dictionary learning algorithm;
4. With the update index i = 0, compute S and the absolute error E_0 before any iterative update;
5. The i-th dictionary update:
For k = 1 : N
compute the updated value of the k-th dictionary by the least-squares update rule;
End
6. Normalize the N dictionaries obtained in the previous step;
7. Update the sparse coefficient tensor S_i with the dictionaries from the i-th update;
8. Compute the absolute error E_i and the relative error E_r after the i-th update;
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum, terminate the loop; otherwise continue with steps 5-8;
10. Obtain the dictionaries D;
11. Obtain U_d from the singular value decomposition of the updated dictionary, and then form the reduced-dimension dictionary P;
12. Substitute the dictionary P into the mapping matrices T_i;
Output: the compressed tensor.
Further, in order for the reduced-dimension dictionary P to retain the principal components of the original dictionary D, the dictionary D is first preprocessed. The preprocessing is based on the decomposition
D^T D = uΛv^T
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix. The singular values are then updated, where k denotes the number of dictionary columns, t_d denotes the principal-component threshold, the leading d singular values are the updated principal singular values, and the remaining r singular values are the updated trailing singular values.
The new singular values obtained in this way, together with the initial left and right singular matrices, form a new dictionary.
Further, after the dictionary is preprocessed, it needs to be updated. The tensor-based multidimensional dictionary learning algorithm TKSVD is adopted in the update process to complete the dictionary update for high-dimensional tensor signals. The TKSVD algorithm differs from the K-SVD dictionary update algorithm in the following respects:
(1) When performing tensor dictionary learning, unlike the two-dimensional signal learning mode, an objective can be formulated from the definition of the tensor norm, where D_N denotes the dictionary of the N-th dimension, D_1 denotes the dictionary of the 1st dimension, and I_T denotes the T-order identity matrix. Solving this objective by the least-squares method gives the update of the dictionary D_i in terms of the i-mode unfolding of the tensor and a pseudo-inverse, where y_i denotes the i-mode unfolding (expansion) matrix of the tensor, M^† denotes the pseudo-inverse of a matrix M, and M^T denotes the transpose of M.
(2) After one iteration is completed, the absolute and relative errors between the data recoverable under the current dictionaries and sparse coefficients and the original training data can be computed; the absolute error after the i-th iteration is still defined through the Frobenius norm of the tensor, where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms.
After the dictionary update, the dictionary is dimension-reduced by performing a singular value decomposition on the updated dictionary, where U_d denotes the first d columns of the left singular matrix, U_r the last r columns of the left singular matrix, Θ_d the first d singular values of the singular value matrix, Θ_r the last r singular values, V_d the first d columns of the right singular matrix, and V_r the last r columns of the right singular matrix. The reduced-dimension dictionary is denoted P, and substituting the reduced-dimension dictionary P into the mapping matrices T completes the tensor compression.
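A minimal sketch of this dimension-reduction step, assuming the reduced dictionary is obtained by projecting the updated dictionary onto its leading d left-singular directions (P = U_d^T D); the patent's exact formula for P is not reproduced in this text, so this construction is an assumption, chosen so that the mapping matrix T_i = P A_i^T compresses mode i from I to d.

```python
import numpy as np

rng = np.random.default_rng(2)
I, K = 16, 12                                      # illustrative dictionary size (I rows, K atoms)
D = rng.standard_normal((I, K))                    # updated dictionary for one mode
A = np.linalg.qr(rng.standard_normal((I, K)))[0]   # orthonormal Tucker factor, I x P with P = K

# Singular value decomposition of the updated dictionary: D = U Theta V^T.
U, theta, Vt = np.linalg.svd(D, full_matrices=False)

d = 6                                              # leading singular directions to keep
U_d, theta_d, V_d = U[:, :d], theta[:d], Vt[:d, :].T   # U_d, Theta_d, V_d of the text

# Assumed reduced-dimension dictionary: project D onto its leading left-singular subspace.
P = U_d.T @ D                                      # shape (d, K)

# Full and reduced mapping matrices for this mode.
T_full = D @ A.T                                   # I x I
T_reduced = P @ A.T                                # d x I: its i-mode product compresses
print(T_full.shape, T_reduced.shape)               # that mode from I down to d
```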
The invention has the following advantages and beneficial effects:
The invention provides a tensor compression algorithm based on energy-gathering dictionary learning. The specific innovations are: 1) the sparse representation of the tensor is applied to tensor compression, giving denoising capability superior to other algorithms; 2) compression of the tensor is completed through dimension reduction of the mapping matrices, which avoids the damage that vectorization operations cause to the internal data structure of the tensor; 3) dimension reduction of the tensor dictionaries with the energy-gathering dictionary learning algorithm preserves the data structure within each dimension and improves data retention.
Drawings
Figure 1 is a schematic diagram of the Tucker decomposition of a third-order tensor as used in the preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the tensor compression algorithm based on energy-gathering dictionary learning;
FIG. 3 is a flow diagram of the tensor compression algorithm based on energy-gathering dictionary learning.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the method mainly solves the problems that in the traditional tensor compression algorithm, a data structure is damaged, information is lost, and new noise is introduced. The method mainly includes the steps that a dictionary, a sparse coefficient tensor and a nuclear tensor are obtained through Tak decomposition and sparse representation, then a new sparse representation form is formed through the approximate relation between the sparse coefficient tensor and the nuclear tensor, finally dimension reduction is conducted on the dictionary in the sparse representation through an energy-gathering dictionary learning algorithm, and therefore tensor compression is achieved.
The overall flow of the present invention is described below with reference to the accompanying drawings; it comprises the following steps:
Step one: sparse representation and Tucker decomposition are carried out simultaneously on the input tensor. The sparse representation process yields
χ ≈ S ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
After the sparse representation, the tensor is expressed as the product of the sparse coefficient tensor with a dictionary in each dimension; in other words, in the sparse representation the sparse coefficient tensor is projected in each dimension with the dictionary D_i acting as the mapping matrix, which yields the sparsely represented tensor.
Given a third-order tensor χ ∈ R^{I×J×K}, the Tucker decomposition process is shown in FIG. 1. The Tucker decomposition yields the expression
χ = Z ×_1 A ×_2 B ×_3 C
where A ∈ R^{I×P}, B ∈ R^{J×Q} and C ∈ R^{K×R} are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and Z ∈ R^{P×Q×R} is the core tensor, which reflects the correlation of each dimension. P, Q and R correspond to the numbers of columns of the factor matrices A, B and C, respectively, and I, J and K are the sizes of the dimensions of the original tensor. If P, Q, R are smaller than I, J, K, the core tensor can be regarded as a compression of the original tensor.
More generally, for an N-th-order tensor, the Tucker decomposition takes the form
χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N
Step two: a new sparse representation form of the original tensor is obtained by using the approximate relationship between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition.
Observing the sparse representation of the tensor and the form of the Tucker decomposition, the two expressions are found to be similar. The core tensor obtained by the Tucker decomposition can be written as
Z = χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T
Using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
χ ≈ χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
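The identity underlying this substitution can be verified numerically: taking the Tucker core as the sparse coefficient tensor and applying the dictionaries gives exactly the regrouped form obtained in step three below. The factors and dictionaries in this sketch are random stand-ins, not learned ones.

```python
import numpy as np

def mode_n_product(t, m, mode):
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(3)
x = rng.standard_normal((5, 6, 7))

# Orthogonal Tucker factors (full rank here, obtained from QR for the check).
A = [np.linalg.qr(rng.standard_normal((n, n)))[0] for n in x.shape]

# Core tensor Z = x x_1 A_1^T x_2 A_2^T x_3 A_3^T.
Z = x
for mode, a in enumerate(A):
    Z = mode_n_product(Z, a.T, mode)

# Per-mode dictionaries with matching column counts (random, purely illustrative).
D = [rng.standard_normal((n, n)) for n in x.shape]

# Left-hand side: treat Z as the sparse coefficient tensor and apply the dictionaries.
lhs = Z
for mode, dic in enumerate(D):
    lhs = mode_n_product(lhs, dic, mode)

# Right-hand side: the mapping-matrix form x x_i (D_i A_i^T) obtained after regrouping.
rhs = x
for mode, (dic, a) in enumerate(zip(D, A)):
    rhs = mode_n_product(rhs, dic @ a.T, mode)

print(np.allclose(lhs, rhs))   # True: both forms of the representation coincide
```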
Step three: the new tensor sparse representation form is converted into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors. The specific steps are as follows:
In tensor operations, when m = n,
Ψ ×_m A ×_n B = Ψ ×_n (BA)
where Ψ denotes an N-th-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix.
When m ≠ n,
Ψ ×_m A ×_n B = Ψ ×_n B ×_m A
Applying these two properties to the new sparse representation gives
χ ≈ χ ×_1 (D_1A_1^T) ×_2 (D_2A_2^T) ⋯ ×_N (D_NA_N^T)
Let T_i = D_iA_i^T. Substituting T_i into the above formula gives
χ ≈ χ ×_1 T_1 ×_2 T_2 ⋯ ×_N T_N
From this equation, the sparse representation takes the form of a projection of the original tensor in each dimension. In the MPCA algorithm, a high-dimensional tensor is compressed through processing of the projection matrices: the principal components of each projection matrix are retained so as to compress the projection matrix, thereby compressing the tensor. Inspired by the MPCA idea, the matrices T_i are dimension-reduced here, thereby compressing the original tensor χ. Since T_i = D_iA_i^T, the dimension reduction of T_i is accomplished by reducing the dimension of the dictionary D_i, which realizes the dimension reduction of the mapping matrix T_i.
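A short sketch of the MPCA-inspired idea, assuming each mapping matrix T_i is reduced to a d_i × I_i projection built from its leading singular directions; this reduction rule is a stand-in for the energy-gathering dictionary reduction detailed in step four.

```python
import numpy as np

def mode_n_product(t, m, mode):
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(4)
x = rng.standard_normal((20, 20, 20))          # input tensor (illustrative)

compressed = x
for mode, d_i in enumerate((6, 6, 6)):         # target size per mode
    T = rng.standard_normal((x.shape[mode], x.shape[mode]))   # stand-in for D_i A_i^T
    U = np.linalg.svd(T)[0]
    T_reduced = U[:, :d_i].T @ T               # assumed reduction: keep leading directions
    compressed = mode_n_product(compressed, T_reduced, mode)

print(x.size, "->", compressed.size)           # 8000 -> 216 stored entries
```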
Step four: compression of the tensor is realized by using the dictionary dimension-reduction idea of the energy-gathering dictionary learning algorithm. In this algorithm, in order for the reduced-dimension dictionary P to retain the principal components of the original dictionary D, the dictionary D is first preprocessed. The preprocessing is based on the decomposition
D^T D = uΛv^T
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix. The singular values are then updated, where k denotes the number of dictionary columns, t_d denotes the principal-component threshold, the leading d singular values are the updated principal singular values, and the remaining r singular values are the updated trailing singular values.
The new singular values obtained in this way, together with the initial left and right singular matrices, form a new dictionary.
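A minimal sketch of this preprocessing, assuming the principal-component threshold t_d is applied to the cumulative energy of the singular values of D^T D and that the trailing singular values are simply zeroed; the exact singular-value update formula is not reproduced in this text.

```python
import numpy as np

rng = np.random.default_rng(5)
D = rng.standard_normal((16, 24))              # dictionary (illustrative size)

# Factor D^T D = u Lambda v^T (for a symmetric matrix u and v coincide).
u, lam, vt = np.linalg.svd(D.T @ D)

# Choose d so that the leading singular values hold at least t_d of the total energy.
t_d = 0.95
energy = np.cumsum(lam) / np.sum(lam)
d = int(np.searchsorted(energy, t_d)) + 1

# Assumed update of the singular values: keep the first d, zero the trailing r = k - d.
lam_new = lam.copy()
lam_new[d:] = 0.0

# Re-assemble with the initial left and right singular matrices (the "new dictionary").
G_new = (u * lam_new) @ vt
print(d, np.linalg.matrix_rank(G_new))         # d dominant components retained
```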
After the dictionary is preprocessed, it needs to be updated. The tensor-based multidimensional dictionary learning algorithm (TKSVD) is adopted in the update process to complete the dictionary update for high-dimensional tensor signals. Unlike the K-SVD dictionary update algorithm, two points need attention in the TKSVD algorithm:
(1) When performing tensor dictionary learning, unlike the two-dimensional signal learning mode, an objective can be formulated from the definition of the tensor norm, where I_T denotes the T-order identity matrix, D_N denotes the dictionary of the N-th dimension, and D_1 denotes the dictionary of the 1st dimension. This objective can be solved by the least-squares method, giving the update of the dictionary D_i in terms of the i-mode unfolding of the tensor and a pseudo-inverse, where y_i denotes the i-mode unfolding (expansion) matrix of the tensor, M^† denotes the pseudo-inverse of a matrix M, and M^T denotes the transpose of M.
(2) After one iteration is completed, the absolute and relative errors between the data recoverable under the current dictionaries and sparse coefficients and the original training data can be computed; the absolute error after the i-th iteration is still defined through the Frobenius norm of the tensor, where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms.
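The least-squares update in point (1) can be sketched as follows, under the assumption that it is the standard per-mode update for a Tucker-structured sparse model: for x ≈ S ×_1 D_1 ⋯ ×_N D_N, the i-mode unfolding satisfies x_(i) = D_i B_(i), where B equals S with every dictionary except D_i applied, so D_i = x_(i) B_(i)^†. This is offered as an assumed reading of the formula omitted above, not a reproduction of it.

```python
import numpy as np

def mode_n_product(t, m, mode):
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

def mode_n_unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

rng = np.random.default_rng(6)
dims, atoms = (8, 9, 10), (12, 13, 14)

# Coefficient tensor S and current per-mode dictionaries D_j (random, illustrative).
S = rng.standard_normal(atoms)
D = [rng.standard_normal((dims[j], atoms[j])) for j in range(3)]

# Training tensor synthesised exactly from the model x = S x_1 D_1 x_2 D_2 x_3 D_3.
x = S
for j, Dj in enumerate(D):
    x = mode_n_product(x, Dj, j)

# Least-squares update of D_i (i = 1): apply every dictionary except D_i to S,
# unfold both tensors along mode i, and multiply by the Moore-Penrose pseudo-inverse.
i = 1
B = S
for j, Dj in enumerate(D):
    if j != i:
        B = mode_n_product(B, Dj, j)
D_i_new = mode_n_unfold(x, i) @ np.linalg.pinv(mode_n_unfold(B, i))

print(np.allclose(D_i_new, D[i]))   # True: the update recovers the generating dictionary
```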
After the dictionary update, the dictionary is dimension-reduced. A singular value decomposition is performed on the updated dictionary, where U_d denotes the first d columns of the left singular matrix, U_r the last r columns of the left singular matrix, Θ_d the first d singular values of the singular value matrix, Θ_r the last r singular values, V_d the first d columns of the right singular matrix, and V_r the last r columns of the right singular matrix. The reduced-dimension dictionary is denoted P, and substituting the reduced-dimension dictionary P into the mapping matrices T completes the tensor compression.
The tensor compression algorithm based on energy-gathering dictionary learning comprises the following specific steps:
Input: a tensor χ composed of T training samples, sparsity k, maximum number of iterations Itermax, termination threshold ε.
1. Initialize the dictionaries {D_i} as Gaussian random matrices and normalize the columns of each dictionary;
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N;
3. Initialize the dictionaries through the Γ(D) process of the energy-gathering dictionary learning algorithm;
4. With the update index i = 0, compute S and the absolute error E_0 before any iterative update;
5. The i-th dictionary update:
For k = 1 : N
compute the updated value of the k-th dictionary by the least-squares update rule;
End
6. Normalize the N dictionaries obtained in the previous step;
7. Update the sparse coefficient tensor S_i with the dictionaries from the i-th update;
8. Compute the absolute error E_i and the relative error E_r after the i-th update;
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum, terminate the loop; otherwise continue with steps 5-8;
10. Obtain the dictionaries D;
11. Obtain U_d from the singular value decomposition of the updated dictionary, and then form the reduced-dimension dictionary P;
12. Substitute the dictionary P into the mapping matrices T_i;
Output: the compressed tensor.
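The listed steps can be tied together in a compact schematic. The Γ(D) initialisation, the exact singular-value update, and the error formulas are not reproduced in this text, so the corresponding lines below are simplified assumptions, while the loop structure, column normalisation, per-mode least-squares updates, stopping rule, and final SVD-based reduction follow steps 1-12.

```python
import numpy as np

def mode_prod(t, m, mode):
    return np.moveaxis(np.tensordot(m, t, axes=(1, mode)), 0, mode)

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

rng = np.random.default_rng(7)
x = rng.standard_normal((10, 10, 10))            # input tensor (stand-in for training data)
atoms, d, iters, eps = (10, 10, 10), 4, 20, 1e-6

# Steps 1-3: Gaussian dictionaries with unit-norm columns; Tucker factors via HOSVD.
D = [rng.standard_normal((x.shape[m], atoms[m])) for m in range(3)]
D = [Dm / np.linalg.norm(Dm, axis=0) for Dm in D]
A = [np.linalg.svd(unfold(x, m), full_matrices=False)[0][:, :atoms[m]] for m in range(3)]

# Steps 2 and 4: sparse coefficients approximated by the Tucker core.
S = x
for m, Am in enumerate(A):
    S = mode_prod(S, Am.T, m)

prev_err = np.inf
for it in range(iters):                          # steps 5-9: alternating dictionary updates
    for m in range(3):
        B = S
        for j in range(3):
            if j != m:
                B = mode_prod(B, D[j], j)
        D[m] = unfold(x, m) @ np.linalg.pinv(unfold(B, m))   # least-squares update
        D[m] /= np.linalg.norm(D[m], axis=0)                 # step 6: normalise columns
    recon = S
    for j in range(3):
        recon = mode_prod(recon, D[j], j)
    err = np.linalg.norm(x - recon)              # step 8: absolute error (Frobenius norm)
    if abs(prev_err - err) / max(err, 1e-12) < eps:
        break                                    # step 9: relative-error stopping rule
    prev_err = err

# Steps 10-12: reduce each learned dictionary and compress via T_m = P_m A_m^T.
compressed = x
for m in range(3):
    U_d = np.linalg.svd(D[m], full_matrices=False)[0][:, :d]
    P = U_d.T @ D[m]                             # assumed reduced dictionary
    compressed = mode_prod(compressed, P @ A[m].T, m)
print(x.shape, "->", compressed.shape)           # (10, 10, 10) -> (4, 4, 4)
```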
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (8)
1. A tensor compression method based on energy-gathering dictionary learning, characterized by comprising the following steps:
Step 1): acquiring a multi-dimensional signal, expressing the multi-dimensional signal as a tensor, taking the tensor as input, and performing sparse representation and Tucker decomposition;
Step 2): obtaining a new sparse representation form of the original tensor by using the approximate relationship between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition;
Step 3): converting the new tensor sparse representation form of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors;
Step 4): reducing the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing compression of the tensor.
2. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 1, wherein step 1) obtains, through the Tucker decomposition, the product form of the core tensor and a mapping matrix in each dimension, the Tucker decomposition of a third-order tensor yielding the expression
χ = Z ×_1 A ×_2 B ×_3 C
where A ∈ R^{I×P}, B ∈ R^{J×Q} and C ∈ R^{K×R} are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and Z ∈ R^{P×Q×R} is the core tensor, which reflects the correlation of each dimension; P, Q and R correspond to the numbers of columns of the factor matrices A, B and C, respectively, I, J and K are the sizes of the dimensions of the original tensor, and if P, Q, R are smaller than I, J, K, the core tensor can be regarded as a compression of the original tensor;
for an N-th-order tensor, the Tucker decomposition takes the form
χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N
where χ denotes the input tensor signal, Z denotes the core tensor, and A_i denotes the orthogonal decomposition (factor) matrix in the i-th dimension.
3. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 2, wherein the sparse representation of the tensor in step 1) takes the form
χ ≈ S ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
where the left-hand side is the signal after sparse representation, S denotes the sparse coefficient tensor, N denotes the order of the tensor, and D_i denotes the dictionary in the i-th dimension.
4. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 3, wherein step 2) observes that the sparse representation of the tensor and the Tucker decomposition have similar forms, the core tensor obtained by the Tucker decomposition being
Z = χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T
and, using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
χ ≈ χ ×_1 A_1^T ×_2 A_2^T ⋯ ×_N A_N^T ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N
5. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 4, wherein step 3) converts the new tensor sparse representation form of step 2) into a mapping-matrix form involving the dictionaries, according to the operational properties of tensors, specifically:
in tensor operations, when m = n,
Ψ ×_m A ×_n B = Ψ ×_n (BA)
where Ψ denotes an N-th-order tensor, ×_m denotes the m-mode product of a tensor and a matrix, and ×_n denotes the n-mode product of a tensor and a matrix;
when m ≠ n,
Ψ ×_m A ×_n B = Ψ ×_n B ×_m A
applying these two properties to the new sparse representation gives
χ ≈ χ ×_1 (D_1A_1^T) ×_2 (D_2A_2^T) ⋯ ×_N (D_NA_N^T)
6. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 5, wherein step 4) reduces the dimension of the dictionaries in the mapping matrices by using the dimension-reduction idea of the energy-gathering dictionary learning algorithm, thereby realizing compression of the tensor, the specific steps comprising:
Input: a tensor χ composed of T training samples, sparsity k, maximum number of iterations Itermax, termination threshold ε.
1. Initialize the dictionaries {D_i} as Gaussian random matrices and normalize the columns of each dictionary;
2. Tucker decomposition: χ = Z ×_1 A_1 ×_2 A_2 ⋯ ×_N A_N;
3. Initialize the dictionaries through the Γ(D) process of the energy-gathering dictionary learning algorithm;
4. With the update index i = 0, compute S and the absolute error E_0 before any iterative update;
5. The i-th dictionary update:
For k = 1 : N
compute the updated value of the k-th dictionary by the least-squares update rule;
End
6. Normalize the N dictionaries obtained in the previous step;
7. Update the sparse coefficient tensor S_i with the dictionaries from the i-th update;
8. Compute the absolute error E_i and the relative error E_r after the i-th update;
9. Check the termination condition: if E_r < ε or the number of iterations exceeds the maximum, terminate the loop; otherwise continue with steps 5-8;
10. Obtain the dictionaries D;
11. Obtain U_d from the singular value decomposition of the updated dictionary, and then form the reduced-dimension dictionary P;
12. Substitute the dictionary P into the mapping matrices T_i;
Output: the compressed tensor.
7. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 6, wherein, in order for the reduced-dimension dictionary P to retain the principal components of the original dictionary D, the dictionary D is preprocessed based on the decomposition
D^T D = uΛv^T
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix; the singular values are then updated, where k denotes the number of dictionary columns, t_d denotes the principal-component threshold, the leading d singular values are the updated principal singular values, and the remaining r singular values are the updated trailing singular values;
the new singular values obtained in this way, together with the initial left and right singular matrices, form a new dictionary.
8. The tensor compression method based on energy-gathering dictionary learning as claimed in claim 7, wherein after the dictionary is preprocessed it needs to be updated, the tensor-based multidimensional dictionary learning algorithm TKSVD being adopted in the update process to complete the dictionary update for high-dimensional tensor signals, the TKSVD algorithm differing from the K-SVD dictionary update algorithm in that:
(1) when performing tensor dictionary learning, unlike the two-dimensional signal learning mode, an objective can be formulated from the definition of the tensor norm, where D_N denotes the dictionary of the N-th dimension, D_1 denotes the dictionary of the 1st dimension, and I_T denotes the T-order identity matrix; solving this objective by the least-squares method gives the update of the dictionary D_i in terms of the i-mode unfolding of the tensor and a pseudo-inverse, where y_i denotes the i-mode unfolding (expansion) matrix of the tensor, M^† denotes the pseudo-inverse of a matrix M, and M^T denotes the transpose of M;
(2) after one iteration is completed, the absolute and relative errors between the data recoverable under the current dictionaries and sparse coefficients and the original training data are computed, the absolute error after the i-th iteration still being defined through the Frobenius norm of the tensor, where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms;
after the dictionary update, the dictionary is dimension-reduced by performing a singular value decomposition on the updated dictionary, where U_d denotes the first d columns of the left singular matrix, U_r the last r columns of the left singular matrix, Θ_d the first d singular values of the singular value matrix, Θ_r the last r singular values, V_d the first d columns of the right singular matrix, and V_r the last r columns of the right singular matrix; the reduced-dimension dictionary is denoted P, and substituting the reduced-dimension dictionary P into the mapping matrices T completes the tensor compression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910126833.5A CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910126833.5A CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109921799A true CN109921799A (en) | 2019-06-21 |
CN109921799B CN109921799B (en) | 2023-03-31 |
Family
ID=66961845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910126833.5A Active CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109921799B (en) |
- 2019-02-20: Application CN201910126833.5A (CN) filed; granted as patent CN109921799B, status Active
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1228887A (en) * | 1996-07-24 | 1999-09-15 | 尤尼西斯公司 | Data compression and decompression system with immediate dictionary updating interleaved with string search |
CN1523372A (en) * | 2003-02-21 | 2004-08-25 | 重庆邮电学院 | Estimation method for radio orientation incoming wave direction based on TD-SCMA |
US20050049990A1 (en) * | 2003-08-29 | 2005-03-03 | Milenova Boriana L. | Support vector machines processing system |
US20100246920A1 (en) * | 2009-03-31 | 2010-09-30 | Iowa State University Research Foundation, Inc. | Recursive sparse reconstruction |
CN102130862A (en) * | 2011-04-26 | 2011-07-20 | 重庆邮电大学 | Method for reducing overhead caused by channel estimation of communication system |
US20130191425A1 (en) * | 2012-01-20 | 2013-07-25 | Fatih Porikli | Method for Recovering Low-Rank Matrices and Subspaces from Data in High-Dimensional Matrices |
JP2014096118A (en) * | 2012-11-12 | 2014-05-22 | Nippon Telegr & Teleph Corp <Ntt> | Device, method, and program for missing value prediction and device, method, and program for commodity recommendation |
WO2015042873A1 (en) * | 2013-09-27 | 2015-04-02 | Google Inc. | Decomposition techniques for multi-dimensional data |
US20150326247A1 (en) * | 2014-05-07 | 2015-11-12 | Realtek Semiconductor Corporation | Dictionary-based compression method, dictionary-based decompression method and dictionary composing method |
US20160012334A1 (en) * | 2014-07-08 | 2016-01-14 | Nec Laboratories America, Inc. | Hierarchical Sparse Dictionary Learning (HiSDL) for Heterogeneous High-Dimensional Time Series |
CN104318064A (en) * | 2014-09-26 | 2015-01-28 | 大连理工大学 | Three-dimensional head-related impulse response data compressing method based on canonical multi-decomposition |
CN104683074A (en) * | 2015-03-13 | 2015-06-03 | 重庆邮电大学 | Large-scale MIMO system limiting feedback method based on compressive sensing |
CN104933684A (en) * | 2015-06-12 | 2015-09-23 | 北京工业大学 | Light field reconstruction method |
US20160371563A1 (en) * | 2015-06-22 | 2016-12-22 | The Johns Hopkins University | System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing |
CN106506007A (en) * | 2015-09-08 | 2017-03-15 | 联发科技(新加坡)私人有限公司 | A kind of lossless data compression and decompressing device and its method |
CN106023098A (en) * | 2016-05-12 | 2016-10-12 | 西安电子科技大学 | Image repairing method based on tensor structure multi-dictionary learning and sparse coding |
CN106097278A (en) * | 2016-06-24 | 2016-11-09 | 北京工业大学 | The sparse model of a kind of multidimensional signal, method for reconstructing and dictionary training method |
WO2018149133A1 (en) * | 2017-02-17 | 2018-08-23 | 深圳大学 | Method and system for face recognition by means of dictionary learning based on kernel non-negative matrix factorization, and sparse feature representation |
US20180306882A1 (en) * | 2017-04-24 | 2018-10-25 | Cedars-Sinai Medical Center | Low-rank tensor imaging for multidimensional cardiovascular mri |
CN107516129A (en) * | 2017-08-01 | 2017-12-26 | 北京大学 | The depth Web compression method decomposed based on the adaptive Tucker of dimension |
CN107507253A (en) * | 2017-08-15 | 2017-12-22 | 电子科技大学 | Based on the approximate more attribute volume data compression methods of high order tensor |
CN107561576A (en) * | 2017-08-31 | 2018-01-09 | 电子科技大学 | Seismic signal method based on dictionary learning regularization rarefaction representation |
CN108305297A (en) * | 2017-12-22 | 2018-07-20 | 上海交通大学 | A kind of image processing method based on multidimensional tensor dictionary learning algorithm |
CN108521586A (en) * | 2018-03-20 | 2018-09-11 | 西北大学 | The IPTV TV program personalizations for taking into account time context and implicit feedback recommend method |
CN108510013A (en) * | 2018-07-02 | 2018-09-07 | 电子科技大学 | The steady tensor principal component analytical method of improvement based on low-rank kernel matrix |
CN109241491A (en) * | 2018-07-28 | 2019-01-18 | 天津大学 | The structural missing fill method of tensor based on joint low-rank and rarefaction representation |
Non-Patent Citations (18)
Title |
---|
ARGUELLO H: ""Higher-order computational model for coded aperture spectral imaging"", 《APPL OPTICS》 *
CAIAFA CF: ""Block sparse representations of tensors using Kronecker bases"", 《IN: 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *
CASSIO FRAGA DANTAS: ""Learning fast dictionaries for sparse representations using low-rank tensor decompositions"", 《LVA/ICA 2018 - 14TH INTERNATIONAL CONFERENCE ON LATENT VARIABLE ANALYSIS AND SIGNAL SEPARATION》 * |
CHONG.Y: ""Block-Sparse Tensor Based Spatial-Spectral Joint Compression of Hyperspectral Images"", 《LECTURE NOTES IN COMPUTER SCIENCE》 * |
KOLDA TG: ""Tensor decompositions and applications"", 《SIAM REV》 * |
LY NH: ""Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning"", 《IEEE J SEL TOP APPL》 * |
PRATER-BENNETTE: ""Separation of Composite Tensors with Sparse Tucker Representations"", 《PROCEEDINGS OF SPIE》 * |
S. ZUBAIR: ""Tensor dictionary learning with sparse TUCKER decomposition"", 《2013 18TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP)》 *
SHI JIARONG: ""Sparse representation of high-order tensor signal"", 《COMPUTER ENGINEERING&DESIGN》 * |
YANG Y: ""A multi-affine model for tensor decomposition"", 《IN: 2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCV WORKSHOPS)》 *
YUAN L: ""High-order tensor completion for data recovery via sparse tensor-train optimization"", 《2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *
刘杰平 (Liu Jieping) et al.: "Dictionary learning algorithm combining coefficient-reuse orthogonal matching pursuit", 《Journal of South China University of Technology (Natural Science Edition)》 *
吴姗 (Wu Shan): "Magnetic resonance imaging algorithm based on sparse tensors", 《China Master's Theses Full-text Database》 *
张祖凡 (Zhang Zufan): "Interference alignment algorithm based on eigenvector splitting in MIMO-IBC", 《Journal of Huazhong University of Science and Technology (Natural Science Edition)》 *
李斌 (Li Bin): "Theory and applications of compressed sensing for multidimensional signals based on tensors and nonlinear sparsity", 《China Master's Theses Full-text Database》 *
熊李艳 (Xiong Liyan): "Robust kernel low-rank representation algorithm based on tensor decomposition", 《Science Technology and Engineering》 *
秦红星 (Qin Hongxing): "Multi-scale tensor representation and visualization of volume data", 《Computer Engineering and Applications》 *
郑思龙 (Zheng Silong) et al.: "Nonlinear dimensionality reduction method based on dictionary learning", 《Acta Automatica Sinica》 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110579967A (en) * | 2019-09-23 | 2019-12-17 | 中南大学 | process monitoring method based on simultaneous dimensionality reduction and dictionary learning |
CN111241076A (en) * | 2020-01-02 | 2020-06-05 | 西安邮电大学 | Stream data increment processing method and device based on tensor chain decomposition |
CN111241076B (en) * | 2020-01-02 | 2023-10-31 | 西安邮电大学 | Stream data increment processing method and device based on tensor chain decomposition |
Also Published As
Publication number | Publication date |
---|---|
CN109921799B (en) | 2023-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Rathi et al. | Statistical shape analysis using kernel PCA | |
JP5509488B2 (en) | Method for recognizing shape and system for implementing method for recognizing shape | |
WO2020010602A1 (en) | Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium | |
CN110717519B (en) | Training, feature extraction and classification method, device and storage medium | |
JP2012507793A (en) | Complexity normalization pattern representation, search, and compression | |
Du et al. | Tensor low-rank sparse representation for tensor subspace learning | |
CN106097278A (en) | The sparse model of a kind of multidimensional signal, method for reconstructing and dictionary training method | |
Nguyen et al. | Discriminative low-rank dictionary learning for face recognition | |
Jin et al. | Multiple graph regularized sparse coding and multiple hypergraph regularized sparse coding for image representation | |
CN104933685A (en) | Hyper-spectral compressive imaging method based on three-dimensional tensor compressed sensing | |
CN109921799B (en) | Tensor compression method based on energy-gathering dictionary learning | |
CN109002794A (en) | A kind of non-linear Non-negative Matrix Factorization recognition of face construction method, system and storage medium | |
Wang et al. | Efficient and robust discriminant dictionary pair learning for pattern classification | |
Qi et al. | Two dimensional synthesis sparse model | |
Guo et al. | Accelerating patch-based low-rank image restoration using kd-forest and Lanczos approximation | |
Peng et al. | Learnable representative coefficient image denoiser for hyperspectral image | |
CN114638283B (en) | Tensor optimization space-based orthogonal convolutional neural network image recognition method | |
Miao | Filtered Krylov-like sequence method for symmetric eigenvalue problems | |
Liou et al. | Manifold construction by local neighborhood preservation | |
CN112001865A (en) | Face recognition method, device and equipment | |
Kärkkäinen et al. | A Douglas–Rachford method for sparse extreme learning machine | |
CN110266318B (en) | Measurement matrix optimization method based on gradient projection algorithm in compressed sensing signal reconstruction | |
Mitz et al. | Symmetric rank-one updates from partial spectrum with an application to out-of-sample extension | |
CN111475768B (en) | Observation matrix construction method based on low coherence unit norm tight frame | |
CN116128747A (en) | Multispectral image denoising method and device based on structured tensor sparse model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2024-11-04
Address after: No. 118 Shanhu Road, Nanping Street, Nan'an District, Chongqing 400060, Micro Enterprise Incubation Base 206-2021-2024
Patentee after: Jimoji Network Technology (Chongqing) Co.,Ltd.
Country or region after: China
Address before: 400065 Chongwen Road, Nanshan Street, Nanan District, Chongqing
Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS
Country or region before: China