CN109921799B - Tensor compression method based on energy-gathering dictionary learning - Google Patents
- Publication number
- CN109921799B (application CN201910126833.5A)
- Authority
- CN
- China
- Prior art keywords
- tensor
- dictionary
- matrix
- representing
- compression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses a tensor compression method based on energy-gathering dictionary learning, and belongs to the field of signal processing. The method comprises the following steps: 1. perform Tucker decomposition and sparse representation on the tensor to obtain a dictionary, a sparse coefficient tensor and a core tensor; 2. obtain a new sparse representation of the tensor through the relationship between the sparse coefficient tensor and the core tensor; 3. reduce the dimension of the dictionary in the mapping matrix with the energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor. The proposed tensor compression algorithm based on energy-gathering dictionary learning achieves effective compression of the tensor and, compared with other compression algorithms, retains the information of the original tensor more effectively, thereby achieving a better denoising effect.
Description
Technical Field
The invention belongs to the field of signal processing, and particularly relates to a tensor signal compression algorithm based on energy-gathering dictionary learning, which can realize effective compression of tensor signals.
Background
With the development of information technology, multidimensional signals play an increasingly important role in the field of signal processing. At the same time, multidimensional (MD) signals impose a heavy burden on transmission and storage. To deal with the challenges of multidimensional signal processing, attention has turned to tensor representations: representing multidimensional signals as tensors and processing them in that form brings great convenience to multidimensional signal processing. Compression of a multidimensional signal is therefore essentially effective compression of a tensor, so tensor compression algorithms play an increasingly important role in the field of multidimensional signal compression and are a hot spot of current research.
In recent years, researchers have proposed many effective compression algorithms for the tensor compression problem based on the CP decomposition and the Tucker decomposition, and one class of them vectorizes the tensor data directly on the basis of the tensor decomposition. However, according to the well-known ugly duckling theorem, there is no optimal representation of patterns without any a priori knowledge; in other words, vectorization of tensor data is not always efficient. Specifically, this may cause the following problems: first, it destroys the inherent higher-order structure and correlations of the original data, losing information or masking redundancy and the higher-order dependencies of the original data, so that a more meaningful model representation cannot be obtained from the original data; second, vectorization produces high-dimensional vectors, leading to overfitting, the curse of dimensionality, and small-sample problems.
On the other hand, the sparse representation of tensors has been applied to tensor compression. Because of the equivalence of the tensor Tucker model and the Kronecker representation, a tensor can be defined as a representation over a given Kronecker dictionary with a certain sparsity pattern, such as multidirectional sparsity or block sparsity. Along with these notions of tensor sparsity, sparse coding schemes such as Kronecker-OMP and N-way Block OMP have emerged, which bring great convenience to tensor compression; correspondingly, a number of dictionary learning algorithms have been developed, performing sparse coding and dictionary learning on each dimension of the tensor so as to achieve sparse representation. However, tensor processing algorithms based on sparse representation often introduce new noise, which affects the accuracy of the data. In addition, determining the sparsity level in the sparse representation process also poses a challenge to tensor processing.
Therefore, when processing tensors that contain large amounts of data, it is necessary to compress the tensor effectively, extract the useful information, and reduce transmission and storage costs.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The proposed tensor compression method based on energy-gathering dictionary learning better preserves the information of the original data and enhances the denoising capability. The technical scheme of the invention is as follows:
a tensor compression method based on energy-gathered dictionary learning comprises the following steps:
step 1): acquiring a multi-dimensional signal, expressing it as a tensor, inputting the tensor, and performing sparse representation and Tucker decomposition;
step 2) obtaining a new tensor sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition;
Step 3) converting the new tensor sparse representation form in the step 2) into a mapping matrix form related to dictionary representation according to tensor operation properties;
and 4) reducing the dimension of the dictionary in the mapping matrix by using the idea of reducing the dimension of the energy-gathering dictionary learning algorithm, thereby realizing the compression of the tensor.
Further, step 1) obtains, through the Tucker decomposition, the product form of the core tensor and the mapping matrices in each dimension; the Tucker decomposition yields the expression
$$\chi = Z \times_1 A \times_2 B \times_3 C$$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, which reflects the interaction among the dimensions; P, Q and R are the numbers of columns of the factor matrices A, B and C, respectively, and I, J and K are the sizes of the dimensions of the original tensor; if P, Q and R are smaller than I, J and K, the core tensor can be regarded as a compression of the original tensor;
for an N-order tensor, the Tak decomposition form is
χ=Z× 1 A 1 × 2 A 2 ...× N A N
χ represents the input tensor signal, Z represents the core tensor, a i The decomposition matrix in each dimension is represented as an orthogonal matrix.
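For illustration only, a minimal numpy sketch of a truncated, HOSVD-style Tucker decomposition of the kind described above follows; the helper names (unfold, fold, mode_product, tucker_hosvd), the toy tensor sizes and the ranks (P, Q, R) = (4, 5, 6) are assumptions of this sketch, not part of the claimed method.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a tensor into a matrix (mode-n fibers as columns)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold, rebuilding a tensor of the given target shape."""
    full_shape = [matrix.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_product(tensor, matrix, mode):
    """n-mode product: multiply every mode-`mode` fiber of the tensor by `matrix`."""
    shape = list(tensor.shape)
    shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, shape)

def tucker_hosvd(tensor, ranks):
    """Truncated HOSVD-style Tucker decomposition: factor matrices A_n from the
    leading left singular vectors of each unfolding, core Z = chi x_n A_n^T."""
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, n), full_matrices=False)
        factors.append(U[:, :r])                     # A_n in R^{I_n x r_n}
    core = tensor
    for n, A_n in enumerate(factors):
        core = mode_product(core, A_n.T, n)          # Z = chi x_1 A_1^T ... x_N A_N^T
    return core, factors

# toy third-order tensor chi in R^{I x J x K}, illustrative ranks (P, Q, R)
chi = np.random.randn(8, 9, 10)
Z, (A, B, C) = tucker_hosvd(chi, ranks=(4, 5, 6))
chi_hat = mode_product(mode_product(mode_product(Z, A, 0), B, 1), C, 2)
print(Z.shape, np.linalg.norm(chi - chi_hat) / np.linalg.norm(chi))
```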
Further, the sparse representation of the tensor in step 1) takes the form
$$\hat{\chi} = S \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$$
where $\hat{\chi}$ denotes the signal after sparse representation, S denotes the sparse coefficient tensor, N denotes the order of the tensor itself, and $D_i$ denotes the dictionary in each dimension.
Further, step 2) observes that the sparse representation of the tensor and the Tucker decomposition have similar forms; the core tensor obtained by the Tucker decomposition can be written as
$$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$$
Using the approximate relation between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
$$\hat{\chi} \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$$
Further, the step 3) converts the new tensor sparse representation form in the step 2) into a mapping matrix form related to dictionary representation according to tensor operational properties, and specifically includes:
In tensor operations, when m = n,
$$\Psi \times_m A \times_n B = \Psi \times_n (BA)$$
where Ψ denotes an N-th-order tensor, $\times_m$ denotes the m-mode product of a tensor and a matrix, and $\times_n$ denotes the n-mode product of a tensor and a matrix.
When m ≠ n,
$$\Psi \times_m A \times_n B = \Psi \times_n B \times_m A$$
Applying the above two properties to the new sparse representation gives
$$\hat{\chi} \approx \chi \times_1 (D_1 A_1^T) \times_2 (D_2 A_2^T) \cdots \times_N (D_N A_N^T)$$
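The two mode-product properties above can be checked numerically. The sketch below (helper names and sizes are again assumptions of this illustration) verifies that composing two products on the same mode equals a single product with the matrix BA, and that products on different modes commute.

```python
import numpy as np

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    full_shape = [matrix.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_product(tensor, matrix, mode):
    shape = list(tensor.shape)
    shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, shape)

psi = np.random.randn(4, 5, 6)            # third-order tensor Psi
A = np.random.randn(7, 5)                 # acts on mode 1 (size 5 -> 7)
B = np.random.randn(3, 7)                 # acts on mode 1 again (size 7 -> 3)
C = np.random.randn(8, 6)                 # acts on mode 2 (size 6 -> 8)

# property 1 (same mode): Psi x_1 A x_1 B == Psi x_1 (B A)
lhs = mode_product(mode_product(psi, A, 1), B, 1)
rhs = mode_product(psi, B @ A, 1)
print(np.allclose(lhs, rhs))              # True

# property 2 (different modes): Psi x_1 A x_2 C == Psi x_2 C x_1 A
lhs = mode_product(mode_product(psi, A, 1), C, 2)
rhs = mode_product(mode_product(psi, C, 2), A, 1)
print(np.allclose(lhs, rhs))              # True
```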
Further, in the step 4), the dictionary in the mapping matrix is subjected to dimension reduction by using the idea of dimension reduction of the energy-gathered dictionary learning algorithm, so that the compression of the tensor is realized, and the specific steps include:
inputting: tensor composed of T training samplesSparsity k, maximum number of iterations Itermax, termination threshold ε
2. And (3) decomposing tack: χ = Z |) 1 A 1 × 2 A 2 ...× N A N ;
3. Initializing a dictionary and performing a gamma (D) process in an energy-gathered dictionary learning algorithm;
4. calculating and using S when the update times i =0Calculating the absolute error E without any iterative update 0 ;
5. The ith dictionary update
For k=1:I N
End
6. Normalizing the N dictionaries obtained in the last step;
7. updating sparse coefficient tensor S by using ith updated dictionary i ;
8. Calculating the absolute error E after the ith update i And relative error E r ;
9. Judging the termination condition, if E r < ε orIf the iteration times exceed the maximum limit, the loop is terminated, otherwise, the step 5-8 is continued;
Further, in order to make the dictionary P after dimension reduction retain the principal components in the original dictionary D, the dictionary D needs to be preprocessed, and the processing procedure is as follows
$$D^T D = u\Lambda v^T$$
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix, of the form $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$. The singular values are then updated according to the principal-component threshold: k denotes the number of dictionary columns, $t_d$ denotes the principal-component threshold, $\hat{\lambda}_1,\ldots,\hat{\lambda}_d$ denote the first d singular values after the update, and $\hat{\lambda}_{d+1},\ldots,\hat{\lambda}_k$ denote the last r updated singular values.
Further, after preprocessing the dictionary, the dictionary needs to be updated; a tensor-based multidimensional dictionary learning algorithm, TKSVD, is adopted in the updating process to complete the dictionary update for the high-dimensional tensor signal. The TKSVD algorithm differs from the K-SVD dictionary update algorithm as follows:
(1) When performing tensor dictionary learning, unlike the two-dimensional signal case, the following objective is obtained from the definition of the tensor norm:
where $D_N$ denotes the N-th-mode dictionary, $D_1$ denotes the 1st-mode dictionary, and $I_T$ denotes the T×T identity matrix; solving this objective by the least-squares method gives the update of dictionary $D_i$ as
where $Y_{(i)}$ denotes the mode-i unfolding matrix of the tensor, $(\cdot)^{\dagger}$ denotes the pseudo-inverse, $M^{\dagger}$ being the pseudo-inverse of matrix M and $M^T$ the transpose of matrix M. On the other hand, after each iteration the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data are calculated; the absolute error after the i-th iteration is still defined by the Frobenius norm of the tensor,
where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms.
After the dictionary is updated, the dictionary is subjected to dimensionality reduction, and singular value decomposition is performed on the updated dictionary
where $U_d$ denotes the first d columns of the left singular matrix, $U_r$ the last r columns of the left singular matrix, $\Theta_d$ the first d singular values of the singular value matrix, $\Theta_r$ the last r singular values of the singular value matrix, $V_d$ the first d columns of the right singular matrix, and $V_r$ the last r columns of the right singular matrix. The reduced-dimension dictionary P so obtained is substituted into the mapping matrix T, thereby completing the tensor compression.
The invention has the following advantages and beneficial effects:
the invention provides a tensor compression algorithm based on energy-gathering dictionary learning. The specific innovative steps comprise: 1) The sparse representation of the tensor is applied to tensor compression, and the denoising capability is superior to other algorithms; 2) The compression of the tensor is completed through the dimensionality reduction of the mapping matrix, so that the damage of vectorization operation to the internal data structure of the tensor is avoided; 3) Dimension reduction is performed on the dictionary of the tensor by adopting an energy-gathered dictionary learning algorithm, so that a data structure between data in each dimension can be reserved, and the data retention capacity is improved.
Drawings
Figure 1 is a schematic diagram of the Tucker decomposition of a third-order tensor as used in the preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the tensor compression algorithm based on energy-gathering dictionary learning;
figure 3 is a flow diagram of a tensor compression algorithm based on energy-gathered dictionary learning.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the method mainly solves the problems that in the traditional tensor compression algorithm, a data structure is damaged, information is lost, and new noise is introduced. The method mainly includes the steps that a dictionary, a sparse coefficient tensor and a nuclear tensor are obtained through Tak decomposition and sparse representation, then a new sparse representation form is formed through the approximate relation of the sparse coefficient tensor and the nuclear tensor, and finally the dictionary in the sparse representation is subjected to dimensionality reduction through an energy-gathering dictionary learning algorithm, so that tensor compression is achieved.
FIG. 2 is a general flow chart of the present invention, which is described below with reference to the accompanying drawings, and includes the following steps:
the method comprises the following steps: simultaneously carrying out sparse representation and tach decomposition on the input tensor, and obtaining the following representation in the sparse representation process
After the tensor is subjected to sparse representation, a product form of the sparse coefficient tensor in each dimension is obtained, in other words, in the sparse representation, the sparse coefficient tensor takes the dictionary D as a mapping matrix and is projected in each dimension, and therefore the sparse tensor is obtained.
Given a third-order tensor $\chi \in \mathbb{R}^{I\times J\times K}$, the Tucker decomposition process is shown in FIG. 1. The Tucker decomposition yields the expression
$$\chi = Z \times_1 A \times_2 B \times_3 C$$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, which reflects the interaction among the dimensions. P, Q and R are the numbers of columns of the factor matrices A, B and C, respectively, and I, J and K are the sizes of the dimensions of the original tensor; if P, Q and R are smaller than I, J and K, the core tensor can be regarded as a compression of the original tensor.
More generally, for an N-th-order tensor, the Tucker decomposition takes the form
$$\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$$
Step two: obtain a new tensor sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition.
Observing the sparse representation of the tensor and the form of the Tucker decomposition, the two expressions are found to be similar; the core tensor obtained by the Tucker decomposition can be written as
$$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$$
Using the approximate relation between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
$$\hat{\chi} \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$$
Step three: converting the new tensor sparse representation form into a mapping matrix form related to dictionary representation according to tensor operational properties, and the specific steps comprise:
in the tensor calculation, when m is not equal to n,
Ψ× m A× n B=Ψ× n (BA)
where Ψ represents the N-order tensor m M-mode product of expression tensor and matrix n Representing the n-modulo product of the tensor and matrix.
When m is not equal to n, the composition is,
Ψ× m A× n B=Ψ× n B× m A
applying the above two properties to a new sparse representation can result in
Let T i =D i A i T Will T i Brought into the above formula to obtain
From the above equation, the sparse representation takes the form of projections of the original tensor in each dimension. In the MPCA algorithm, a high-dimensional tensor is compressed by processing the projection matrices, i.e., the principal components of each projection matrix are retained so as to compress it, thereby compressing the tensor. Inspired by the MPCA idea, the matrices $T_i$ are reduced in dimension here to realize the compression of the original tensor χ. To accomplish the dimension reduction of the matrices $T_i$, the dictionaries $D_i$ contained in $T_i$ are reduced in dimension, thereby realizing the dimensionality reduction of the mapping matrices $T_i$.
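To make the mapping-matrix view concrete, the following hedged sketch forms $T_n = D_n A_n^T$ for each mode and projects the tensor through these mappings; all names and sizes are illustrative assumptions, and the per-mode dictionaries are deliberately given fewer rows than the original mode sizes, playing the role of the reduced dictionaries so that the projected tensor is visibly smaller.

```python
import numpy as np

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    full_shape = [matrix.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_product(tensor, matrix, mode):
    shape = list(tensor.shape)
    shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, shape)

def compress_with_mappings(chi, dictionaries, factors):
    """chi_compressed = chi x_1 (D_1 A_1^T) x_2 (D_2 A_2^T) ... x_N (D_N A_N^T)."""
    out = chi
    for n, (D_n, A_n) in enumerate(zip(dictionaries, factors)):
        T_n = D_n @ A_n.T                  # mapping matrix T_n = D_n A_n^T
        out = mode_product(out, T_n, n)
    return out

# toy sizes (assumptions): chi in R^{8x9x10}, orthogonal Tucker factors
# A_n in R^{I_n x r_n}, and reduced per-mode dictionaries P_n in R^{d_n x r_n}
chi = np.random.randn(8, 9, 10)
ranks = (4, 5, 6)
factors = [np.linalg.qr(np.random.randn(s, r))[0] for s, r in zip(chi.shape, ranks)]
reduced_dicts = [np.random.randn(d, r) for d, r in zip((5, 6, 7), ranks)]
chi_small = compress_with_mappings(chi, reduced_dicts, factors)
print(chi_small.shape)   # (5, 6, 7): each mode reduced from I_n to d_n
```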
Step four: realize the compression of the tensor by using the dictionary dimension-reduction idea of the energy-gathering dictionary learning algorithm. In this algorithm, in order for the reduced dictionary P to retain the principal components of the original dictionary D, the dictionary D must first be preprocessed, as follows
$$D^T D = u\Lambda v^T$$
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix, of the form $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$. The singular values are then updated according to the principal-component threshold: k denotes the number of dictionary columns, $t_d$ denotes the principal-component threshold, $\hat{\lambda}_1,\ldots,\hat{\lambda}_d$ denote the first d singular values after the update, and $\hat{\lambda}_{d+1},\ldots,\hat{\lambda}_k$ denote the last r updated singular values.
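The exact singular-value update of the Γ(D) preprocessing is given by the expressions above. Purely as an illustration of the underlying idea — selecting the number d of retained principal components from an energy threshold $t_d$ on the spectrum of $D^T D$ — a hedged numpy sketch follows; the rule "choose the smallest d whose cumulative energy ratio reaches $t_d$" is an assumption of this sketch, not the patent's exact update.

```python
import numpy as np

def choose_energy_rank(D, t_d=0.95):
    """Pick d so that the leading spectrum of D^T D carries at least a fraction
    t_d of the total energy (an illustrative reading of the threshold t_d)."""
    # SVD of the Gram matrix D^T D = u Lambda v^T
    _, lam, _ = np.linalg.svd(D.T @ D)
    energy = np.cumsum(lam) / np.sum(lam)
    d = int(np.searchsorted(energy, t_d) + 1)   # smallest d reaching the threshold
    return d, lam

# toy dictionary with I rows and k columns (sizes are assumptions)
D = np.random.randn(64, 32)
d, lam = choose_energy_rank(D, t_d=0.95)
r = D.shape[1] - d                              # remaining r = k - d components
print(f"retain d = {d} principal components, discard r = {r}")
```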
After preprocessing the dictionary, the dictionary needs to be updated. A tensor-based multidimensional dictionary learning algorithm (TKSVD) is adopted in the updating process to complete the dictionary update for the high-dimensional tensor signal. Unlike the K-SVD dictionary update algorithm, two points should be noted about the TKSVD algorithm:
(2) When performing tensor dictionary learning, unlike the two-dimensional signal case, the following objective is obtained from the definition of the tensor norm:
where $I_T$ denotes the T×T identity matrix, $D_N$ denotes the N-th-mode dictionary, and $D_1$ denotes the 1st-mode dictionary. This objective can be solved by the least-squares method, giving the update of dictionary $D_i$ as
where $Y_{(i)}$ denotes the mode-i unfolding matrix of the tensor, $(\cdot)^{\dagger}$ denotes the pseudo-inverse, $M^{\dagger}$ being the pseudo-inverse of matrix M and $M^T$ the transpose of matrix M. On the other hand, after each iteration the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data can be calculated; the absolute error after the i-th iteration is still defined by the Frobenius norm of the tensor,
where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms.
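The patent's exact TKSVD update matrices are given by the formula above. As a hedged illustration of the least-squares step it describes — updating the mode-i dictionary from the mode-i unfolding of the training tensor and the pseudo-inverse of the corresponding coefficient factor — the following numpy sketch solves $\chi \approx B \times_i D_i$ with $B = S \times_j D_j$ (j ≠ i) for $D_i$; helper names and sizes are assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    full_shape = [matrix.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def mode_product(tensor, matrix, mode):
    shape = list(tensor.shape)
    shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, shape)

def update_mode_dictionary(chi, S, dicts, i):
    """Least-squares update of the mode-i dictionary:
    write chi ~= B x_i D_i with B = S x_j D_j (j != i), unfold both sides on
    mode i, and solve D_i = X_(i) B_(i)^+ via the pseudo-inverse."""
    B = S
    for j, D_j in enumerate(dicts):
        if j != i:
            B = mode_product(B, D_j, j)
    X_i = unfold(chi, i)
    B_i = unfold(B, i)
    return X_i @ np.linalg.pinv(B_i)

# toy data: chi in R^{8x9x10}, coefficient tensor S in R^{4x5x6},
# per-mode dictionaries D_j in R^{I_j x K_j} (all sizes are assumptions)
chi = np.random.randn(8, 9, 10)
S = np.random.randn(4, 5, 6)
dicts = [np.random.randn(8, 4), np.random.randn(9, 5), np.random.randn(10, 6)]
dicts[0] = update_mode_dictionary(chi, S, dicts, 0)
print(dicts[0].shape)   # (8, 4)
```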
After the dictionary is updated, dimension reduction is performed on it. A singular value decomposition of the updated dictionary is computed,
where $U_d$ denotes the first d columns of the left singular matrix, $U_r$ the last r columns of the left singular matrix, $\Theta_d$ the first d singular values of the singular value matrix, $\Theta_r$ the last r singular values of the singular value matrix, $V_d$ the first d columns of the right singular matrix, and $V_r$ the last r columns of the right singular matrix. The reduced-dimension dictionary P so obtained is substituted into the mapping matrix T, thereby completing the tensor compression.
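The splitting of the updated dictionary into its leading d and trailing r singular components can be sketched with a plain truncated SVD. How the retained parts are recombined into the reduced dictionary P below (here simply $P = U_d^T D = \Theta_d V_d^T$, a projection onto the d leading left singular directions) is an assumption of this sketch, since the patent's exact expression is given by the formula above.

```python
import numpy as np

def reduce_dictionary(D, d):
    """Truncated SVD of the updated dictionary D = U Theta V^T, split into the
    leading d components (U_d, Theta_d, V_d) and the trailing r components."""
    U, theta, Vt = np.linalg.svd(D, full_matrices=False)
    U_d, U_r = U[:, :d], U[:, d:]
    theta_d, theta_r = theta[:d], theta[d:]
    V_d_t, V_r_t = Vt[:d, :], Vt[d:, :]
    # assumed recombination: project D onto its d leading left singular
    # directions, P = U_d^T D = Theta_d V_d^T (a sketch, not the patent formula)
    P = np.diag(theta_d) @ V_d_t
    return P, (U_d, theta_d, V_d_t)

# toy per-mode dictionary D in R^{I x K} and orthogonal Tucker factor A in R^{I x K}
I, K, d = 16, 10, 6
D = np.random.randn(I, K)
A = np.linalg.qr(np.random.randn(I, K))[0]
P, _ = reduce_dictionary(D, d)
T_reduced = P @ A.T                    # reduced mapping matrix, shape (d, I)
print(P.shape, T_reduced.shape)        # (6, 10) (6, 16)
```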
The tensor compression algorithm based on energy-gathering dictionary learning specifically comprises the following steps (a hedged Python sketch of this loop is given after the listing):
1. Input: a tensor χ composed of T training samples, sparsity k, maximum number of iterations Itermax, termination threshold ε
2. Tucker decomposition: $\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$
3. Initialize the dictionaries through the Γ(D) preprocessing of the energy-gathering dictionary learning algorithm
4. With the update count i = 0, compute the sparse coefficient tensor S and use it to calculate the absolute error $E_0$ before any iterative update
5. i-th dictionary update: For k = 1 : $I_N$ ... End
6. Normalize the N dictionaries obtained in the previous step
7. Update the sparse coefficient tensor $S_i$ with the i-th updated dictionary
8. Calculate the absolute error $E_i$ and the relative error $E_r$ after the i-th update
9. Check the termination condition: if $E_r < \varepsilon$ or the number of iterations exceeds the maximum limit, terminate the loop; otherwise continue with steps 5–8
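The listing above can be read as the following Python skeleton. It is a hedged sketch rather than the patented algorithm itself: the dictionaries are initialized randomly instead of via Γ(D), the sparse-coding step is replaced by an unconstrained least-squares fit (a real implementation would use a sparse coder such as Kronecker-OMP with sparsity k), and no energy-gathering dimension reduction is applied; only the control flow of steps 3–9 is illustrated.

```python
import numpy as np

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def fold(m, mode, shape):
    full = [m.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(m.reshape(full), 0, mode)

def mode_product(t, m, mode):
    shape = list(t.shape)
    shape[mode] = m.shape[0]
    return fold(m @ unfold(t, mode), mode, shape)

def reconstruct(S, dicts):
    out = S
    for n, D in enumerate(dicts):
        out = mode_product(out, D, n)
    return out

def update_dictionary(chi, S, dicts, i):
    # least-squares update of the mode-i dictionary (see sketch above)
    B = S
    for j, D in enumerate(dicts):
        if j != i:
            B = mode_product(B, D, j)
    return unfold(chi, i) @ np.linalg.pinv(unfold(B, i))

def update_coefficients(chi, dicts):
    # placeholder for the sparse-coding step: unconstrained least squares
    S = chi
    for n, D in enumerate(dicts):
        S = mode_product(S, np.linalg.pinv(D), n)
    return S

def train(chi, ranks, iter_max=20, eps=1e-4):
    dicts = [np.random.randn(s, r) for s, r in zip(chi.shape, ranks)]   # step 3 (init)
    S = update_coefficients(chi, dicts)                                 # step 4
    E_prev = np.linalg.norm(chi - reconstruct(S, dicts))
    for it in range(iter_max):
        for i in range(len(dicts)):                                     # step 5
            dicts[i] = update_dictionary(chi, S, dicts, i)
        dicts = [D / np.linalg.norm(D, axis=0, keepdims=True) for D in dicts]  # step 6
        S = update_coefficients(chi, dicts)                             # step 7
        E_i = np.linalg.norm(chi - reconstruct(S, dicts))               # step 8
        E_r = abs(E_prev - E_i) / max(E_prev, 1e-12)
        if E_r < eps:                                                   # step 9
            break
        E_prev = E_i
    return dicts, S

dicts, S = train(np.random.randn(8, 9, 10), ranks=(4, 5, 6))
print([D.shape for D in dicts], S.shape)
```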
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure in any way whatsoever. After reading the description of the present invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (4)
1. A tensor compression method based on energy-gathered dictionary learning is characterized by comprising the following steps:
step 1): acquiring a multi-dimensional signal, expressing it as a tensor, inputting the tensor, and performing sparse representation and Tucker decomposition;
step 2) obtaining a new tensor sparse representation of the original tensor by using the approximate relation between the sparse coefficient tensor in the sparse representation and the core tensor obtained by the Tucker decomposition;
step 3) converting the new tensor sparse representation form in the step 2) into a mapping matrix form related to dictionary representation according to tensor operation properties;
step 4) reducing the dimension of the dictionary in the mapping matrix by using the idea of reducing the dimension of the energy-gathering dictionary learning algorithm, thereby realizing the compression of tensor;
the step 1) obtains, through the Tucker decomposition, the product form of the core tensor and the mapping matrices in each dimension; the Tucker decomposition yields the expression
$$\chi = Z \times_1 A \times_2 B \times_3 C$$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, which reflect the principal components in each dimension, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, which reflects the interaction among the dimensions; P, Q and R correspond to the numbers of columns of the factor matrices A, B and C, respectively, and I, J and K represent the sizes of the dimensions of the original tensor; if P, Q and R are smaller than I, J and K, the core tensor can be regarded as a compression of the original tensor;
for an N-th-order tensor, the Tucker decomposition takes the form
$$\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$$
where χ denotes the input tensor signal, Z denotes the core tensor, and $A_i$ denotes the decomposition matrix in each dimension, which is an orthogonal matrix;
the sparse representation of the tensor in step 1) takes the form
$$\hat{\chi} = S \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$$
where $\hat{\chi}$ denotes the signal after sparse representation, S denotes the sparse coefficient tensor, N denotes the order of the tensor itself, and $D_i$ denotes the dictionary in each dimension;
in the step 2), by observing the sparse representation of the tensor and the form of the Tucker decomposition, the two expressions are found to be similar; the core tensor obtained by the Tucker decomposition can be written as
$$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$$
and, using the approximate relation between the sparse coefficient tensor and the core tensor, substituting this expression of the core tensor into the tensor sparse representation gives
$$\hat{\chi} \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$$
The step 3) converts the new tensor sparse representation form in the step 2) into a mapping matrix form related to dictionary representation according to tensor operation properties, and specifically comprises the following steps:
in tensor operations, when m = n,
$$\Psi \times_m A \times_n B = \Psi \times_n (BA)$$
where Ψ denotes an N-th-order tensor, $\times_m$ denotes the m-mode product of a tensor and a matrix, and $\times_n$ denotes the n-mode product of a tensor and a matrix;
when m ≠ n,
$$\Psi \times_m A \times_n B = \Psi \times_n B \times_m A$$
applying the above two properties to the new sparse representation gives
$$\hat{\chi} \approx \chi \times_1 (D_1 A_1^T) \times_2 (D_2 A_2^T) \cdots \times_N (D_N A_N^T)$$
2. The tensor compression method based on the energy-gathered dictionary learning as claimed in claim 1, wherein the step 4) reduces the dimension of the dictionary in the mapping matrix by using the idea of dimension reduction of the energy-gathered dictionary learning algorithm, so as to realize the tensor compression, and the specific steps include:
inputting: tensor composed of T training samplesSparsity k, maximum number of iterations Itermax, termination threshold ε
2. And (3) decomposing tack: χ = Z |) 1 A 1 × 2 A 2 ...× N A N ;
3. Initializing a dictionary and performing a gamma (D) process in an energy-gathered dictionary learning algorithm;
4. calculating and using S when the update times i =0Calculating the absolute error E without any iterative update 0 ;
5. The ith dictionary update
For k=1:I N
End
6. Normalizing the N dictionaries obtained in the last step;
7. updating sparse coefficient tensor S by using dictionary updated at ith time i ;
8. Calculating the absolute error E after the ith update i And relative error E r ;
9. Judging the termination condition, if E r If the epsilon is less than epsilon or the iteration times exceed the maximum limit, the circulation is terminated, otherwise, the step 5-8 is continued;
3. The tensor compression method based on energy-gathered dictionary learning as claimed in claim 2, wherein in order to make the dictionary P after dimension reduction retain the principal components in the original dictionary D, the dictionary D needs to be preprocessed as follows
$$D^T D = u\Lambda v^T$$
where u is the left singular matrix of the singular value decomposition, v is the right singular matrix, and Λ is the singular value matrix, of the form $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$; the singular values are then updated according to the principal-component threshold: k denotes the number of dictionary columns, $t_d$ denotes the principal-component threshold, $\hat{\lambda}_1,\ldots,\hat{\lambda}_d$ denote the first d singular values after the update, and $\hat{\lambda}_{d+1},\ldots,\hat{\lambda}_k$ denote the last r updated singular values;
4. The tensor compression method based on energy-gathered dictionary learning as claimed in claim 3, wherein after the dictionary is preprocessed, the dictionary needs to be updated; a tensor-based multidimensional dictionary learning algorithm, TKSVD, is adopted in the updating process to complete the dictionary update for the high-dimensional tensor signal, and, different from the K-SVD dictionary update algorithm, the TKSVD algorithm specifically comprises:
(1) when performing tensor dictionary learning, unlike the two-dimensional signal case, the following objective is obtained from the definition of the tensor norm:
where $D_N$ denotes the N-th-mode dictionary, $D_1$ denotes the 1st-mode dictionary, and $I_T$ denotes the T×T identity matrix; solving this objective by the least-squares method gives the update of dictionary $D_i$ as
where $Y_{(i)}$ denotes the mode-i unfolding matrix of the tensor, $(\cdot)^{\dagger}$ denotes the pseudo-inverse, $M^{\dagger}$ being the pseudo-inverse of matrix M and $M^T$ the transposed matrix of M; on the other hand, after each iteration the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data are calculated, and the absolute error after the i-th iteration is still defined by the Frobenius norm of the tensor,
where S denotes the coefficient tensor and the remaining term denotes the error between the true signal and the approximated signal after removing G atoms;
after the dictionary is updated, the dictionary is subjected to dimensionality reduction, and singular value decomposition is performed on the updated dictionary
wherein $U_d$ denotes the first d columns of the left singular matrix, $U_r$ the last r columns of the left singular matrix, $\Theta_d$ the first d singular values of the singular value matrix, $\Theta_r$ the last r singular values of the singular value matrix, $V_d$ the first d columns of the right singular matrix, and $V_r$ the last r columns of the right singular matrix; the reduced-dimension dictionary P so obtained is substituted into the mapping matrix T, thereby completing tensor compression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910126833.5A CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910126833.5A CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109921799A CN109921799A (en) | 2019-06-21 |
CN109921799B (en) | 2023-03-31
Family
ID=66961845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910126833.5A Active CN109921799B (en) | 2019-02-20 | 2019-02-20 | Tensor compression method based on energy-gathering dictionary learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109921799B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110579967B (en) * | 2019-09-23 | 2020-06-30 | 中南大学 | Process monitoring method based on simultaneous dimensionality reduction and dictionary learning |
CN111241076B (en) * | 2020-01-02 | 2023-10-31 | 西安邮电大学 | Stream data increment processing method and device based on tensor chain decomposition |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014096118A (en) * | 2012-11-12 | 2014-05-22 | Nippon Telegr & Teleph Corp <Ntt> | Device, method, and program for missing value prediction and device, method, and program for commodity recommendation |
CN107561576A (en) * | 2017-08-31 | 2018-01-09 | 电子科技大学 | Seismic signal method based on dictionary learning regularization rarefaction representation |
CN108305297A (en) * | 2017-12-22 | 2018-07-20 | 上海交通大学 | A kind of image processing method based on multidimensional tensor dictionary learning algorithm |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5861827A (en) * | 1996-07-24 | 1999-01-19 | Unisys Corporation | Data compression and decompression system with immediate dictionary updating interleaved with string search |
CN1297822C (en) * | 2003-02-21 | 2007-01-31 | 重庆邮电学院 | Estimation method for radio orientation incoming wave direction based on TD-SCMA |
US7490071B2 (en) * | 2003-08-29 | 2009-02-10 | Oracle Corporation | Support vector machines processing system |
US20100246920A1 (en) * | 2009-03-31 | 2010-09-30 | Iowa State University Research Foundation, Inc. | Recursive sparse reconstruction |
CN102130862B (en) * | 2011-04-26 | 2013-07-17 | 重庆邮电大学 | Method for reducing overhead caused by channel estimation of communication system |
US8935308B2 (en) * | 2012-01-20 | 2015-01-13 | Mitsubishi Electric Research Laboratories, Inc. | Method for recovering low-rank matrices and subspaces from data in high-dimensional matrices |
US10545919B2 (en) * | 2013-09-27 | 2020-01-28 | Google Llc | Decomposition techniques for multi-dimensional data |
CN105099460B (en) * | 2014-05-07 | 2018-05-04 | 瑞昱半导体股份有限公司 | Dictionary compression method, dictionary decompression method and dictionary constructing method |
US9870519B2 (en) * | 2014-07-08 | 2018-01-16 | Nec Corporation | Hierarchical sparse dictionary learning (HiSDL) for heterogeneous high-dimensional time series |
CN104318064B (en) * | 2014-09-26 | 2018-01-30 | 大连理工大学 | Head coherent pulse response three-dimensional data compression method based on model's Multidimensional decomposition technique |
CN104683074B (en) * | 2015-03-13 | 2018-09-11 | 重庆邮电大学 | Extensive mimo system limited feedback method based on compressed sensing |
CN104933684B (en) * | 2015-06-12 | 2017-11-21 | 北京工业大学 | A kind of light field method for reconstructing |
US10235600B2 (en) * | 2015-06-22 | 2019-03-19 | The Johns Hopkins University | System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing |
CN106506007A (en) * | 2015-09-08 | 2017-03-15 | 联发科技(新加坡)私人有限公司 | A kind of lossless data compression and decompressing device and its method |
CN106023098B (en) * | 2016-05-12 | 2018-11-16 | 西安电子科技大学 | Image mending method based on the more dictionary learnings of tensor structure and sparse coding |
CN106097278B (en) * | 2016-06-24 | 2021-11-30 | 北京工业大学 | Sparse model, reconstruction method and dictionary training method of multi-dimensional signal |
CN106897685A (en) * | 2017-02-17 | 2017-06-27 | 深圳大学 | Face identification method and system that dictionary learning and sparse features based on core Non-negative Matrix Factorization are represented |
US10436871B2 (en) * | 2017-04-24 | 2019-10-08 | Cedars-Sinai Medical Center | Low-rank tensor imaging for multidimensional cardiovascular MRI |
CN107516129B (en) * | 2017-08-01 | 2020-06-02 | 北京大学 | Dimension self-adaptive Tucker decomposition-based deep network compression method |
CN107507253B (en) * | 2017-08-15 | 2020-09-01 | 电子科技大学 | Multi-attribute body data compression method based on high-order tensor approximation |
CN108521586B (en) * | 2018-03-20 | 2020-01-14 | 西北大学 | IPTV television program personalized recommendation method giving consideration to time context and implicit feedback |
CN108510013B (en) * | 2018-07-02 | 2020-05-12 | 电子科技大学 | Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix |
CN109241491A (en) * | 2018-07-28 | 2019-01-18 | 天津大学 | The structural missing fill method of tensor based on joint low-rank and rarefaction representation |
2019-02-20: CN application CN201910126833.5A — patent CN109921799B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN109921799A (en) | 2019-06-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2024-11-04
Address after: No. 118 Shanhu Road, Nanping Street, Nan'an District, Chongqing 400060, Micro Enterprise Incubation Base 206-2021-2024
Patentee after: Jimoji Network Technology (Chongqing) Co., Ltd. (China)
Address before: 400065 Chongwen Road, Nanshan Street, Nanan District, Chongqing
Patentee before: CHONGQING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS (China)