CN109921799A - Tensor compression method based on energy-gathering dictionary learning - Google Patents

Tensor compression method based on energy-gathering dictionary learning

Info

Publication number
CN109921799A
Authority
CN
China
Prior art keywords
tensor
dictionary
matrix
indicate
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910126833.5A
Other languages
Chinese (zh)
Other versions
CN109921799B (en)
Inventor
张祖凡
毛军伟
甘臣权
孙韶辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910126833.5A priority Critical patent/CN109921799B/en
Publication of CN109921799A publication Critical patent/CN109921799A/en
Application granted granted Critical
Publication of CN109921799B publication Critical patent/CN109921799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Machine Translation (AREA)

Abstract

A tensor compression method based on energy-gathering dictionary learning is claimed by the present invention, belonging to the field of signal processing. The method comprises the following steps: 1. the tensor is subjected to Tucker decomposition and sparse representation respectively, yielding the dictionaries, the sparse coefficient tensor and the core tensor; 2. from the relationship between the sparse coefficient tensor and the core tensor, a new sparse representation of the tensor is obtained; 3. the dictionaries in the mapping matrices are reduced in dimension with the energy-gathering dictionary learning algorithm, thereby compressing the tensor. The proposed tensor compression algorithm based on energy-gathering dictionary learning achieves effective compression of the tensor and, compared with other compression algorithms, retains more of the information of the original tensor and achieves a better denoising effect.

Description

Tensor compression method based on energy-gathering dictionary learning
Technical field
The invention belongs to the field of signal processing, and in particular relates to a tensor signal compression algorithm based on energy-gathering dictionary learning that achieves effective compression of tensor signals.
Background technique
With the development of information technology, multidimensional (MD) signals play an increasingly important role in signal processing. At the same time, multidimensional signals place a heavy burden on transmission and storage. To cope with the challenges of multidimensional signal processing, the tensor representation of multidimensional signals has attracted attention: expressing a multidimensional signal as a tensor and processing it in that form brings great convenience. Compressing a multidimensional signal is therefore essentially compressing a tensor, so tensor compression algorithms play an ever more important role in multidimensional signal compression and have become a current research hotspot.
In recent years, researchers have proposed many effective tensor compression algorithms built on CP decomposition and Tucker decomposition. They fall roughly into two categories. The first operates on the tensor data themselves, vectorizing the data directly on the basis of a tensor decomposition. However, by the well-known "ugly duckling" theorem, no representation is optimal without prior knowledge; in other words, vectorizing tensor data is not always effective. Concretely, this may cause the following problems: first, it destroys the intrinsic high-order structure and correlations of the original data, losing information or masking redundant information and the high-order dependencies of the original data, so that potentially more meaningful models of the original data cannot be obtained; second, vectorization produces very high-dimensional vectors, leading to overfitting, the "curse of dimensionality" and small-sample problems.
The second category applies the sparse representation of a tensor to its compression. Owing to the equivalence between the Tucker model of a tensor and its Kronecker representation, a tensor can be expressed over a given Kronecker dictionary with a specific sparsity pattern, such as multiway sparsity or block sparsity. With the emergence of tensor sparsity, corresponding sparse coding schemes have appeared, such as Kronecker-OMP and N-way Block OMP, which bring great convenience to tensor compression; in parallel, many dictionary learning algorithms have been developed that perform sparse coding and dictionary learning in each mode of the tensor to achieve a sparse representation. However, these sparse-representation-based tensor processing algorithms often introduce new noise, which affects the accuracy of the data; moreover, choosing the sparsity level during sparse representation also poses a challenge for tensor processing.
Hence, when processing tensors containing large volumes of data, compressing them effectively, extracting useful information, and reducing transmission and storage costs face unprecedented challenges.
Summary of the invention
The present invention seeks to address the above problems of the prior art by proposing a tensor compression method based on energy-gathering dictionary learning that better preserves the original data and has stronger noise-removal capability. The technical scheme of the invention is as follows:
A tensor compression method based on energy-gathering dictionary learning, comprising the following steps:
Step 1): obtain a multidimensional signal, express it as a tensor, and apply both sparse representation and Tucker decomposition to the input tensor;
Step 2): using the approximate relationship between the sparse coefficient tensor of the sparse representation and the core tensor of the Tucker decomposition, derive a new sparse representation of the original tensor;
Step 3): convert the new sparse representation of step 2), by the properties of tensor operations, into a mapping-matrix form expressed in terms of the dictionaries;
Step 4): reduce the dimension of the dictionaries in the mapping matrices following the dimensionality-reduction idea of the energy-gathering dictionary learning algorithm, thereby compressing the tensor.
Further, in step 1) the Tucker decomposition expresses the tensor as the core tensor multiplied by a factor matrix in each mode. For a third-order tensor the decomposition reads
$\chi = Z \times_1 A \times_2 B \times_3 C$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, reflecting the principal components in each mode, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, reflecting the interactions between the modes. $P$, $Q$ and $R$ are the numbers of columns of $A$, $B$ and $C$, and $I$, $J$ and $K$ are the sizes of the modes of the original tensor; if $P$, $Q$, $R$ are smaller than $I$, $J$, $K$, the core tensor can be regarded as a compression of the original tensor.
For an $N$th-order tensor, the Tucker decomposition takes the form
$\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$
where $\chi$ denotes the input tensor signal, $Z$ the core tensor, and $A_i$ the factor matrix in the $i$th mode, which is orthogonal.
Further, the sparse representation of the tensor in step 1) takes the form
$\hat\chi = S \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
where $\hat\chi$ denotes the signal after sparse representation, $S$ the sparse coefficient tensor, $N$ the order of the tensor, and $D_i$ the dictionary in the $i$th mode.
Further, in step 2), observing that the sparse representation and the Tucker decomposition have similar forms, the core tensor obtained from the Tucker decomposition can be expressed as
$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$
Using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression for the core tensor into the sparse representation gives
$\chi \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
Further, step 3) converts the new sparse representation of step 2), by the properties of tensor operations, into a mapping-matrix form expressed in terms of the dictionaries, specifically:
In tensor operations,
when $m = n$,
$\Psi \times_n A \times_n B = \Psi \times_n (BA)$
where $\Psi$ denotes an $N$th-order tensor, $\times_m$ the mode-$m$ product of a tensor and a matrix, and $\times_n$ the mode-$n$ product;
when $m \neq n$,
$\Psi \times_m A \times_n B = \Psi \times_n B \times_m A$
Applying these two properties to the new sparse representation yields
$\chi \approx \chi \times_1 (D_1 A_1^T) \times_2 (D_2 A_2^T) \cdots \times_N (D_N A_N^T)$
Further, step 4) reduces the dimension of the dictionaries in the mapping matrices following the dimensionality-reduction idea of the energy-gathering dictionary learning algorithm, thereby compressing the tensor. The specific steps are:
Input: the tensor $\chi$ formed by $T$ training samples, sparsity level $k$, maximum number of iterations Itermax, termination threshold $\varepsilon$
1. Initialize the dictionaries $\{D_i\}_{i=1}^N$ as Gaussian matrices and normalize the columns of each dictionary;
2. Tucker decomposition: $\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$;
3. Initialize the dictionaries by the $\Gamma(D)$ pretreatment of the energy-gathering dictionary learning algorithm;
4. Compute $S$ for update count $i = 0$, and the absolute error $E_0$ before any iterative update;
5. $i$th dictionary update:
for k = 1 : N
compute the updated value of the $k$th dictionary by the least-squares rule given below;
end
6. Normalize the $N$ dictionaries obtained in the previous step;
7. Update the sparse coefficient tensor $S_i$ with the dictionaries of the $i$th update;
8. Compute the absolute error $E_i$ and the relative error $E_r$ after the $i$th update;
9. Termination test: if $E_r < \varepsilon$ or the number of iterations exceeds the maximum limit, stop; otherwise repeat steps 5-8;
10. Obtain the dictionaries $\{D_i\}_{i=1}^N$;
11. Obtain $U_d$ by the singular value decomposition described below, giving the reduced dictionary $P$;
12. Substitute the dictionary $P$ into the mapping matrices $T_i$;
Output: the compressed tensor
Further, in order that the reduced dictionary $P$ retain the principal components of the original dictionary $D$, the dictionary $D$ must be pretreated. The pretreatment is as follows:
$D^T D = u \Lambda v^T$
where $u$ is the left singular matrix of the singular value decomposition, $v$ the right singular matrix and $\Lambda$ the singular value matrix, $\Lambda = \mathrm{diag}(\sigma_1, \ldots, \sigma_k)$. The singular values are then updated, where $k$ denotes the number of dictionary columns, $t_d$ the principal-component threshold, $\hat\sigma_1, \ldots, \hat\sigma_d$ the updated leading $d$ singular values and $\hat\sigma_{d+1}, \ldots, \hat\sigma_{d+r}$ the updated trailing $r$ singular values, giving the new singular value matrix $\hat\Lambda$.
The new dictionary is then formed from the original left and right singular matrices, i.e. $\hat D = u \hat\Lambda v^T$.
Further, after the pretreatment, the dictionaries must be updated. The update uses the tensor-based multidimensional dictionary learning algorithm TKSVD to update the dictionaries of the high-dimensional tensor signal. Unlike the K-SVD dictionary update algorithm, TKSVD differs in two respects:
(1) Learning tensor dictionaries differs from the 2-D learning mode. From the definition of the tensor Frobenius norm, a least-squares objective is obtained in which $D_N$ denotes the $N$th-mode dictionary, $D_1$ the first-mode dictionary, and $I_T$ the $T$th-order identity matrix; solving it by least squares gives the updated value of dictionary $D_i$, where $Y_{(i)}$ denotes the mode-$i$ unfolding matrix of the tensor and $\dagger$ the pseudoinverse, i.e. $M^{\dagger} = (M^T M)^{-1} M^T$, where $M^{\dagger}$ denotes the pseudoinverse of matrix $M$ and $M^T$ its transpose.
(2) After each iteration, the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data are computed; the absolute error after the $i$th iteration is still defined with the tensor Frobenius norm, where $S$ denotes the coefficient tensor and the error term is the difference between the actual signal and the approximate signal after the $G$th atom is removed.
After the dictionary update is complete, the dictionary is reduced in dimension: a singular value decomposition of the updated dictionary is taken,
$D = [U_d \; U_r] \, \mathrm{diag}(\Theta_d, \Theta_r) \, [V_d \; V_r]^T$
where $U_d$ denotes the leading $d$ columns of the left singular matrix, $U_r$ its trailing $r$ columns, $\Theta_d$ the leading $d$ singular values of the singular value matrix, $\Theta_r$ its trailing $r$ singular values, $V_d$ the leading $d$ columns of the right singular matrix and $V_r$ its trailing $r$ columns. The reduced dictionary $P$ is then expressed in terms of $U_d$, and substituting $P$ into the mapping matrix $T$ completes the tensor compression.
The advantages and beneficial effects of the present invention are as follows:
The present invention proposes a tensor compression algorithm based on energy-gathering dictionary learning. Its specific innovations are: 1) the sparse representation of the tensor is applied to tensor compression, which outperforms other algorithms in noise removal; 2) the compression is completed by reducing the dimension of the mapping matrices, avoiding the damage that vectorization does to the internal data structure of the tensor; 3) the tensor dictionaries are reduced with the energy-gathering dictionary learning algorithm, preserving the structure of the data in every mode and improving data retention.
Detailed description of the invention
Fig. 1 is the Tucker decomposition diagram of a third-order tensor used in the preferred embodiment of the present invention;
Fig. 2 is the schematic diagram of the tensor compression algorithm based on energy-gathering dictionary learning;
Fig. 3 is the flow chart of the tensor compression algorithm based on energy-gathering dictionary learning.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and in detail with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical solution by which the present invention solves the above technical problems is:
The invention focuses on the problems of traditional tensor compression algorithms: destroyed data structure, information loss and newly introduced noise. The main idea is to obtain the dictionaries, the sparse coefficient tensor and the core tensor by Tucker decomposition and sparse representation, then to form a new sparse representation from the approximate relationship between the sparse coefficient tensor and the core tensor, and finally to reduce the dimension of the dictionaries in the sparse representation with the energy-gathering dictionary learning algorithm, thereby compressing the tensor.
Fig. 2 gives the overall flow of the invention, explained with reference to the accompanying drawings in the following steps:
Step 1: apply sparse representation and Tucker decomposition to the input tensor simultaneously. The sparse representation gives
$\chi \approx S \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
After sparse representation the tensor takes the form of products of the sparse coefficient tensor in each mode; in other words, in the sparse representation the sparse coefficient tensor is projected in each mode with the dictionary $D_i$ as the mapping matrix, yielding the sparsified tensor.
Given a third-order tensor $\chi \in \mathbb{R}^{I\times J\times K}$, the Tucker decomposition proceeds as shown in Fig. 1 and yields
$\chi = Z \times_1 A \times_2 B \times_3 C$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, reflecting the principal components in each mode, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, reflecting the interactions between the modes. $P$, $Q$ and $R$ are the numbers of columns of $A$, $B$ and $C$, and $I$, $J$ and $K$ are the sizes of the modes of the original tensor; if $P$, $Q$, $R$ are smaller than $I$, $J$, $K$, the core tensor can be regarded as a compression of the original tensor.
More generally, for an $N$th-order tensor the Tucker decomposition takes the form
$\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$
A short code sketch of the mode-$n$ product and this reconstruction follows.
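The following Python sketch (numpy only) illustrates the mode-n product and the Tucker form above. The helper names mode_n_product and tucker_reconstruct are ours for illustration, not names from the patent.

    import numpy as np

    def mode_n_product(tensor, matrix, n):
        # Mode-n product (X x_n M): move mode n to the front, flatten,
        # multiply, then restore the original axis order.
        Xn = np.moveaxis(tensor, n, 0).reshape(tensor.shape[n], -1)
        out = matrix @ Xn
        new_shape = (matrix.shape[0],) + tuple(np.delete(tensor.shape, n))
        return np.moveaxis(out.reshape(new_shape), 0, n)

    def tucker_reconstruct(core, factors):
        # chi = Z x_1 A1 x_2 A2 ... x_N AN
        x = core
        for n, A in enumerate(factors):
            x = mode_n_product(x, A, n)
        return x

    # Example: core Z of size (P, Q, R) = (2, 3, 4) and factors A, B, C
    Z = np.random.randn(2, 3, 4)
    A, B, C = np.random.randn(5, 2), np.random.randn(6, 3), np.random.randn(7, 4)
    chi = tucker_reconstruct(Z, [A, B, C])   # chi lies in R^{5 x 6 x 7}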
Step 2: use the approximate relationship between the sparse coefficient tensor of the sparse representation and the core tensor of the Tucker decomposition to obtain a new sparse representation of the original tensor.
Observing that the sparse representation and the Tucker decomposition have similar forms, the Tucker decomposition expresses the core tensor as
$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$
Using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression for the core tensor into the sparse representation gives
$\chi \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
Step 3: convert the new sparse representation, by the properties of tensor operations, into a mapping-matrix form expressed in terms of the dictionaries. Specifically, in tensor operations:
When $m = n$,
$\Psi \times_n A \times_n B = \Psi \times_n (BA)$
where $\Psi$ denotes an $N$th-order tensor, $\times_m$ the mode-$m$ product of a tensor and a matrix, and $\times_n$ the mode-$n$ product.
When $m \neq n$,
$\Psi \times_m A \times_n B = \Psi \times_n B \times_m A$
Applying these two properties to the new sparse representation yields
$\chi \approx \chi \times_1 (D_1 A_1^T) \times_2 (D_2 A_2^T) \cdots \times_N (D_N A_N^T)$
Let $T_i = D_i A_i^T$; substituting $T_i$ into the expression above gives
$\chi \approx \chi \times_1 T_1 \times_2 T_2 \cdots \times_N T_N$
As this equation shows, the sparse representation is a projection of the original tensor in each mode. In the MPCA algorithm, a high-dimensional tensor is compressed by processing the projection matrices: retaining the principal components of each projection matrix compresses the projection matrix and thereby the tensor. Inspired by MPCA, the matrix $T_i$ is reduced in dimension here so as to compress the original tensor $\chi$; to reduce $T_i$, the dictionary $D_i$ inside the sparse $T_i$ is reduced, which in turn reduces the mapping matrix $T_i$. Both mode-product properties are checked numerically in the sketch below.
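The sketch reuses mode_n_product from the sketch above, with random matrices standing in for the dictionaries and factor matrices.

    X = np.random.randn(4, 5, 6)
    A = np.random.randn(3, 5)   # acts on mode 1
    B = np.random.randn(2, 3)   # composed with A on mode 1
    C = np.random.randn(7, 6)   # acts on mode 2

    # Same mode (m = n):  X x_1 A x_1 B  ==  X x_1 (B A)
    lhs = mode_n_product(mode_n_product(X, A, 1), B, 1)
    rhs = mode_n_product(X, B @ A, 1)
    assert np.allclose(lhs, rhs)

    # Different modes (m != n) commute:  X x_1 A x_2 C  ==  X x_2 C x_1 A
    lhs = mode_n_product(mode_n_product(X, A, 1), C, 2)
    rhs = mode_n_product(mode_n_product(X, C, 2), A, 1)
    assert np.allclose(lhs, rhs)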
Step 4: compress the tensor using the dictionary dimensionality-reduction idea of the energy-gathering dictionary learning algorithm. In the algorithm, in order that the reduced dictionary $P$ retain the principal components of the original dictionary $D$, the dictionary $D$ must be pretreated. The pretreatment is as follows:
$D^T D = u \Lambda v^T$
where $u$ is the left singular matrix of the singular value decomposition, $v$ the right singular matrix and $\Lambda$ the singular value matrix, $\Lambda = \mathrm{diag}(\sigma_1, \ldots, \sigma_k)$. The singular values are then updated, where $k$ denotes the number of dictionary columns, $t_d$ the principal-component threshold, $\hat\sigma_1, \ldots, \hat\sigma_d$ the updated leading $d$ singular values and $\hat\sigma_{d+1}, \ldots, \hat\sigma_{d+r}$ the updated trailing $r$ singular values, giving the new singular value matrix $\hat\Lambda$.
The new dictionary is then formed from the original left and right singular matrices, i.e. $\hat D = u \hat\Lambda v^T$. A sketch of this pretreatment follows.
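The update rule for the singular values is only partially legible in the source; one plausible reading, sketched below, keeps the leading singular values whose cumulative energy reaches the threshold t_d and zeroes the rest. The sketch also decomposes D directly rather than D^T D (same singular vectors, squared singular values), which is an assumed simplification.

    def gamma_pretreat(D, t_d=0.95):
        # Gamma(D): keep the leading singular values carrying a fraction
        # t_d of the total energy, zero the tail (assumed reading), and
        # rebuild the dictionary from the original singular vectors.
        u, s, vt = np.linalg.svd(D, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        d = int(np.searchsorted(energy, t_d)) + 1   # smallest d reaching t_d
        s_hat = s.copy()
        s_hat[d:] = 0.0
        return (u * s_hat) @ vt, d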
After the pretreatment, the dictionaries must be updated. The update uses the tensor-based multidimensional dictionary learning algorithm (TKSVD) to update the dictionaries of the high-dimensional tensor signal. Unlike the K-SVD dictionary update algorithm, TKSVD differs in two respects:
(1) Learning tensor dictionaries differs from the 2-D learning mode. From the definition of the tensor Frobenius norm, a least-squares objective is obtained in which $I_T$ denotes the $T$th-order identity matrix, $D_N$ the $N$th-mode dictionary and $D_1$ the first-mode dictionary. Solving this objective by least squares gives the updated value of dictionary $D_i$, where $Y_{(i)}$ denotes the mode-$i$ unfolding matrix of the tensor and $\dagger$ the pseudoinverse, i.e. $M^{\dagger} = (M^T M)^{-1} M^T$, where $M^{\dagger}$ denotes the pseudoinverse of matrix $M$ and $M^T$ its transpose.
(2) After each iteration, the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data are computed. The absolute error after the $i$th iteration is still defined with the tensor Frobenius norm, where $S$ denotes the coefficient tensor and the error term is the difference between the actual signal and the approximate signal after the $G$th atom is removed.
After the dictionary update is complete, the dictionary is reduced in dimension. A singular value decomposition of the updated dictionary is taken,
$D = [U_d \; U_r] \, \mathrm{diag}(\Theta_d, \Theta_r) \, [V_d \; V_r]^T$
where $U_d$ denotes the leading $d$ columns of the left singular matrix, $U_r$ its trailing $r$ columns, $\Theta_d$ the leading $d$ singular values of the singular value matrix, $\Theta_r$ its trailing $r$ singular values, $V_d$ the leading $d$ columns of the right singular matrix and $V_r$ its trailing $r$ columns. The reduced dictionary $P$ is then expressed in terms of $U_d$, and substituting $P$ into the mapping matrix $T$ completes the tensor compression. Sketches of this update and truncation follow.
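A hedged sketch of the TKSVD least-squares update and the final truncation. The Kronecker ordering is tied to the C-order unfolding used here (conventions vary across papers), and taking the reduced dictionary P as the leading d left singular vectors is our assumption where the source formula is illegible.

    from functools import reduce
    import numpy as np

    def unfold(tensor, n):
        # Mode-n unfolding, C-order on the remaining modes.
        return np.moveaxis(tensor, n, 0).reshape(tensor.shape[n], -1)

    def update_dictionary_i(Y, S, dicts, i):
        # Least squares for Y_(i) ~ D_i S_(i) K^T, with K the Kronecker
        # product of the other dictionaries in increasing mode order
        # (the ordering that matches the C-order unfolding above).
        others = [dicts[j] for j in range(len(dicts)) if j != i]
        K = reduce(np.kron, others)
        M = unfold(S, i) @ K.T
        Di = unfold(Y, i) @ np.linalg.pinv(M)
        return Di / np.linalg.norm(Di, axis=0)      # re-normalize columns

    def reduce_dictionary(D, d):
        # Assumed reduction: keep the d leading left singular vectors as P.
        U, _, _ = np.linalg.svd(D, full_matrices=False)
        return U[:, :d]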
For the tensor compression algorithm based on energy-gathering dictionary learning, the specific steps are:
Input: the tensor $\chi$ formed by $T$ training samples, sparsity level $k$, maximum number of iterations Itermax, termination threshold $\varepsilon$
1. Initialize the dictionaries $\{D_i\}_{i=1}^N$ as Gaussian matrices and normalize the columns of each dictionary;
2. Tucker decomposition: $\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$;
3. Initialize the dictionaries by the $\Gamma(D)$ pretreatment of the energy-gathering dictionary learning algorithm;
4. Compute $S$ for update count $i = 0$, and the absolute error $E_0$ before any iterative update;
5. $i$th dictionary update:
for k = 1 : N
compute the updated value of the $k$th dictionary by the least-squares rule above;
end
6. Normalize the $N$ dictionaries obtained in the previous step;
7. Update the sparse coefficient tensor $S_i$ with the dictionaries of the $i$th update;
8. Compute the absolute error $E_i$ and the relative error $E_r$ after the $i$th update;
9. Termination test: if $E_r < \varepsilon$ or the number of iterations exceeds the maximum limit, stop; otherwise repeat steps 5-8;
10. Obtain the dictionaries $\{D_i\}_{i=1}^N$;
11. Obtain $U_d$ by the singular value decomposition above, giving the reduced dictionary $P$;
12. Substitute the dictionary $P$ into the mapping matrices $T_i$;
Output: the compressed tensor
A consolidated code sketch of these steps is given below.
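Putting steps 1-12 together, a minimal end-to-end sketch (reusing mode_n_product, gamma_pretreat, update_dictionary_i and reduce_dictionary from the sketches above). The sparse-coding step is a crude stand-in, since the patent leaves the choice of tensor sparse-coding algorithm (e.g. Kronecker-OMP) open; the Tucker step is omitted because the factor matrices are folded into the final projection, and that MPCA-style projection is likewise an assumption.

    def reconstruct(S, dicts):
        # chi_hat = S x_1 D1 x_2 D2 ... x_N DN
        X = S
        for n, D in enumerate(dicts):
            X = mode_n_product(X, D, n)
        return X

    def sparse_code(Y, dicts, k):
        # Stand-in for a tensor sparse-coding step: back-project with
        # pseudoinverses and keep the k largest coefficients (assumption).
        S = Y
        for n, D in enumerate(dicts):
            S = mode_n_product(S, np.linalg.pinv(D), n)
        thresh = np.sort(np.abs(S).ravel())[-k]
        return np.where(np.abs(S) >= thresh, S, 0.0)

    def compress_tensor(Y, dict_shapes, d, k, iter_max=50, eps=1e-4):
        dicts = [np.random.randn(I, c) for I, c in dict_shapes]   # step 1
        dicts = [D / np.linalg.norm(D, axis=0) for D in dicts]
        dicts = [gamma_pretreat(D)[0] for D in dicts]             # step 3
        S = sparse_code(Y, dicts, k)                              # step 4
        err_prev = np.linalg.norm(Y - reconstruct(S, dicts))
        for _ in range(iter_max):
            for i in range(len(dicts)):                           # steps 5-6
                dicts[i] = update_dictionary_i(Y, S, dicts, i)
            S = sparse_code(Y, dicts, k)                          # step 7
            err = np.linalg.norm(Y - reconstruct(S, dicts))       # step 8
            if abs(err_prev - err) / max(err_prev, 1e-12) < eps:  # step 9
                break
            err_prev = err
        Ps = [reduce_dictionary(D, d) for D in dicts]             # steps 10-11
        comp = Y
        for i, P in enumerate(Ps):                                # step 12:
            comp = mode_n_product(comp, P.T, i)                   # assumed projection
        return comp, Ps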
The above embodiments should be understood as merely illustrating, not limiting, the present invention. After reading the present disclosure, a person skilled in the art may make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (8)

1. A tensor compression method based on energy-gathering dictionary learning, characterized by comprising the following steps:
Step 1): obtaining a multidimensional signal, expressing it as a tensor, and applying both sparse representation and Tucker decomposition to the input tensor;
Step 2): using the approximate relationship between the sparse coefficient tensor of the sparse representation and the core tensor of the Tucker decomposition, obtaining a new sparse representation of the original tensor;
Step 3): converting the new sparse representation of step 2), by the properties of tensor operations, into a mapping-matrix form expressed in terms of the dictionaries;
Step 4): reducing the dimension of the dictionaries in the mapping matrices following the dimensionality-reduction idea of the energy-gathering dictionary learning algorithm, thereby compressing the tensor.
2. The tensor compression method based on energy-gathering dictionary learning according to claim 1, characterized in that in step 1) the Tucker decomposition expresses the tensor as the core tensor multiplied by a factor matrix in each mode; for a third-order tensor,
$\chi = Z \times_1 A \times_2 B \times_3 C$
where $A \in \mathbb{R}^{I\times P}$, $B \in \mathbb{R}^{J\times Q}$ and $C \in \mathbb{R}^{K\times R}$ are orthogonal matrices, also called factor matrices, reflecting the principal components in each mode, and $Z \in \mathbb{R}^{P\times Q\times R}$ is the core tensor, reflecting the interactions between the modes; $P$, $Q$ and $R$ are the numbers of columns of $A$, $B$ and $C$, and $I$, $J$ and $K$ are the sizes of the modes of the original tensor; if $P$, $Q$, $R$ are smaller than $I$, $J$, $K$, the core tensor can be regarded as a compression of the original tensor;
for an $N$th-order tensor, the Tucker decomposition takes the form
$\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$
where $\chi$ denotes the input tensor signal, $Z$ the core tensor, and $A_i$ the factor matrix in the $i$th mode, which is orthogonal.
3. The tensor compression method based on energy-gathering dictionary learning according to claim 2, characterized in that the sparse representation of the tensor in step 1) takes the form
$\hat\chi = S \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
where $\hat\chi$ denotes the signal after sparse representation, $S$ the sparse coefficient tensor, $N$ the order of the tensor, and $D_i$ the dictionary in the $i$th mode.
4. The tensor compression method based on energy-gathering dictionary learning according to claim 3, characterized in that in step 2), observing that the sparse representation and the Tucker decomposition have similar forms, the core tensor obtained from the Tucker decomposition is expressed as
$Z = \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T$
and, using the approximate relationship between the sparse coefficient tensor and the core tensor, substituting this expression for the core tensor into the sparse representation gives
$\chi \approx \chi \times_1 A_1^T \times_2 A_2^T \cdots \times_N A_N^T \times_1 D_1 \times_2 D_2 \cdots \times_N D_N$
5. The tensor compression method based on energy-gathering dictionary learning according to claim 4, characterized in that step 3) converts the new sparse representation of step 2), by the properties of tensor operations, into a mapping-matrix form expressed in terms of the dictionaries, specifically:
in tensor operations,
when $m = n$,
$\Psi \times_n A \times_n B = \Psi \times_n (BA)$
where $\Psi$ denotes an $N$th-order tensor, $\times_m$ the mode-$m$ product of a tensor and a matrix, and $\times_n$ the mode-$n$ product;
when $m \neq n$,
$\Psi \times_m A \times_n B = \Psi \times_n B \times_m A$
applying these two properties to the new sparse representation yields
$\chi \approx \chi \times_1 (D_1 A_1^T) \times_2 (D_2 A_2^T) \cdots \times_N (D_N A_N^T)$
6. The tensor compression method based on energy-gathering dictionary learning according to claim 5, characterized in that step 4) reduces the dimension of the dictionaries in the mapping matrices following the dimensionality-reduction idea of the energy-gathering dictionary learning algorithm, thereby compressing the tensor, with the specific steps:
Input: the tensor $\chi$ formed by $T$ training samples, sparsity level $k$, maximum number of iterations Itermax, termination threshold $\varepsilon$
1. initialize the dictionaries $\{D_i\}_{i=1}^N$ as Gaussian matrices and normalize the columns of each dictionary;
2. Tucker decomposition: $\chi = Z \times_1 A_1 \times_2 A_2 \cdots \times_N A_N$;
3. initialize the dictionaries by the $\Gamma(D)$ pretreatment of the energy-gathering dictionary learning algorithm;
4. compute $S$ for update count $i = 0$, and the absolute error $E_0$ before any iterative update;
5. $i$th dictionary update:
for k = 1 : N
compute the updated value of the $k$th dictionary by the least-squares update rule;
end
6. normalize the $N$ dictionaries obtained in the previous step;
7. update the sparse coefficient tensor $S_i$ with the dictionaries of the $i$th update;
8. compute the absolute error $E_i$ and the relative error $E_r$ after the $i$th update;
9. termination test: if $E_r < \varepsilon$ or the number of iterations exceeds the maximum limit, stop; otherwise repeat steps 5-8;
10. obtain the dictionaries $\{D_i\}_{i=1}^N$;
11. obtain $U_d$ by singular value decomposition, giving the reduced dictionary $P$;
12. substitute the dictionary $P$ into the mapping matrices $T_i$;
Output: the compressed tensor.
7. The tensor compression method based on energy-gathering dictionary learning according to claim 6, characterized in that, in order that the reduced dictionary $P$ retain the principal components of the original dictionary $D$, the dictionary $D$ is pretreated as follows:
$D^T D = u \Lambda v^T$
where $u$ is the left singular matrix of the singular value decomposition, $v$ the right singular matrix and $\Lambda$ the singular value matrix, $\Lambda = \mathrm{diag}(\sigma_1, \ldots, \sigma_k)$; the singular values are then updated, where $k$ denotes the number of dictionary columns, $t_d$ the principal-component threshold, $\hat\sigma_1, \ldots, \hat\sigma_d$ the updated leading $d$ singular values and $\hat\sigma_{d+1}, \ldots, \hat\sigma_{d+r}$ the updated trailing $r$ singular values, giving the new singular value matrix $\hat\Lambda$;
the new dictionary is formed from the original left and right singular matrices, i.e. $\hat D = u \hat\Lambda v^T$.
8. The tensor compression method based on energy-gathering dictionary learning according to claim 7, characterized in that after the pretreatment the dictionaries are updated using the tensor-based multidimensional dictionary learning algorithm TKSVD, which updates the dictionaries of the high-dimensional tensor signal and, unlike the K-SVD dictionary update algorithm, differs in two respects:
(1) learning tensor dictionaries differs from the 2-D learning mode: from the definition of the tensor Frobenius norm a least-squares objective is obtained, in which $D_N$ denotes the $N$th-mode dictionary, $D_1$ the first-mode dictionary and $I_T$ the $T$th-order identity matrix; solving it by least squares gives the updated value of dictionary $D_i$, where $Y_{(i)}$ denotes the mode-$i$ unfolding matrix of the tensor and $\dagger$ the pseudoinverse, i.e. $M^{\dagger} = (M^T M)^{-1} M^T$, $M^{\dagger}$ denoting the pseudoinverse of matrix $M$ and $M^T$ its transpose;
(2) after each iteration, the absolute and relative errors between the data recoverable from the current dictionaries and sparse coefficients and the original training data are computed; the absolute error after the $i$th iteration is still defined with the tensor Frobenius norm, where $S$ denotes the coefficient tensor and the error term is the difference between the actual signal and the approximate signal after the $G$th atom is removed;
after the dictionary update is complete, the dictionary is reduced in dimension by a singular value decomposition of the updated dictionary,
$D = [U_d \; U_r] \, \mathrm{diag}(\Theta_d, \Theta_r) \, [V_d \; V_r]^T$
where $U_d$ denotes the leading $d$ columns of the left singular matrix, $U_r$ its trailing $r$ columns, $\Theta_d$ the leading $d$ singular values of the singular value matrix, $\Theta_r$ its trailing $r$ singular values, $V_d$ the leading $d$ columns of the right singular matrix and $V_r$ its trailing $r$ columns; the reduced dictionary $P$ is then expressed in terms of $U_d$, and substituting $P$ into the mapping matrix $T$ completes the tensor compression.
CN201910126833.5A 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning Active CN109921799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910126833.5A CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910126833.5A CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Publications (2)

Publication Number Publication Date
CN109921799A true CN109921799A (en) 2019-06-21
CN109921799B CN109921799B (en) 2023-03-31

Family

ID=66961845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126833.5A Active CN109921799B (en) 2019-02-20 2019-02-20 Tensor compression method based on energy-gathering dictionary learning

Country Status (1)

Country Link
CN (1) CN109921799B (en)


Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1228887A (en) * 1996-07-24 1999-09-15 尤尼西斯公司 Data compression and decompression system with immediate dictionary updating interleaved with string search
CN1523372A (en) * 2003-02-21 2004-08-25 重庆邮电学院 Estimation method for radio orientation incoming wave direction based on TD-SCMA
US20050049990A1 (en) * 2003-08-29 2005-03-03 Milenova Boriana L. Support vector machines processing system
US20100246920A1 (en) * 2009-03-31 2010-09-30 Iowa State University Research Foundation, Inc. Recursive sparse reconstruction
CN102130862A (en) * 2011-04-26 2011-07-20 重庆邮电大学 Method for reducing overhead caused by channel estimation of communication system
US20130191425A1 (en) * 2012-01-20 2013-07-25 Fatih Porikli Method for Recovering Low-Rank Matrices and Subspaces from Data in High-Dimensional Matrices
JP2014096118A (en) * 2012-11-12 2014-05-22 Nippon Telegr & Teleph Corp <Ntt> Device, method, and program for missing value prediction and device, method, and program for commodity recommendation
WO2015042873A1 (en) * 2013-09-27 2015-04-02 Google Inc. Decomposition techniques for multi-dimensional data
US20150326247A1 (en) * 2014-05-07 2015-11-12 Realtek Semiconductor Corporation Dictionary-based compression method, dictionary-based decompression method and dictionary composing method
US20160012334A1 (en) * 2014-07-08 2016-01-14 Nec Laboratories America, Inc. Hierarchical Sparse Dictionary Learning (HiSDL) for Heterogeneous High-Dimensional Time Series
CN104318064A (en) * 2014-09-26 2015-01-28 大连理工大学 Three-dimensional head-related impulse response data compressing method based on canonical multi-decomposition
CN104683074A (en) * 2015-03-13 2015-06-03 重庆邮电大学 Large-scale MIMO system limiting feedback method based on compressive sensing
CN104933684A (en) * 2015-06-12 2015-09-23 北京工业大学 Light field reconstruction method
US20160371563A1 (en) * 2015-06-22 2016-12-22 The Johns Hopkins University System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing
CN106506007A (en) * 2015-09-08 2017-03-15 联发科技(新加坡)私人有限公司 A kind of lossless data compression and decompressing device and its method
CN106023098A (en) * 2016-05-12 2016-10-12 西安电子科技大学 Image repairing method based on tensor structure multi-dictionary learning and sparse coding
CN106097278A (en) * 2016-06-24 2016-11-09 北京工业大学 The sparse model of a kind of multidimensional signal, method for reconstructing and dictionary training method
WO2018149133A1 (en) * 2017-02-17 2018-08-23 深圳大学 Method and system for face recognition by means of dictionary learning based on kernel non-negative matrix factorization, and sparse feature representation
US20180306882A1 (en) * 2017-04-24 2018-10-25 Cedars-Sinai Medical Center Low-rank tensor imaging for multidimensional cardiovascular mri
CN107516129A (en) * 2017-08-01 2017-12-26 北京大学 The depth Web compression method decomposed based on the adaptive Tucker of dimension
CN107507253A (en) * 2017-08-15 2017-12-22 电子科技大学 Based on the approximate more attribute volume data compression methods of high order tensor
CN107561576A (en) * 2017-08-31 2018-01-09 电子科技大学 Seismic signal method based on dictionary learning regularization rarefaction representation
CN108305297A (en) * 2017-12-22 2018-07-20 上海交通大学 A kind of image processing method based on multidimensional tensor dictionary learning algorithm
CN108521586A (en) * 2018-03-20 2018-09-11 西北大学 The IPTV TV program personalizations for taking into account time context and implicit feedback recommend method
CN108510013A (en) * 2018-07-02 2018-09-07 电子科技大学 The steady tensor principal component analytical method of improvement based on low-rank kernel matrix
CN109241491A (en) * 2018-07-28 2019-01-18 天津大学 The structural missing fill method of tensor based on joint low-rank and rarefaction representation

Non-Patent Citations (18)

* Cited by examiner, † Cited by third party
Title
ARGUELLO H: "Higher-order computational model for coded aperture spectral imaging", 《APPL OPTICS》 *
CAIAFA CF: "Block sparse representations of tensors using Kronecker bases", 《2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *
CASSIO FRAGA DANTAS: "Learning fast dictionaries for sparse representations using low-rank tensor decompositions", 《LVA/ICA 2018 - 14TH INTERNATIONAL CONFERENCE ON LATENT VARIABLE ANALYSIS AND SIGNAL SEPARATION》 *
CHONG.Y: "Block-Sparse Tensor Based Spatial-Spectral Joint Compression of Hyperspectral Images", 《LECTURE NOTES IN COMPUTER SCIENCE》 *
KOLDA TG: "Tensor decompositions and applications", 《SIAM REV》 *
LY NH: "Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning", 《IEEE J SEL TOP APPL》 *
PRATER-BENNETTE: "Separation of Composite Tensors with Sparse Tucker Representations", 《PROCEEDINGS OF SPIE》 *
S. ZUBAIR: "Tensor dictionary learning with sparse Tucker decomposition", 《2013 18TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP)》 *
SHI JIARONG: "Sparse representation of high-order tensor signal", 《COMPUTER ENGINEERING & DESIGN》 *
YANG Y: "A multi-affine model for tensor decomposition", 《2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCV WORKSHOPS)》 *
YUAN L: "High-order tensor completion for data recovery via sparse tensor-train optimization", 《2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)》 *
刘杰平 et al.: "Dictionary learning algorithm combining coefficient-reuse orthogonal matching pursuit" (in Chinese), 《华南理工大学学报(自然科学版)》 *
吴姗: "Magnetic resonance imaging algorithm based on sparse tensors" (in Chinese), 《中国优秀硕士学位论文全文数据库》 *
张祖凡: "Interference alignment algorithm based on eigenvector splitting in MIMO-IBC" (in Chinese), 《华中科技大学学报(自然科学版)》 *
李斌: "Theory and applications of compressed sensing of multidimensional signals based on tensors and nonlinear sparsity" (in Chinese), 《中国优秀硕士学位论文全文数据库》 *
熊李艳: "Robust kernel low-rank representation algorithm based on tensor decomposition" (in Chinese), 《科学技术与工程》 *
秦红星: "Multiscale tensor representation and visualization of volume data" (in Chinese), 《计算机工程与应用》 *
郑思龙 et al.: "Nonlinear dimensionality reduction method based on dictionary learning" (in Chinese), 《自动化学报》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110579967A (en) * 2019-09-23 2019-12-17 中南大学 process monitoring method based on simultaneous dimensionality reduction and dictionary learning
CN111241076A (en) * 2020-01-02 2020-06-05 西安邮电大学 Stream data increment processing method and device based on tensor chain decomposition
CN111241076B (en) * 2020-01-02 2023-10-31 西安邮电大学 Stream data increment processing method and device based on tensor chain decomposition

Also Published As

Publication number Publication date
CN109921799B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN107122809B (en) Neural network feature learning method based on image self-coding
WO2021238333A1 (en) Text processing network, neural network training method, and related device
CN110060286B (en) Monocular depth estimation method
JP2012507793A (en) Complexity normalization pattern representation, search, and compression
CN110032951A A convolutional neural network compression method based on Tucker decomposition and principal component analysis
WO2020010602A1 (en) Face recognition and construction method and system based on non-linear non-negative matrix decomposition, and storage medium
CN106503659B Action recognition method based on sparse coding tensor decomposition
CN112732864B (en) Document retrieval method based on dense pseudo query vector representation
CN109921799A (en) A kind of tensor compression method based on cumulative amount dictionary learning
CN106844524A (en) A kind of medical image search method converted based on deep learning and Radon
CN107515843A (en) Based on the approximate anisotropy data compression method of tensor
CN104504015A (en) Learning algorithm based on dynamic incremental dictionary update
CN106599903B (en) Signal reconstruction method for weighted least square dictionary learning based on correlation
Nguyen et al. Improving transformers with probabilistic attention keys
CN108090409B (en) Face recognition method, face recognition device and storage medium
CN104504406A (en) Rapid and high-efficiency near-duplicate image matching method
Wang et al. Efficient and robust discriminant dictionary pair learning for pattern classification
CN115455226A (en) Text description driven pedestrian searching method
CN111680529A (en) Machine translation algorithm and device based on layer aggregation
CN114492566A (en) Weight-adjustable high-dimensional data dimension reduction method and system
CN117611428A (en) Fashion character image style conversion method
CN117611701A (en) Alzheimer&#39;s disease 3D MRI acceleration sampling generation method based on diffusion model
CN112734025B (en) Neural network parameter sparsification method based on fixed base regularization
CN109727219A (en) A kind of image de-noising method and system based on image sparse expression
CN116226357A (en) Document retrieval method under input containing error information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant