CN104103060A - Dictionary expression method and device in sparse model - Google Patents


Info

Publication number
CN104103060A
Authority
CN
China
Prior art keywords
dictionary
sparse
sample
Prior art date
Legal status
Granted
Application number
CN201310115751.3A
Other languages
Chinese (zh)
Other versions
CN104103060B (en)
Inventor
王宇
张宇
王栋
唐胜
Current Assignee
Tongling Huiheng Electronic Technology Co ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201310115751.3A priority Critical patent/CN104103060B/en
Publication of CN104103060A publication Critical patent/CN104103060A/en
Application granted granted Critical
Publication of CN104103060B publication Critical patent/CN104103060B/en
Status: Active

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention provides a dictionary expression method and device in a sparse model, relating to the field of signal processing; the method reduces reconstruction errors through the solution process of a discrete dictionary and obtains clear sample classification information through that solution process. The method comprises: collecting signal samples according to a received signal; establishing a sparse model according to the data distribution characteristics of the signal samples; obtaining a sparse code of the signal by calculating the sparse model according to the signal samples; iterating the sparse code into the sparse model to obtain a discrete dictionary; obtaining at least one sample subset of the signal samples by cyclically iterating the discrete dictionary; terminating the cyclic iteration once a judgment condition is satisfied; performing statistics on the at least one signal sample to constitute a new signal; and outputting the new signal obtained from the statistics on the sample subset. The embodiments of the invention apply to digital signal processing and image processing technology.

Description

Method and device for representing dictionary in sparse model
Technical Field
The invention relates to the field of signal processing, in particular to a dictionary representation method and equipment in a sparse model.
Background
In the development of signal technology, conventional signal processing methods face the problem of massive data storage and transmission. To address this problem, compressed sensing theory was proposed, which processes signals at a rate far below the conventional Nyquist sampling rate. Compressed sensing is the theory of acquiring and reconstructing signals on a suitable over-complete primitive set: the input signal can be accurately reconstructed from only a few elements, so the reconstructed signal obtains a sparse code on the over-complete element set. The process of generating an over-complete dictionary from a signal set by obtaining the sparse code is sparse modeling. The sparse model includes the sparse code and the over-complete dictionary generated from the signal set.
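As an informal illustration of this background (not part of the patent), a signal in R^2 can be represented exactly on an over-complete set of four primitives using only one non-zero coefficient; the atoms below are arbitrary examples:

```python
import numpy as np

# Over-complete primitive set: 4 atoms in R^2 (more atoms than dimensions).
atoms = np.array([[1.0, 0.0, 0.6, -0.8],
                  [0.0, 1.0, 0.8,  0.6]])
# Sparse code: only one of the four coefficients is non-zero.
code = np.array([0.0, 0.0, 2.0, 0.0])
# The signal is reconstructed exactly from a single primitive.
signal = atoms @ code  # -> array([1.2, 1.6])
```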
Because the reconstruction errors of signals reconstructed in a sparse model are large, the prior art in dictionary learning for reducing them is generally divided into two categories.
First, a sample set is manually selected as a redundant large dictionary: a large number of high-quality signal samples are chosen by hand as the dictionary, which ensures that the sparse code has good classification capability. However, manual selection cannot guarantee that the reconstruction error is reduced; an overly large dictionary causes excessive computational complexity and storage burden; and because an effective sample subset cannot be selected, a large number of samples end up in the dictionary, further increasing the computation and storage burden.
Second, a relatively compact dictionary is solved by a continuous method whose aim is to reduce the reconstruction error: an error matrix is obtained by solving, then matrix decomposition is used to solve eigenvectors as dictionary elements, or the reconstruction error is taken as one term of an objective function and dictionary elements that reduce the error are solved by methods such as stochastic gradient descent. However, since the primitives in a continuous dictionary do not belong to the sample set, their interpretability is poor and they carry no semantic meaning; sparse codes obtained on a continuous dictionary have no explicit class information.
Disclosure of Invention
The embodiment of the invention provides a dictionary representation method and equipment in a sparse model, which reduce reconstruction errors through a solution process of a discrete dictionary and obtain clear sample category information through the solution process.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, a method for representing a dictionary in a sparse model is provided, including:
acquiring a signal sample according to a received signal, and establishing a sparse model according to the data distribution characteristics of the signal sample;
obtaining sparse codes of the signals by calculating the sparse model according to the signal samples;
iterating the sparse code into the sparse model to obtain a discrete dictionary, circularly iterating the discrete dictionary to obtain a sample subset of at least one signal sample, terminating the circular iteration until a preset judgment condition is met, and counting the at least one signal sample to form a new signal;
outputting the new signal obtained by counting the sample subset.
In a first possible implementation manner, with reference to the first aspect, the sparse model is specifically expressed as:
$$\min_{\{d_1,d_2,\ldots,d_k\},\,a}\ \sum_{i=1}^{N}\Big\|x_i-\sum_j d_j a_i^j\Big\|_2^2+\lambda\|a_i\|_1,\quad \text{subject to }\{d_1,d_2,\ldots,d_k\}\subseteq X,$$
wherein $a_i$ is the sparse code, $j$ in $a_i^j$ is the index of an entry of the sparse code $a_i$, $\{d_1,d_2,\ldots,d_k\}$ is a sample subset of the sample set $X$, $d_j$ is a sample in the sample subset, $x_i$ is a single sample composing the sample set $X$, $N$ is the number of samples, and the coefficient $\lambda$ limits the sparseness of the sparse code, the sparseness being the number of non-zero elements in the sparse code.
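A minimal numerical sketch of this objective (illustrative, not the patent's implementation; the function name is assumed, samples are columns of X and codes columns of A):

```python
import numpy as np

def sparse_model_objective(X, D, A, lam):
    """Evaluate sum_i ||x_i - D a_i||_2^2 + lam * ||a_i||_1.
    X: (m, N) sample set, D: (m, K) dictionary whose columns are drawn
    from X, A: (K, N) sparse codes, lam: sparseness coefficient."""
    residual = X - D @ A  # x_i - sum_j d_j a_i^j, one column per sample
    return np.sum(residual ** 2) + lam * np.sum(np.abs(A))
```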
In a second possible implementation manner, with reference to the first aspect or the first possible implementation manner of the first aspect, before the obtaining of the sparse coding of the signal by calculating the sparse model according to the signal samples, the method further includes:
setting a total number of loops as T, a maximum error value as eta, wherein the total number of loops T is the loop times of loop iteration calculation of the sparse model, the maximum error value eta is a threshold value of an error value set when a reconstruction error is calculated through the sparse model, and the total number of loops or the maximum error value is a judgment condition for the end of loop iteration calculation;
set of samples { x1,x2,……,xNNormalizing, and arbitrarily selecting K samples as a discrete dictionary, wherein the sample set is normalized by summing samples in the sample set { x } as a constant 1 by a square1,x2,……,xNNormalization is expressed asAnd the number K of the samples is the number of the samples arbitrarily selected in the sample set.
In a third possible implementation manner, with reference to the first aspect or any one of the possible implementation manners included in the first aspect, the obtaining a sparse code of the signal by calculating the sparse model according to the signal samples includes:
setting the discrete dictionary as a known condition, and substituting the discrete dictionary into an expression of the sparse model <math> <mrow> <munder> <mi>min</mi> <mi>a</mi> </munder> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mo>|</mo> <mo>|</mo> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>-</mo> <mi>D</mi> <msub> <mi>a</mi> <mi>i</mi> </msub> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>+</mo> <mi>&lambda;</mi> <mo>|</mo> <mo>|</mo> <msub> <mi>a</mi> <mi>i</mi> </msub> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>1</mn> </msub> <mo>;</mo> </mrow> </math>
obtaining the sparse codes of the $N$ samples from the expression through the least angle regression (Lasso-LARS) algorithm, wherein the sparse codes of the $N$ samples are one code per sample, namely $\{a_1,a_2,\ldots,a_N\}$.
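A hedged sketch of this coding step, using scikit-learn's LARS-based lasso solver as an off-the-shelf stand-in (the patent does not specify solver settings; names here are illustrative):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def lasso_lars_codes(X, D, lam):
    """Solve min_a ||x_i - D a_i||_2^2 + lam ||a_i||_1 for each sample.
    X: (m, N) samples as columns, D: (m, K) dictionary as columns.
    sparse_encode expects row layout, hence the transposes; returns (K, N)."""
    A = sparse_encode(X.T, D.T, algorithm="lasso_lars", alpha=lam)
    return A.T
```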
In a fourth possible implementation manner, with reference to the first aspect or any one of the possible implementation manners included in the first aspect, the iterating the sparse coding into the sparse model to obtain a discrete dictionary includes:
setting the sparse code as a known condition, and substituting the sparse code into the expression of the sparse model to obtain an expression of a computational discrete dictionary, wherein the expression of the computational discrete dictionary is when { d1,d2,...,dkWhen the ∈ X is larger than the preset value,the sparse coding is A ═ a1,a2,……,aNAnd (c) the step of (c) in which,entering a paradigm for solving the discrete dictionary after setting the sparse coding to a known condition, X being a known set of samples, D being the discrete dictionary, { D }1,d2,...,dkE is a sample set in a discrete dictionary;
when dictionary element d in the sample subsetiSequentially substituting the expressions for calculating the discrete dictionary, and setting other dictionary elements as known conditions when updating to the Kth dictionary element, and calculating the expressions for calculating the discrete dictionaryIn (1)Replacing to E, so that the expression of the calculation discrete dictionary is converted into <math> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <munder> <mi>min</mi> <mrow> <mo>{</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mo>}</mo> </mrow> </munder> <mo>|</mo> <mo>|</mo> <mi>X</mi> <mo>-</mo> <munder> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>&NotEqual;</mo> <mi>k</mi> </mrow> </munder> <msub> <mi>d</mi> <mi>i</mi> </msub> <mi>A</mi> <mo>-</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mi>A</mi> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mo>=</mo> <munder> <mi>min</mi> <mrow> <mo>{</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mo>}</mo> </mrow> </munder> <mo>|</mo> <mo>|</mo> <mi>E</mi> <mo>-</mo> <msub> <mi>d</mi> <mi>k</mi> </msub> <mi>A</mi> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> </mrow> </math> And computing a discrete dictionary, wherein X is a known sample set, A is the sparse code, diFor known dictionary elements, E-dkA is used to compute the discrete dictionary by screening samples to form a new output signal,for X-DA in dictionary elements diAnd carrying out expansion after sequential substitution.
In a fifth possible implementation manner, with reference to the fourth possible implementation manner, the substituting of the dictionary elements $d_i$ in the sample subset into the expression for computing the discrete dictionary in turn, and the updating up to the $K$-th dictionary element, specifically include:
selecting a subset of samples referencing the Kth dictionary primitive from the sparse coding, the subset of samples being
substituting each sample in the sample subset into the conversion formula of the expression for computing the discrete dictionary to calculate the error value of the sample under the conversion formula, and comparing that error value with the current error value;
If the error value is greater than the current error value, discarding the error value, and removing the sample corresponding to the error value from the sample subset;
or,
if the error value is smaller than the current error value, retaining the error value corresponding to the sample and updating it as the new current error value, removing the sample corresponding to the error value from the sample subset, and cyclically substituting the samples in the sample subset into the conversion formula to calculate error values until the sample subset is an empty set.
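The screening loop of this fifth implementation can be sketched as follows (a simplified reading of the claim; function and variable names are illustrative):

```python
import numpy as np

def screen_candidates(X, E, ak, subset, current_err):
    """Try each candidate sample from the subset as dictionary element d_k
    in ||E - d_k a^k||_2; retain an error value only when it beats the
    current one, remove each tried sample, stop when the subset is empty."""
    best_idx, best_err = None, current_err
    pool = list(subset)
    while pool:                      # loop until the sample subset is empty
        c = pool.pop()               # remove the sample from the subset
        err = np.linalg.norm(E - np.outer(X[:, c], ak))
        if err < best_err:           # retain and update the current error
            best_idx, best_err = c, err
    return best_idx, best_err
```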
In a second aspect, an electronic device is provided, comprising:
the acquisition unit is used for acquiring signal samples according to received signals and establishing a sparse model according to the data distribution characteristics of the signal samples;
the calculation unit is used for calculating the sparse model according to the signal samples acquired by the acquisition unit to obtain sparse codes of the signals;
the calculation unit is further configured to iterate the sparse code into the sparse model to calculate a discrete dictionary, obtain a sample subset of at least one signal sample by circularly iterating the discrete dictionary, terminate the circular iteration until a predetermined judgment condition is met, and perform statistics on the at least one signal sample to form a new signal;
a transmitting unit, configured to output the new signal obtained by counting the sample subset.
In a first possible implementation manner, with reference to the second aspect, the sparse model is specifically expressed as:
$$\min_{\{d_1,d_2,\ldots,d_k\},\,a}\ \sum_{i=1}^{N}\Big\|x_i-\sum_j d_j a_i^j\Big\|_2^2+\lambda\|a_i\|_1,\quad \text{subject to }\{d_1,d_2,\ldots,d_k\}\subseteq X,$$
wherein $a_i$ is the sparse code, $j$ in $a_i^j$ is the index of an entry of the sparse code $a_i$, $\{d_1,d_2,\ldots,d_k\}$ is a sample subset of the sample set $X$, $d_j$ is a sample in the sample subset, $x_i$ is a single sample composing the sample set $X$, $N$ is the number of samples, and the coefficient $\lambda$ limits the sparseness of the sparse code, the sparseness being the number of non-zero elements in the sparse code.
In a second possible implementation manner, with reference to the second aspect or the first possible implementation manner, the apparatus further includes:
the setting unit is used for setting the total number of cycles as T and the maximum error value as eta before the sparse coding of the signal is obtained by calculating the sparse model according to the signal sample, wherein the total number of cycles is the cycle number of the sparse model cycle iteration calculation, the maximum error value eta is a threshold value of an error value set when the reconstruction error is calculated through the sparse model, and the total number of cycles or the maximum error value is a judgment condition for calculating the end of cycle iteration;
a selection unit, configured to normalize the sample set $\{x_1,x_2,\ldots,x_N\}$ and arbitrarily select $K$ samples as a discrete dictionary, wherein the normalization makes the sum of squares of each sample equal to the constant 1, that is, $\|x_i\|_2^2=1$, and the number $K$ is the number of samples arbitrarily selected from the sample set.
In a third possible implementation manner, with reference to the second aspect or any one possible implementation manner included in the second aspect, the calculating unit specifically includes:
a calculation subunit, configured to set the discrete dictionary as a known condition and substitute it into the expression of the sparse model:
$$\min_{a}\ \sum_{i=1}^{N}\|x_i-Da_i\|_2^2+\lambda\|a_i\|_1;$$
the calculation subunit is further configured to obtain the sparse codes of the $N$ samples from the expression through the least angle regression (Lasso-LARS) algorithm, wherein the sparse codes of the $N$ samples are one code per sample, namely $\{a_1,a_2,\ldots,a_N\}$.
In a fourth possible implementation manner, with reference to the second aspect or any one possible implementation manner included in the second aspect, the calculating unit further includes:
a conversion subunit, configured to set the sparse code as a known condition and substitute the sparse code into the expression of the sparse model to obtain the expression for computing the discrete dictionary:
$$\min_{\{d_1,d_2,\ldots,d_k\}}\ \|X-DA\|_2^2,\quad \{d_1,d_2,\ldots,d_k\}\subseteq X,$$
wherein the sparse code is $A=\{a_1,a_2,\ldots,a_N\}$, $\|\cdot\|$ is the norm used to solve the discrete dictionary once the sparse code is set as a known condition, $X$ is the known sample set, $D$ is the discrete dictionary, and $\{d_1,d_2,\ldots,d_k\}$ is the sample set in the discrete dictionary;
the conversion subunit is further configured to substitute the dictionary elements $d_i$ in the sample subset into the expression for computing the discrete dictionary in turn; when updating the $K$-th dictionary element, set the other dictionary elements as known conditions and replace the term $X-\sum_{i\neq k}d_iA$ in the expression by $E$, so that the expression for computing the discrete dictionary is converted into
$$f(d_k)=\min_{\{d_k\}}\Big\|X-\sum_{i\neq k}d_iA-d_kA\Big\|_2=\min_{\{d_k\}}\|E-d_kA\|_2,$$
and compute the discrete dictionary, wherein $X$ is the known sample set, $A$ is the sparse code, $d_i$ are the known dictionary elements, $E-d_kA$ is used to compute the discrete dictionary by screening samples to form the new output signal, and $X-\sum_{i\neq k}d_iA$ is the expansion of $X-DA$ after the dictionary elements $d_i$ are substituted in turn.
In a fifth possible implementation manner, in combination with the fourth possible implementation manner, the calculation unit is specifically configured to select, from the sparse code, the sample subset that references the $K$-th dictionary primitive;
substitute each sample in the sample subset into the conversion formula of the expression for computing the discrete dictionary to calculate the error value of the sample under the conversion formula, and compare that error value with the current error value;
If the error value is greater than the current error value, discarding the error value, and removing the sample corresponding to the error value from the sample subset;
or,
if the error value is smaller than the current error value, retain the error value corresponding to the sample and update it as the new current error value, remove the sample corresponding to the error value from the sample subset, and cyclically substitute the samples in the sample subset into the conversion formula to calculate error values until the sample subset is an empty set.
According to the dictionary representation method and device in the sparse model provided by the embodiments of the invention, the sparse code is obtained by calculating the sparse model, and the discrete dictionary is then obtained by iterative calculation according to the sparse code. The sparseness of the sparse code is controlled by limiting the value range of the coefficient $\lambda$; learning the discrete dictionary solves the problem of large reconstruction errors, solving the discrete dictionary reduces the amount of calculation, and clear class information of the samples is obtained through the solution of the discrete dictionary.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a representation method of a dictionary in a sparse model according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a dictionary representation method in another sparse model according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a technical effect of a representation method for a dictionary in a sparse model according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for representing the dictionary in the sparse model, which is provided by the invention, is shown in fig. 1 and specifically comprises the following steps:
101. the electronic equipment acquires a signal sample according to the received signal and establishes a sparse model according to the data distribution characteristics of the signal sample.
Wherein the sparse model is embodied as:
$$\min_{\{d_1,d_2,\ldots,d_k\},\,a}\ \sum_{i=1}^{N}\Big\|x_i-\sum_j d_j a_i^j\Big\|_2^2+\lambda\|a_i\|_1,\quad \text{subject to }\{d_1,d_2,\ldots,d_k\}\subseteq X,$$
wherein $a_i$ is the sparse code, $a_i^j$ is the $j$-th entry of the sparse code $a_i$, $\{d_1,d_2,\ldots,d_k\}$ is a sample subset of the sample set $X$, $d_j$ is a sample in the sample subset, $x_i$ is a single sample forming the sample set $X$, $N$ is the number of samples, and the coefficient $\lambda$ defines the sparseness of the sparse code, where the sparseness is the number of non-zero elements in the sparse code.
102. And the electronic equipment obtains the sparse code of the signal by calculating a sparse model according to the signal sample.
Here, the electronic device solves for the sparse code by using the least absolute shrinkage and selection operator (Lasso) algorithm: first normalizing the sample set $\{x_1,x_2,\ldots,x_N\}$ so that $\|x_i\|_2^2=1$, randomly selecting $K$ samples as an initialization dictionary $D$, and, with the dictionary $D$ fixed, transforming the sparse model into
$$\min_{a}\ \sum_{i=1}^{N}\|x_i-Da_i\|_2^2+\lambda\|a_i\|_1;$$
the transformed expression is then solved by the least angle regression (Lasso-LARS) algorithm to obtain the sparse codes $\{a_1,a_2,\ldots,a_N\}$ of the $N$ samples, respectively.
In the expression of the sparse model provided in the embodiment of the present invention, $\|\cdot\|$ denotes a norm; the norms mentioned in the embodiments of the present invention are the $\ell_1$ norm and the $\ell_2$ norm.
103. The electronic equipment iterates the sparse code into a sparse model to obtain a discrete dictionary, obtains a sample subset of at least one signal sample through the processing of the discrete dictionary of the cyclic iteration, terminates the cyclic iteration until a preset judgment condition is met, and counts at least one signal sample to form a new signal.
Wherein the discrete dictionary is a sample subset $D$ of the sample set $X$, the sample subset $D$ being composed of dictionary elements $d_j$.
Here, with the sparse code $A=\{a_1,a_2,\ldots,a_N\}$ fixed, iterating $A$ back yields the conversion formula for solving the discrete dictionary,
$$\min_{\{d_1,d_2,\ldots,d_k\}}\ \|X-DA\|_2^2,\quad \{d_1,d_2,\ldots,d_k\}\subseteq X.$$
By updating the dictionary element $d_i$ up to the $K$-th element and fixing the other dictionary elements, the term $X-\sum_{i\neq k}d_iA$ is replaced by $E$, and converting again gives
$$f(d_k)=\min_{\{d_k\}}\Big\|X-\sum_{i\neq k}d_iA-d_kA\Big\|_2=\min_{\{d_k\}}\|E-d_kA\|_2.$$
In particular, $A$ in $E$ is the fixed sparse coding matrix, the dictionary elements $d_i$ are also fixed, and $X$ is the known sample set, so $E$ can be found by calculation. Here the discrete dictionary is solved by updating the dictionary element $d_i$ up to $d_k$; that is, the dictionary elements in the solved discrete dictionary are screened through the sample subsets corresponding to the dictionary elements, so that the solved discrete dictionary keeps the data distribution characteristics of the samples, yields sparse codes, and has clear class information.
104. The electronic device outputs a new signal derived by counting the subset of samples.
According to the dictionary representation method in the sparse model provided by this embodiment, the sparse code is obtained by calculating the sparse model, and the discrete dictionary is then obtained by iterative calculation according to the sparse code. The sparseness of the sparse code is controlled by limiting the value range of the coefficient $\lambda$; learning the discrete dictionary solves the problem of large reconstruction errors, solving the discrete dictionary reduces the amount of calculation, and clear class information of the samples is obtained through the solution of the discrete dictionary.
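Putting the steps of fig. 1 together, an end-to-end sketch might look like the following (illustrative only: the solver, parameter names, and stopping logic are assumptions layered on the text, with T loops and error threshold eta as the judgment conditions):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def learn_discrete_dictionary(X, K, lam=0.05, T=5, eta=1e-6, seed=0):
    """Alternate between Lasso-LARS coding and re-picking every dictionary
    element from the (normalized) sample set itself, so each atom stays a
    real sample; stop after T loops or when the error falls below eta."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    rng = np.random.default_rng(seed)
    D = Xn[:, rng.choice(Xn.shape[1], size=K, replace=False)].copy()
    for _ in range(T):
        A = sparse_encode(Xn.T, D.T, algorithm="lasso_lars", alpha=lam).T
        for k in range(K):                          # update atom k
            E = Xn - D @ A + np.outer(D[:, k], A[k, :])
            errs = [np.linalg.norm(E - np.outer(Xn[:, c], A[k, :]))
                    for c in range(Xn.shape[1])]    # screen candidate samples
            D[:, k] = Xn[:, int(np.argmin(errs))]
        if np.linalg.norm(Xn - D @ A) < eta:        # judgment condition
            break
    return D, A
```

Because each atom is re-chosen from the sample columns themselves, the learned dictionary keeps the interpretability and class information the patent attributes to discrete dictionaries.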
Specifically, the following description will be given with reference to specific examples.
On the basis of the embodiment shown in fig. 1, referring to fig. 2, an embodiment of the present invention provides a method for representing a dictionary in a sparse model, which mainly includes: the electronic equipment obtains sparse codes by calculating a sparse model, and then obtains a discrete dictionary by calculating a sparse model expression according to sparse code iteration; referring to fig. 2, a process of obtaining a sparse code by an electronic device through calculation and obtaining a discrete dictionary by iterating according to the sparse code is shown, which includes the following specific steps:
201. the electronic equipment acquires a signal sample according to the received signal and establishes a sparse model according to the data distribution characteristics of the signal sample.
Wherein the sparse model is embodied as:
$$\min_{\{d_1,d_2,\ldots,d_k\},\,a}\ \sum_{i=1}^{N}\Big\|x_i-\sum_j d_j a_i^j\Big\|_2^2+\lambda\|a_i\|_1,\quad \text{subject to }\{d_1,d_2,\ldots,d_k\}\subseteq X,$$
wherein $a_i$ is the sparse code, $a_i^j$ is the $j$-th entry of the sparse code $a_i$, $\{d_1,d_2,\ldots,d_k\}$ is a sample subset of the sample set $X$, $d_j$ is a sample in the sample subset, $x_i$ is a single sample forming the sample set $X$, $N$ is the number of samples, and the coefficient $\lambda$ defines the sparseness of the sparse code, where the sparseness is the number of non-zero elements in the sparse code.
202. The electronic device sets the total number of cycles to T and the maximum error value to η.
The total number of cycles T is the number of rounds of the iterative sparse-model calculation, and the maximum error value η is a threshold on the reconstruction error computed from the sparse model; reaching either value is the condition for ending the calculation.
Both values are preset and provide the judgment standard for the repeated sample selection performed during the iterative computation of the discrete dictionary: the calculation ends when the number of iterations reaches T, or when the error value of the selected samples falls below the maximum error value η.
203. The electronic device normalizes the sample set {x_1, x_2, ..., x_N} and arbitrarily chooses K samples as the initial discrete dictionary.
Wherein the sample set is normalized so that the sum of squares of each sample equals the constant 1, i.e. the normalization of {x_1, x_2, ..., x_N} is expressed as ||x_i||_2^2 = 1; the number K is the number of samples arbitrarily selected from the sample set.
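A minimal sketch of step 203, assuming columns are samples; the function name, the zero-column guard, and the use of a seeded NumPy generator are my own additions, not from the patent:

```python
import numpy as np

def init_discrete_dictionary(X, K, seed=0):
    """Normalize each sample to unit l2 norm (sum of squares = 1),
    then pick K distinct samples at random as the initial dictionary.

    X: (m, N) matrix whose columns are the samples x_1..x_N.
    Returns (X_normalized, D, idx) with D = X_normalized[:, idx].
    """
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    Xn = X / np.where(norms == 0, 1.0, norms)       # guard against zero samples
    rng = np.random.default_rng(seed)
    idx = rng.choice(Xn.shape[1], size=K, replace=False)  # K distinct samples
    return Xn, Xn[:, idx], idx
```

Because the dictionary columns are drawn from the normalized samples, every dictionary element automatically satisfies the same unit-norm constraint.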
In the embodiment of the invention, the discrete dictionary is computed by alternation: the dictionary is fixed to compute the sparse codes, and the sparse codes are then fixed and iterated back to recompute the dictionary. Step 203 therefore determines the initial dictionary before the sparse coding is computed, so that the sparse codes can be calculated with the dictionary treated as a fixed variable.
204. The electronic device obtains the sparse codes of the signal by solving the sparse model on the signal samples.
Here, the electronic device solves the sparse coding with the Lasso (Least Absolute Shrinkage and Selection Operator) algorithm. First the sample set {x_1, x_2, ..., x_N} is normalized so that ||x_i||_2^2 = 1, and K samples are randomly selected as the initialization dictionary D. With the dictionary D set as a known condition, the expression

min_{{d_1, d_2, ..., d_k}, a} Σ_{i=1}^{N} ||x_i − Σ_j d_j a_i^j||_2^2 + λ ||a_i||_1

is transformed into

min_a Σ_{i=1}^{N} ||x_i − D a_i||_2^2 + λ ||a_i||_1.

The transformed expression is then solved by the least-angle-regression variant of the Lasso (the Lasso-LARS algorithm), yielding the sparse codes {a_1, a_2, ..., a_N} of the N samples, one code per sample.
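The patent solves this subproblem with Lasso-LARS; as a dependency-free stand-in that reaches the same per-sample minimizer, the sketch below uses plain coordinate descent with soft-thresholding. All function names are mine, and the substitution of coordinate descent for LARS is my assumption, not the patent's method:

```python
import numpy as np

def soft_threshold(rho, t):
    """S(rho, t) = sign(rho) * max(|rho| - t, 0)."""
    return np.sign(rho) * max(abs(rho) - t, 0.0)

def lasso_cd(x, D, lam, n_iter=200):
    """Coordinate descent for min_a ||x - D a||_2^2 + lam * ||a||_1."""
    K = D.shape[1]
    a = np.zeros(K)
    z = np.sum(D * D, axis=0)                 # ||d_j||_2^2 per column
    for _ in range(n_iter):
        for j in range(K):
            r_j = x - D @ a + D[:, j] * a[j]  # residual with d_j's own term removed
            a[j] = soft_threshold(D[:, j] @ r_j, lam / 2.0) / z[j]
    return a

def sparse_encode(X, D, lam):
    """Step 204 sketch: with the dictionary D fixed, solve the lasso per column."""
    return np.column_stack([lasso_cd(X[:, i], D, lam) for i in range(X.shape[1])])
```

With an orthonormal dictionary the solution reduces to soft-thresholding the correlations by λ/2, which gives a quick sanity check of the solver.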
In the expression of the sparse model provided in the embodiment of the present invention, ||·|| denotes a norm; the norms used in the embodiment of the present invention are the l1 norm and the l2 norm.
In the solving process, the larger the value of λ, the fewer the non-zero terms in the sparse code and the larger the reconstruction error. The embodiment of the present invention therefore sets the coefficient λ to any value within the range 0.1 to 0.25 to limit the sparseness of the sparse code; the classification capability of the sparse model peaks when λ is about 0.15, typically within a fluctuation of ±0.05 around that value, where the sparseness is the number of non-zero elements in the sparse code. In particular, when the Lasso-LARS algorithm computes the sparse code, it assigns coefficients to the dictionary elements; these coefficients are usually small, so the reconstruction error of a sample lies on the bisector between the selected dictionary element and the other dictionary elements, and the assignment loop continues until enough dictionary elements have been selected to reconstruct the sample.
When the number of non-zero elements of the sparse code is far smaller than the number of dictionary elements, the Lasso algorithm is stable and convergent, and the sparseness of the sparse code can be guaranteed by keeping the value of λ at or above 0.01.
205. The electronic device iterates the sparse codes back into the sparse model to obtain a discrete dictionary, obtains a sample subset of at least one signal sample by iteratively processing the discrete dictionary, terminates the iteration once a preset judgment condition is met, and counts the at least one signal sample to form a new signal.
Wherein the discrete dictionary is a sample subset D of the sample set X, the subset D being composed of dictionary elements d_j.
Here, the sparse codes A = {a_1, a_2, ..., a_N} are set as a known condition and iterated back into the sparse model, giving the conversion formula for solving the discrete dictionary: when {d_1, d_2, ..., d_k} ⊆ X,

f(D) = min_{{d_1, d_2, ..., d_k} ⊆ D} ||X − DA||_2,

where min_{{d_1, d_2, ..., d_k} ⊆ D} ||X − DA||_2 is the norm obtained by substituting the fixed sparse codes into the sparse model expression, X is the known sample set, D is the discrete dictionary, and {d_1, d_2, ..., d_k} ⊆ D is the set of samples in the discrete dictionary.
By updating the dictionary elements d_i one at a time up to the K-th element, with the other dictionary elements set as known conditions, the term X − Σ_{i≠k} d_i A is replaced by E, and the conversion formula becomes

f(d_k) = min_{d_k} ||X − Σ_{i≠k} d_i A − d_k A||_2 = min_{d_k} ||E − d_k A||_2,

from which the discrete dictionary is computed, where X is the known sample set, A is the sparse code, d_i are the dictionary elements, and ||E − d_k A||_2 is used to compute the discrete dictionary by screening samples so as to form the new output signal; E is the expansion of X − DA after the dictionary elements d_i have been substituted in sequence.
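A small sketch of the quantities in the formula above, under my assumption that A stores one code per column and row k of A holds the coefficients of dictionary element k; the function names are mine:

```python
import numpy as np

def residual_excluding_k(X, D, A, k):
    """E = X - sum_{i != k} d_i A: the reconstruction residual with the
    k-th dictionary element's contribution removed (D, A held fixed)."""
    return X - D @ A + np.outer(D[:, k], A[k, :])

def candidate_error(E, A, k, d):
    """f(d) = ||E - d A_k||_2 for a candidate k-th dictionary element d."""
    return np.linalg.norm(E - np.outer(d, A[k, :]))
```

When the candidate d reproduces the removed contribution exactly, the error is zero; any other candidate scores at least the residual that element k was covering.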
In particular, A in E is the fixed sparse-coding matrix and the dictionary elements d_i are likewise fixed, while X is the known sample set, so E can be obtained by direct calculation. The discrete dictionary is then solved by updating dictionary element d_i through d_k in turn, i.e. by screening the dictionary elements of the solved dictionary against the sample subset, so that the solved discrete dictionary preserves the data distribution characteristics of the samples, admits sparse codes, and carries clear class information.
Further, substituting the dictionary elements d_i of the sample subset into the expression for computing the discrete dictionary in sequence, up to the K-th dictionary element, comprises the following steps:
205a. The electronic device selects, according to the sparse codes, the subset of samples that reference the K-th dictionary element.
Wherein the subset consists of the samples whose sparse codes have a non-zero coefficient on the K-th dictionary element.
205b. The electronic device substitutes each sample in the sample subset into the conversion formula for computing the expression of the discrete dictionary, calculates the error value of that sample in the conversion formula, and compares it with the current error value.
Wherein the current error value is expressed as ||E − d_k A||_2.
205c. If the error value is greater than the current error value, the error value is discarded and the sample corresponding to it is removed from the sample subset.
Or,
205d. If the error value is smaller than the current error value, the sample's error value is retained and becomes the new current error value, the sample corresponding to that error value is removed from the sample subset, and the remaining samples in the subset are substituted into the conversion formula in turn to calculate their error values, until the sample subset is empty.
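Steps 205a–205d can be sketched as a single screening loop. This is my reading of the procedure, not the patent's code; the function name, the column-wise layout of X, D, A, and the convention that row k of A holds element k's coefficients are all assumptions:

```python
import numpy as np

def update_element_k(X, D, A, k):
    """205a-205d sketch: among the samples whose sparse code uses the
    k-th dictionary element, keep the one minimizing ||E - d A_k||_2,
    where E = X - sum_{i != k} d_i A."""
    E = X - D @ A + np.outer(D[:, k], A[k, :])     # residual without element k
    subset = list(np.flatnonzero(A[k, :] != 0))    # 205a: samples referencing k
    best_d = D[:, k].copy()
    best_err = np.linalg.norm(E - np.outer(best_d, A[k, :]))  # current error value
    while subset:                                  # loop until subset is empty
        j = subset.pop()                           # sample leaves the subset
        err = np.linalg.norm(E - np.outer(X[:, j], A[k, :]))  # 205b
        if err < best_err:                         # 205d: keep, update current error
            best_err, best_d = err, X[:, j].copy()
        # 205c: otherwise the candidate's error value is simply discarded
    return best_d, best_err
```

Because a candidate is accepted only when it strictly lowers the error, the reconstruction error is non-increasing across the update, matching the patent's claim.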
Specifically, the computation of the discrete dictionary with the Lasso algorithm provided by the embodiment of the present invention is realized by loop iteration: after the sparse codes are calculated with the discrete dictionary fixed in step 204, the discrete dictionary is recomputed with the sparse codes fixed in step 205 to obtain the dictionary elements forming the dictionary; the sparse codes are then solved again with the new discrete dictionary fixed in step 204, and the cycle of obtaining new dictionary elements in step 205 repeats until a discrete dictionary corresponding to the known sample set X is obtained.
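The alternation just described, together with the T and η stopping conditions of step 202, can be written as a loop skeleton. Here `encode` and `update` are placeholder callables standing in for steps 204 and 205; the function signature and the point at which the error is measured are my assumptions:

```python
import numpy as np

def learn_dictionary(X, D0, lam, encode, update, T=50, eta=1e-3):
    """Alternation skeleton: `encode(X, D, lam) -> A` stands in for the
    sparse-coding step (204) and `update(X, D, A) -> D` for the
    dictionary-update step (205); stop after T rounds or once the
    reconstruction error ||X - D A||_2 falls below eta."""
    D = D0.copy()
    for _ in range(T):                        # at most T rounds
        A = encode(X, D, lam)                 # step 204: fix D, solve codes
        D = update(X, D, A)                   # step 205: fix A, reselect elements
        if np.linalg.norm(X - D @ A) < eta:   # error measured with this round's codes
            break
    return D, A
```

With a trivial exact encoder and an identity update the loop terminates on the first round, which exercises the η criterion.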
206. The electronic device outputs the new signal derived by counting the sample subset.
In the method for computing a discrete dictionary provided by the embodiments of the present invention, since the dictionary elements of the discrete dictionary come from the sample set X, at sample-selection time there is always a sample x_j identical to the dictionary element d_k. According to the Lasso algorithm, x_j in the sample set X will select d_k; therefore, when d_k is updated, if no other sample can reduce the error, x_j is reselected as the dictionary element and the error does not change; conversely, if some other sample can reduce the error, that new sample is kept and the error decreases. During the dictionary update, the reconstruction error is thus kept unchanged or reduced.
The representation method of the dictionary in the sparse model provided by the embodiment of the invention can also be realized with the Orthogonal Matching Pursuit (OMP) algorithm; however, OMP is better suited to dictionaries with orthogonal elements, whereas the processing object of this method is an over-complete dictionary, so the over-complete dictionary obtained with the Lasso algorithm is more stable and more accurate.
According to the dictionary representation method in the sparse model, the sparse codes are obtained by solving the sparse model, the sparseness of the sparse codes is controlled by limiting the value range of the coefficient λ, and the discrete dictionary is obtained from the sparse codes by iterative calculation. Because the sample selection in the dictionary-computation step keeps the reconstruction error unchanged or reduces it, the reconstruction error decreases overall; solving for the discrete dictionary also reduces the amount of computation and yields clear class information for the samples.
Specifically, for the method for representing a dictionary in a sparse model provided by the embodiment of the present invention, refer to fig. 3: taking three-dimensional samples as an example, a 3 × 10 discrete dictionary is learned from a data set of 100 three-dimensional samples; the left graph shows part of the sample-set space, and the right graph shows the distribution of the dictionary elements of the discrete dictionary (marked as cross points in the right graph), the elements of the discrete dictionary being a subset of the samples.
Computing the discrete dictionary in the sparse model preserves the data distribution characteristics of the sample-set space, i.e. the characteristics of the original signal are retained when the received signal is displayed after processing.
According to the dictionary representation method in the sparse model provided by the embodiment of the invention, analysis of the discrete dictionary shows that learning it preserves the data distribution characteristics of the signal samples, and, because the discrete dictionary contains no confusable dictionary elements, the sparse codes obtained on it carry clear category information.
An embodiment of the present invention provides an electronic device 3, which may be any device in a signal processing system capable of implementing the representation method of a dictionary in a sparse model, such as a computer or a notebook computer. Referring to fig. 4, the electronic device includes:
the acquisition unit 31 is used for acquiring a signal sample according to a received signal and establishing a sparse model according to the data distribution characteristics of the signal sample;
the calculation unit 32 is used for obtaining sparse codes of the signals by calculating a sparse model according to the signal samples acquired by the acquisition unit;
the calculation unit 32 is further configured to iterate the sparse codes into the sparse model to compute a discrete dictionary, obtain a sample subset of at least one signal sample by iteratively processing the discrete dictionary, terminate the iteration once a predetermined judgment condition is met, and count the at least one signal sample to form a new signal;
a sending unit 33 for outputting a new signal obtained by counting the subset of samples.
According to the electronic device provided by the embodiment of the invention, the sparse codes are obtained by solving the sparse model, and the discrete dictionary is then obtained from the sparse codes by iterative calculation, the sparseness of the sparse codes being controlled by limiting the value range of the coefficient λ; learning the discrete dictionary addresses the problem of large reconstruction error, solving for the discrete dictionary reduces the amount of computation, and clear class information for the samples is obtained from the solved dictionary.
Further, the sparse model is embodied as:

min_{{d_1, d_2, ..., d_k}, a} Σ_{i=1}^{N} ||x_i − Σ_j d_j a_i^j||_2^2 + λ ||a_i||_1, subject to {d_1, d_2, ..., d_k} ⊆ X,

where a_i is the sparse code, the superscript j in a_i^j is the component index of the sparse code a_i, {d_1, d_2, ..., d_k} is a sample subset of the sample set X, d_j is a sample in that subset, x_i is a single sample of the sample set X, N is the number of samples, and the coefficient λ limits the sparseness of the sparse code, the sparseness being the number of non-zero elements in the sparse code.
Optionally, as shown in fig. 5, the electronic device 3 further includes:
the setting unit 34 is configured to set a total number of cycles T and a maximum error value η before the sparse codes of the signal are obtained by solving the sparse model on the signal samples, where T is the number of rounds of the iterative sparse-model calculation, η is a threshold on the reconstruction error computed from the sparse model, and reaching either value is the condition for ending the calculation;
a selecting unit 35, configured to normalize the sample set {x_1, x_2, ..., x_N} and arbitrarily select K samples as the discrete dictionary, wherein the normalization makes the sum of squares of each sample equal to the constant 1, i.e. ||x_i||_2^2 = 1, and K is the number of samples arbitrarily selected from the sample set.
Alternatively, as shown in fig. 6, the calculation unit 32 includes:
a computing subunit 321, configured to set the discrete dictionary as a known condition and substitute it into the expression of the sparse model, giving min_a Σ_{i=1}^{N} ||x_i − D a_i||_2^2 + λ ||a_i||_1;
the calculating subunit 321 is further configured to obtain the sparse codes of the N samples from the expression by the least-angle-regression Lasso-LARS algorithm, the sparse codes being {a_1, a_2, ..., a_N}, one code per sample.
Further, referring to fig. 6, the calculating unit 32 further includes:
a converting subunit 322, configured to set the sparse codes as a known condition and substitute them into the expression of the sparse model to obtain the expression for computing the discrete dictionary: when {d_1, d_2, ..., d_k} ⊆ X, f(D) = min_{{d_1, d_2, ..., d_k} ⊆ D} ||X − DA||_2, the sparse codes being A = {a_1, a_2, ..., a_N}, where the minimization is the norm obtained by substituting the fixed sparse codes into the sparse model expression, X is the known sample set, D is the discrete dictionary, and {d_1, d_2, ..., d_k} ⊆ D is the set of samples in the discrete dictionary;
the converting subunit 322 is further configured to substitute the dictionary elements d_i of the sample subset into the expression for computing the discrete dictionary in sequence; when updating to the K-th dictionary element, the other dictionary elements are set as known conditions, X − Σ_{i≠k} d_i A is replaced by E, and the expression is converted into f(d_k) = min_{d_k} ||X − Σ_{i≠k} d_i A − d_k A||_2 = min_{d_k} ||E − d_k A||_2, from which the discrete dictionary is computed, where X is the known sample set, A is the sparse code, d_i are the known dictionary elements, ||E − d_k A||_2 is used to compute the discrete dictionary by screening samples so as to form the new output signal, and E is the expansion of X − DA after the dictionary elements d_i have been substituted in sequence.
Further, the calculation unit 32 is specifically configured to select, according to the sparse codes, the subset of samples that reference the K-th dictionary element, i.e. the samples whose sparse codes have a non-zero coefficient on that element;
substitute each sample of the sample subset into the conversion formula of the expression for computing the discrete dictionary, calculate the error value of the sample in the conversion formula, and compare it with the current error value ||E − d_k A||_2;
If the error value is larger than the current error value, discarding the error value, and removing the sample corresponding to the error value from the sample subset;
or,
if the error value is smaller than the current error value, retain the sample's error value and make it the new current error value, remove the sample corresponding to the error value from the sample subset, and substitute the remaining samples of the subset into the conversion formula in turn to calculate their error values, until the sample subset is empty.
According to the electronic device provided by the embodiment of the invention, the sparse codes are obtained by solving the sparse model, the sparseness of the sparse codes is controlled by limiting the value range of the coefficient λ, and the discrete dictionary is obtained from the sparse codes by iterative calculation. Because the sample selection in the dictionary-computation step keeps the reconstruction error unchanged or reduces it, the reconstruction error decreases overall; solving for the discrete dictionary also reduces the amount of computation and yields clear class information for the samples.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method for representing a dictionary in a sparse model, comprising:
acquiring a signal sample according to a received signal, and establishing a sparse model according to the data distribution characteristics of the signal sample;
obtaining sparse codes of the signals by calculating the sparse model according to the signal samples;
iterating the sparse code into the sparse model to obtain a discrete dictionary, circularly iterating the discrete dictionary to obtain a sample subset of at least one signal sample, terminating the circular iteration until a preset judgment condition is met, and counting the at least one signal sample to form a new signal;
outputting the new signal obtained by counting the sample subset.
2. The method of claim 1, wherein the sparse model is embodied as: min_{{d_1, d_2, ..., d_k}, a} Σ_{i=1}^{N} ||x_i − Σ_j d_j a_i^j||_2^2 + λ ||a_i||_1, subject to {d_1, d_2, ..., d_k} ⊆ X, where a_i is the sparse code, the superscript j in a_i^j is the component index of the sparse code a_i, {d_1, d_2, ..., d_k} is a sample subset of the sample set X, d_j is a sample in the subset, x_i is a single sample of the sample set X, N is the number of samples, and the coefficient λ limits the sparseness of the sparse code, the sparseness being the number of non-zero elements in the sparse code.
3. The method according to claim 1 or 2, wherein before said deriving a sparse coding of said signal by computing said sparse model from said signal samples, said method further comprises:
setting a total number of loops T and a maximum error value η, wherein T is the number of rounds of the iterative sparse-model calculation, η is a threshold on the reconstruction error computed from the sparse model, and reaching either value is the judgment condition for ending the iterative calculation;
set of samples { x1,x2,……,xNNormalizing, and arbitrarily selecting K samples as a discrete dictionary, wherein the sample set is normalized by summing samples in the sample set { x } as a constant 1 by a square1,x2,……,xNNormalization is expressed asAnd the number K of the samples is the number of the samples arbitrarily selected in the sample set.
4. The method according to any one of claims 1 to 3, wherein the obtaining of the sparse coding of the signal by calculating the sparse model from the signal samples comprises:
setting the discrete dictionary as a known condition, and substituting the discrete dictionary into the expression of the sparse model to obtain min_a Σ_{i=1}^{N} ||x_i − D a_i||_2^2 + λ ||a_i||_1;
obtaining the sparse codes {a_1, a_2, ..., a_N} of the N samples from the expression by the least-angle-regression Lasso-LARS algorithm, one code per sample.
5. The method according to any one of claims 1 to 4, wherein iterating the sparse coding into the sparse model to obtain a discrete dictionary comprises:
setting the sparse codes as a known condition and substituting them into the expression of the sparse model to obtain the expression for computing the discrete dictionary: when {d_1, d_2, ..., d_k} ⊆ X, f(D) = min_{{d_1, d_2, ..., d_k} ⊆ D} ||X − DA||_2, the sparse codes being A = {a_1, a_2, ..., a_N}, where the minimization is the norm obtained by substituting the fixed sparse codes into the sparse model expression, X is the known sample set, D is the discrete dictionary, and {d_1, d_2, ..., d_k} ⊆ D is the set of samples in the discrete dictionary;
substituting the dictionary elements d_i of the sample subset into the expression for computing the discrete dictionary in sequence; when updating to the K-th dictionary element, setting the other dictionary elements as known conditions and replacing X − Σ_{i≠k} d_i A by E, so that the expression is converted into f(d_k) = min_{d_k} ||X − Σ_{i≠k} d_i A − d_k A||_2 = min_{d_k} ||E − d_k A||_2, from which the discrete dictionary is computed, where X is the known sample set, A is the sparse code, d_i are the known dictionary elements, ||E − d_k A||_2 is used to compute the discrete dictionary by screening samples so as to form the new output signal, and E is the expansion of X − DA after the dictionary elements d_i have been substituted in sequence.
6. The method of claim 5, wherein substituting the dictionary elements d_i of the sample subset into the expression for computing the discrete dictionary in sequence, up to the K-th dictionary element, comprises:
selecting, according to the sparse codes, the subset of samples that reference the K-th dictionary element, i.e. the samples whose sparse codes have a non-zero coefficient on that element;
substituting each sample in the subset into the conversion formula of the expression for computing the discrete dictionary to calculate the error value of the sample in the conversion formula, and comparing the error value with the current error value ||E − d_k A||_2;
If the error value is greater than the current error value, discarding the error value, and removing the sample corresponding to the error value from the sample subset;
or,
if the error value is smaller than the current error value, retaining the sample's error value and making it the new current error value, removing the sample corresponding to the error value from the sample subset, and substituting the remaining samples of the subset into the conversion formula in turn to calculate their error values, until the sample subset is empty.
7. An electronic device, comprising:
the acquisition unit is used for acquiring signal samples according to received signals and establishing a sparse model according to the data distribution characteristics of the signal samples;
the calculation unit is used for calculating the sparse model according to the signal samples acquired by the acquisition unit to obtain sparse codes of the signals;
the calculation unit is further configured to iterate the sparse code into the sparse model to calculate a discrete dictionary, obtain a sample subset of at least one signal sample by circularly iterating the discrete dictionary, terminate the circular iteration until a predetermined judgment condition is met, and perform statistics on the at least one signal sample to form a new signal;
a transmitting unit, configured to output the new signal obtained by counting the sample subset.
8. The apparatus of claim 7, wherein the sparse model is specifically: $\min_{\{d_1,d_2,\ldots,d_k\},\,a}\sum_{i=1}^{N}\|x_i-\sum_j d_j a_i^j\|_2^2+\lambda\|a_i\|_1$, when $\{d_1,d_2,\ldots,d_k\}\in X$, wherein $a_i$ is the sparse code, $j$ is the index of an element within the sparse code $a_i$, $\{d_1,d_2,\ldots,d_k\}$ is a subset of samples of the sample set $X$, $d_j$ is a sample in the subset of samples, $x_i$ is a single sample composing the sample set $X$, $N$ is the number of samples, and the coefficient $\lambda$ limits the sparsity of the sparse code, the sparsity being the number of non-zero elements in the sparse code.
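The objective of claim 8 is straightforward to evaluate. A minimal NumPy sketch follows, assuming samples and codes are stored as matrix columns; the function name is an illustrative choice:

```python
import numpy as np

def sparse_model_objective(X, D, A, lam):
    """Value of the claimed objective: sum_i ||x_i - D a_i||_2^2 + lam ||a_i||_1.

    X : (m, N) samples as columns; D : (m, K) dictionary whose atoms are
    drawn from X; A : (K, N) sparse codes as columns; lam : sparsity weight.
    """
    recon = np.sum((X - D @ A) ** 2)      # sum of squared l2 reconstruction errors
    sparsity = lam * np.sum(np.abs(A))    # l1 penalty summed over all codes a_i
    return recon + sparsity
```

The two terms trade off reconstruction error against the number and magnitude of non-zero coefficients, which is what the coefficient $\lambda$ controls in the claim.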
9. The apparatus according to claim 7 or 8, characterized in that it further comprises:
the setting unit is configured to set, before the sparse code of the signal is obtained by calculating the sparse model from the signal samples, the total number of cycles to T and the maximum error value to η, wherein the total number of cycles is the number of iterations of the cyclic iterative calculation of the sparse model, the maximum error value η is a threshold on the error value used when the reconstruction error is calculated through the sparse model, and reaching the total number of cycles or the maximum error value is the judgment condition for terminating the cyclic iterative calculation;
a selection unit, configured to normalize the sample set $\{x_1,x_2,\ldots,x_N\}$ and arbitrarily select $K$ samples as the discrete dictionary, wherein the normalization scales each sample in the sample set $\{x_1,x_2,\ldots,x_N\}$ so that its squared sum equals the constant 1, and the number $K$ is the number of samples arbitrarily selected from the sample set.
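The selection unit's two steps, unit-norm normalization and arbitrary choice of K samples as the initial discrete dictionary, can be sketched as below. The function name, the samples-as-columns layout, and the use of a seeded NumPy generator are assumptions for illustration:

```python
import numpy as np

def init_discrete_dictionary(X, K, rng=None):
    """Normalize each sample so its squared sum equals 1, then pick K samples
    at random (columns of X) as the initial discrete dictionary."""
    rng = np.random.default_rng(rng)
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)    # each column: sum of squares == 1
    idx = rng.choice(X.shape[1], size=K, replace=False)  # K arbitrarily chosen samples
    return Xn, Xn[:, idx]
```

Initializing atoms with actual samples (rather than random vectors) is what keeps the dictionary "discrete": every atom remains an identifiable sample, so class information survives the decomposition.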
10. The apparatus according to any one of claims 7 to 9, wherein the computing unit comprises:
a calculation subunit, configured to set the discrete dictionary as a known condition and substitute it into the expression of the sparse model;
the calculation subunit is further configured to obtain the sparse codes of the $N$ samples from the expression by the least-angle regression (Lasso-LARS) algorithm, the sparse codes of the $N$ samples being the codes corresponding to the respective samples, namely $\{a_1,a_2,\ldots,a_N\}$.
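The claim names the Lasso-LARS algorithm for the coding step. A full LARS implementation is lengthy, so the sketch below substitutes plain ISTA (iterative soft thresholding), which solves the same $\ell_1$-regularized problem $\min_a \|x-Da\|_2^2+\lambda\|a\|_1$; this substitution, and the function name, are the author's illustrative choices, not the patent's method:

```python
import numpy as np

def sparse_code_ista(D, x, lam, n_iter=500):
    """Solve min_a ||x - D a||_2^2 + lam * ||a||_1 for one sample by ISTA,
    standing in for the Lasso-LARS solver named in the claim."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of a -> D^T (D a - x)
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L        # gradient step on ||x - D a||^2 / 2
        a = np.sign(z) * np.maximum(np.abs(z) - 0.5 * lam / L, 0.0)  # soft threshold
    return a
```

In scikit-learn the same codes could be obtained from `sklearn.linear_model.LassoLars`; ISTA is used here only to keep the example dependency-free.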
11. The apparatus according to any one of claims 7 to 10, wherein the computing unit further comprises:
a conversion subunit, configured to set the sparse code as a known condition and substitute it into the expression of the sparse model to obtain the expression for calculating the discrete dictionary, the expression for calculating the discrete dictionary being $\min_{D}\|X-DA\|_2$ when $\{d_1,d_2,\ldots,d_k\}\in X$, the sparse code being $A=\{a_1,a_2,\ldots,a_N\}$, wherein the sparse code, set as a known condition and substituted into the sparse model expression, is used to solve the norm form for the discrete dictionary, $X$ is the known sample set, $D$ is the discrete dictionary, and $\{d_1,d_2,\ldots,d_k\}\in X$ is the sample set constituting the discrete dictionary;
the conversion subunit is further configured to sequentially substitute the dictionary elements $d_i$ in the sample subset into the expression for calculating the discrete dictionary, set, upon updating to the $K$-th dictionary element, the other dictionary elements as known conditions, and replace $X-\sum_{i\neq k}d_iA$ in the expression for calculating the discrete dictionary with $E$, so that the expression for calculating the discrete dictionary is converted into $f(d_k)=\min_{\{d_k\}}\|X-\sum_{i\neq k}d_iA-d_kA\|_2=\min_{\{d_k\}}\|E-d_kA\|_2$ and the discrete dictionary is computed, wherein $X$ is the known sample set, $A$ is the sparse code, $d_i$ are the known dictionary elements, $\|E-d_kA\|_2$ is used to calculate the discrete dictionary by screening samples to form a new output signal, and $E$ is obtained by expanding $X-DA$ after the dictionary elements $d_i$ are sequentially substituted.
12. The apparatus of claim 11, wherein the calculation unit is specifically configured to select, from the sparse coding, a subset of samples referencing the $K$-th dictionary element;
substitute each sample in the subset of samples into the converted expression for calculating the discrete dictionary to calculate an error value for the sample, and compare the error value with the current error value, the current error value being $\|E-d_kA\|_2$;
If the error value is greater than the current error value, discarding the error value, and removing the sample corresponding to the error value from the sample subset;
or,
if the error value is smaller than the current error value, retaining the error value corresponding to the sample and updating it as the new current error value, removing the sample corresponding to the error value from the sample subset, and cyclically substituting the samples in the sample subset into the converted expression to calculate error values until the sample subset is an empty set.
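Taken together, the claims describe an alternating loop: code the samples against the current dictionary, re-choose each atom from the samples themselves, and stop after T cycles or when the reconstruction error falls to η (claim 9). The end-to-end sketch below assumes all of that structure; least squares plus hard thresholding stands in for the Lasso-LARS coding step, and the greedy atom re-selection is a compact reading of the screening loop, not the patent's exact procedure:

```python
import numpy as np

def learn_discrete_dictionary(X, K, lam, T=20, eta=1e-3, rng=0):
    """Sketch of the claimed pipeline: normalize, pick K samples as atoms,
    then alternate coding and sample-screening atom updates until T rounds
    pass or the reconstruction error drops to eta."""
    rng = np.random.default_rng(rng)
    X = X / np.linalg.norm(X, axis=0, keepdims=True)      # unit squared sum per sample
    D = X[:, rng.choice(X.shape[1], K, replace=False)]    # discrete dictionary init
    A = np.zeros((K, X.shape[1]))
    err = np.linalg.norm(X - D @ A)
    for _ in range(T):                                    # at most T cycles
        A, *_ = np.linalg.lstsq(D, X, rcond=None)         # coding step (stand-in
        A[np.abs(A) < lam] = 0.0                          #  for Lasso-LARS)
        for k in range(K):                                # screen samples for atom k
            E = X - D @ A + np.outer(D[:, k], A[k, :])    # residual without atom k
            errs = [np.linalg.norm(E - np.outer(X[:, j], A[k, :]))
                    for j in range(X.shape[1])]
            D[:, k] = X[:, int(np.argmin(errs))]          # atom stays a real sample
        err = np.linalg.norm(X - D @ A)
        if err <= eta:                                    # maximum-error condition
            break
    return D, A, err
```

Because atoms are always replaced by actual sample columns, every learned atom can be traced back to a training sample, which is the source of the "clear sample classification information" promised in the abstract.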
CN201310115751.3A 2013-04-03 2013-04-03 The method for expressing and equipment of dictionary in a kind of sparse model Active CN104103060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310115751.3A CN104103060B (en) 2013-04-03 2013-04-03 The method for expressing and equipment of dictionary in a kind of sparse model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310115751.3A CN104103060B (en) 2013-04-03 2013-04-03 The method for expressing and equipment of dictionary in a kind of sparse model

Publications (2)

Publication Number Publication Date
CN104103060A true CN104103060A (en) 2014-10-15
CN104103060B CN104103060B (en) 2017-09-12

Family

ID=51671184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310115751.3A Active CN104103060B (en) 2013-04-03 2013-04-03 The method for expressing and equipment of dictionary in a kind of sparse model

Country Status (1)

Country Link
CN (1) CN104103060B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102354395A (en) * 2011-09-22 2012-02-15 西北工业大学 Sparse representation-based blind restoration method of broad image
CN102708576A (en) * 2012-05-18 2012-10-03 西安电子科技大学 Method for reconstructing partitioned images by compressive sensing on the basis of structural dictionaries


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIANG Kui et al., "Sparsity in Principal Component Analysis", Acta Electronica Sinica *
WANG Chunguang, "ECG Characteristic-Wave Detection and ECG Data Compression Based on Sparse Decomposition", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences *
GU Ying, "Research on Compressed-Sensing-Based Distributed Video Coding and Image Super-Resolution Reconstruction", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297879A (en) * 2020-02-23 2021-08-24 深圳中科飞测科技股份有限公司 Acquisition method of measurement model group, measurement method and related equipment
CN113297879B (en) * 2020-02-23 2024-08-09 深圳中科飞测科技股份有限公司 Acquisition method, measurement method and related equipment of measurement model group
CN112203089A (en) * 2020-12-03 2021-01-08 中国科学院自动化研究所 Image compression method, system and device based on code rate control of sparse coding

Also Published As

Publication number Publication date
CN104103060B (en) 2017-09-12

Similar Documents

Publication Publication Date Title
CN107832837B (en) Convolutional neural network compression method and decompression method based on compressed sensing principle
CN111787323B (en) Variable bit rate generation type compression method based on counterstudy
Davenport et al. Introduction to compressed sensing.
US8055095B2 (en) Parallel and adaptive signal processing
CN111818346A (en) Image encoding method and apparatus, image decoding method and apparatus
Pant et al. Reconstruction of sparse signals by minimizing a re-weighted approximate ℓ 0-norm in the null space of the measurement matrix
CN111641832A (en) Encoding method, decoding method, device, electronic device and storage medium
CN113132723B (en) Image compression method and device
CN104506752B (en) A kind of similar image compression method based on residual error compressed sensing
Yan et al. Stochastic collocation algorithms using l_1-minimization for bayesian solution of inverse problems
Pant et al. Unconstrained regularized ℓ p-norm based algorithm for the reconstruction of sparse signals
CN113689513B (en) SAR image compression method based on robust tensor decomposition
CN108233943B (en) Compressed sensing method based on minimum correlation measurement matrix
CN105791189A (en) Sparse coefficient decomposition method for improving reconstruction accuracy
CN110166055A (en) A kind of compressed sensing based multichannel compression sensing optimization method and system
CN111010191B (en) Data acquisition method, system, equipment and storage medium
CN109905129B (en) Low-overhead power data acquisition method based on distributed compressive sensing
CN104103060B (en) The method for expressing and equipment of dictionary in a kind of sparse model
Goklani et al. Image reconstruction using orthogonal matching pursuit (OMP) algorithm
Granichin et al. Randomization of data acquisition and ℓ 1-optimization (recognition with compression)
Huang et al. Optimized measurement matrix for compressive sensing
WO2021205669A1 (en) Estimation program, estimation method, and information processing device
CN114630207A (en) Multi-sensing-node perception data collection method based on noise reduction self-encoder
CN107766294A (en) Method and device for recovering missing data
Singh et al. Minimax reconstruction risk of convolutional sparse dictionary learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171214

Address after: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee after: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: Huawei Technologies Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20171219

Address after: 402260 Chongqing city Jiangjin Jijiang 90 Minsheng Road No. 1-7-2 Building 1

Patentee after: Chongqing GA doll Technology Co., Ltd.

Address before: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee before: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20181220

Address after: 401120 Yubei District, Chongqing, Longxi street, Hong Jin Avenue 498, Jiale, Violet 1, 5- business.

Patentee after: Chongqing Kai Tuo development in science and technology company limited

Address before: 402260 Jiangjin 90 District, Chongqing City, No. 1, No. 1, No.

Patentee before: Chongqing GA doll Technology Co., Ltd.

TR01 Transfer of patent right

Effective date of registration: 20201110

Address after: No. ZC030, College Students Pioneer Park, Tongling City, Anhui Province, 244000

Patentee after: TONGLING HUIHENG ELECTRONIC TECHNOLOGY Co.,Ltd.

Address before: 401120 Jiale Ziguang Building 5-Business, 498 Hongjin Avenue, Longxi Street, Yubei District, Chongqing

Patentee before: CHONGQING KAITUO TECHNOLOGY DEVELOPMENT Co.,Ltd.