CN112821894A - Lossless compression method and lossless decompression method based on weighted probability model - Google Patents
- Publication number
- CN112821894A (application CN202011577534.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/40—Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
Abstract
The invention discloses a lossless compression method and a lossless decompression method based on a weighted probability model. According to the maximum entropy theorem, uniformly distributed binary sequences cannot be losslessly compressed, and existing entropy coding complies with this theorem. The core of the method is a process of equal-length lossless entropy-reduction transformation: because the uniformly distributed binary sequence is first subjected to equal-length lossless entropy reduction, it can then be losslessly compressed. The compression ratio is related to the customized weight coefficient and is superior to that of existing entropy coding algorithms. In addition, the method is a bit-by-bit coding and decoding process, does not require a large amount of hardware cache, and can be processed in parallel by segments, so it requires few hardware resources. The invention further provides a lossless decompression method based on the weighted probability model, which realizes the inverse process of the lossless compression method.
Description
Technical Field
The invention relates to the technical field of communication coding, and in particular to a lossless compression method and a lossless decompression method based on a weighted probability model.
Background
In the big data era, the rapid growth of data volume puts enormous pressure on network transmission and storage. To alleviate this, hardware must be upgraded on the one hand, and lossless coding algorithms with higher compression ratios must be developed on the other. Common lossless compression methods include dictionary coding, run-length coding, arithmetic coding, etc., collectively called entropy coding, but current entropy coding has the following shortcomings:
1) according to the maximum entropy theorem, uniformly distributed binary sequences cannot be losslessly compressed, and existing entropy coding follows this theorem; 2) the compression ratio is low and the demand on hardware resources is high.
Disclosure of Invention
The present invention is directed to solving at least one of the problems in the prior art. To this end, the invention provides a lossless compression method and a lossless decompression method based on a weighted probability model: a uniformly distributed binary sequence is subjected to equal-length lossless entropy reduction so that it can be losslessly compressed; the compression ratio is improved and fewer hardware resources are required, with the compression ratio determined by the weight coefficient used in the equal-length lossless entropy reduction.
In a first aspect of the present invention, a lossless compression method based on a weighted probability model is provided, which includes the following steps:
transforming, by equal-length lossless entropy reduction, a binary sequence X with a sequence length of n and uniformly distributed symbols into a binary sequence Y with a sequence length of n, wherein the first weight coefficient r1 used in the equal-length lossless entropy-reduction process has the value range r1 ∈ [0.5, 1.0);
losslessly compressing the binary sequence Y into a binary sequence Z through a weighted probability model, wherein the second weight coefficient r2 used by the weighted probability model takes the value r2 = 1.
In a second aspect of the present invention, a weighted probability model-based lossless decompression method is provided, which is applied to the weighted probability model-based lossless compression method in the first aspect of the present invention, and includes the following steps:
losslessly decompressing the binary sequence Z into the binary sequence Y through the weighted probability model;
and transforming the binary sequence Y back into the binary sequence X by equal-length lossless entropy increase.
In a third aspect of the present invention, there is provided an encoding device comprising: at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the weighted probability model based lossless compression method according to the first aspect of the invention and/or the weighted probability model based lossless decompression method according to the second aspect of the invention.
In a fourth aspect of the present invention, there is provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the weighted probability model based lossless compression method according to the first aspect of the present invention and/or the weighted probability model based lossless decompression method according to the second aspect of the present invention.
According to the embodiment of the invention, at least the following beneficial effects are achieved:
The invention provides a lossless compression method based on a weighted probability model. According to the maximum entropy theorem, uniformly distributed binary sequences cannot be losslessly compressed, and existing entropy coding complies with this theorem. The core of the method is a process of equal-length lossless entropy-reduction transformation: because the uniformly distributed binary sequence is subjected to equal-length lossless entropy reduction, it can be losslessly compressed. The compression ratio is related to the customized weight coefficient and is superior to that of existing entropy coding algorithms. In addition, the method is a bit-by-bit coding and decoding process, does not require a large amount of hardware cache, and can be processed in parallel by segments, so it requires few hardware resources. The invention also provides, on the basis of the lossless compression method, a lossless decompression method based on the weighted probability model, realizing the inverse process of the lossless compression method.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic diagram of F(X, r) for x1 = 0, 1 when n = 1 according to an embodiment of the present invention;
fig. 2 is a schematic diagram of F(X, r) for x2 = 0, 1 when n = 2 and x1 is known according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a lossless compression method based on a weighted probability model according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a lossless decompression method based on a weighted probability model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an encoding apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
To facilitate understanding of the technical solutions of the present invention by those skilled in the art, before introducing the embodiments of the present application, explanation will be made on related concepts involved in the present application:
firstly, the weighted probability and the weighted probability model;
Let X = {x1, x2, ..., xn} be a random process taking finitely or countably many possible values. Unless otherwise stated, the set of possible values of this random process is taken to be the set of non-negative integers A = {0, 1, 2, ..., s}, with xi ∈ A (i = 1, 2, ..., n). Every value in A then has a probability mass:
p(x) = P(xi = x) (1)
where x ∈ A. Since the random process must move to some value in the set A, at any time i:
∑_{x∈A} p(x) = 1 (2)
where 0 ≤ p(x) ≤ 1. Thus, the cumulative distribution function F(a) at any time i can be expressed in terms of p(x) as:
F(a) = ∑_{x≤a} p(x) (3)
where 0 ≤ F(a) ≤ 1 and a ∈ A. A worked example: if a discrete random variable X has the probability mass function p(0) = 0.5, p(1) = 0.3 and p(2) = 0.2, then F(0) = p(0) = 0.5, F(1) = p(0) + p(1) = 0.8, and F(2) = p(0) + p(1) + p(2) = 1.0.
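The cumulative distribution of equation (3) and the worked example above can be checked with a short sketch (illustrative code, not part of the patent):

```python
# Cumulative distribution F(a) = sum of p(x) over x <= a (equation (3)),
# checked against the example p(0)=0.5, p(1)=0.3, p(2)=0.2.
def cdf(pmf, a):
    return sum(p for x, p in pmf.items() if x <= a)

pmf = {0: 0.5, 1: 0.3, 2: 0.2}
print(cdf(pmf, 0))  # → 0.5
print(cdf(pmf, 1))  # → 0.8
print(cdf(pmf, 2))  # → 1.0
```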
Let xi be a Bernoulli random variable; then xi ∈ {0, 1}, and the probability mass function of xi is p(0) = P(xi = 0) = 1 − p, p(1) = P(xi = 1) = p, where 0 ≤ p ≤ 1 and i = 1, 2, ..., n. The random process X then has 2^n possibilities, each X being a binary sequence of length n. Obviously, among these 2^n possibilities, some binary sequences have distinct morphological features (regularity). For example, some binary sequences satisfy "the number of consecutive symbols 1 in the sequence is at most t"; others satisfy "the number of consecutive symbols 1 in the sequence is at most t, and the number of consecutive symbols 0 in the sequence is at most s". Since the morphological features are known, t and s are positive integers of known value. Here, a Bernoulli random variable is defined as follows: given an experiment whose success and failure are represented by X = 1 and X = 0 respectively, with probability mass function P(0) = P(X = 0) = 1 − p and P(1) = P(X = 1) = p, where p (0 ≤ p ≤ 1) is the probability that the experiment succeeds, the random variable X is called a Bernoulli random variable.
Therefore, when the random process X exhibits some known morphological characteristics, it has the following applications in the field of information coding technology:
(1) Data compression: since a symbol 0 must appear after t consecutive symbols 1 in the binary sequence, the morphological feature of the binary sequence is known information, and lossless coding can improve the compression effect by removing that symbol 0. That is, binary sequences with different morphological features use different methods to remove redundant information.
(2) Data verification: if a decoded binary sequence does not satisfy "the number of consecutive symbols 1 in the sequence is at most t", a decoding error has occurred, so the feature can be used for data verification. Different morphological features can construct channel coding methods with different code rates.
(3) Digital watermarking: any binary sequence can be modified to satisfy a certain morphological feature and then losslessly encoded, thereby constructing a digital watermark coding method.
Obviously, in such a binary sequence (i.e. a random process), the current symbol state is related to a limited number of previous adjacent symbol states.
Take as an example a binary sequence satisfying the morphological feature "the number of consecutive symbols 1 in the sequence is at most 2"; such a sequence is composed of the blocks "0", "10" and "110". By Markov-chain or conditional-probability analysis, symbol 0 has three probability mass functions: p(0|0), p(0|1) and p(0|1,1); symbol 1 has two: p(1|0) and p(1|1). During encoding, since the binary source sequence is known, the probability mass function used by each symbol can be selected accurately. During decoding, however, it cannot: for example, after a "0" has been decoded (the probability mass function of the first symbol can be fixed by convention), the probability mass function of the next symbol 0 cannot be selected accurately, because symbol 0 has three possible probability mass functions. The same holds for symbol 1. If "01" has been decoded, predicting the next symbol from the decoded result is not feasible because two probability mass functions, p(1|1) and p(0|1), remain possible. Only when "011" has been decoded is there a unique choice, p(0|1,1), since "011" is necessarily followed by symbol 0.
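To illustrate how such a morphological feature restricts the 2^n possibilities, the sketch below (our example, not from the patent) counts the binary sequences of length n whose runs of symbol 1 have length at most t:

```python
from itertools import product

# Count length-n binary sequences whose runs of consecutive 1s never
# exceed t — the morphological feature "at most t consecutive 1s".
def count_constrained(n, t):
    count = 0
    for seq in product((0, 1), repeat=n):
        run, ok = 0, True
        for s in seq:
            run = run + 1 if s == 1 else 0
            if run > t:
                ok = False
                break
        if ok:
            count += 1
    return count

print(count_constrained(4, 2))  # → 13 (out of 2**4 = 16 sequences)
```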
Moreover, when the coded data is tampered with or a transmission error occurs, every subsequent symbol may be decoded incorrectly, so a Markov-chain or conditional-probability construction cannot be used directly. Summarizing the above, based on probability theory, a codec construction method for binary sequences needs to satisfy three conditions:
(1) each decoded symbol x has a unique known probability mass function p(x), where p(x) may be the unique probability mass function derived during decoding (the inverse process of encoding); for example, once "011" is decoded, since "011" is necessarily followed by symbol 0, there is the unique probability mass function p(x) = p(0|1,1);
(2) there is a known variable r characterizing the morphology of the sequence; r may also be the value of a known function f(i) (i = 1, 2, ..., n), i.e., r = f(i); sequences with different morphological features should have different values of r;
(3) during coding, r should act on the probability mass function of the corresponding symbol at every position of the sequence.
A function of p(x) and r can be defined to construct coding methods; for example, the product r·p(x) can serve as the weighted form of the probability mass function, and the probability mass function and r can then be accurately selected throughout the entire coding and decoding process. In this invention, r is called the morphological-feature coefficient of the probability mass function, weight coefficient for short. Because other candidate forms make the generating polynomials inconvenient to reason about and analyze, the weighted probability mass function and weighted cumulative distribution function below are defined on the basis of r·p(x), and the mathematical properties of r are obtained by analyzing these two functions. Since p(x) and r are known and only x varies, the weighted form can be written simply as p(x, r).
Definition 1: the weighted probability mass function is:
p(a, r) = r·p(a) (4)
where p(a) is the probability mass function of a, 0 ≤ p(a) ≤ 1, and r, the weight coefficient, is a known positive real number. Obviously, the weighted probability sum of all symbols is:
∑_{x∈A} r·p(x) = r (5)
Definition 2: the weighted cumulative distribution function is:
F(a, r) = r·F(a) = r·∑_{x≤a} p(x) (6)
the weighted distribution function for sequence X is denoted as F (X, r) according to definition 2. When n is 1, F (X, r) is:
F(X,r)=rF(x1)=rF(x1-1)+rp(x1)
As shown in fig. 1, when n = 2, x1 corresponds to the interval [F(x1 − 1, r), F(x1, r)); since F(x1, r) = F(x1 − 1, r) + r·p(x1), the interval length is r·p(x1). The interval [F(x1 − 1, r), F(x1 − 1, r) + r·p(x1)) is then scaled by the weight coefficient r: if r < 1 the interval shrinks; if r > 1 the interval expands; if r = 1 the interval is unchanged. The interval thus becomes [F(x1 − 1, r), F(x1 − 1, r) + r²·p(x1)). Next, r²·p(x1) is divided into s + 1 parts according to the probability mass of each symbol by equation (1): the sub-interval of symbol 0 is [F(x1 − 1, r), F(x1 − 1, r) + r²·p(x1)·p(0)); the sub-interval of symbol 1 is [F(x1 − 1, r) + r²·p(x1)·p(0), F(x1 − 1, r) + r²·p(x1)·(p(0) + p(1))); the sub-interval of symbol 2 is [F(x1 − 1, r) + r²·p(x1)·(p(0) + p(1)), F(x1 − 1, r) + r²·p(x1)·(p(0) + p(1) + p(2))); and so on. With F(x1 − 1, r) = r·F(x1 − 1), we obtain:
F(X, r) = r·F(x1 − 1) + r²·F(x2)·p(x1) = r·F(x1 − 1) + r²·F(x2 − 1)·p(x1) + r²·p(x1)·p(x2)
At this time, the interval length is r²·p(x1)·p(x2), as shown in fig. 2.
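The interval bookkeeping of this derivation can be traced numerically. The sketch below uses the illustrative values r = 0.9 and p(0) = p(1) = 0.5 (our choice, not prescribed by the patent):

```python
# Track the weighted-model interval for a 2-symbol binary sequence.
# Each step scales the interval length by r*p(symbol), so after two
# symbols the length is r**2 * p(x1) * p(x2), matching the derivation.
r = 0.9
p = {0: 0.5, 1: 0.5}
F = {-1: 0.0, 0: p[0], 1: p[0] + p[1]}  # cumulative distribution

L, R = 0.0, 1.0
for x in (1, 0):              # example sequence: x1 = 1, x2 = 0
    L = L + R * r * F[x - 1]  # lower-bound update (equation-(12) style)
    R = R * r * p[x]          # length update (equation-(11) style)
print(L, L + R)               # final interval [L, H)
```

With r < 1 the final length r²·p(x1)·p(x2) = 0.2025 is smaller than the unweighted 0.25, showing the contraction discussed above.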
By analogy, when n = 3:
F(X, r) = r·F(x1 − 1) + r²·F(x2 − 1)·p(x1) + r³·F(x3)·p(x1)·p(x2) = r·F(x1 − 1) + r²·F(x2 − 1)·p(x1) + r³·F(x3 − 1)·p(x1)·p(x2) + r³·p(x1)·p(x2)·p(x3)
and, in general, for a sequence of length n:
F(X, r) = ∑_{i=1}^{n} r^i·F(xi − 1)·p(x1)···p(xi−1) + r^n·p(x1)···p(xn) (7)
the set of weighted distribution functions satisfying equation (7) is defined as a weighted probability model, referred to as a weighted model for short, and is denoted as { F (X, r) }. If XiE.g., a is {0, 1}, then, F (X, r) } is referred to as a binary weighting model. Order:
Hn=F(X,r) (8)
x is due toiMust take the value in A, so p (x)i) Is more than or equal to 0. It is apparent that the formulae (8) to (10) are in the interval sequence, Li,HiIs the variable X of the sequence X at the time i (i ═ 1, 2.., n)iSubscript, R, on corresponding intervali=Hi-LiIs the length of the interval. { [ L ]n,Hn) And is the interval column defined on the weighted probability model. Equations (8) to (10) are expressed iteratively as:
Li=Li-1+Ri-1F(xi-1,r) (12)
Hi=Li+Ri (13)
Obviously, r in equation (7) is a known real number, and equation (7) is called the static weighted model. If the value of r at time i equals the value ωi of a known function, i.e., ωi = f(i) is a known function, with coefficient sequence W = {ω1, ω2, ..., ωn}, then equation (7) can be expressed as:
F(X, W) = ∑_{i=1}^{n} ω1·ω2···ωi·F(xi − 1)·p(x1)···p(xi−1) + ω1·ω2···ωn·p(x1)···p(xn) (14)
The set of weighted distribution functions satisfying equation (14) is called the dynamic weighted model. When ω1 = ω2 = … = ωn = r, F(X, W) = F(X, r). If ω1 = ω2 = … = ωn = r = 1, then F(X, W) = F(X, 1) = F(X).
The iterative equations based on equation (15) are:
Ri = Ri−1·ωi·p(xi) (16)
Li = Li−1 + Ri−1·ωi·F(xi − 1) (17)
Hi = Li + Ri (18)
Then there is theorem 1: when the weight coefficients ωi (i = 1, 2, ..., n) satisfy 0 < ωi ≤ 1, Li ≤ Li+1 < Hi+1 ≤ Hi holds, i.e., the intervals are nested.
The proof of theorem 1 is as follows:
∵ 0 < ωi+1 ≤ 1, and from equations (11) to (13), Ri+1 = Ri·ωi+1·p(xi+1);
∴ 0 < Ri+1 ≤ Ri·p(xi+1);
∵ Li+1 = Li + Ri·ωi+1·F(xi+1 − 1), where Ri·ωi+1·F(xi+1 − 1) ≥ 0;
∴ Li+1 ≥ Li;
∵ Hi+1 = Li+1 + Ri+1, and Ri+1 > 0;
∴ Li+1 < Hi+1;
∵ Hi+1 = Li + Ri·ωi+1·F(xi+1 − 1) + Ri·ωi+1·p(xi+1) ≤ Li + Ri·(F(xi+1 − 1) + p(xi+1));
∵ F(xi+1) = F(xi+1 − 1) + p(xi+1), with F(xi+1) ≤ 1 and ωi+1 ≤ 1;
∴ Hi+1 ≤ Li + Ri = Hi;
If every i has ωi = 1, {F(X, W)} is called the standard model; if every i has 0 < ωi ≤ 1 and some ωi < 1, {F(X, W)} is called a contraction model; if every i has ωi ≥ 1 and some ωi > 1, {F(X, W)} is called an expansion model.
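Theorem 1 — that the intervals nest whenever 0 < ωi ≤ 1 — can be spot-checked numerically. The coefficient sequence, symbol probabilities and input sequence below are illustrative assumptions:

```python
# Verify [L_{i+1}, H_{i+1}) ⊆ [L_i, H_i) for a contraction-model run,
# using the dynamic-model updates of equations (16) and (17).
p = {0: 0.6, 1: 0.4}
F = {-1: 0.0, 0: p[0], 1: 1.0}
W = [1.0, 0.8, 0.7, 1.0, 0.9]   # all weights satisfy 0 < w <= 1
seq = [1, 0, 0, 1, 0]

L, R = 0.0, 1.0
for w, x in zip(W, seq):
    L_prev, H_prev = L, L + R
    L = L + R * w * F[x - 1]    # equation (17) with weight w
    R = R * w * p[x]            # equation (16) with weight w
    assert L_prev <= L and L + R <= H_prev + 1e-12, "intervals must nest"
print("final interval:", L, L + R)
```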
secondly, the information entropy of the weighted probability model;
Let the discrete memoryless source sequence be X = (x1, x2, ..., xn) (xi ∈ A = {0, 1, 2, ..., s}). When the weight coefficient r = 1, p(xi, r) = p(xi). By Shannon's definition of information entropy, the entropy of X is (with logarithm base s + 1):
H(X) = −∑_{a∈A} p(a)·log p(a) (19)
When r ≠ 1, the random variable xi with weighted probability r·p(xi) is defined to have the self-information:
I(xi) = −log(r·p(xi)) = −log r − log p(xi) (20)
Suppose that in the set {xi} (i = 1, 2, ..., n) each symbol a ∈ A appears ca times. When r is known, the total information content of the source sequence X is:
I(X) = −n·log r − ∑_{a∈A} ca·log p(a) (21)
The average information per symbol is then:
I(X)/n = −log r − ∑_{a∈A} (ca/n)·log p(a) (22)
Definition 3: the weighted-model information entropy is (unit: bits/symbol):
H(X, r) = −log r + H(X) (23)
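Definition 3 can be evaluated directly. The sketch below assumes a binary alphabet and base-2 logarithms (illustrative code, not part of the patent):

```python
from math import log2

# Weighted-model information entropy, definition 3:
# H(X, r) = -log2(r) + H(X), in bits/symbol.
def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def weighted_entropy(pmf, r):
    return -log2(r) + entropy(pmf)

uniform = {0: 0.5, 1: 0.5}             # H(X) = 1 bit/symbol
print(weighted_entropy(uniform, 1.0))  # → 1.0 (r = 1 recovers H(X))
print(weighted_entropy(uniform, 0.5))  # → 2.0 (r < 1 raises the cost)
```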
Then there is theorem 3: when the discrete memoryless source sequence X = (x1, x2, ..., xn) (xi ∈ A = {0, 1, 2, ..., s}, i = 1, 2, ..., n) is distortion-free encoded by the weighted probability model, the minimum limit is H(X, rmax) (rmax being the largest usable weight coefficient).
The proof of theorem 3 is as follows:
For any r > rmax, Ln ∈ [Ln, Hn) ∧ Ln ∈ [Ln−1, Hn−1) ∧ … ∧ Ln ∈ [Li, Hi) does not hold, so the sequence X cannot be restored. When 0 < r ≤ 1, −log r ≥ 0 and H(X, r) ≥ H(X); when 1 < r ≤ rmax, −log r < 0 and H(X, r) < H(X). Clearly, the minimum limit is H(X, rmax) = −log rmax + H(X).
Theorem 3 gives the information entropy of the static weighting model. In the dynamic weighting model, when the coefficient sequence W is ═ ω1,ω2,...,ωnWhen known, according to the independent discrete random sequence X, the weighted probability is:
according to the logarithm algorithm, the following can be obtained:
due to the set { xiThe expression "a" refers to a (i: 1, 2.., n; a ∈ A) wherein c is presentaA, so:
obviously, equation (26) can be transformed into:
then, averaging equation (28) to each symbol, then there is:
order to
H (X, W) ═ logr-H (X) is available. When r is less than or equal to rmaxWhen L isn∈[Ln,Hn)∧Ln∈[Ln-1,Hn-1)∧…∧Ln∈[Li,Hi) This is true.
Embodiments of the invention;
According to the theory above: let X be an arbitrary binary sequence of length n, in which the probabilities of symbol 0 and symbol 1 are p(0) and p(1). By information theory, the information entropy of sequence X is H(X) = −p(0)·log2 p(0) − p(1)·log2 p(1). Assuming the length of the result after weighted-probability-model coding is n, there is:
−n·log2 r + n·H(X) = n (32)
which simplifies to:
r = 2^{H(X)−1} (33)
Since 0 ≤ H(X) ≤ 1, 0.5 ≤ r ≤ 1.0. By theorem 2 above, because Ln ∈ [Li, Hi) and the interval [Li, Hi) is unique and corresponds to the symbol xi, lossless coding is possible. Taking figs. 1 and 2 as an example, since Ln falls in the sub-interval of x1, x1 is decoded; x2 is decoded in the same way. Obviously, the weighted-probability-model coding algorithm is lossless. The weighted probability model can therefore transform the binary sequence X losslessly into a binary sequence Y of equal length via equation (32).
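Equation (33) can be evaluated for a concrete sequence. In the sketch below, p(0) and p(1) are estimated empirically from the sequence itself (an illustrative reading; the function name is ours):

```python
from math import log2

# r = 2**(H(X) - 1), equation (33): the weight coefficient under which
# a binary sequence X maps to an equal-length sequence Y.
def weight_coefficient(bits):
    p1 = sum(bits) / len(bits)
    p0 = 1.0 - p1
    h = -sum(p * log2(p) for p in (p0, p1) if p > 0)
    return 2.0 ** (h - 1.0)

X = [0, 0, 0, 1, 0, 0, 1, 0]   # skewed sequence, H(X) < 1
r = weight_coefficient(X)
print(r)                        # falls inside [0.5, 1.0]
```

As the text states, r = 0.5 when H(X) = 0 and r = 1.0 when H(X) = 1, so the result always lies in [0.5, 1.0].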
This yields conclusion 1: the symbols in the binary sequence Y are uniformly distributed, i.e., p(0) = p(1) = 0.5.
The proof of conclusion 1 is as follows: by information theory, the information entropy of sequence Y is H(Y) = −p(0)·log2 p(0) − p(1)·log2 p(1). Since the sequence Y is the result of lossless coding with the weighted probability model, H(Y) equals H(X, r); since the sequences X and Y are of equal length, the total information contents nH(Y) and nH(X, r) are also equal. Since equation (32) is satisfied if and only if H(Y) = 1, it follows from H(Y) = 1 that p(0) = p(1) = 0.5. The probabilities of symbol 0 and symbol 1 in sequence Y are therefore equal, i.e., the symbols are uniformly distributed.
Then, according to conclusion 1, any binary sequence X can be given a weight coefficient by equation (33) and losslessly coded into a completely random binary sequence Y by the weighted probability model, and the binary sequence Y can likewise be losslessly restored to the sequence X. Since the sequence X is known, H(X) is determined and thus r is known; that is, r corresponds to the entropy of the sequence X. Because 0 ≤ H(X) ≤ 1, r ∈ [0.5, 1.0].
If only a binary sequence X of length n is known (the symbols 0 and 1 in sequence X being uniformly distributed), then setting any r ∈ [0.5, 1.0), X can be decoded through the weighted model into a sequence Y of length n. Obviously, different r yield sequences Y with different entropy.
Conclusion 2: given that the symbols in a binary sequence X of length n are uniformly distributed, and that an arbitrarily given weight coefficient r ∈ [0.5, 1.0) is used to decode it through the weighted probability model into a binary sequence Y, the sequence Y must satisfy:
(1) H(Y) < H(X) = 1; (2) the length of sequence Y is n.
The proof of conclusion 2 is as follows:
When the binary sequence X is known, the weight coefficient r corresponds one-to-one with the sequence Y according to conclusion 1. Since r is known, from equation (33):
H(Y) = 1 + log2 r (34)
If r = 1.0: since the completely random binary sequence X has H(X) = 1, equation (33) gives 1 = 2^{H(Y)−1}, so H(Y) = 1 and H(Y) = H(X); the weighted-probability-model coding process is then an equal-entropy transform, so r < 1.0 is required. If 0.5 ≤ r < 1.0, then −1 ≤ log2 r < 0; substituting into equation (34) gives 0 ≤ H(Y) < 1. Then H(Y) < H(X) = 1, and the weighted-probability-model decoding process is an equal-length lossless entropy-reduction transform. Since the weighted probability model is a lossless coding and decoding process, the length of the decoded sequence Y is n.
According to conclusion 2, any binary sequence X whose symbols are uniformly distributed can be regarded as the result of coding some sequence Y with a weighted probability model of known weight coefficient r. When the sequence X is decoded by a weighted probability model with 0.5 ≤ r < 1.0, a binary sequence Y with 0 ≤ H(Y) < 1 is obtained. Then, setting r = 1.0, the sequence Y is weighted-coded to obtain a sequence Z. Since H(Y) < 1, the length of sequence Z must be less than n, as can be demonstrated by Shannon's lossless source-coding theorem.
According to conclusions 1 and 2, and referring to fig. 3, a first embodiment provides a lossless compression method based on the weighted probability model, comprising the following steps:
S100, transforming, by equal-length lossless entropy reduction, a binary sequence X with a sequence length of n and uniformly distributed symbols into a binary sequence Y with a sequence length of n, wherein the first weight coefficient r1 used in the equal-length lossless entropy-reduction process has the value range r1 ∈ [0.5, 1.0).
The specific implementation of step S100 is:
S101, setting initial parameters: R0 = 1, L0 = 0, i = 1, j = 0 and r1, where r1 takes a value in the interval [0.5, 1.0);
The parameter r1 in step S101 can be set arbitrarily in the interval [0.5, 1.0). The choice of r1 determines the compression ratio of the method.
S102, according to the coding formulaLi=Li-1+Ri-1F(xi-1,r1) And Hi=Li+RiCalculating the interval superscript value of the ith symbol 0 in the binary sequence X Wherein xiRepresenting the ith symbol, R, in a binary sequence Xi,Li,HiIn order to encode the parameters of the audio signal,p(xi) Denotes the ith symbol xiThe quality probability function of (a) is,denotes the ith symbol xiA weighted mass probability function of (a);
S103, compare the value of the binary sequence X (read as a binary fraction V) with H_i: if V < H_i, output symbol 0 and set j = j + 1; if V ≥ H_i, output symbol 1;
S104, i = i + 1; if j ≤ n, jump to step S102; if j > n, the binary sequence Y is obtained;
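The loop in steps S101–S104 behaves like arithmetic decoding of X under the weighted model. A minimal sketch, assuming the weighted mass probability is p̃(0, r_1) = r_1 and p̃(1, r_1) = 1 − r_1 (the patent's own definition of p̃ is given elsewhere in the specification and may differ):

```python
from fractions import Fraction

def entropy_reduce(x_bits, r1, n):
    # Interpret the binary sequence X as a fraction V in [0, 1).
    V = Fraction(int("".join(map(str, x_bits)), 2), 2 ** len(x_bits))
    p0 = Fraction(r1)            # assumed: p~(0, r1) = r1
    R, L = Fraction(1), Fraction(0)
    y = []
    for _ in range(n):
        H = L + R * p0           # H_i: upper bound of symbol 0's sub-interval
        if V < H:                # S103: V below H_i -> emit symbol 0
            y.append(0)
            R = R * p0
        else:                    # otherwise emit symbol 1
            y.append(1)
            L = H
            R = R * (1 - p0)
    return y
```

With r_1 = 0.5 the weighted model degenerates to the uniform one and the transform is the identity, which gives a quick sanity check.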
S200, losslessly compress the binary sequence Y into the binary sequence Z through the weighted probability model, where the second weight coefficient r_2 used by the weighted probability model takes the value r_2 = 1.
The specific implementation of step S200 is:
S201, set the initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0, V = 0 and r_2 = 1;
For convenience of calculation and description, a parameter V is added in step S201. The initial value of V is 0, and V is used to record the value of L_i produced by the weighted-model encoding.
S202, coding the ith symbol in the binary sequence Y, and if the ith symbol is a symbol 0, entering the step S203; if the ith symbol in the binary sequence Y is symbol 1, jumping to step S204;
S203, according to the coding formulas R_i = R_{i-1}·p(y_i) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_2), calculate the values of R_i and L_i: here R_i = R_{i-1}·p(0) and, since F(−1) = 0, L_i = L_{i-1}; set i = i + 1 and jump to step S205;
S204, according to the coding formulas R_i = R_{i-1}·p(y_i) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_2), calculate the values of R_i and L_i: here R_i = R_{i-1}·p(1) and, since F(0) = p(0), L_i = L_{i-1} + R_{i-1}·p(0); set i = i + 1;
S205, if i ≤ n, jump to step S202; if i > n, set V = L_n, end the coding and output V, which is the binary sequence Z.
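Steps S201–S205 are the interval recurrences of a standard arithmetic coder (setting r_2 = 1 removes the weighting). A sketch under the assumption that the mass probability p is estimated from the symbol counts of Y; the patent leaves the choice of p implicit:

```python
from fractions import Fraction

def compress_y(y_bits):
    n = len(y_bits)
    p0 = Fraction(y_bits.count(0), n)   # assumed: p(0) from symbol counts
    p1 = 1 - p0
    R, L = Fraction(1), Fraction(0)
    for b in y_bits:
        if b == 0:
            R = R * p0                  # S203: F(-1) = 0, so L is unchanged
        else:
            L = L + R * p0              # S204: F(0) = p(0)
            R = R * p1
    return L                            # S205: V = L_n is the codeword Z
```

The exact rational L_n identifies the sub-interval assigned to Y; in a practical coder it would be emitted as a truncated binary fraction.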
In the lossless compression method based on the weighted probability model provided in this embodiment, the sequence X is first transformed, based on conclusion two above, into the equal-length sequence Y by lossless entropy reduction, and the sequence Y is then losslessly compressed into the sequence Z through the weighted probability model (r_2 = 1).
The lossless compression method based on the weighted probability model provided by the embodiment has the beneficial effects that:
According to the maximum entropy theorem, uniformly distributed binary sequences cannot be further compressed losslessly, and existing entropy coding complies with this theorem. The core of this embodiment is the equal-length lossless entropy reduction transform: because the uniformly distributed binary sequence (the binary sequence X above) first undergoes equal-length lossless entropy reduction (yielding the binary sequence Y), the uniformly distributed sequence can then be losslessly compressed. The compression ratio is given by equation (34) and is related to the user-defined weight coefficient. Obviously, the compression ratio of the method is superior to that of existing entropy coding algorithms. In addition, the method is a bit-by-bit coding and decoding process, requires no large hardware cache, and can be processed in parallel in segments, so it consumes few hardware resources. The main application scenarios of the method include recompressing already-compressed images, video and files, with the compression ratio customizable through the weight coefficient (corresponding to r_1).
Referring to fig. 4, based on the first embodiment, a second embodiment further provides a weighted probability model-based lossless decompression method. It should be noted that this method is the inverse process of the weighted probability model-based lossless compression method provided in the first embodiment, and both are based on the same inventive concept. The weighted probability model-based lossless decompression method includes the following steps:
s300, lossless decompression is carried out on the binary sequence Z into the binary sequence Y through a weighted probability model.
The specific implementation manner of step S300 is:
S301, set the initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0 and V;
The parameter V in step S301 is known and corresponds to V in step S205 of the first embodiment. The length of the binary sequence Z is also known.
S302, according to the coding formulas R_i = R_{i-1}·p̃(z_i, r_2), L_i = L_{i-1} + R_{i-1}·F(z_i − 1, r_2) and H_i = L_i + R_i, calculate the interval upper-bound value H_i of the ith symbol 0 in the binary sequence Z;
S303, compare V with H_i: if V < H_i, output symbol 0 and set j = j + 1; if V ≥ H_i, output symbol 1;
S304, i = i + 1; if j is less than or equal to the sequence length of the binary sequence Z, jump to step S302; if j is greater than the sequence length of the binary sequence Z, the binary sequence Y is obtained.
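Steps S301–S304 mirror the encoder: at each position the codeword V is compared against the upper bound H_i of symbol 0's sub-interval. A sketch, assuming the decoder knows p(0) (written p0 below) along with V and the output length n:

```python
from fractions import Fraction

def decompress_z(V, n, p0):
    R, L = Fraction(1), Fraction(0)
    y = []
    for _ in range(n):
        H = L + R * p0        # S302: H_i, upper bound of the 0 branch
        if V < H:             # S303: V below H_i -> symbol 0
            y.append(0)
            R = R * p0
        else:                 # otherwise symbol 1
            y.append(1)
            L = H
            R = R * (1 - p0)
    return y
```

For example, decoding the codeword V = 1/2 for two symbols at p0 = 1/2 recovers the sequence [1, 0], matching the encoder recurrences of the first embodiment.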
S400, transform the binary sequence Y into the binary sequence X by equal-length lossless entropy increase.
The specific implementation of step S400 is:
S401, set the initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0, V_1 = 0 and r_1;
The parameter r_1 in step S401 is known and corresponds to r_1 in step S101 of the first embodiment. The sequence length of the binary sequence Y is known to be n. For convenience of calculation and description, a parameter V_1 is added in step S401; its initial value is 0, and V_1 is used to record the value of L_i produced by the weighted-model encoding.
S402, coding the ith symbol in the binary sequence Y, and if the ith symbol is a symbol 0, entering the step S403; if the ith symbol in the binary sequence Y is symbol 1, jumping to step S404;
S403, according to the coding formulas R_i = R_{i-1}·p̃(y_i, r_1) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_1), calculate the values of R_i and L_i: since F(−1) = 0, L_i = L_{i-1}; set i = i + 1 and jump to step S405;
S404, according to the coding formulas R_i = R_{i-1}·p̃(y_i, r_1) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_1), calculate the values of R_i and L_i: since F(0, r_1) = p̃(0, r_1), L_i = L_{i-1} + R_{i-1}·p̃(0, r_1); set i = i + 1 and proceed to step S405;
S405, if i ≤ n, jump to step S402; if i > n, set V_1 = L_n (V_1 = X), end the encoding and complete the entropy-increasing transform. V_1 is the binary sequence X.
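Steps S401–S405 re-encode Y under the weighted model, and L_n, read as a binary fraction, is taken as X. A sketch with the same assumed weighting p̃(0, r_1) = r_1 as in the entropy-reduction sketch (the patent's actual p̃ may differ):

```python
from fractions import Fraction

def entropy_increase(y_bits, r1):
    p0 = Fraction(r1)                 # assumed: p~(0, r1) = r1
    R, L = Fraction(1), Fraction(0)
    for b in y_bits:
        if b == 0:
            R = R * p0                # S403: F(-1) = 0 leaves L unchanged
        else:
            L = L + R * p0            # S404: F(0, r1) = p~(0, r1)
            R = R * (1 - p0)
    return L                          # S405: V_1 = L_n, the value of X
```

Paired with the entropy-reduction sketch at r_1 = 0.5, this returns exactly the binary fraction of the original X, illustrating the inverse relationship between S100 and S400.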
The lossless decompression method based on the weighted probability model provided in this embodiment is the inverse process of the method of the first embodiment and mainly includes: first, the sequence Y is decoded from the sequence Z; then, based on the above conclusion, the sequence Y is transformed into the sequence X by the equal-length lossless entropy-increasing transform.
Since the method of this embodiment is the inverse process of the method of the first embodiment and both share the same inventive concept, its beneficial effects are not described again here.
Referring to fig. 5, a third embodiment of the present invention provides an encoding device, which may be any type of smart terminal, such as a mobile phone, a tablet computer, a personal computer, etc. Specifically, the encoding device includes: one or more control processors and memory, here exemplified by a control processor. The control processor and the memory may be connected by a bus or other means, here exemplified by a connection via a bus.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the encoding devices in the embodiments of the present invention. The control processor implements the weighted probability model based lossless compression method described in the first embodiment above and/or the weighted probability model based lossless decompression method described in the second embodiment above by executing the non-transitory software programs, instructions, and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function. The memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory located remotely from the control processor, and these remote memories may be connected to the encoding device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and, when executed by the one or more control processors, perform the weighted probability model based lossless compression method of the first embodiment described above and/or the weighted probability model based lossless decompression method of the second embodiment described above.
In a fourth embodiment of the present invention, a computer-readable storage medium is provided, which stores computer-executable instructions for one or more control processors to perform the weighted probability model-based lossless compression method according to the first embodiment and/or the weighted probability model-based lossless decompression method according to the second embodiment.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a general hardware platform. Those skilled in the art will appreciate that all or part of the processes in the methods for implementing the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes in the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (8)
1. A lossless compression method based on a weighted probability model is characterized by comprising the following steps:
transforming, by equal-length lossless entropy reduction, a binary sequence X with a sequence length of n and uniformly distributed symbols into a binary sequence Y with a sequence length of n, wherein a first weight coefficient r_1 used in the equal-length lossless entropy reduction process has the value range r_1 ∈ [0.5, 1.0);
losslessly compressing the binary sequence Y into a binary sequence Z through a weighted probability model, wherein a second weight coefficient r_2 used by the weighted probability model takes the value r_2 = 1.
2. The weighted probability model-based lossless compression method according to claim 1, wherein the equal-length lossless entropy reduction process includes:
S101, setting initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0 and said r_1, said r_1 taking any value in the interval [0.5, 1.0);
S102, according to the coding formulas R_i = R_{i-1}·p̃(x_i, r_1), L_i = L_{i-1} + R_{i-1}·F(x_i − 1, r_1) and H_i = L_i + R_i, calculating the interval upper-bound value H_i of the ith symbol 0 in the binary sequence X, wherein said x_i represents the ith symbol in the binary sequence X, said R_i, L_i and H_i are coding parameters, said p(x_i) denotes the mass probability function of the ith symbol x_i, and said p̃(x_i, r_1) denotes the weighted mass probability function of the ith symbol x_i;
s104, if j is equal to or less than n, jumping to the step S102; if j is more than n, obtaining the binary sequence Y.
3. The weighted probability model-based lossless compression method according to claim 2, wherein the binary sequence Y is lossless compressed into the binary sequence Z by the weighted probability model, comprising the steps of:
S201, setting initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0 and said r_2 = 1;
S202, coding the ith symbol in the binary sequence Y, and if the ith symbol is a symbol 0, entering the step S203; if the ith symbol in the binary sequence Y is symbol 1, jumping to step S204;
S203, according to the coding formulas R_i = R_{i-1}·p(y_i) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_2), calculating R_i and L_i; i = i + 1; going to step S205; wherein said y_i represents the ith symbol in the binary sequence Y;
S205, if i is less than or equal to n, jumping to S202; if i is more than n, obtaining the binary sequence Z.
4. A lossless decompression method based on a weighted probability model is applied to the lossless compression method based on the weighted probability model in claim 1, and comprises the following steps:
lossless decompressing the binary sequence Z into the binary sequence Y through a weighted probability model;
transforming the binary sequence Y into the binary sequence X by equal-length lossless entropy increase.
5. The weighted probability model-based lossless decompression method according to claim 4, wherein transforming the binary sequence Y into the binary sequence X by equal-length lossless entropy increase comprises:
S401, setting initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0 and said r_1;
S402, coding the ith symbol in the binary sequence Y, and if the ith symbol is a symbol 0, entering the step S403; if the ith symbol in the binary sequence Y is symbol 1, jumping to step S404;
S403, according to the coding formulas R_i = R_{i-1}·p̃(y_i, r_1) and L_i = L_{i-1} + R_{i-1}·F(y_i − 1, r_1), calculating R_i and L_i; i = i + 1; jumping to step S405;
S405, if i is not more than n, jumping to the step S402; if i is more than n, obtaining the binary sequence X.
6. The weighted probability model-based lossless decompression method according to claim 5, wherein the binary sequence Z is decompressed losslessly into the binary sequence Y by a weighted probability model, comprising the steps of:
S301, setting initial parameters: R_0 = 1, L_0 = 0, i = 1, j = 0 and said r_2;
S302, according to the coding formulas R_i = R_{i-1}·p̃(z_i, r_2), L_i = L_{i-1} + R_{i-1}·F(z_i − 1, r_2) and H_i = L_i + R_i, calculating the interval upper-bound value H_i of the ith symbol 0 in the binary sequence Z, wherein said z_i represents the ith symbol in the binary sequence Z;
s304, i ═ i + 1; if j is less than or equal to the sequence length of the binary sequence Z, jumping to step S302; and if j is larger than the sequence length of the binary sequence Z, obtaining the binary sequence Y.
7. An encoding device, characterized by comprising: at least one control processor and a memory for communicative connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the weighted probability model based lossless compression method of any one of claims 1 to 3 and/or the weighted probability model based lossless decompression method of any one of claims 4 to 6.
8. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the weighted probability model based lossless compression method of any one of claims 1 to 3 and/or the weighted probability model based lossless decompression method of any one of claims 4 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011577534.2A CN112821894A (en) | 2020-12-28 | 2020-12-28 | Lossless compression method and lossless decompression method based on weighted probability model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112821894A true CN112821894A (en) | 2021-05-18 |
Family
ID=75854165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011577534.2A Pending CN112821894A (en) | 2020-12-28 | 2020-12-28 | Lossless compression method and lossless decompression method based on weighted probability model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112821894A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020167429A1 (en) * | 2001-03-20 | 2002-11-14 | Dae-Soon Kim | Lossless data compression method for uniform entropy data |
US20170347100A1 (en) * | 2016-05-28 | 2017-11-30 | Microsoft Technology Licensing, Llc | Region-adaptive hierarchical transform and entropy coding for point cloud compression, and corresponding decompression |
CN109565596A (en) * | 2016-05-12 | 2019-04-02 | 交互数字Vc控股公司 | The method and apparatus of the binary symbol sequence of syntactic element relevant to video data are indicated for context adaptive binary arithmetic coding |
CN110635807A (en) * | 2019-08-05 | 2019-12-31 | 湖南瑞利德信息科技有限公司 | Data coding method and decoding method |
CN112039531A (en) * | 2020-08-26 | 2020-12-04 | 湖南遥昇通信技术有限公司 | Jielin code error correction optimization method and device |
Non-Patent Citations (1)
Title |
---|
LIU Haishan: "遥感图像压缩技术的研究" [Research on remote sensing image compression technology], 《测绘与空间地理信息》 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113556381A (en) * | 2021-06-15 | 2021-10-26 | 湖南幻影三陆零科技有限公司 | HTTP request optimization method, terminal, and storage medium |
CN113556381B (en) * | 2021-06-15 | 2022-09-30 | 湖南幻影三陆零科技有限公司 | Optimization method of HTTP request, terminal and storage medium |
CN113486369A (en) * | 2021-06-23 | 2021-10-08 | 湖南遥昇通信技术有限公司 | Encoding method, apparatus, device and medium with symmetric encryption and lossless compression |
CN113486369B (en) * | 2021-06-23 | 2022-07-22 | 湖南遥昇通信技术有限公司 | Encoding method, apparatus, device and medium with symmetric encryption and lossless compression |
CN113922947A (en) * | 2021-09-18 | 2022-01-11 | 湖南遥昇通信技术有限公司 | Adaptive symmetric coding method and system based on weighted probability model |
CN113922947B (en) * | 2021-09-18 | 2023-11-21 | 湖南遥昇通信技术有限公司 | Self-adaptive symmetrical coding method and system based on weighted probability model |
CN113938273A (en) * | 2021-09-30 | 2022-01-14 | 湖南遥昇通信技术有限公司 | Symmetric encryption method and system capable of resisting vector parallel computing attack |
CN113938273B (en) * | 2021-09-30 | 2024-02-13 | 湖南遥昇通信技术有限公司 | Symmetric encryption method and system capable of resisting quantitative parallel computing attack |
CN113887989A (en) * | 2021-10-15 | 2022-01-04 | 中国南方电网有限责任公司超高压输电公司柳州局 | Power system reliability evaluation method and device, computer equipment and storage medium |
CN113887989B (en) * | 2021-10-15 | 2024-01-16 | 中国南方电网有限责任公司超高压输电公司柳州局 | Power system reliability evaluation method, device, computer equipment and storage medium |
CN114039718A (en) * | 2021-10-18 | 2022-02-11 | 湖南遥昇通信技术有限公司 | Hash coding method and system of self-adaptive weighted probability model |
CN114039718B (en) * | 2021-10-18 | 2023-12-19 | 湖南遥昇通信技术有限公司 | Hash coding method and system of self-adaptive weighted probability model |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210518 |