CN112188198B - Image data compression and decompression method and system - Google Patents


Info

Publication number
CN112188198B
CN112188198B (application CN202011016837.7A)
Authority
CN
China
Prior art keywords
sequence
parameter
weighted
bit
symbol
Prior art date
Legal status
Active
Application number
CN202011016837.7A
Other languages
Chinese (zh)
Other versions
CN112188198A (en)
Inventor
王杰林 (Wang Jielin)
Current Assignee
Hunan Yaosheng Communication Technology Co ltd
Original Assignee
Hunan Yaosheng Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Yaosheng Communication Technology Co., Ltd.
Priority to CN202011016837.7A
Publication of CN112188198A
Application granted
Publication of CN112188198B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Abstract

The invention discloses a method and a system for compressing and decompressing image data. The method comprises: a compression step of reading image data, quantizing the image data based on a preset strategy to obtain a corresponding quantized sequence, obtaining a first bit length of the quantized sequence, counting a first number of symbols 0 in the quantized sequence, encoding the quantized sequence based on a weighted expansion probability model to obtain compressed data, and storing the first bit length, the first number and the compressed data in a compressed file; and a decoding step of reading the compressed file, acquiring the first bit length, the first number and the compressed data, and decoding the compressed data based on the weighted expansion probability model to obtain decoded data. The invention has at least the following beneficial effects: the image data is compressed by the weighted expansion probability model, which is simple to implement, supports both lossless and lossy compression, improves the compression ratio of the image data, and facilitates the storage and management of images.

Description

Image data compression and decompression method and system
Technical Field
The present invention relates to the field of data compression technologies, and in particular, to a method and a system for compressing and decompressing image data.
Background
With the rapid development of internet technology, video surveillance applications are increasing, and multimedia and streaming media are widely deployed. These industries generate large amounts of image, video and audio data, and at their core lie methods for compressing and decompressing image data. How to increase the compression ratio of image data has therefore become an important issue.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides an image data compression and decompression method which can effectively improve the compression ratio of the image data.
The invention also provides an image data compression and decompression system with the image data compression and decompression method.
The image data compression and decompression method according to the embodiment of the first aspect of the invention comprises the following steps: a compression step, reading image data, quantizing the image data based on a preset strategy to obtain a corresponding quantized sequence, obtaining a first bit length of the quantized sequence, counting a first number of symbols 0 in the quantized sequence, encoding the quantized sequence based on a weighted expansion probability model to obtain compressed data, and storing the first bit length, the first number and the compressed data in a compressed file; and a decoding step of reading the compressed file, acquiring the first bit length, the first quantity and the compressed data, and decoding the compressed data based on the weighted expansion probability model to obtain decoded data.
The image data compression and decompression method according to the embodiment of the invention has at least the following beneficial effects: the image data is compressed through the weighted expansion probability model, the implementation is simple, lossless compression and lossy compression can be supported, the compression ratio of the image data is improved, and the storage management of the image is facilitated.
According to some embodiments of the invention, the method of quantizing the image data based on the preset strategy comprises: acquiring all first sequences within a single byte that meet a preset limiting condition, and obtaining a quantization parameter from the number of first sequences:

$$Q = \frac{m}{256}$$

wherein m is the number of first sequences, and the preset limiting condition is an upper limit t on the number of consecutive occurrences of the symbol 1 in a binary sequence; and quantizing the image data byte by byte according to the quantization parameter Q to obtain the corresponding quantized sequence.
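The byte-wise quantization above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes the "first sequences" are the 8-bit values whose binary form never contains more than t consecutive 1s, and that the quantization parameter is Q = m/256 (the patent's equation image for Q is not legible in this source); the function names are illustrative.

```python
# Sketch of byte-wise quantization under the run-length constraint t.
# Assumptions: "first sequences" = 8-bit values with at most t consecutive
# 1 bits; quantization parameter Q = m / 256 (reconstructed, not verbatim).

def first_sequences(t: int) -> list[int]:
    """All 8-bit values whose binary form has at most t consecutive 1 bits."""
    def ok(v: int) -> bool:
        run = 0
        for i in range(8):
            run = run + 1 if (v >> i) & 1 else 0
            if run > t:
                return False
        return True
    return [v for v in range(256) if ok(v)]

def quantize_byte(x: int, t: int) -> int:
    """Map a byte onto an admissible value by multiplying by Q = m / 256."""
    codes = first_sequences(t)
    q = len(codes) / 256.0      # assumed quantization parameter Q
    return codes[int(x * q)]    # the result always satisfies the constraint
```

For t = 1 there are 55 admissible byte values (cf. FIG. 10), so Q ≈ 0.215, and every quantized byte is guaranteed to contain no two adjacent 1 bits.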
According to some embodiments of the invention, the method of quantizing the image data based on the preset strategy comprises: for each component of the color space of the image data, acquiring K bit-plane data in order from the most significant bit to the least significant bit, and performing constraint processing under a preset limiting condition to obtain the corresponding quantized data, wherein the preset limiting condition is an upper limit t on the number of consecutive occurrences of the symbol 1 in a binary sequence, and K is an integer greater than 1.
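One way to realize the constraint processing — assumed here, following the detailed description's remark that a 0 may be added after t consecutive occurrences of the symbol 1 — is bit stuffing: insert a forced 0 after every run of t ones, and strip those forced 0s again on reconstruction. A sketch with illustrative function names:

```python
# Sketch of constraint processing by bit stuffing: after every run of
# t consecutive 1s a 0 is inserted, so the output never violates the
# limiting condition; the inverse removes exactly those forced 0s.

def constrain(bits: list[int], t: int) -> list[int]:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == t:          # run limit reached: force a 0
            out.append(0)
            run = 0
    return out

def unconstrain(bits: list[int], t: int) -> list[int]:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == t:          # the next 0 was stuffed by constrain(): skip it
            i += 1
            run = 0
        i += 1
    return out
```

The stuffing slightly lengthens the sequence, but the resulting stream satisfies the run-length condition required by the weighted model, and the original bits are recovered exactly.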
According to some embodiments of the invention, the color space comprises: one of RGB, YUV, DCT-transformed coefficients, or wavelet-transformed coefficients.
According to some embodiments of the invention, the weighted expansion probability model comprises:

$$H_n = F(X, r)$$

$$L_n = H_n - r^n \prod_{i=1}^{n} p(x_i)$$

$$R_n = r^n \prod_{i=1}^{n} p(x_i)$$

wherein F(X, r) represents the weighted cumulative distribution function of the sequence X, with F(X, r) = rF(X); r represents a weight coefficient, with r > 1; p(x_i) denotes the probability mass function of the value x_i; and the functions satisfy the iteration

$$R_i = R_{i-1}\, r\, p(x_i), \qquad L_i = L_{i-1} + R_{i-1}\, r\, F(x_i - 1).$$
According to some embodiments of the invention, the compressing step further comprises the steps of: S100, obtaining a first probability of occurrence of the symbol 0 in the quantized sequence from the first number and the first bit length; S200, obtaining the maximum weighting coefficient r_max of the weighted expansion probability model from the first probability based on a first formula, the first formula being:

$$p(0)\sum_{k=0}^{t} r_{\max}^{\,k+1}\, p(1)^{k} = 1$$

wherein p(0) is the first probability, p(1) = 1 − p(0), and t is the preset upper limit on consecutive occurrences of the symbol 1; and S300, based on the weighted expansion probability model, encoding the quantized sequence according to the first probability and the maximum weighting coefficient to obtain the compressed data.
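The maximum weighting coefficient can be obtained numerically. The sketch below assumes the first formula is the root condition p(0)·Σ_{k=0}^{t} r^(k+1)·p(1)^k = 1, a reconstruction that is consistent with the t–r_max table of FIG. 7 (it reproduces r_max ≈ 1.03758 for p(0) = p(1) = 0.5 and t = 3); the function name is illustrative.

```python
# Bisection for r_max, assuming the root condition
#   p(0) * sum_{k=0}^{t} r^(k+1) * p(1)^k = 1
# (reconstructed; reproduces FIG. 7's value 1.03758 for p0 = 0.5, t = 3).
# The left-hand side increases with r, so bisection applies.

def max_weight(p0: float, t: int) -> float:
    p1 = 1.0 - p0
    def lhs(r: float) -> float:
        return p0 * sum(r ** (k + 1) * p1 ** k for k in range(t + 1))
    lo, hi = 1.0, 4.0        # lhs(1) = 1 - p1**(t+1) < 1 brackets the root
    for _ in range(100):     # bisect to double precision
        mid = 0.5 * (lo + hi)
        if lhs(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A larger t admits longer 1-runs and therefore a smaller admissible expansion, so r_max decreases as t grows.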
According to some embodiments of the invention, step S300 further comprises: S310, initializing

$$\bar p(0) = r_{\max}\, p(0), \qquad \bar p(1) = r_{\max}\, p(1), \qquad H_0 = R_0 = 1.0, \qquad L_0 = 0,$$

and traversing the bits of the quantized sequence, wherein $\bar p(0)$ represents the weighted probability of occurrence of the symbol 0, p(0) represents the first probability, r_max represents the maximum weighting coefficient, $\bar p(1)$ represents the weighted probability of occurrence of the symbol 1, H_0 represents the initial value of the weighted cumulative distribution function, and R_0, L_0 respectively represent the initial values of the first parameter and the second parameter; S320, if the value of the current bit of the quantized sequence is 0, obtaining the first and second parameters according to

$$R_i = R_{i-1}\,\bar p(0), \qquad L_i = L_{i-1},$$

wherein R_i and L_i respectively represent the first and second parameters corresponding to the i-th bit, and R_{i−1} and L_{i−1} respectively represent the first and second parameters corresponding to the (i−1)-th bit; S330, if the value of the current bit of the quantized sequence is 1, obtaining the first and second parameters according to

$$R_i = R_{i-1}\,\bar p(1), \qquad L_i = L_{i-1} + R_{i-1}\,\bar p(0),$$

wherein $\bar p(1) = r_{\max}\, p(1)$ and p(1) represents the probability of occurrence of the symbol 1; and S340, upon completing the traversal, taking L_n as the compressed data, where n is the first bit length.
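Steps S310 to S340 can be sketched in a few lines. This is a minimal illustration with hypothetical names (weighted_encode, wp0, wp1), assuming binary symbols and the update rules just stated:

```python
# Sketch of encoding steps S310-S340 for a binary quantized sequence.
# p0 and r_max are inputs obtained as in steps S100-S200.

def weighted_encode(bits: list[int], p0: float, r_max: float):
    """Return (L_n, R_n); any value in [L_n, L_n + R_n) encodes `bits`."""
    wp0 = r_max * p0          # weighted probability of symbol 0
    wp1 = r_max * (1.0 - p0)  # weighted probability of symbol 1
    R, L = 1.0, 0.0           # S310: H0 = R0 = 1.0, L0 = 0
    for b in bits:
        if b == 0:            # S320: keep the lower end, shrink by wp0
            R, L = R * wp0, L
        else:                 # S330: skip past symbol 0's subinterval
            R, L = R * wp1, L + R * wp0
    return L, R               # S340: L_n is the compressed value
```

Because r_max > 1, each step shrinks the interval by less than an ordinary arithmetic coder would, which is where the gain in compression ratio comes from for constraint-satisfying sequences.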
According to some embodiments of the invention, the decoding step comprises: reading the compressed file, acquiring the first bit length, the first number and the compressed data, and obtaining the first probability from the first number and the first bit length; initializing

$$\bar p(0) = r_{\max}\, p(0), \qquad \bar p(1) = r_{\max}\, p(1), \qquad H_0 = R_0 = 1.0, \qquad L_0 = 0;$$

sequentially calculating, for each bit within the first bit length, the value H_i of the weighted cumulative distribution function:

$$H_i = L_{i-1} + R_{i-1}\, r_{\max}\, p(0)$$

wherein i denotes the i-th bit; comparing H_i with the compressed data value, obtaining the decoded symbol corresponding to the i-th bit from the comparison result, and obtaining, based on the weighted expansion probability model and the comparison result, the first parameter R_i and the second parameter L_i corresponding to the i-th bit; and combining the decoded symbols in order to obtain the decoded data.
According to some embodiments of the present invention, deriving the decoded symbol, the first parameter R_i and the second parameter L_i from the comparison result comprises: if the compressed data value is less than H_i, the decoded symbol is 0, with

$$R_i = R_{i-1}\, r_{\max}\, p(0) \qquad \text{and} \qquad L_i = L_{i-1};$$

otherwise the decoded symbol is 1, with

$$R_i = R_{i-1}\, r_{\max}\, p(1) \qquad \text{and} \qquad L_i = L_{i-1} + R_{i-1}\, r_{\max}\, p(0).$$
The image data compression and decompression system according to the second aspect of the present invention includes: the image processing device comprises a compression module, a compression module and a processing module, wherein the compression module is used for reading image data, quantizing the image data based on a preset strategy to obtain a corresponding quantization sequence, obtaining a first bit length of the quantization sequence, counting a first number of symbols 0 in the quantization sequence, coding the quantization sequence based on a weighted expansion probability model to obtain compressed data, and storing the first bit length, the first number and the compressed data into a compressed file; and the decoding module is used for reading the compressed file, acquiring the first bit length, the first quantity and the compressed data, and decoding the compressed data based on the weighted expansion probability model to obtain decoded data.
The image data compression and decompression system according to the embodiment of the invention has at least the following beneficial effects: the image data is compressed by the weighted expansion probability model, which is simple to implement, supports both lossless and lossy compression, improves the compression ratio of the image data, and facilitates the storage and management of images.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a compression process of a quantization sequence in the method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a process of decompressing compressed data according to an embodiment of the present invention;
FIG. 4 is a block diagram of the system according to an embodiment of the present invention.
FIG. 5 illustrates the weighted distribution function F(X, r) when n = 1 and x_1 = k, k = 0, 1, ...;
FIG. 6 illustrates the weighted distribution function F(X, r) when n = 2, with x_1 known and x_2 = k, k = 0, 1, ...;
FIG. 7 is a table of the relationship between t and r_max when p(0) = p(1) in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a process of weighted coding of a binary sequence X in the method according to the embodiment of the present invention;
FIG. 9 is a schematic diagram of a process of weighted decoding of a binary sequence X in the method according to an embodiment of the present invention;
fig. 10 is a table listing the values, among those of a single byte (i.e., 0 to 255), that satisfy t = 1;
fig. 11 is a diagram illustrating the effect of lossy compression and decompression on an image according to the method of the embodiment of the present invention.
Reference numerals:
a compression module 100 and a decoding module 200.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including the stated number. Where "first" and "second" are described, this is only for the purpose of distinguishing technical features and is not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
The method of the embodiment of the invention, referring to fig. 1, comprises the following steps: a compression step, reading image data, quantizing the image data based on a preset strategy to obtain a corresponding quantized sequence, obtaining a first bit length of the quantized sequence, counting a first number of symbols 0 in the quantized sequence, coding the quantized sequence based on a weighted expansion probability model to obtain compressed data, and storing the first bit length, the first number and the compressed data into a compressed file; and a decoding step, namely reading the compressed file, acquiring the first bit length, the first quantity and the compressed data, and decoding the compressed data based on the weighted expansion probability model to obtain decoded data.
The compression step in the method of the present embodiment first requires quantizing the image data so that the quantized binary sequence (the quantized sequence for short) satisfies the limiting condition "an upper limit t on the number of consecutive occurrences of the symbol 1 in the binary sequence"; once the condition is satisfied, the quantized sequence can be losslessly compressed and decompressed. Quantization may be performed on the bytes of the image data, or on its bit planes. In some embodiments of the present invention, all first sequences within a single byte that meet the preset limiting condition are obtained, the number m of first sequences is obtained, and the quantization parameter is calculated as

$$Q = \frac{m}{256}$$
The image data is then quantized byte by byte according to the quantization parameter Q, yielding the corresponding quantized sequence. For example, if a certain byte of the image data is X, multiplying it by Q yields the corresponding quantization result. Obviously, the byte-wise compression method of the invention can also be used to compress text data and audio data. In other embodiments of the present invention, based on the color space of the image or video, K bit-plane data are acquired for each component of the color space (RGB, YUV, DCT-transformed coefficients or wavelet-transformed coefficients) in order from the most significant bit to the least significant bit, and are constrained by the above limiting condition "an upper limit t on the number of consecutive occurrences of the symbol 1 in the binary sequence"; for example, a 0 may be added after t consecutive occurrences of the symbol 1, thereby obtaining the corresponding quantized data, where K is an integer greater than 1. When K equals 8, i.e., the full bit length of a byte, the compression is lossless; otherwise it is lossy. The process of compressing the quantized data, referring to FIG. 2, comprises: obtaining, from the first number c_n and the first bit length n, the first probability p(0) = c_n / n of occurrence of the symbol 0 in the quantized sequence; and obtaining, based on the first formula (i.e., formula (3-9)), the maximum weighting coefficient r_max of the weighted expansion probability model from the first probability p(0), the first formula being:

$$p(0)\sum_{k=0}^{t} r_{\max}^{\,k+1}\, p(1)^{k} = 1$$
wherein p(0) is the first probability. Based on the weighted expansion probability model, the quantized sequence is encoded according to the first probability p(0) and the maximum weighting coefficient r_max to obtain the compressed data. First, initialize

$$\bar p(0) = r_{\max}\, p(0), \qquad \bar p(1) = r_{\max}\, p(1), \qquad H_0 = R_0 = 1.0, \qquad L_0 = 0,$$

and traverse each bit of the quantized sequence, wherein $\bar p(0)$ represents the weighted probability of occurrence of the symbol 0, p(0) represents the first probability, r_max represents the maximum weighting coefficient, $\bar p(1)$ represents the weighted probability of occurrence of the symbol 1, H_0 represents the initial value of the weighted cumulative distribution function, and R_0, L_0 respectively represent the initial values of the first parameter and the second parameter. If the value of the current bit of the quantized sequence is 0, the first and second parameters are obtained from

$$R_i = R_{i-1}\,\bar p(0), \qquad L_i = L_{i-1},$$

wherein R_i and L_i respectively represent the first and second parameters corresponding to the i-th bit, and R_{i−1} and L_{i−1} those corresponding to the (i−1)-th bit. If the value of the current bit of the quantized sequence is 1, the first and second parameters are obtained from

$$R_i = R_{i-1}\,\bar p(1), \qquad L_i = L_{i-1} + R_{i-1}\,\bar p(0),$$

where p(1) represents the probability of occurrence of the symbol 1. It should be understood that the values of the first parameter R_i are not written out in FIG. 2; this does not mean that embodiments of the present invention do not process the first parameter R_i. After the traversal is completed, L_n is taken as the compressed data and is saved, together with the first number c_n and the first bit length n, in the corresponding compressed file.
Referring to fig. 3, the decoding process in the method according to the embodiment of the present invention comprises: reading the compressed file, acquiring the first bit length, the first number and the compressed data, and obtaining the first probability from the first number and the first bit length; initializing

$$\bar p(0) = r_{\max}\, p(0), \qquad \bar p(1) = r_{\max}\, p(1), \qquad H_0 = R_0 = 1.0, \qquad L_0 = 0;$$

sequentially calculating, for each bit within the first bit length, the value H_i of the weighted cumulative distribution function:

$$H_i = L_{i-1} + R_{i-1}\, r_{\max}\, p(0)$$

wherein i denotes the i-th bit; comparing H_i with the compressed data value: if the compressed data value is less than H_i, the decoded symbol is 0, with $R_i = R_{i-1}\, r_{\max}\, p(0)$ and $L_i = L_{i-1}$; otherwise the decoded symbol is 1, with $R_i = R_{i-1}\, r_{\max}\, p(1)$ and $L_i = L_{i-1} + R_{i-1}\, r_{\max}\, p(0)$; and combining the decoded symbols in order to obtain the decoded data.
The apparatus of the embodiment of the present invention, referring to fig. 4, comprises: a compression module 100 configured to read image data, quantize the image data based on a preset strategy to obtain a corresponding quantized sequence, obtain a first bit length of the quantized sequence, count a first number of symbols 0 in the quantized sequence, encode the quantized sequence based on the weighted expansion probability model to obtain compressed data, and store the first bit length, the first number and the compressed data in a compressed file; and a decoding module 200 configured to read the compressed file, acquire the first bit length, the first number and the compressed data, and decode the compressed data based on the weighted expansion probability model to obtain decoded data.
The following will give a derivation of the theoretical basis of the embodiment of the present invention, and a description will be given of some embodiments.
Let X = {x_1, x_2, ..., x_n} be a random process taking finitely or countably many possible values, and denote the set of possible values of this random process as the set of non-negative integers A = {0, 1, 2, ..., s}, with x_i ∈ A (i = 1, 2, ..., n). Thus there is a probability space over all values in A:

$$\begin{pmatrix} x \\ p(x) \end{pmatrix} = \begin{pmatrix} 0 & 1 & \cdots & s \\ p(0) & p(1) & \cdots & p(s) \end{pmatrix} \qquad (1\text{-}1)$$

where x ∈ A. Since the random process must take some value, at any time:

$$\sum_{x \in A} p(x) = 1 \qquad (1\text{-}2)$$

Then the distribution function F(x) under the normalized probability model at any time is:

$$F(x) = \sum_{a=0}^{x} p(a) \qquad (1\text{-}3)$$

wherein 0 ≤ F(x) ≤ 1 and a ∈ A. Obviously, for a memoryless random process the probability p(x) of the value x is constant at any time.
Define the weighted probability mass function as:

$$p(a, r) = r\, p(a) \qquad (1\text{-}4)$$

In formula (1-4), p(a) is the probability mass function, 0 ≤ p(a) ≤ 1, and r is the weight coefficient. Obviously, the weighted probability sum over all symbols is:

$$\sum_{a \in A} r\, p(a) = r \qquad (1\text{-}5)$$

Define the weighted cumulative distribution function as:

$$F(a, r) = r \sum_{x=0}^{a} p(x) = r\, F(a) \qquad (1\text{-}6)$$

The weighted cumulative distribution function is also referred to simply as the weighted distribution function.
Let the weighted distribution function of the sequence X be F(X, r). When n = 1, F(X, r) is:

$$F(X, r) = rF(x_1) = rF(x_1 - 1) + rp(x_1)$$

When n = 2, referring to FIG. 5, x_1 corresponds to the interval [F(x_1 − 1, r), F(x_1, r)). Since F(x_1, r) = F(x_1 − 1, r) + rp(x_1), the interval length is rp(x_1). The interval [F(x_1 − 1, r), F(x_1 − 1, r) + rp(x_1)) then has its length multiplied by the weight coefficient r: if r < 1 the interval shrinks; if r > 1 the interval expands; if r = 1 the interval is unchanged. The interval thus becomes [F(x_1 − 1, r), F(x_1 − 1, r) + r²p(x_1)). Next, r²p(x_1) is divided according to the probability masses of the symbols in (1-1): the symbol 0 corresponds to the interval [F(x_1 − 1, r), F(x_1 − 1, r) + r²p(x_1)p(0)); the symbol 1 corresponds to [F(x_1 − 1, r) + r²p(x_1)p(0), F(x_1 − 1, r) + r²p(x_1)(p(0) + p(1))); the symbol 2 corresponds to [F(x_1 − 1, r) + r²p(x_1)(p(0) + p(1)), F(x_1 − 1, r) + r²p(x_1)(p(0) + p(1) + p(2))); and so on. Using F(x_1 − 1, r) = rF(x_1 − 1), this gives:

$$F(X, r) = rF(x_1 - 1) + r^2 F(x_2)p(x_1) = rF(x_1 - 1) + r^2 F(x_2 - 1)p(x_1) + r^2 p(x_1)p(x_2)$$

At this time the interval length is r²p(x_1)p(x_2); refer to FIG. 6.
By analogy, when n = 3:

$$F(X, r) = rF(x_1 - 1) + r^2 F(x_2 - 1)p(x_1) + r^3 F(x_3)p(x_1)p(x_2)$$
$$= rF(x_1 - 1) + r^2 F(x_2 - 1)p(x_1) + r^3 F(x_3 - 1)p(x_1)p(x_2) + r^3 p(x_1)p(x_2)p(x_3)$$

Let

$$P_i = \prod_{j=1}^{i} p(x_j), \qquad P_0 = 1.$$

Continuing by analogy, the following is obtained:

$$F(X, r) = \sum_{i=1}^{n} r^i F(x_i - 1) P_{i-1} + r^n P_n \qquad (1\text{-}7)$$
the set of weighted distribution functions satisfying (1-7) is defined as a weighted probability model, called { F (X, r) }, for short as a weighted model. If X i E.g., a is {0, 1}, then { F (X, r) } is twoA meta-weighting model. Order:
H n =F(X,r) (1-8)
Figure BDA0002699348640000091
Figure BDA0002699348640000092
x of i Must take the value in A, so p (x) i ) Is more than or equal to 0. It is clear that (1-8) (1-9) (1-10) is a range column, L i ,H i Is the variable X of the sequence X at the time i (i ═ 1, 2.., n) i Subscript, R, on corresponding interval i =H i -L i Is the length of the interval. { [ L ] n ,H n ) And is the interval column defined on the weighted probability model. Iteratively expressing (1-8) (1-9) (1-10) as:
Figure BDA0002699348640000093
in (1-7), r is a known real number, and (1-7) is called a static weighting model. If r is equal to the known real number ω at time i i And the coefficient sequence is W ═ ω 1 ,ω 2 ,...,ω n Then (1-7) can be expressed as:
Figure BDA0002699348640000094
the set of weight distribution functions satisfying (1-12) is referred to as a dynamic weighting model. When ω is 1 =ω 2 =…=ω n When r, F (X, W) is F (X, r). If omega 1 =ω 2 =…=ω n When r is 1, F (X, W) is F (X, 1) is F (X).
Figure BDA0002699348640000095
The iteration based on (1-13) is:
Figure BDA0002699348640000096
expanding the model weight coefficient omega i Is recorded as M i Then M is i Having different values according to different sequences. When 1 is less than or equal to omega i ≤M i When, L n ∈[L n ,H n )∧L n ∈[L n-1 ,H n-1 )∧...∧L n ∈[L i ,H i )。
In information theory, entropy measures the expected information content of a random variable: it represents the uncertainty about a signal before it is received. It is also known as information entropy, source entropy, or average self-information.

Consider a discrete memoryless source sequence X = {x_1, x_2, ..., x_n}, x_i ∈ A (i = 1, 2, ..., n), A = {0, 1, 2, ..., s}. When r = 1,

$$\sum_{a \in A} p(a) = 1.$$

According to Shannon's definition of information entropy, the entropy of X is (with the logarithm taken to base s + 1):

$$H(X) = -\sum_{a \in A} p(a) \log_{s+1} p(a)$$

When r ≠ 1, for a random variable with weighted probability

$$p(a, r) = r\, p(a),$$

the self-information is defined as:

$$I(a, r) = -\log_{s+1}\big(r\, p(a)\big)$$

Let c_a denote the number of occurrences of {x_i = a} (i = 1, 2, ..., n; a ∈ A). When the value of r is determined, the total information content of the source sequence X is:

$$I(X, r) = -\sum_{a \in A} c_a \log_{s+1}\big(r\, p(a)\big)$$

Then the average information per symbol is:

$$\frac{I(X, r)}{n} = -\sum_{a \in A} \frac{c_a}{n} \log_{s+1}\big(r\, p(a)\big)$$

Definition 2.1: H(X, r) is the weighted-model information entropy (in bits/symbol):

$$H(X, r) = -\sum_{a \in A} p(a) \log_{s+1}\big(r\, p(a)\big)$$
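As a numerical illustration of Definition 2.1 (the figures below are computed, not quoted from the source): for the binary case s + 1 = 2 with p(0) = p(1) = 0.5 and r = 1.03758, the weighted-model entropy drops below 1 bit/symbol, which is what makes the expanded-interval code shorter than the raw sequence.

```python
# Weighted-model information entropy H(X, r) per Definition 2.1,
# with the logarithm taken to base s + 1 (= the alphabet size).
import math

def weighted_entropy(p: list[float], r: float) -> float:
    base = len(p)                      # s + 1
    return -sum(pa * math.log(r * pa, base) for pa in p if pa > 0)

h1 = weighted_entropy([0.5, 0.5], 1.0)      # ordinary Shannon entropy
h2 = weighted_entropy([0.5, 0.5], 1.03758)  # weighted entropy at r = r_max
```

Here h1 = 1.0 bit/symbol while h2 ≈ 0.9468 bit/symbol, roughly a 5% reduction per symbol.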
Given a binary Bernoulli sequence X of length 32: X = {1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1}. The weighted-probability lossless coding process is analyzed as follows:
The sequence X satisfies "the number of consecutive symbols 1 in the sequence is at most t" with t = 3, and the probabilities of symbol 0 and symbol 1 in X are p(0) = p(1) = 0.5. From FIG. 7, r_max = 1.03758; initialize H_0 = R_0 = r_max = 1.03758 and L_0 = 0.
According to (1-11):

$$R_i = R_{i-1}\, r\, p(x_i)$$
$$L_i = L_{i-1} + R_{i-1}\, F(x_i - 1, r)$$
$$H_i = L_i + R_i$$
The weighted model coding process is:
i = 1, x_1 = 1: R_1 = R_0·r·p(1) = 0.5382861282, L_1 = L_0 + R_0·F(1−1, r) = L_0 + R_0·r·p(0) = 0.5382861282.
i = 2, x_2 = 0: R_2 = R_1·r·p(0) = 0.279257460449, L_2 = L_1 + R_1·F(0−1, r) = L_1 = 0.5382861282.
i = 3, x_3 = 1: R_3 = R_2·r·p(1) = 0.144875977906, L_3 = L_2 + R_2·F(1−1, r) = L_2 + R_2·r·p(0) = 0.683162106106.
The iterative calculation proceeds as shown in FIG. 8.
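The iterations above follow the recurrences R_i = R_{i−1}·r·p(x_i) and, when x_i = 1, L_i = L_{i−1} + R_{i−1}·r·p(0) (L is unchanged when x_i = 0). A floating-point sketch (a production coder would renormalize to fixed precision):

```python
def weighted_encode(bits, p0, r, R0=1.0, L0=0.0):
    """Weighted-probability-model interval coding (floating-point sketch).

    Each symbol shrinks the interval length R by a factor r * p(symbol);
    coding a 1 also moves L past the symbol-0 subinterval.
    """
    p1 = 1.0 - p0
    L, R = L0, R0
    for b in bits:
        if b == 0:
            R = R * r * p0          # stay inside the symbol-0 subinterval
        else:
            L = L + R * r * p0      # skip over the symbol-0 subinterval
            R = R * r * p1
    return L, R

r = 1.03758
L, R = weighted_encode([1, 0, 1], 0.5, r, R0=r)
# L and R now match L_3 = 0.683162106106 and R_3 = 0.144875977906
# from the worked example.
```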
The encoding result is L_32 = 0.781740377568 and R_32 = 0.000000000787, so H_32 = R_32 + L_32 = 0.781740378355. Since the compression result V ∈ [L_32, H_32), V may be taken as 0.781740378. Converting 0.781740378 yields the binary sequence Y = 101110100110000110100101011010, 30 bits in total.
For decoding, V = 0.781740378 and p(0) = p(1) = 0.5 are known. Let r = r_max = 1.03758, H_0 = R_0 = r_max = 1.03758, L_0 = 0. The decoding process of the weighted model, obtained from (1-11), is as follows.
The interval of symbol 0 is [L_0, L_0 + R_0·r·p(0)) = [0, 0.5382861282), and the interval of symbol 1 is [L_0 + R_0·r·p(0), L_0 + R_0·r) = [0.5382861282, 1.0765722564).
Since V = 0.781740378 ≥ L_0 + R_0·r·p(0) = 0.5382861282, symbol 1 is decoded. This process can be simplified: when V < L_{i−1} + R_{i−1}·r·p(0), the output symbol 0 is decoded; when V ≥ L_{i−1} + R_{i−1}·r·p(0), the output symbol 1 is decoded. After this simplification, only the value L_{i−1} + R_{i−1}·r·p(0) needs to be computed. Since symbol 1 was decoded:
L_1 = L_0 + R_0·r·p(0) = 0.5382861282, R_1 = R_0·r·p(1) = 0.5382861282.
Since V = 0.781740378 < L_1 + R_1·r·p(0) = 0.5382861282 + 0.279257460449 = 0.817543588649, symbol 0 is decoded:
L_2 = L_1 = 0.5382861282, R_2 = R_1·r·p(0) = 0.5382861282 × 1.03758 × 0.5 = 0.279257460449.
Since V = 0.781740378 ≥ L_2 + R_2·r·p(0) = 0.5382861282 + 0.144875977906 = 0.683162106106, symbol 1 is decoded:
L_3 = L_2 + R_2·r·p(0) = 0.683162106106, R_3 = R_2·r·p(1) = 0.279257460449 × 1.03758 × 0.5 = 0.144875977906.
By analogy, the decoding process shown in FIG. 9 is obtained.
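The simplified comparison rule above (output 1 when V ≥ L_{i−1} + R_{i−1}·r·p(0), otherwise 0) can be sketched as:

```python
def weighted_decode(V, n, p0, r, R0=1.0, L0=0.0):
    """Decode n symbols from the code value V with the weighted probability model."""
    p1 = 1.0 - p0
    L, R = L0, R0
    out = []
    for _ in range(n):
        threshold = L + R * r * p0   # upper end of the symbol-0 subinterval
        if V < threshold:
            out.append(0)
            R = R * r * p0
        else:
            out.append(1)
            L = threshold
            R = R * r * p1
    return out

r = 1.03758
first3 = weighted_decode(0.781740378, 3, 0.5, r, R0=r)   # -> [1, 0, 1]
```

The three thresholds computed inside the loop are exactly the 0.5382861282, 0.817543588649 and 0.683162106106 of the worked example.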
Obviously, sequence X is losslessly encoded and decoded based on the weighted probability model. When r_max = 1, H_0 = R_0 = r_max = 1, L_0 = 0, the standard model performs lossless coding and V = 0.7052892161; converting V gives the binary Y = 110100100011000101001100000000001, 33 bits in total. When r_max = 0.95, H_0 = R_0 = r_max = 0.95, L_0 = 0, the puncturing model performs lossless coding and V = 0.61360459856; converting V gives the binary Y = 111001001001010111100101000001010000, 36 bits in total. By comparison, the weighted probability model compresses best.
A sequence X generated randomly by a source does not in general satisfy (1-23), and the symbol probabilities are not necessarily equal. Sequence X therefore needs to be processed as follows:
"After every '11…1' (t symbols 1) in sequence X, a symbol 0 follows" (3-1)
After the (3-1) processing, sequence X satisfies "the number of consecutive symbols 1 in the sequence does not exceed the upper limit t (t an integer ≥ 1)", and sequence X can be restored by removing the symbol 0 that follows each '11…1'.
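A sketch of transform (3-1) and its inverse for arbitrary t (reading the rule as: insert a 0 after every run of t consecutive 1s):

```python
def constrain(bits, t=1):
    """Transform (3-1): insert a 0 after every run of t consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == t:
            out.append(0)
            run = 0
    return out

def unconstrain(bits, t=1):
    """Inverse of constrain(): drop the 0 inserted after each run of t ones."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this 0 was inserted by constrain(); drop it
            skip = False
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == t:
            skip = True
            run = 0
    return out

x = [1, 1, 0, 1, 0, 0, 1]
assert unconstrain(constrain(x)) == x   # lossless round trip
```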
Example 1: given a binary Bernoulli sequence X of length n with p(0) = p(1) = 0.5, sequence X is processed by (3-1) with t = 1. Find the weighted model information entropy and the bit length of the compressed sequence Y.
(1) After processing, sequence X has length 3n/2: the number of symbols 0 is n and the number of symbols 1 is n/2. From FIG. 7, r_max = 1.236067 can be obtained; then by Theorem 2.1:
H(X, r_max) = −log₂(1.236067 × 0.5) ≈ 0.694243
The length of the coded sequence Y is (3n/2) × 0.694243 ≈ 1.0413645n bits, so there is no compression effect.
(2) The probabilities of symbol 0 and symbol 1 in the processed sequence X become p(0) = 2/3 and p(1) = 1/3. From (1-20), r_max ≈ 1.098076. Thus:
H(X, r_max) = −(2/3)·log₂(1.098076 × 2/3) − (1/3)·log₂(1.098076 × 1/3) ≈ 0.7834
The length of the coded sequence Y is (3n/2) × 0.7834 ≈ 1.1751n bits, so again there is no compression effect.
(3) Let [formula not reproduced in source]. According to (1-26), this quantity attains a maximum of [value not reproduced in source]. Substituting the corresponding expressions [not reproduced in source] into (2-3) gives [result not reproduced in source], and the coded sequence Y then has a length of [value not reproduced in source] bits. There is clearly no compression effect, but the weighted probability model lossless coding method obeys the relevant theorems of Shannon information theory.
Example 2: given a binary Bernoulli sequence X of length n in which p(0) = 4/5 and p(1) = 1/5, sequence X is processed by (3-1) with t = 1. Find the weighted model information entropy and the bit length of the compressed sequence Y.
(1) After processing, sequence X has length 6n/5: the number of symbols 0 is n and the number of symbols 1 is n/5. If sequence X is coded with the weighted probability model using p(0) = 4/5 and p(1) = 1/5, then (1-20) gives r_max = 1.0355339 and H(X, r_max) = 0.721928. The length of the coded sequence Y is (6n/5) × 0.721928 ≈ 0.8663n bits, so there is a compression effect.
(2) If the processed sequence X is coded with the weighted probability model using p(0) = 5/6 and p(1) = 1/6, then (1-20) gives r_max = 1.024922359 and H(X, r_max) = 0.6145. The length of the coded sequence Y is (6n/5) × 0.6145 ≈ 0.7374n bits, so there is a compression effect; compared with weighted probability model coding at p(0) = 4/5, the lossless compression ratio is improved by 12.15%.
(3) When p (0) is 0.7417741, r max =1.0586921,H(X,r max ) 0.82405732, the length of the sequence Y after coding is
Figure BDA00026993486400001319
(bit). Obviously, when p (0) > 0.651881 in the sequence X or p (0) > 0.741774 in the processed sequence X, the compression effect is achieved.
Example 3: given a binary Bernoulli sequence X of length n with symbol probabilities p(0) and p(1), sequence X is processed by (3-1) so that it satisfies "the number of consecutive symbols 1 in the sequence does not exceed the upper limit t (t an integer ≥ 1)". Find the weighted model information entropy and the bit length of the compressed sequence Y.
Obviously, the number of symbols 0 is np(0) and the number of symbols 1 is np(1); assume sequence X contains a occurrences of "11…1" (t symbols 1). The (3-1) processing therefore adds a symbols 0, and the probabilities of symbol 0 and symbol 1 in the processed sequence X become
p′(0) = (np(0) + a)/(n + a), p′(1) = np(1)/(n + a)
r_max is obtained from (1-20), and from it H(X, r_max). The length of the coded sequence Y is therefore (n + a)·H(X, r_max) bits.
Let the source generate a binary Bernoulli sequence X of length n. Weighted probability model lossless coding mainly comprises the following steps.
Step 1: obtain the statistics table T indexed by t. When t = 1, T_1 is the number of "1" in sequence X; when t = 2, T_2 is the number of "11" in sequence X; when t = 3, T_3 is the number of "111" in sequence X; and so on. At the same time obtain the number c_0 of symbols 0 and the number c_1 of symbols 1; then:
p(0) = c_0/(c_0 + c_1), p(1) = c_1/(c_0 + c_1)
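Step 1's statistics can be sketched as follows (T_t is taken here as the non-overlapping count of the pattern "1"×t, which matches T_1 = the number of 1s; the exact counting rule for t > 1 is an assumption):

```python
def statistics(bits, t_max=3):
    """Count c0, c1 and the non-overlapping occurrences of the runs '1'*t."""
    s = "".join(str(b) for b in bits)
    c0, c1 = s.count("0"), s.count("1")
    # str.count() counts non-overlapping occurrences, scanning left to right.
    table = {t: s.count("1" * t) for t in range(1, t_max + 1)}
    return c0, c1, table

c0, c1, T = statistics([1, 1, 1, 0, 1, 0, 0, 1, 1])
# c0 = 3, c1 = 6; T[1] = 6, T[2] = 2, T[3] = 1
```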
Step 2: for each T_t in table T, obtain the probability of symbol 0 after processing, p_t(0) = (c_0 + T_t)/(n + T_t), and compute r_max from p_t(0) by formula (1-20):
Σ_{i=1}^{t+1} r_max^i · p(0) · p(1)^{i−1} = 1
At the same time compute H(X, r_max) to obtain the list H.
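The closed form of (1-20) is an image in the source, but every r_max quoted in the text (1.236067 for t = 1, p(0) = 0.5; 1.03758 for t = 3, p(0) = 0.5; 1.0355339 for t = 1, p(0) = 0.8) satisfies Σ_{i=1}^{t+1} r^i·p(0)·p(1)^{i−1} = 1. A numeric solver under that assumption:

```python
def r_max(p0, t, lo=1.0, hi=4.0, iters=200):
    """Bisection solve of sum_{i=1..t+1} r**i * p0 * (1 - p0)**(i-1) = 1 for r."""
    p1 = 1.0 - p0
    def f(r):
        return sum(r ** i * p0 * p1 ** (i - 1) for i in range(1, t + 2)) - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # f is increasing in r, so the root lies below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# r_max(0.5, 1) ~ 1.236068, r_max(0.5, 3) ~ 1.03758, r_max(0.8, 1) ~ 1.0355339,
# matching the values quoted in the text.
```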
Step 3: select the p(0), t and r_max corresponding to the minimum value in list H and perform weighted probability model lossless coding; for the coding process refer to FIG. 4. Obviously, the shortest lossless coding result Y is obtained in this way, and the lossless compression effect of the weighted probability model is closest to the theoretical limit of the information entropy.
The weighted-model lossy compression algorithm applies mainly to images, video and audio. The weighted model has a good lossless compression effect on binary sequences satisfying (1-23), and a certain distortion rate does not affect the perceptual quality of image, video and audio data. Thus, if the distortion rate is controlled so that the data satisfy (1-23), a good compression effect can be achieved with the weighted probability model.
"all '11.. 1' in sequence X (t symbols 1) followed by k symbols 1 are changed to 0" (3-4)
Given a binary bernoulli sequence X of length n, with probabilities of symbol 0 and symbol 1 in X being p (0) and p (1), the sequence X is processed by (3-4) such that the sequence X satisfies lossless decoding requirements, and t ═ 1. If the sequence X satisfies that "each symbol 1 is separated by at least k (k ═ 1, 2..) symbols 0", the weighting model information entropy and the compressed sequence Y bit length are calculated.
Assume that the 2k+1 symbols of the processed sequence X starting from position i+1 are 0…010…0 (where 0…0 denotes k symbols 0), which from (1-8), (1-9) and (1-10) gives:
[formulas not reproduced in source]
Because H_{i+5} ≤ H_{i+1}:
[inequality not reproduced in source]
Thus:
r_max^k · p(0)^k + r_max^{2k} · p(0)^{2k−1} · p(1) = 1    (3-7)
r_max is obtained from (3-7); substituting the changed probabilities of symbol 0 and symbol 1 in sequence X after the (3-4) processing into Theorem 2.1 gives H(X, r_max). The length of sequence Y is then n·H(X, r_max). Similarly, the equation for the r_max corresponding to any t and k can be obtained.
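Formula (3-7) can likewise be solved numerically for r_max; a sketch (bisection, assuming the root lies in [1, 4]):

```python
def r_max_lossy(p0, k, lo=1.0, hi=4.0, iters=200):
    """Bisection solve of r**k * p0**k + r**(2k) * p0**(2k-1) * (1-p0) = 1, i.e. (3-7)."""
    p1 = 1.0 - p0
    def f(r):
        return r ** k * p0 ** k + r ** (2 * k) * p0 ** (2 * k - 1) * p1 - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# For k = 1, (3-7) reduces to r*p(0) + r**2 * p(0) * p(1) = 1, the same equation
# as (1-20) with t = 1, so p(0) = 0.5 again gives r_max ~ 1.236068.
```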
The first method of controlling the distortion rate is as follows.
FIG. 10 shows, in byte units, all combinations of an 8-bit binary byte that satisfy t = 1 (note: any two adjacent bytes must together satisfy t = 1; for example, if byte x_i is odd and byte x_{i+1} ≥ 128, t = 1 is not satisfied). The binary values are ordered by the coefficients (a_7, a_6, a_5, a_4, a_3, a_2, a_1, a_0) of the polynomial x_i = a_7·2^7 + a_6·2^6 + a_5·2^5 + a_4·2^4 + a_3·2^3 + a_2·2^2 + a_1·2^1 + a_0·2^0.
Referring to FIG. 10, there are 34 such values x_i. If every byte value x_i is uniformly quantized to one of the 34 values in the figure, the quantization parameter is Q = 34/256 = 0.1328125. When t = 2 there are obviously 81 values and Q = 81/256 = 0.31640625; when t = 3 there are 108 values and Q = 108/256 = 0.421875; when t = 7 there are 128 values and Q = 128/256 = 0.5. Thus an arbitrary byte value x_i is quantized to x̂_i by multiplying by Q, and x_i can be restored from x̂_i by inverse quantization (dividing by Q). As Q → 1, the distortion rate approaches 0; the lower the distortion rate, the higher the data fidelity.
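With Q = m/256 (m = 34 for t = 1), the per-byte quantization and its inverse can be sketched as follows; the text only says "multiply by Q", so the floor/round conventions here are assumptions:

```python
Q = 34 / 256  # t = 1: 34 admissible byte patterns, Q = 0.1328125

def quantize(x, q=Q):
    """Map a byte value 0..255 to a quantization index 0..33."""
    return int(x * q)

def dequantize(idx, q=Q):
    """Approximate inverse; the reconstruction error is the distortion."""
    return min(255, round(idx / q))

# quantize(255) == 33; the round-trip error |x - dequantize(quantize(x))|
# stays within a few code values and shrinks as Q grows toward 1.
```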
The second method of controlling the distortion rate is as follows.
The above is a method of directly quantizing each byte of the image data. This section instead operates on the three components (the color space) of the image or video, for example the R (red), G (green) and B (blue) components (they may also be YUV components, or DCT- or wavelet-transform coefficients). The i-th byte x_i of each component is expressed in binary polynomial form:
x_i = a_7·2^7 + a_6·2^6 + a_5·2^5 + a_4·2^4 + a_3·2^3 + a_2·2^2 + a_1·2^1 + a_0·2^0    (3-8)
Define a_7, a_6, a_5, a_4, a_3, a_2, a_1, a_0 as the coefficients of the different bit planes of x_i, with a_j ∈ {0, 1}, j = 0, 1, …, 7. For an arbitrary x_i of each component: if x_i is odd and a_0 is set to 0, then x′_i = x_i − 1, so the odd value is reduced by 1 on restoration; if x_i is even and a_0 = 0, then x′_i = x_i, so even values are restored losslessly. If x_i is odd and a_1, a_0 are set to 0, then x′_i = x_i − 3 at most, so odd values are reduced by up to 3 on restoration; if x_i is even and a_1, a_0 are set to 0, then x′_i = x_i − 2 at most, so even values are reduced by up to 2 on restoration. Clearly the distortion rate of this second method is lower than that of the first method.
Let K ∈ {1, 2, 3, 4, 5, 6, 7, 8}, and let the byte length of each component be n, so i = 1, 2, …, n. When K = 1, take the a_7 corresponding to each byte x_i of the three components; each component then has n values a_7. This yields the first bit plane, denoted BitPlane1, whose bit length is clearly n. Then, with t = 1, a symbol 0 is inserted after each symbol 1, and the bit-plane data after insertion is denoted tmpBitPlane1. The probability p(0) of symbol 0 in tmpBitPlane1 is counted, and r_max is computed from (1-20). tmpBitPlane1 is then losslessly encoded with the weighted probability model.
When K = 2, take the a_7 of each byte x_i of the three components to obtain tmpBitPlane1, then take the a_6 of each byte to obtain tmpBitPlane2. With t = 1, r_max is obtained for tmpBitPlane1 and tmpBitPlane2, and both are losslessly encoded with the weighted probability model. And so on.
Obviously, K < 8 means the image or video is lossily compressed by bit-plane quantization. For DCT or wavelet coefficients, let the maximum absolute value of the coefficients occupy T_bit bits; the condition then becomes K < T_bit. Clearly, the smaller K is, the coarser the quantization, the higher the compression efficiency and lossy compression ratio, and the lower the image sharpness.
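Bit-plane extraction as described (plane 1 holds each byte's a_7, plane 2 the a_6, and so on) can be sketched as:

```python
def bit_planes(component_bytes, K):
    """Return the K most significant bit planes of a byte sequence.

    planes[0] holds each byte's a_7 (most significant bit),
    planes[1] the a_6 bits, and so on.
    """
    return [[(x >> (7 - k)) & 1 for x in component_bytes] for k in range(K)]

planes = bit_planes([0b10110000, 0b01000001], 2)
# planes[0] == [1, 0] (the a_7 bits); planes[1] == [0, 1] (the a_6 bits)
```

Each returned plane is a binary sequence that can then be constrained (insert a 0 after each 1 for t = 1) and fed to the weighted-probability lossless coder.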
A new bit-plane quantization algorithm is constructed with the weighted probability model, so its computational efficiency is far higher than that of algorithms such as SPIHT, EZW and EBCOT. The weighted probability model is linear coding: it can be decomposed into blocks of arbitrary size for coding, and through the weight coefficient r_max the lossless compression of each bit plane reaches or approaches the information entropy. Because the weighted probability model is linear coding, bit-plane quantization can be performed per block, and the value of K and the termination position of the weighted-probability-model coding can be adjusted automatically according to the required size of the overall image output code stream.
The embodiment of the invention adopts a lossy coding mode.
Let the source generate a sequence X of length n, in bytes. Weighted probability model lossy coding mainly comprises the following steps.
Step 1: with t = 1 in byte units, Q = 0.1328125 is obtained. Each byte x_i (i = 1, 2, …, n) is quantized to a value in 0–33, and by looking up the table of FIG. 7 a binary sequence S of bit length 8 is obtained. The number c_0 of symbols 0 over all n sequences S is counted; then: p(0) = c_0/(8n).
Step 2: r_max is computed from p(0) by formula (1-20).
Step 3: weighted probability model lossless coding is performed with p(0), t and r_max and, referring to FIG. 3, the coding result Y is obtained.
Referring to FIG. 11, lossy compression by this method compresses a 42.6 MB original image 600-fold, to a compressed size of 72.8 KB, at an efficiency of 40 MB/s. With no visible difference after compression, the storage space is reduced 600-fold compared with the original picture, and the transmission efficiency of the image is improved accordingly.
Although specific embodiments have been described herein, those of ordinary skill in the art will recognize that many other modifications or alternative embodiments are equally within the scope of this disclosure. For example, any of the functions and/or processing capabilities described in connection with a particular device or component may be performed by any other device or component. In addition, while various illustrative implementations and architectures have been described in accordance with embodiments of the present disclosure, those of ordinary skill in the art will recognize that many other modifications of the illustrative implementations and architectures described herein are also within the scope of the present disclosure.
Certain aspects of the present disclosure are described above with reference to block diagrams and flowchart illustrations of systems, methods, apparatus, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by executing computer-executable program instructions. Also, according to some embodiments, some blocks of the block diagrams and flow diagrams may not necessarily be performed in the order shown, or may not necessarily be performed in their entirety. In addition, additional components and/or operations beyond those shown in the block diagrams and flow diagrams may be present in certain embodiments.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special purpose hardware and computer instructions.
Program modules, applications, etc. described herein may include one or more software components, including, for example, software objects, methods, data structures, etc. Each such software component may include computer-executable instructions that, in response to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed.
The software components may be encoded in any of a variety of programming languages. An illustrative programming language may be a low-level programming language, such as assembly language associated with a particular hardware architecture and/or operating system platform. Software components that include assembly language instructions may need to be converted by an assembler program into executable machine code prior to execution by a hardware architecture and/or platform. Another exemplary programming language may be a higher level programming language, which may be portable across a variety of architectures. Software components that include higher level programming languages may need to be converted to an intermediate representation by an interpreter or compiler before execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a scripting language, a database query or search language, or a report writing language. In one or more exemplary embodiments, a software component containing instructions of one of the above programming language examples may be executed directly by an operating system or other software component without first being converted to another form.
The software components may be stored as files or other data storage constructs. Software components of similar types or related functionality may be stored together, such as in a particular directory, folder, or library. Software components may be static (e.g., preset or fixed) or dynamic (e.g., created or modified at execution time).
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (4)

1. An image data compression/decompression method, comprising:
a compression step: reading image data, quantizing the image data based on a preset strategy to obtain a corresponding quantized sequence, obtaining a first bit length of the quantized sequence, counting a first number of symbols 0 in the quantized sequence, encoding the quantized sequence based on a weighted expansion probability model to obtain compressed data, and storing the first bit length, the first number and the compressed data in a compressed file; wherein the weighted expansion probability model comprises:
H_n = F(X, r)
[formula not reproduced in source]
[formula not reproduced in source]
wherein F(X, r) denotes the weighted cumulative distribution function of the sequence X, with F(x, r) = r·F(x); r denotes the weighting coefficient, r > 1; p(x_i) denotes the probability mass function of x_i; the function [not reproduced in source]; L_n and H_n are the lower and upper bounds of the interval corresponding to the variable X_n of sequence X at time n, and R_n is the length of the interval;
the compression step further comprises the steps of:
S100, obtaining a first probability of occurrence of the symbol 0 in the quantized sequence from the first number and the first bit length;
S200, obtaining the maximum weighting coefficient r_max of the weighted expansion probability model from the first probability based on a first formula: [formula not reproduced in source], wherein p(0) is the first probability;
S300, based on the weighted expansion probability model, encoding the quantized sequence according to the first probability and the maximum weighting coefficient to obtain the compressed data, wherein step S300 further comprises:
S310, initializing the weighted probabilities [formulas not reproduced in source] and H_0 = R_0 = 1.0, L_0 = 0, and traversing the bits of the quantized sequence, wherein the two weighted probabilities denote respectively the weighted probability of occurrence of symbol 0 and of symbol 1, H_0 denotes the initial value of the weighted cumulative distribution function, and R_0, L_0 denote the initial values of the first parameter and the second parameter respectively;
S320, if the value of the current bit of the quantized sequence is 0, obtaining the first parameter and the second parameter from [formula not reproduced in source] and L_i = L_{i−1}, wherein R_i and L_i denote the first parameter and the second parameter corresponding to the i-th bit, and R_{i−1} and L_{i−1} denote the first parameter and the second parameter corresponding to the (i−1)-th bit;
S330, if the value of the current bit of the quantized sequence is 1, obtaining the first parameter and the second parameter from [formulas not reproduced in source], wherein p(1) denotes the probability of occurrence of symbol 1;
S340, on completion of the traversal, taking L_n as the compressed data, wherein n is the first bit length;
a decoding step: reading the compressed file, obtaining the first bit length, the first number and the compressed data, and decoding the compressed data based on the weighted expansion probability model to obtain decoded data, wherein the decoding step specifically comprises: initializing the weighted probabilities [formulas not reproduced in source] and H_0 = R_0 = 1.0, L_0 = 0; sequentially computing, for each bit within the first bit length, the value H_i of the weighted cumulative distribution function by [formula not reproduced in source], wherein i denotes the i-th bit; comparing H_i with the compressed data, obtaining the decoded symbol corresponding to the i-th bit from the comparison result, and obtaining the first parameter R_i and the second parameter L_i corresponding to the i-th bit based on the weighted expansion probability model according to the comparison result; and combining the decoded symbols in order to obtain the decoded data; wherein the method of obtaining the decoded symbol, the first parameter R_i and the second parameter L_i from the comparison result is: if the value of the compressed data is less than H_i, the decoded symbol is 1 and R_i, L_i are updated by [formulas not reproduced in source]; otherwise, the decoded symbol is 0, R_i is updated by [formula not reproduced in source], and L_i = L_{i−1}.
2. The method according to claim 1, wherein quantizing the image data based on the preset strategy comprises:
obtaining all first sequences within a single byte that satisfy a preset constraint, and obtaining the quantization parameter Q = m/256 from the number of first sequences, wherein m is the number of first sequences and the preset constraint is that the maximum number of consecutive occurrences of the symbol 1 in a binary sequence is less than an upper limit t;
and quantizing the image data byte by byte according to the quantization parameter Q to obtain the corresponding quantized sequence.
3. The method according to claim 1, wherein quantizing the image data based on the preset strategy comprises: for each component of the color space of the image data, obtaining K bit planes of data in order from the highest bit to the lowest bit, and performing constraint processing under a preset constraint to obtain the corresponding quantized sequence, wherein the preset constraint is that the maximum number of consecutive occurrences of the symbol 1 in a binary sequence is less than an upper limit t, and K is an integer greater than 1.
4. An image data compression decompression system comprising a processor and a memory, the memory storing a program for execution by the processor to implement the method of any one of claims 1 to 3.
CN202011016837.7A 2020-09-24 2020-09-24 Image data compression and decompression method and system Active CN112188198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011016837.7A CN112188198B (en) 2020-09-24 2020-09-24 Image data compression and decompression method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011016837.7A CN112188198B (en) 2020-09-24 2020-09-24 Image data compression and decompression method and system

Publications (2)

Publication Number Publication Date
CN112188198A CN112188198A (en) 2021-01-05
CN112188198B true CN112188198B (en) 2022-08-02

Family

ID=73956586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011016837.7A Active CN112188198B (en) 2020-09-24 2020-09-24 Image data compression and decompression method and system

Country Status (1)

Country Link
CN (1) CN112188198B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866722B (en) * 2021-01-06 2024-03-22 湖南遥昇通信技术有限公司 Wavelet transformation and inverse transformation method and device based on weighted filter function
CN113486369B (en) * 2021-06-23 2022-07-22 湖南遥昇通信技术有限公司 Encoding method, apparatus, device and medium with symmetric encryption and lossless compression
CN113922947B (en) * 2021-09-18 2023-11-21 湖南遥昇通信技术有限公司 Self-adaptive symmetrical coding method and system based on weighted probability model
CN115550660B (en) * 2021-12-30 2023-08-22 北京国瑞数智技术有限公司 Network video local variable compression method and system
CN114390065B (en) * 2022-01-24 2024-03-19 浙江数秦科技有限公司 Block chain network data rapid transmission method
CN115514967B (en) * 2022-11-07 2023-03-21 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Image compression method and image decompression method based on binary block bidirectional coding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106445890A (en) * 2016-07-07 2017-02-22 湖南千年华光软件开发有限公司 Data processing method
CN106484753A (en) * 2016-06-07 2017-03-08 湖南千年华光软件开发有限公司 Data processing method
CN107704435A (en) * 2017-10-25 2018-02-16 佛山市顺德区遥实通讯技术有限公司 A kind of lossless probabilistic model transform method
CN109450596A (en) * 2018-11-12 2019-03-08 湖南瑞利德信息科技有限公司 Coding method, coding/decoding method, encoding device, decoding device, storage medium and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8577165B2 (en) * 2008-06-30 2013-11-05 Samsung Electronics Co., Ltd. Method and apparatus for bandwidth-reduced image encoding and decoding


Also Published As

Publication number Publication date
CN112188198A (en) 2021-01-05


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210408

Address after: Room 2605, office building 3, area C, Kaifu Wanda Plaza, 589 Zhongshan Road, Tongtai street, Kaifu District, Changsha City, Hunan Province, 410000

Applicant after: Hunan Xintong Microelectronics Technology Co.,Ltd.

Address before: 10 / F, business incubation building, No.001 Jinzhou North Road, Ningxiang high tech Industrial Park, Changsha, Hunan 410000

Applicant before: Hunan Yaosheng Communication Technology Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220105

Address after: 10 / F, business incubation building, No.001 Jinzhou North Road, Ningxiang high tech Industrial Park, Changsha, Hunan 410000

Applicant after: Hunan Yaosheng Communication Technology Co.,Ltd.

Address before: Room 2605, office building 3, area C, Kaifu Wanda Plaza, 589 Zhongshan Road, Tongtai street, Kaifu District, Changsha City, Hunan Province, 410000

Applicant before: Hunan Xintong Microelectronics Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant