CN116566397A - Encoding method, decoding method, encoder, decoder, electronic device, and storage medium - Google Patents

Publication number
CN116566397A
CN116566397A
Authority
CN
China
Prior art keywords
data
context
sequence number
bit
decoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210102679.XA
Other languages
Chinese (zh)
Inventor
王立传
张园
杨明川
韩韬
茅心悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202210102679.XA priority Critical patent/CN116566397A/en
Publication of CN116566397A publication Critical patent/CN116566397A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00: Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30: Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40: Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006: Conversion to or from arithmetic code
    • H03M7/4012: Binary arithmetic codes
    • H03M7/4018: Context adaptive binary arithmetic codes [CABAC]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure provides an encoding method, a decoding method, an encoder, a decoder, an electronic device, and a storage medium, and relates to the technical field of encoding/decoding. The encoding method comprises the following steps: performing binarization processing on the data to be compressed to obtain a binary bit stream, represented by a unary code, corresponding to the data to be compressed; selecting a context model for encoding the value of each bit according to the context sequence number corresponding to that bit in the binary bit stream, and performing binary arithmetic encoding on the value of the bit; wherein each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition. In the embodiments of the present disclosure, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number can be set according to a preset condition, so that the number of bits per context sequence number is non-uniform and configurable, which makes the method better suited to scenes with a large value range.

Description

Encoding method, decoding method, encoder, decoder, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of encoding/decoding technologies, and in particular, to an encoding method, a decoding method, an encoder, a decoder, an electronic device, and a computer readable storage medium.
Background
Huffman coding requires saving a code table and cannot adapt to different data distributions. To address this, context-adaptive binary arithmetic coding may be employed. However, binary arithmetic coding is less effective in scenes where the range of values is relatively large. How to better adapt to such scenes is therefore an important research problem.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an encoding method, a decoding method, an encoder, a decoder, an electronic device, and a storage medium, which solve, at least to some extent, the problem that in the related art the effect of binary arithmetic encoding is poor in scenes where the range of values is relatively large.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to one aspect of the present disclosure, there is provided an encoding method, the method comprising:
performing binarization processing on the data to be compressed to obtain a binary bit stream, represented by a unary code, corresponding to the data to be compressed;
selecting a context model for encoding the value of the bit according to the context sequence number corresponding to each bit in the binary bit stream, and performing binary arithmetic encoding on the value of the bit;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
In one embodiment of the present disclosure, the context model is built based on the order of each bit in the binary bit stream.
In one embodiment of the present disclosure, the method further comprises:
according to preset conditions, bits of the binary bit stream are sequentially divided into a plurality of groups according to an arrangement sequence, and each group corresponds to a context sequence number;
a context model is built separately for the values of the bits of each group.
In one embodiment of the present disclosure, the preset conditions include at least one of the following conditions:
probability of occurrence of target values, empirical values of group division in binary bit stream.
In one embodiment of the present disclosure, when the preset condition is the occurrence probability of the target value in the binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
In one embodiment of the present disclosure, the data to be compressed includes at least one of the following data:
video data, audio data, image data, text data.
According to another aspect of the present disclosure, there is provided a decoding method, the method comprising:
acquiring data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context serial numbers;
performing arithmetic decoding on the coded data corresponding to the context sequence numbers by using a context model corresponding to each context sequence number to obtain a bit value corresponding to the context sequence number;
obtaining a binary bit stream after decoding the data to be decoded according to the bit value corresponding to each context sequence number;
performing inverse binarization processing of a unary code on a binary bit stream decoded by data to be decoded to obtain decoded data;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
According to another aspect of the present disclosure, there is provided an encoder, including:
the binarization module is configured to perform binarization processing on the data to be compressed by adopting a unary code, to obtain a binary bit stream corresponding to the data to be compressed;
an arithmetic coding module configured to select a context model for coding the value of a bit to perform binary arithmetic coding on the value of the bit according to a context number corresponding to each bit in the binary bit stream;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
According to another aspect of the present disclosure, there is provided a decoder including:
the data acquisition module is configured to acquire data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context serial numbers;
the arithmetic decoding module is configured to carry out arithmetic decoding on the coded data corresponding to the context sequence numbers by utilizing the context model corresponding to each context sequence number to obtain the bit value corresponding to the context sequence number;
the bit stream restoration module is configured to obtain a binary bit stream after decoding the data to be decoded according to the value of the bit corresponding to each context sequence number;
The inverse binarization module is configured to perform inverse binarization processing on the binary bit stream decoded by the data to be decoded to obtain decoded data;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described encoding method and/or decoding method via execution of the executable instructions.
According to yet another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described encoding and/or decoding method.
According to the encoding method provided by the embodiments of the present disclosure, the data to be compressed is binarized to obtain a binary bit stream, represented by a unary code, corresponding to the data to be compressed; a context model for encoding the value of each bit is selected according to the context sequence number corresponding to that bit in the binary bit stream, and the value of the bit is arithmetically encoded, so that binarization of the whole number domain can be completed.
Each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number can be set according to a preset condition. Therefore, the number of bits corresponding to each context sequence number is non-uniform and configurable, and further the method can be better suitable for scenes with larger value range.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 is a schematic diagram of a range distribution scenario in an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of an encoding method according to an embodiment of the disclosure;
FIG. 3 is a diagram illustrating the correspondence between context sequence numbers and bits in an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a decoding method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of an encoder in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a decoder in an embodiment of the present disclosure; and
fig. 7 is a block diagram of a computer device in an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
For ease of understanding, related art and technical terms in the present disclosure will first be described before describing specific embodiments of the present disclosure.
Entropy coding: a lossless coding method; the process of converting specified data (syntax elements) into a bit stream, from which the original data can be fully recovered.
Arithmetic coding: a process of encoding a string of symbols into an arithmetic number.
Arithmetic decoding: a process of reducing an arithmetic number to a string of symbols.
Binary arithmetic coding: arithmetic coding process of 0/1 binary string symbols.
MPS (Most Probable Symbol): the symbol with the higher occurrence probability; it may be 0 or 1. In the embodiments of the present disclosure, the MPS symbol is denoted by 1.
LPS (Least Probable Symbol): the symbol with the lower occurrence probability, the counterpart of the MPS symbol. In the embodiments of the present disclosure, the LPS symbol is denoted by 0.
Binarization: the process of converting data (syntax elements) into corresponding binary symbol strings.
Context modeling: the process of arithmetic coding/decoding depends on the probability of the symbol occurrence. The sequence numbers of the probability model are typically expressed in ctxId, each sequence number corresponding to a probability distribution. When encoding/decoding a particular binary symbol, the probability model to which the symbol belongs needs to be determined. The determination of the relationship between a particular binary symbol and a corresponding probabilistic model is referred to as context modeling. Wherein the probability model may also be referred to as a context model.
Bypass coding: equal-probability symbol coding, in which the corresponding symbol string is output with a fixed number of bits; a coding mode that bypasses binary arithmetic encoding/decoding.
Entropy coding belongs to a lossless coding method, huffman coding belongs to one type of entropy coding, but a code table is required to be saved, so that different distributions of data cannot be self-adapted. The problem can be solved well by the context-adaptive binary arithmetic coding, which is adopted by a plurality of standards such as h.264.
Context-adaptive binary arithmetic coding has three key parts, namely binarization of data, context modeling and arithmetic coding and decoding, wherein the binarization and the context modeling are key to the compression efficiency of an arithmetic coding system.
The inventors found that binarization and context modeling require special designs to fit the statistical properties of the original data distribution, and context-adaptive binary arithmetic coding is therefore not used for many data distributions. Standards such as H.264 mainly apply context-adaptive arithmetic coding to video residuals, motion vectors, and the like, while CAVLC (Context-Adaptive Variable Length Coding) is used for the transform coefficients of 4x4 blocks, and some data are even bypass coded. For data with a large value range, where low values occur with high probability and high values with low probability, no suitable arithmetic coding scheme currently exists.
For situations with a large data range, high probability for low values, and low probability for high values, the embodiments of the present disclosure provide an encoding method, a decoding method, and related devices. The encoding method completes binarization with a unary code and models the Nth bit of the unary code in a non-uniform, configurable way; with this method, binarization of the whole number domain can be completed. With this modeling approach, the low-value region has more contexts and the high-value region has fewer, matching a distribution with high probability in the low-value region and low probability in the high-value region.
In the related art, typical ways of converting a specific numerical value into a corresponding binary symbol string include the unary code, the truncated unary code, the exponential Golomb code, and so on. The truncated unary code is a special case for a known maximum coded value.
As one example, the unary code is shown in table 1.
TABLE 1
Value   Unary code
0       0
1       10
2       110
3       1110
4       11110
5       111110
As one example, truncated unary codes are shown in table 2.
TABLE 2
Value   Truncated unary code (max 5)
0       0
1       10
2       110
3       1110
4       11110
5       11111
As one example, the exponential Golomb code of order 0 is shown in table 3.
TABLE 3
Value   0-order exponential Golomb code
0       1
1       010
2       011
3       00100
4       00101
5       00110
Arithmetic coding modeling in the related art has the following characteristics:
a. when each bit is encoded/decoded, it must be clear which context model it belongs to, so the context model must be built in advance;
b. context modeling is challenging: if the number of contexts is too large, probability adaptation may fail because the symbol frequency within each context is too low; if the number is too small, the symbols within a context actually have different statistics, and the probability estimate cannot closely approximate the actual statistics of all symbols in that context;
c. in practice, standards such as H.264 perform context modeling only for a small number of scenes such as residuals and motion vectors; context modeling in other standard scenes is limited and cannot be applied generally.
Problems in the related art will be described below with reference to specific examples.
In the case shown in fig. 1, the range of the value range is relatively large, and it is difficult to apply arithmetic coding.
Unary code representation of 29: 111111111111111111111111111110 (30 bits)
0-order exponential Golomb code representation of 29: 000011110 (9 bits)
In the case shown in fig. 1, if an exponential Golomb code is used, the 0/1 pattern inside the bit stream is irregular and no dominant MPS symbol emerges; if a unary code is used, the resulting binary string can be extremely long and would require too many context models. The binarization and modeling schemes of the related art described above therefore cannot be applied.
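The three binarizations discussed above can be sketched in Python (a minimal illustration; the function names are ours, not the patent's):

```python
def unary(v: int) -> str:
    """Unary code (table 1): v ones followed by a terminating 0."""
    return "1" * v + "0"

def truncated_unary(v: int, vmax: int) -> str:
    """Truncated unary code (table 2): the terminating 0 is dropped at the known maximum."""
    return "1" * v if v == vmax else "1" * v + "0"

def exp_golomb0(v: int) -> str:
    """0-order exponential Golomb code (table 3): k zeros, then the (k+1)-bit binary of v+1."""
    suffix = bin(v + 1)[2:]
    return "0" * (len(suffix) - 1) + suffix
```

For 29, unary(29) gives the 30-bit string shown above, while exp_golomb0(29) gives the 9-bit string 000011110.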
The encoding method and decoding method provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The encoding method and the decoding method provided by the embodiments of the present disclosure may be performed by any electronic device having computing processing capabilities.
For example, the execution body of the method may be, but not limited to, any terminal device, server, or the execution body of the method that can be configured to execute the encoding method and/or the decoding method provided by the embodiments of the present disclosure, or may be a client itself that can execute the method.
The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms.
The terminal may be a variety of electronic devices including, but not limited to, smartphones, tablets, laptop portable computers, desktop computers, wearable devices, augmented reality devices, virtual reality devices, etc., which are not limited herein.
Fig. 2 shows a flowchart of an encoding method in an embodiment of the present disclosure, and as shown in fig. 2, the encoding method provided in the embodiment of the present disclosure includes the following steps:
step S202, binarizing the data to be compressed to obtain a binary bit stream, represented by a unary code, corresponding to the data to be compressed;
step S204, selecting a context model for encoding the value of the bit according to the context serial number corresponding to each bit in the binary bit stream, and performing binary arithmetic encoding on the value of the bit;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
The following describes the above steps in detail, as follows:
in the embodiment of the present disclosure, the terminal device executing the encoding method may perform binarization processing on the data to be compressed. The data to be compressed may be data collected by the terminal device, or may be data transmitted by other devices received by the terminal device.
In step S202, the binarization is expressed entirely with a unary code. The binarization process is described in detail in the technical terms above and is not repeated here.
The number of bits corresponding to each context number is set according to a preset condition. Here, each context sequence number may correspond to one or more bits, that is, the number of bits each context sequence number contains may be non-uniform, configurable.
As an example, the correspondence of the context sequence number and the bit may be as shown in fig. 3.
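Without reproducing fig. 3 itself, one plausible non-uniform correspondence (chosen here to match the worked encoding example later in this description, and purely illustrative) can be sketched as:

```python
def ctx_for_bit(n: int, group_sizes=(1, 1, 1, 1, 4)) -> int:
    """Map bit position n in the unary bit stream to a context sequence number.

    group_sizes lists how many bits each of the first contexts covers; all
    remaining bits share one final context. The grouping is configurable
    and this particular choice is an assumption, not the patent's figure."""
    ctx, start = 0, 0
    for size in group_sizes:
        if n < start + size:
            return ctx
        ctx += 1
        start += size
    return len(group_sizes)  # all later bits share the last context
```

With the default grouping, bits 0-3 get contexts 0-3, bits 4-7 share context 4, and all bits from position 8 onward share context 5, so the low-value region has more contexts than the high-value region.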
The context model in the foregoing may be the probability model introduced in the foregoing.
For ease of understanding, the process of arithmetic coding based on a context model is described below in connection with a specific example.
As an example, suppose one context sequence number corresponds to 3 bits, and the value of those 3 bits is 001. In "001", the probability of 0 is 0.6 and the probability of 1 is 0.4.
In this example, arithmetic coding partitions the interval [0,1):
0:[0,0.6),1:[0.6,1)
The value of the first bit of 001 is 0, so the interval [0,0.6) for 0 is selected as the new target interval. The new target interval is then partitioned again in proportion to the probabilities of 0 and 1.
0:[0,0.36),1:[0.36,0.6)
The value of the second bit of 001 is also 0, so the interval [0,0.36) for 0 is again selected as the new target interval and partitioned in proportion to the probabilities of 0 and 1.
0:[0,0.216),1:[0.216,0.36)
The value of the third bit of 001 is 1, so the final interval is [0.216,0.36).
Because 0.25 lies in [0.216,0.36), and 0.25 in decimal is 0.01 in binary, the final code is 01, saving one bit.
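The interval narrowing above can be reproduced with a short sketch (using the example's probabilities, 0.6 for symbol 0 and 0.4 for symbol 1):

```python
def encode_intervals(bits: str, p0: float):
    """Successively narrow [low, high): 0 takes the lower p0 share, 1 the rest."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        if b == "0":
            high = split  # keep the lower sub-interval
        else:
            low = split   # keep the upper sub-interval
    return low, high

low, high = encode_intervals("001", 0.6)  # final interval is [0.216, 0.36)
```

Any binary fraction inside the final interval, such as 0.25 (0.01 in binary), identifies the encoded string.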
The compressive power of arithmetic coding comes from assigning larger subintervals to more frequently occurring (higher-probability) symbols while preserving the order of the symbols. The final target interval should therefore be as large as possible.
It should be noted that binary arithmetic coding is employed in the present disclosure. The encoding method of binary arithmetic coding is similar to that of arithmetic coding in the above example, but only two symbols "0", "1" are input, that is, a binary string, that is, a binary bit stream is input.
Besides operating on binary strings, binary arithmetic coding differs from ordinary arithmetic coding as follows:
the symbols of the input string are divided into two classes, MPS and LPS, representing the higher- and lower-probability symbols respectively; the assignment must be adjusted to the actual data. For example, if the binary string contains more "0" than "1", then MPS = "0" and LPS = "1".
The specific encoding process is described in detail in the related art, and is not described herein.
For situations with a large data range, high probability for low values, and low probability for high values, the embodiments of the present disclosure provide an encoding method, a decoding method, and related devices. The encoding method completes binarization with a unary code and models the Nth bit of the unary code in a non-uniform, configurable way; with this method, binarization of the whole number domain can be completed.
Each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number can be set according to a preset condition. Thus, the number of bits corresponding to each context sequence number becomes non-uniform and configurable; with this modeling approach, the low-value region has more contexts and the high-value region has fewer, matching a distribution with high probability in the low-value region and low probability in the high-value region.

In some embodiments, the above encoding method may further include:
according to preset conditions, bits of the binary bit stream are sequentially divided into a plurality of groups according to an arrangement sequence, and each group corresponds to a context sequence number;
A context model is built separately for the values of the bits of each group.
The preset conditions may include at least one of the following conditions:
probability of occurrence of target values, empirical values of group division in binary bit stream.
The coding method provided by the embodiments of the present disclosure can select different preset conditions for group division under different circumstances; for example, a general modeling scheme with high probability in the low-value region and low probability in the high-value region can be completed in a configurable way, so that different coding scenes are better accommodated and coding efficiency is improved.
As an example, when the preset condition is the occurrence probability of the target value in the binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
It should be noted that, the data to be compressed above may include at least one of the following data:
video data, audio data, image data, text data.
According to the encoding method provided by the embodiments of the present disclosure, binarization of the whole number domain is completed with a unary code, without a segmented, more complex coding scheme; context modeling is completed in a non-uniform way, with more contexts in the low-value region and fewer in the high-value region, matching a distribution with high probability in the low-value region and low probability in the high-value region. The method therefore achieves a higher compression ratio and is more general and simpler.
Based on the same inventive concept, the embodiment of the present disclosure further provides a decoding method, as shown in fig. 4, including the following steps:
step S402, obtaining data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context serial numbers;
step S404, performing arithmetic decoding on the coded data corresponding to the context sequence number by using the context model corresponding to each context sequence number to obtain a bit value corresponding to the context sequence number;
step S406, obtaining a binary bit stream after decoding the data to be decoded according to the bit value corresponding to each context sequence number;
step S408, performing inverse binarization processing of the unary code on the binary bit stream decoded by the data to be decoded to obtain decoded data;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
It should be noted that, the decoded data corresponds to the data to be compressed in the foregoing, and may include at least one of the following data:
video data, audio data, image data, text data.
In some embodiments, the preset conditions may include at least one of the following conditions:
Probability of occurrence of target values, empirical values of group division in binary bit stream.
In some embodiments, when the preset condition is the occurrence probability of the target value in the binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
The principle of solving the problem in the decoding method embodiment is similar to that in the encoding method embodiment, and the repetition is not repeated.
As an example corresponding to the encoding example above, read in 01, i.e. the binary fraction 0.01, which is 0.25 in decimal.
1. First division: 0.25 falls into [0,0.6), output 0.
2. Second division: 0.25 falls into [0,0.36), output 0.
3. Third division: 0.25 falls into [0.216,0.36), output 1.
If there is a control bit or if it has been fixed in length, then there is a sequence of 001 if it is known to this point that the sequence is over.
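The three divisions above can be sketched in a few lines. This is a hypothetical illustration that assumes a static binary model with P(0) = 0.6 (the split implied by the example's intervals), not the full context-adaptive decoder of the embodiments:

```python
def decode_arithmetic(value, n_symbols, p0=0.6):
    """Decode n_symbols bits from a fractional value in [0, 1) by
    repeatedly splitting the current interval at the P(0) boundary."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n_symbols):
        split = low + (high - low) * p0
        if value < split:   # value falls in the sub-interval for 0
            out.append(0)
            high = split
        else:               # value falls in the sub-interval for 1
            out.append(1)
            low = split
    return out

# [0.01]2 = 0.25 decodes to 0, 0, 1 after three divisions
print(decode_arithmetic(0.25, 3))  # [0, 0, 1]
```

The successive intervals visited are [0, 0.6], [0, 0.36], and [0.216, 0.36], matching the worked example.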
The encoding and decoding processes in the embodiments of the present disclosure are described in detail below in conjunction with a specific example.
First, the encoding method and the decoding method in the above embodiments may be performed at the encoding end and the decoding end, respectively, or both may be performed on the same electronic device; this is not limited herein. For ease of description, in the following example the encoding method is performed by the encoding end and the decoding method by the decoding end.
The encoding side may be configured to perform the steps of:
step 1: modeling the nth symbol of the binary string;
here, in constructing the context model, modeling may be performed according to the correspondence relationship between the context number and the bit shown in fig. 3.
Step 2: binary symbolizing a specific value v with a unary code to obtain a binary string s=11 … (v 1 s);
for example, v is 9s= 1111111110
Step 3: each symbol in the binary string is encoded.
Here, each symbol in s = 1111111110 is encoded as follows:
encoding the 0th symbol, value 1, using context number 0;
encoding the 1st symbol, value 1, using context number 1;
encoding the 2nd symbol, value 1, using context number 2;
encoding the 3rd symbol, value 1, using context number 3;
encoding the 4th symbol, value 1, using context number 4;
encoding the 5th symbol, value 1, using context number 4;
encoding the 6th symbol, value 1, using context number 4;
encoding the 7th symbol, value 1, using context number 4;
encoding the 8th symbol, value 1, using context number 5;
encoding the 9th symbol, value 0, using context number 5; the encoding ends.
A total of 6 contexts of 0,1,2,3,4,5 are used.
The above steps use the context numbers for encoding; this is the binary arithmetic encoding described above.
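The binarization and context assignment above can be sketched as follows. The grouping rule (bits 0–3 each with their own context, bits 4–7 sharing context 4, bits 8 onward sharing context 5) is read off the worked example and stands in for the mapping of fig. 3:

```python
def unary_binarize(v):
    """Unary code: v ones followed by a terminating zero."""
    return "1" * v + "0"

def context_for_bit(i):
    # Grouping taken from the worked example; the actual mapping is fig. 3.
    if i < 4:
        return i   # bits 0-3 each get their own context
    if i < 8:
        return 4   # bits 4-7 share context 4
    return 5       # bits 8 and beyond share context 5

s = unary_binarize(9)
print(s)                                            # 1111111110
print([context_for_bit(i) for i in range(len(s))])  # [0, 1, 2, 3, 4, 4, 4, 4, 5, 5]
```

Each bit of s would then be passed, together with its context number, to the binary arithmetic encoder.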
The decoding side may be configured to perform the steps of:
step 1: the same modeling mode as the coding end;
here, the modeling process corresponds to the modeling process of the encoding end in the foregoing, and thus, decoding of compressed data of the encoding end is facilitated.
Step 2: each specific symbol is decoded, and the number of symbols 1 is counted.
Here, the process of decoding each specific symbol is as follows:
decoding the 0 th symbol using the context number 0, with the result being 1, continuing decoding;
decoding the 1 st symbol using context number 1, resulting in 1, continuing decoding;
decoding the 2 nd symbol using context number 2, resulting in 1, continuing decoding;
decoding the 3 rd symbol using context number 3, resulting in 1, continuing decoding;
decoding the 4 th symbol using the context number 4, with a result of 1, continuing decoding;
decoding the 5 th symbol using context number 4, resulting in 1, continuing decoding;
decoding the 6 th symbol using context number 4, resulting in 1, continuing decoding;
decoding the 7 th symbol using context number 4, resulting in 1, continuing decoding;
decoding the 8 th symbol using context number 5, resulting in 1, continuing decoding;
decoding the 9 th symbol using the context number 5, resulting in 0 and decoding is completed;
A total of nine 1s are counted, so the decoded value is 9.
A total of 6 contexts of 0,1,2,3,4,5 are used.
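The decode-and-count loop above can be sketched like this; `decode_symbol` stands in (as an assumption) for the binary arithmetic decoding of one symbol under a given context, and is stubbed here with the example's bit stream:

```python
def decode_unary(decode_symbol, context_for_bit):
    """Decode symbols one at a time under their contexts until the
    terminating 0 of the unary code; the count of 1s is the value."""
    v, i = 0, 0
    while decode_symbol(context_for_bit(i)) == 1:
        v += 1
        i += 1
    return v

# Stub decoder replaying the example's stream 1111111110:
bits = iter([1] * 9 + [0])
v = decode_unary(lambda ctx: next(bits),
                 lambda i: i if i < 4 else (4 if i < 8 else 5))
print(v)  # 9
```

The loop terminates on the first decoded 0, exactly as in the step-by-step trace above.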
Based on the same inventive concept, an encoder is also provided in embodiments of the present disclosure, as described in the following embodiments. Since the principle of the encoder embodiment for solving the problem is similar to that of the encoding method embodiment, the implementation of the encoder embodiment can be referred to the implementation of the encoding method embodiment, and the repetition is not repeated.
Fig. 5 illustrates an encoder in an embodiment of the present disclosure, as shown in fig. 5, the encoder 500 includes:
the binarization module 502 is configured to perform binarization processing on the data to be compressed by adopting a unary code to obtain a binary bit stream corresponding to the data to be compressed;
an arithmetic coding module 504 configured to select, according to the context sequence number corresponding to each bit in the binary bit stream, a context model for encoding the value of the bit, and to perform binary arithmetic coding on the value of the bit;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
In some embodiments, the encoder 500 may further include:
the group dividing module is configured to divide, according to preset conditions, the bits of the binary bit stream into a plurality of groups in their arrangement order, each group corresponding to one context sequence number;
a context modeling module configured to construct a context model for each group of bit values, respectively.
In some embodiments, the preset conditions may include at least one of the following conditions:
the occurrence probability of a target value in the binary bit stream, and an empirical value for group division.
In some embodiments, when the preset condition is the occurrence probability of the target value in the binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
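A grouping of this kind can be sketched as follows, assuming the preset condition is expressed as cumulative group-boundary positions (the boundary values below are illustrative only, chosen to reproduce the earlier worked example):

```python
import bisect

def assign_contexts(bitstream, boundaries):
    """Map each bit position to a context sequence number: positions
    before boundaries[0] get context 0, positions before boundaries[1]
    get context 1, and so on; positions past the last boundary all
    share the final context."""
    return [bisect.bisect_right(boundaries, i) for i in range(len(bitstream))]

print(assign_contexts("1111111110", [1, 2, 3, 4, 8]))
# [0, 1, 2, 3, 4, 4, 4, 4, 5, 5]
```

Later groups here span more bits than earlier ones, consistent with the target value (1) becoming less probable as the unary code grows longer.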
In some embodiments, the data to be compressed may include at least one of the following data:
video data, audio data, image data, text data.
The encoder provided in the embodiments of the present application may be used to execute the encoding method provided in the embodiments of the methods, and its implementation principle and technical effects are similar, and for the sake of brevity, it is not repeated here.
Based on the same inventive concept, a decoder is also provided in the embodiments of the present disclosure, as described in the following embodiments. Since the principle of the decoder embodiment for solving the problem is similar to that of the decoding method embodiment, the implementation of the decoder embodiment can be referred to the implementation of the decoding method embodiment, and the repetition is omitted.
Fig. 6 illustrates a decoder in an embodiment of the present disclosure, as shown in fig. 6, the decoder 600 includes:
the data acquisition module 602 is configured to acquire data to be decoded, where the data to be decoded includes encoded data corresponding to a plurality of context sequence numbers;
an arithmetic decoding module 604 configured to arithmetically decode the encoded data corresponding to the context sequence number using the context model corresponding to each context sequence number, to obtain a value of a bit corresponding to the context sequence number;
a bit stream restoration module 606, configured to obtain a binary bit stream after decoding the data to be decoded according to the bit value corresponding to each context sequence number;
an inverse binarization module 608 configured to perform inverse binarization processing of the unary code on the binary bit stream obtained by decoding the data to be decoded, to obtain decoded data;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
In some embodiments, the preset conditions may include at least one of the following conditions:
the occurrence probability of a target value in the binary bit stream, and an empirical value for group division.
In some embodiments, when the preset condition is the occurrence probability of the target value in the binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
In some embodiments, the decoded data may include at least one of the following:
video data, audio data, image data, text data.
The decoder provided in the embodiments of the present application may be used to perform the decoding method provided in the embodiments of the methods described above, and its implementation principle and technical effects are similar, and for the sake of brevity, it is not repeated here.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 700 according to such an embodiment of the present disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 7, the electronic device 700 is embodied in the form of a general purpose computing device. Components of electronic device 700 may include, but are not limited to: the at least one processing unit 710, the at least one memory unit 720, and a bus 730 connecting the different system components, including the memory unit 720 and the processing unit 710.
Wherein the storage unit stores program code that is executable by the processing unit 710 such that the processing unit 710 performs steps according to various exemplary embodiments of the present disclosure described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 710 may perform the following steps:
performing binarization processing on the data to be compressed to obtain a binary bit stream, represented by a unary code, corresponding to the data to be compressed;
selecting a context model for encoding the value of the bit according to the context sequence number corresponding to each bit in the binary bit stream, and performing arithmetic encoding on the value of the bit;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
In some embodiments, the processing unit 710 may be further configured to perform the following steps:
according to preset conditions, bits of the binary bit stream are sequentially divided into a plurality of groups according to an arrangement sequence, and each group corresponds to a context sequence number;
a context model is built separately for the values of the bits of each group.
In some embodiments, the processing unit 710 may be further configured to perform the following steps:
acquiring data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context sequence numbers;
performing arithmetic decoding on the coded data corresponding to the context sequence numbers by using a context model corresponding to each context sequence number to obtain a bit value corresponding to the context sequence number;
obtaining a binary bit stream after decoding the data to be decoded according to the bit value corresponding to each context sequence number;
performing inverse binarization processing of the unary code on the binary bit stream obtained by decoding the data to be decoded to obtain decoded data;
wherein, each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
The storage unit 720 may include readable media in the form of volatile memory units, such as a Random Access Memory (RAM) 7201 and/or a cache memory 7202, and may further include a Read-Only Memory (ROM) 7203.
The storage unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 730 may be a bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 740 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 700, and/or any device (e.g., router, modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 750. Also, electronic device 700 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 760. As shown, network adapter 760 communicates with other modules of electronic device 700 over bus 730. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 700, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium, which may be a readable signal medium or a readable storage medium, is also provided. On which a program product is stored which enables the implementation of the method described above of the present disclosure. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
More specific examples of the computer readable storage medium in the present disclosure may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this disclosure, a computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Alternatively, the program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order or that all illustrated steps be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
From the description of the above embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. A method of encoding, the method comprising:
performing binarization processing on data to be compressed to obtain a binary bit stream which is represented by a unary code and corresponds to the data to be compressed;
selecting a context model for encoding the value of each bit according to the context sequence number corresponding to each bit in the binary bit stream, and performing binary arithmetic encoding on the value of each bit;
wherein each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
2. The method of claim 1, wherein the context model is constructed based on an order of each bit in the binary bit stream.
3. The method according to claim 1, wherein the method further comprises:
according to preset conditions, bits of the binary bit stream are sequentially divided into a plurality of groups according to an arrangement sequence, and each group corresponds to a context sequence number;
a context model is built separately for the values of each of the groups of bits.
4. A method according to claim 3, wherein the preset conditions include at least one of the following conditions:
the occurrence probability of a target value in the binary bit stream, and an empirical value for group division.
5. The method of claim 4, wherein when the preset condition is an occurrence probability of a target value in a binary bit stream, the number of bits corresponding to each context sequence number is inversely proportional to the occurrence probability of the target value on those bits.
6. The method of claim 1, wherein the data to be compressed comprises at least one of:
video data, audio data, image data, text data.
7. A decoding method, the method comprising:
acquiring data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context sequence numbers;
performing arithmetic decoding on the coded data corresponding to the context sequence numbers by using a context model corresponding to each context sequence number to obtain a bit value corresponding to the context sequence number;
obtaining a binary bit stream after decoding the data to be decoded according to the bit value corresponding to each context sequence number;
performing inverse binarization processing of a unary code on the binary bit stream obtained by decoding the data to be decoded to obtain decoded data;
Wherein each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
8. An encoder, comprising:
the binarization module is configured to perform binarization processing on data to be compressed by adopting a unary code to obtain a binary bit stream corresponding to the data to be compressed;
an arithmetic coding module configured to select, according to the context sequence number corresponding to each bit in the binary bit stream, a context model for coding the value of the bit, and to perform binary arithmetic coding on the value of the bit;
wherein each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
9. A decoder, comprising:
the data acquisition module is configured to acquire data to be decoded, wherein the data to be decoded comprises coded data corresponding to a plurality of context sequence numbers;
the arithmetic decoding module is configured to carry out arithmetic decoding on the coded data corresponding to the context sequence numbers by utilizing a context model corresponding to each context sequence number, so as to obtain the bit value corresponding to the context sequence number;
The bit stream restoration module is configured to obtain a binary bit stream after the data to be decoded are decoded according to the value of the bit corresponding to each context sequence number;
the inverse binarization module is configured to perform inverse binarization processing on the binary bit stream obtained by decoding the data to be decoded, to obtain decoded data;
wherein each context sequence number corresponds to at least one bit, and the number of bits corresponding to each context sequence number is set according to a preset condition.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the encoding method of any one of claims 1-6, or the decoding method of claim 7, via execution of the executable instructions.
11. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the encoding method of any of claims 1-6 or performs the decoding method of claim 7.
CN202210102679.XA 2022-01-27 2022-01-27 Encoding method, decoding method, encoder, decoder, electronic device, and storage medium Pending CN116566397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210102679.XA CN116566397A (en) 2022-01-27 2022-01-27 Encoding method, decoding method, encoder, decoder, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210102679.XA CN116566397A (en) 2022-01-27 2022-01-27 Encoding method, decoding method, encoder, decoder, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN116566397A true CN116566397A (en) 2023-08-08

Family

ID=87490274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210102679.XA Pending CN116566397A (en) 2022-01-27 2022-01-27 Encoding method, decoding method, encoder, decoder, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN116566397A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117874314A (en) * 2024-03-13 2024-04-12 时粤科技(广州)有限公司 Information visualization method and system based on big data processing
CN117874314B (en) * 2024-03-13 2024-05-10 时粤科技(广州)有限公司 Information visualization method and system based on big data processing

Similar Documents

Publication Publication Date Title
CN105684316B (en) Polar code encoding method and device
US10666289B1 (en) Data compression using dictionary encoding
RU2630750C1 (en) Device and method for encoding and decoding initial data
US20110181448A1 (en) Lossless compression
US8159374B2 (en) Unicode-compatible dictionary compression
EP1465349A1 (en) Embedded multiple description scalar quantizers for progressive image transmission
CN112398484A (en) Coding method and related equipment
CN103152054A (en) Method and apparatus for arithmetic coding
CN111918071A (en) Data compression method, device, equipment and storage medium
CN116566397A (en) Encoding method, decoding method, encoder, decoder, electronic device, and storage medium
CN113630125A (en) Data compression method, data encoding method, data decompression method, data encoding device, data decompression device, electronic equipment and storage medium
CN112449191B (en) Method for compressing multiple images, method and device for decompressing images
US8018359B2 (en) Conversion of bit lengths into codes
CN116668691A (en) Picture compression transmission method and device and terminal equipment
US20220005229A1 (en) Point cloud attribute encoding method and device, and point cloud attribute decoding method and devcie
CN104682966A (en) Non-destructive compressing method for list data
CN111836051B (en) Desktop image encoding and decoding method and related device
US10585626B2 (en) Management of non-universal and universal encoders
CN110913220A (en) Video frame coding method and device and terminal equipment
CN116567239A (en) Coding and decoding method, device, coder and decoder, equipment and medium
CN116418997A (en) Characteristic data compression method, device and system, electronic equipment and storage medium
CN116366070A (en) Wavelet coefficient coding method, device, system, equipment and medium
US20230085142A1 (en) Efficient update of cumulative distribution functions for image compression
CN116567238A (en) Encoding and decoding method and device, electronic equipment and storage medium
US20160323603A1 (en) Method and apparatus for performing an arithmetic coding for data symbols

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination