GB2306280A - A coding system and entropy decoder - Google Patents


Info

Publication number
GB2306280A
GB2306280A (Application GB9624358A)
Authority
GB
United Kingdom
Prior art keywords
codeword
memory
data
bit
states
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9624358A
Other versions
GB2306280B (en)
GB9624358D0 (en)
Inventor
Edward L Schwartz
Michael Gormish
James D Allen
Martin Boliek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority claimed from GB9518375A (GB2293735B)
Publication of GB9624358D0
Publication of GB2306280A
Application granted
Publication of GB2306280B
Anticipated expiration
Status: Expired - Fee Related


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40 Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006 Conversion to or from arithmetic code
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Description

SPECIFICATION
TITLE: A CODING SYSTEM AND ENTROPY DECODER
FIELD OF THE INVENTION
The present invention relates to the field of data compression and decompression systems; particularly, the present invention relates to apparatus for parallel encoding and decoding of data in compression/decompression systems.
BACKGROUND OF THE INVENTION
Today, data compression is widely used, particularly for storing and transmitting large amounts of data. Many different data compression techniques exist in the prior art. Compression techniques can be divided into two broad categories: lossy coding and lossless coding. Lossy coding involves coding that results in the loss of information, such that there is no guarantee of perfect reconstruction of the original data. In lossless compression, all the information is retained and the data is compressed in a manner which allows for perfect reconstruction.
In lossless compression, input symbols are converted to output codewords. If the compression is successful, the codewords are represented in fewer bits than the number of input symbols. Lossless coding methods include dictionary methods of coding (e.g., Lempel-Ziv), run length encoding, enumerative coding and entropy coding.
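As a concrete illustration of one of the lossless methods listed above, the following is a minimal run length coder; the pair-based output format is a simplification chosen for this sketch, not a scheme taken from the present specification.

```python
def run_length_encode(symbols):
    # Collapse each run of identical symbols into a [symbol, count] pair.
    pairs = []
    for s in symbols:
        if pairs and pairs[-1][0] == s:
            pairs[-1][1] += 1            # extend the current run
        else:
            pairs.append([s, 1])         # start a new run
    return pairs

def run_length_decode(pairs):
    # Expand each [symbol, count] pair back into the original sequence.
    return [s for s, n in pairs for _ in range(n)]
```

For example, `run_length_encode("aaabbc")` yields `[['a', 3], ['b', 2], ['c', 1]]`, and decoding restores the input exactly; compression is achieved only when runs are long relative to the pair overhead.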
Entropy coding consists of any method of lossless coding which attempts to compress data close to the entropy limit using known or estimated symbol probabilities. Entropy codes include Huffman codes, arithmetic codes and binary entropy codes. Binary entropy coders are lossless coders which act only on binary (yes/no) decisions, often expressed as the most probable symbol (MPS) and the least probable symbol (LPS). Examples of binary entropy coders include IBM's Q-coder and a coder referred to as the B-coder.
For more information on the B-coder, see U.S. Patent No. 5,272,478, entitled "Method and Apparatus for Entropy Coding" (J.D. Allen), issued December 21, 1993, and assigned to the corporate assignee of the present invention.
See also M.J. Gormish and J.D. Allen, "Finite State Machine Binary Entropy Coding," abstract in Proc. Data Compression Conference, 30 March 1993, Snowbird, UT, pg. 449. The B-coder is a binary entropy coder which uses a finite state machine for compression.
Figure 1 shows a block diagram of a prior art compression and decompression system using a binary entropy coder. For coding, data is input into context model (CM) 101. CM 101 translates the input data into a set or sequence of binary decisions and provides the context bin for each decision. Both the sequence of binary decisions and their associated context bins are output from CM 101 to the probability estimation module (PEM) 102.
PEM 102 receives each context bin and generates a probability estimate for each binary decision. The actual probability estimate is typically represented by a class, referred to as PClass. Each PClass is used for a range of probabilities. PEM 102 also determines whether the binary decision (result) is or is not in its more probable state (i.e., whether the decision corresponds to the MPS). The bit-stream generator (BG) module 103 receives the probability estimate (i.e., the PClass) and the determination of whether or not the binary decision was likely as inputs. In response, BG module 103 produces a compressed data stream, outputting zero or more bits, to represent the original input data.
For decoding, CM 104 provides a context bin to PEM 105, and PEM 105 provides the probability class (PClass) to BG module 106 based on the context bin. BG module 106 is coupled to receive the probability class. In response to the probability class and the compressed data, BG module 106 returns a bit representing whether the binary decision (i.e., the event) is in its most probable state. PEM 105 receives the bit, updates the probability estimate based on the received bit, and returns the result to CM 104. CM 104 receives the returned bit and uses the returned bit to generate the original data and update the context bin for the next binary decision.
One problem with decoders using binary entropy codes, such as IBM's Q-coder and the B-coder, is that they are slow, even when implemented in hardware. Their operation requires a single large, slow feedback loop. To restate the decoding process, the context model uses past decoded data to produce a context. The probability estimation module uses the context to produce a probability class. The bit-stream generator uses the probability class and the compressed data to determine if the next bit is the likely or unlikely result. The probability estimation module uses the likely/unlikely result to produce a result bit (and to update the probability estimate for the context). The result bit is used by the context model to update its history of past data. All of these steps are required for decoding a single bit. Because the context model must wait for the result bit to update its history before it can provide the next context, the decoding of the next bit must wait. It is desirable to avoid having to wait for the feedback loop to be completed before decoding the next bit. In other words, it is desirable to decode more than one bit or codeword at a time in order to increase the speed at which compressed data is decoded.
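The single large feedback loop described above can be sketched as follows. The context rule (context = previous decoded bit) and the fixed, non-adaptive MPS assignment are assumptions made to keep the example short; the point is only that each decoded bit must update the context model before the next context can be produced.

```python
class ToyContextModel:
    """Chooses a context from the previously decoded bit."""
    def __init__(self):
        self.prev = 0
    def context(self):
        return self.prev          # context = last decoded bit
    def update(self, bit):
        self.prev = bit           # history update closes the loop

class ToyPEM:
    """Maps a context to a fixed MPS value (no adaptation, for brevity)."""
    def mps(self, ctx):
        return 0 if ctx == 0 else 1
    def result(self, ctx, was_mps):
        m = self.mps(ctx)
        return m if was_mps else 1 - m

def decode(stream):
    # Each iteration performs every step of the serial loop; the next
    # iteration cannot begin until cm.update() has run.
    cm, pem = ToyContextModel(), ToyPEM()
    out = []
    for coded_bit in stream:                  # BG: 1 means "MPS occurred"
        ctx = cm.context()                    # context from past data
        bit = pem.result(ctx, coded_bit == 1) # likely/unlikely -> result bit
        cm.update(bit)
        out.append(bit)
    return out
```

Because `bit` feeds `cm.update()` and the next `cm.context()`, the iterations cannot be overlapped; this is the dependency the parallel scheme below is designed to break.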
Another problem with decoders using binary entropy codes is that variable length data must be processed. In most systems, the codewords to be decoded have variable lengths. Alternatively, other systems encode variable length symbols (uncoded data). When processing the variable length data, it is necessary to shift the data at the bit level in order to provide the correct next data for the decoding or encoding operation. These bit level manipulations on the data stream can require costly and/or slow hardware and/or software. Furthermore, prior art systems require this shifting to be done in time critical feedback loops that limit the performance of the decoder. It would also be advantageous to remove the bit level manipulation of the data stream from time critical feedback loops, so that parallelization could be used to increase speed.
The present invention provides a coding system for processing data comprising: an index generator for generating indices based on the data; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a first plurality of states and a second plurality of states, wherein each of the states corresponds to a code, and further wherein transitions between different codes corresponding to the first plurality of states occur faster when transitioning between states in the first plurality of states than transitions between different codes corresponding to the second plurality of states when transitioning in the second plurality of states.
The invention also provides a coding system for processing data comprising: an index generator for generating indices based on the data; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a plurality of states, wherein each of the states corresponds to a code, and every code in the state table is repeated a predetermined number of times; wherein transitioning between states of the state table occurs based on an acceleration term that is modifiable, such that a first rate of transitioning between states during a first time period is different than a second rate of transitioning during a second time period.
According to another aspect of the present invention there is provided an entropy decoder for decoding a data stream of a plurality of codewords comprising:
a plurality of bit stream generators for receiving the data stream; and a state table coupled to the plurality of bit stream generators to provide a probability estimate to the plurality of bit stream generators, wherein the plurality of bit stream generators generates a decoded result for each codeword in the data stream in response to the probability estimate using a RnW code for multiple values of n, and further wherein the state table includes a first plurality of states and a second plurality of states, wherein transitions between different codes in the first plurality of states occur faster when transitioning in the first plurality of states than transitions between codes when transitioning in the second plurality of states.
The invention further provides an entropy decoder for decoding a data stream of a plurality of codewords comprising:
a plurality of bit stream generators for receiving the data stream; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a plurality of states, wherein each of the states corresponds to a code, and every code in the state table is repeated a predetermined number of times; wherein transitioning between states of the table occurs based on an acceleration term that is modifiable, such that a first rate of transitioning between states during a first time period is different than a second rate of transitioning during a second time period.
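To illustrate the modifiable acceleration term recited above, the following toy estimator steps through a state table by a variable number of states per update, so the rate of moving between codes differs between time periods. The particular update policy (grow the term while decisions stay probable, reset it otherwise) is a hypothetical choice for this sketch, not the rule of the present invention.

```python
class AcceleratedEstimator:
    """Toy state-table walker with a modifiable acceleration term."""
    def __init__(self, num_states=64):
        self.num_states = num_states
        self.state = num_states // 2   # start mid-table
        self.accel = 1                 # current step size (acceleration term)

    def update(self, was_mps):
        # Move `accel` states toward one end of the table, clamped to range.
        step = self.accel if was_mps else -self.accel
        self.state = max(0, min(self.num_states - 1, self.state + step))
        # Modify the acceleration term: speed up while results agree,
        # fall back to single steps on a miss (hypothetical policy).
        self.accel = self.accel + 1 if was_mps else 1
```

Starting from state 32, three MPS results move the estimator to states 33, 35 and 38 (steps of 1, 2 and 3), illustrating a faster rate of transitioning during that period than the single-step rate after a miss.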
The present invention will now be described, by way of example only, with reference to the accompanying drawings. To assist in more fully understanding the present invention, it will be described and illustrated in the context of examples of coding and decoding methods and apparatus. The following is a brief description of the accompanying drawings.
Figure 1 is a block diagram of a prior art binary entropy encoder and decoder.
Figure 2A is a block diagram of an example of a decoding system.
Figure 2B is a block diagram of an example of an encoding system.
Figure 2C is a block diagram of an example of a decoding system which processes context bins in parallel.
Figure 2D is a block diagram of an example of a decoding system which processes probability classes in parallel.
Figure 3 illustrates a non-interleaved code stream.
Figure 4 illustrates an example of the interleaved code stream as derived from an exemplary set of data.
Figure 5 is one example of a probability estimation table and bit-stream generator for an R-coder.
Figure 6 is a block diagram of one example of an encoder.
Figure 7 is a block diagram of one example of a bit generator.
Figure 8 is a block diagram of one example of a reorder unit.
Figure 9 is a block diagram of one example of a run count reorder unit.
Figure 10 is a block diagram of another example of a run count reorder unit.
Figure 11 is a block diagram of one example of a bit packing unit.
Figure 12 is a block diagram of one example of the packing logic.
Figure 13 is a block diagram of the encoder bit generator.
Figure 14A is a block diagram of an example of a decoding system.
Figure 14B is a block diagram of an example of a decoder.
Figure 14C is a block diagram of an example of a FIFO structure.
Figure 15A illustrates one example of a decoding pipeline.
Figure 15B illustrates an example of a decoder.
Figure 16A is a block diagram of one example of a shifter.
Figure 16B is a block diagram of another example of a shifter.
Figure 17 is a block diagram of a system having an external context model.
Figure 18 is a block diagram of another system having an external context model.
Figure 19 is a block diagram of one example of a decoder.
Figure 20 is a block diagram of one example of a decoder with separate bit generators.
Figure 21 is a block diagram of one example of a bit generator.
Figure 22 is a block diagram of one example of a long run unit.
Figure 23 is a block diagram of one example of a short run unit.
Figure 24 is a block diagram of one example of an initialization and control logic.
Figure 25 is a block diagram of one example of reordering data using a snooper decoder.
Figure 26 is a block diagram of another example of a reordering unit.
Figure 27 is a block diagram of another example of a reordering unit using a merged queue.
Figure 28 is a block diagram of a high bandwidth system using the invention.
Figure 29 is a block diagram of a bandwidth matching system using the invention.
Figure 30 is a block diagram of a real-time video system using the present invention.
Figure 31 illustrates one example of the coded data memory.
Figure 32 is a timing diagram of a decoding system.
Figure 33 is a graph of coding efficiency versus MPS probability for different R-codes.
A method and apparatus for parallel encoding and decoding of data is described. In the following description, numerous specific details are set forth, such as specific numbers of bits, numbers of coders, specific probabilities, types of data, etc., in order to provide a thorough understanding of the preferred embodiments of the present invention. It will be understood by one skilled in the art that the present invention may be practiced without these specific details. Also, well-known circuits have been shown in block diagram form rather than in detail in order to avoid unnecessarily obscuring the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present examples also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose machines may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Parallel Entropy Coding
The present invention provides a parallel entropy coding system. The system includes an encoder and a decoder. In one example, the encoder performs encoding on data in real-time. Similarly, in one example, the decoder of the present invention performs decoding on data in real-time. Together, the real-time encoder and real-time decoder form a balanced coding system.
The present invention provides a system that decodes losslessly encoded data in parallel. The data is decoded in parallel by using multiple decoding resources. Each of the multiple decoding resources is assigned data (e.g., codewords) from the data stream to decode. The assignment of the data stream occurs on the fly, wherein the decoding resources decode data concurrently, thereby decoding the data stream in parallel. In order to enable the assignment of data in a manner which makes efficient use of the decoding resources, the data stream is ordered. This is referred to as parallelizing the data stream. The ordering of data allows each decoding resource to decode any or all of the coded data without waiting for feedback from the context model.
Figure 2A illustrates a decoding system without the slow feedback loop of the prior art. An input buffer 204 receives coded data (i.e., codewords) and a feedback signal from decoder 205 and supplies coded data in a predetermined order (e.g., context bin order) to decoder 205 of the present invention, which decodes the coded data. Decoder 205 includes multiple decoders (e.g., 205A, 205B, 205C, etc.).
In one example, each of the decoders 205A, 205B, 205C, etc. is supplied data for a group of contexts. Each of the decoders in decoder 205 is supplied coded data for every context bin in its group of contexts from input buffer 204. Using this data, each decoder 205A, 205B, 205C, etc. produces the decoded data for its group of context bins. The context model is not required to associate coded data with a particular group of context bins.
The decoded data is sent by decoder 205 to decoded data storage 207 (e.g., 207A, 207B, 207C, etc.). Note that decoded data storage 207 may store intermediate data that is neither coded nor uncoded, such as run counts. In this case, decoded data storage 207 stores the data in a compact, but not entropy coded, form.
Operating independently, context model 206 is coupled to receive the previously decoded data from decoded data storage 207 (i.e., 207A, 207B, 207C, etc.) in response to a feedback signal it sends to decoded data storage 207. Therefore, two independent feedback loops exist, one between decoder 205 and input buffer 204 and a second between context model 206 and decoded data storage 207. Since the large feedback loop is eliminated, the decoders in decoder 205 (e.g., 205A, 205B, 205C, etc.) are able to decode their associated codewords as soon as they are received from input buffer 204.
The context model provides the memory portion of the coding system and divides a set of data (e.g., an image) into different categories (e.g., context bins) based on the memory. In the present examples, the context bins are considered independent ordered sets of data. In one example, each group of context bins has its own probability estimation model and each context bin has its own state (where probability estimation models are shared). Therefore, each context bin could use a different probability estimation model and/or bit-stream generator.
Thus, the data is ordered, or parallelized, and data from the data stream is assigned to individual coders for decoding.
Adding Parallelism to the Classic Entropy Coding Model
To parallelize the data stream, the data may be divided according to either context, probability, tiling, codeword sequence (based on codewords), etc. The reordering of the coded data stream is independent of the parallelism, the method used to parallelize the data, or the probability at any other point. A parallel encoder portion of an encoding system of the present example, fed by data differentiated by context model (CM), is shown in Figure 2B.
Referring to Figure 2B, the context dependent parallel encoder portion comprises context model (CM) 214, probability estimation modules (PEMs) 215-217, and bit-stream generators (BGs) 218-220. CM 214 is coupled to receive the input data. CM 214 is also coupled to PEMs 215-217. PEMs 215-217 are also coupled to BGs 218-220, respectively, which output code streams 1, 2 and 3 respectively. Each PEM and BG pair comprises a coder. Therefore, the parallel encoder is shown with three coders. Although only three parallel coders are shown, any number of coders may be used.
CM 214 divides the data stream into different contexts in the same way as a conventional CM and sends the multiple streams to the parallel hardware encoding resources. Individual contexts, or groups of contexts, are directed to separate probability estimators (PEMs) 215-217 and bit generators (BGs) 218-220. Each of BGs 218-220 outputs a coded data stream.
Figure 2C is a block diagram of one example of the decoder portion of the decoding system. Referring to Figure 2C, a context dependent parallel decoder is shown having BGs 221-223, PEMs 224-226 and CM 227. Code streams 1-3 are coupled to BGs 221-223 respectively. BGs 221-223 are also coupled to PEMs 224-226 respectively.
PEMs 224-226 are coupled to CM 227, which outputs the reconstructed input data. The input comes from several code streams, shown as code streams 1-3. One code stream is assigned to each PEM and BG. Each of the BGs 221-223 returns a bit representing whether the binary decision is in its more probable state, which the PEMs 224-226 use to return decoded bits (e.g., the binary decision). Each of PEMs 224-226 is associated with one of BGs 221-223, indicating which code is to be used to produce a data stream from its input code stream. CM 227 produces a decoded data stream by selecting the decoded bits from the bit-stream generators in the proper sequence, thereby recreating the original data. Thus, the CM 227 obtains the decompressed data bit from the appropriate PEM and BG, in effect reordering the data into the original order. Note that the control for this design flows in the reverse direction of the data stream. The BG and PEM may decode data before the CM 227 needs it, staying one or more bits ahead. Alternatively, the CM 227 may request (but not receive) a bit from one BG and PEM and then request one or more bits from other BGs and PEMs before using the initially requested bit.
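The reordering performed by the context model can be illustrated with a toy round trip, assuming a hypothetical context rule (context = previous bit): the encoder side divides the decisions into per-context streams, and the decoder side recreates the original order by replaying the same rule, consuming from whichever bin the model selects.

```python
def split_by_context(bits):
    # Encoder side: route each binary decision to its context bin's stream.
    # The rule "context = previous bit" is an assumption for this sketch.
    streams, prev = {0: [], 1: []}, 0
    for b in bits:
        streams[prev].append(b)
        prev = b
    return streams

def merge_by_context(streams, n):
    # Decoder side: the CM selects the next bit from the bin its model
    # predicts, recreating the original order without any global index.
    idx, prev, out = {0: 0, 1: 0}, 0, []
    for _ in range(n):
        bit = streams[prev][idx[prev]]
        idx[prev] += 1
        out.append(bit)
        prev = bit
    return out
```

Because each per-bin stream is an independent ordered set, the two bins could be decoded by separate coders before the merge step ever asks for their bits.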
The configuration shown in Figure 2C is designed to couple the PEM and BG tightly. The IBM Q-coder is a good example of a coder having a tightly coupled PEM and BG. Local feedback loops between these two are not a fundamental limit to system performance.
In a different design, the PEM could differentiate the data and send it to parallel BG units. Thus, there would be only one CM and PEM, and the BG is replicated. Adaptive Huffman coding and finite state machine coding could be used in this way.
A similar decoding system that uses the PEM to differentiate the data and send it to parallel BGs is shown in Figure 2D. In this case, probability classes are handled in parallel and each bit-stream generator is assigned to a specific probability class and receives knowledge of the result. Referring to Figure 2D, the coded data streams 1-3 are each coupled to one of multiple bit-stream generators (e.g., BG 232, BG 233, BG 234, etc.), which are coupled to receive them. Each of the bit-stream generators is coupled to PEM 235. PEM 235 is also coupled to CM 236. In this configuration, each of the bit-stream generators decodes coded data and the results of the decoding are selected by PEM 235 (instead of by CM 236). Each of the bit-stream generators receives coded data from a source associated with one probability class (i.e., where the coded data could come from any context bin). PEM 235 selects the bit-stream generators using a probability class. The probability class is dictated by the context bin provided to it by CM 236. In this manner, decoded data is produced by processing probability classes in parallel.
Numerous implementations exist for the parallel decoding systems. In one example, the coded data streams corresponding to the multiple context bins can be interleaved into one stream ordered by the demands of the various coders. In one example, the coded data is ordered such that each coder is constantly supplied with data even though the coded data is delivered to the decoder in one stream. Note that the present examples operate with all types of data, including image data.
By using small simple coders that can be cheaply replicated in integrated circuits, coded data can be decoded quickly in parallel. In one example, the coders are implemented in hardware using field programmable gate array (FPGA) chips or a standard cell application specific integrated circuit (ASIC) chip. The combination of parallelism and simple bit-stream generators allows the decoding of coded data to occur at speeds in excess of the prior art decoders, while maintaining or exceeding the compression efficiency of prior decoding systems.
Ordering of the Data Stream
There are many different design issues and problems that affect system performance. A few of these will be mentioned below. However, the examples shown in Figures 2B and 2C (and 2D) use multiple code streams. Systems with parallel channels that could accommodate this embodiment are imaginable: multiple telephone lines, multiple heads on a disk drive, etc. In some applications, only one channel is available, or convenient. Indeed, if multiple channels are required there may be poor utilization of the bandwidth because of the bursty nature of the individual code streams.
In one example, the code streams are concatenated and sent contiguously to the decoder. A preface header contains pointers to the beginning bit location of each stream. Figure 3 illustrates one example of the arrangement of this data. Referring to Figure 3, three pointers 301-303 indicate the starting location in the concatenated code of code streams 1, 2 and 3 respectively. The complete compressed data file is available in a buffer to the decoder. As needed, the codewords are retrieved from the proper location via the proper pointer. The pointer is then updated to the next codeword in that code stream.
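The pointer arrangement of Figure 3 can be sketched as follows; representing code streams as strings of bit characters and pointers as plain integers are simplifications for illustration only.

```python
def concatenate(streams):
    # Build the preface header (one bit-offset pointer per stream) and
    # the concatenated body, as in the Figure 3 layout.
    header, body, pos = [], "", 0
    for s in streams:               # s is a string of code bits, e.g. "0110"
        header.append(pos)          # pointer to this stream's first bit
        body += s
        pos += len(s)
    return header, body

def read_codeword(header, body, stream_id, nbits):
    # Retrieve nbits from a stream via its pointer, then advance the
    # pointer to the next codeword in that code stream.
    p = header[stream_id]
    word = body[p:p + nbits]
    header[stream_id] = p + nbits
    return word
```

For instance, concatenating `["0110", "10", "111"]` produces the header `[0, 4, 6]`; reading two bits from stream 1 returns `"10"` and advances that stream's pointer to 6.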
Note that this method requires an entire coded frame to be stored at the decoder and, for practical purposes, at the encoder. If a real-time system, or less bursty data flow, is required then two frame buffers may be used for banking at both the encoder and the decoder.
Data Order to Codeword Order
Notice that a decoder decodes codewords in a given deterministic order. With parallel coding, the order of the requests to the code stream is deterministic. Thus, if the codewords from parallel code streams can be interleaved in the right order at the encoder, then a single code stream will suffice. The codewords are delivered to the decoder in the same order on a just-in-time basis. At the encoder, a model of the decoder determines the codeword order and packs the codewords into a single stream. This model might be an actual decoder.
A problem delivering data to the parallel decoding elements arises when the data is variable length. Unpacking a stream of variable-length codewords requires using a bit shifter to align the codewords. Bit shifters are often costly and/or slow when implemented in hardware. The control of the bit shifter depends on the size of the particular codeword. This control feedback loop prevents variable-length shifting from being performed quickly.
The virtues of feeding multiple decoders with a single stream cannot be realized if the process of unpacking the stream is performed in a single bit shifter that is not fast enough to keep up with the multiple decoders.
The solution described herein separates the problem of distributing the coded data to the parallel coders from the alignment of the variable-length codewords for decoding. The codewords in each independent code stream are packed into fixed-length words, called interleaved words. At the decoder end of the channel, these interleaved words can be distributed to the parallel decoder units with fast hardwired data lines and a simple control circuit.
It is convenient to have the interleaved word length larger than the maximum codeword length so that at least enough bits to complete one codeword are contained in each interleaved word. The interleaved words can contain many codewords and parts of codewords. Figure 4 illustrates the interleaving of an example set of parallel code streams.
These words are interleaved according to the demand at the decoder.
Each independent decoder receives an entire interleaved word. The bit shifting operation is now done locally at each decoder, maintaining the parallelism of the system. Note in Figure 4 that the first codeword in each interleaved word is the lowest remaining codeword in the set. For instance, the first interleaved word comes from code stream 1, starting with the lowest codeword (i.e., #1). This is followed by the first interleaved word in code stream 2 and then by the first interleaved word in code stream 3. However, the next lowest codeword not contained completely in an already ordered interleaved word is #7. Therefore, the next word in the stream is the second interleaved word of code stream 2.
In another example, the order in which the subsequent set of interleaved words (e.g., the codeword starting with codeword #8 in stream 1, the codeword starting with codeword #7 in stream 2, the codeword starting with codeword #11 in stream 3) are inserted into the interleaved code stream is based on the first codeword of the previous set of interleaved words (e.g., the codeword starting with codeword #1 in stream 1, the codeword starting with codeword #2 in stream 2, the codeword starting with codeword #4 in stream 3), and they are ordered from the interleaved word with the lowest-numbered first codeword to the interleaved word with the highest-numbered first codeword.
Therefore, in this case, since the interleaved word starting with codeword #1 was first, the next interleaved word in stream 1 is the first of the second group of interleaved words to be inserted into the interleaved stream, followed by the next interleaved word in stream 2 and then the next interleaved word in stream 3. Note that after the second group of interleaved words is inserted into the interleaved stream, the next interleaved word in stream 2 would be the next interleaved word inserted into the stream, because codeword #7 is the lowest codeword of the second set of interleaved words (followed by codeword #8 in stream 1 and then codeword #11 in stream 3).
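The demand-driven ordering described above can be simulated in a few lines. This is a hedged sketch: it assumes, purely for illustration, a decoder model that consumes one codeword per stream in round-robin order and a very small interleaved-word size; in the real system the consumption order is fixed by the actual decoder model.

```python
# Hedged simulation of demand-driven interleaving. Each stream's
# variable-length codewords are packed into fixed-length interleaved words;
# a decoder model determines when each whole word must enter the channel.

def interleave(streams, word_bits=3):
    # Pack each stream's codewords into fixed-length interleaved words.
    packed = []
    for cws in streams:
        bits = ''.join(cws)
        bits += '0' * (-len(bits) % word_bits)        # pad the final word
        packed.append([bits[i:i + word_bits]
                       for i in range(0, len(bits), word_bits)])

    delivered = [0] * len(streams)  # interleaved words already sent, per stream
    consumed = [0] * len(streams)   # bits the decoder model has used, per stream
    out = []
    for j in range(max(len(s) for s in streams)):     # round-robin decoder model
        for i, cws in enumerate(streams):
            if j >= len(cws):
                continue
            consumed[i] += len(cws[j])
            # Emit whole words as soon as the decoder needs bits from them.
            while delivered[i] * word_bits < consumed[i]:
                out.append((i, packed[i][delivered[i]]))
                delivered[i] += 1
    return out
```

The output is a single sequence of fixed-length words, tagged here with their stream of origin, in exactly the order a just-in-time decoder would require them.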
Using the actual decoder as the modeler for the data stream accounts for all design choices and delays to create the interleaved stream. This is not a great cost for duplex systems that have both encoders and decoders anyway. Note that this can be generalized to any parallel set of variable-length (or different sized) data words that are consumed in a deterministic order.
Types of Codes and Bit-Stream Generators for Parallel Decoding

The present systems could employ existing coders, such as Q-coders or B-coders, as the bit-stream generation elements which are replicated in parallel. However, other codes and coders may be used. The coders and their associated codes employed by the present example are simple coders.
Using a bit-stream generator with a simple code instead of a complex code, such as the arithmetic code used by the Q-coder or the multi-state codes used by the B-coder, offers advantages. A simple code is advantageous in that the hardware implementation is much faster and simpler and requires less silicon than a complex code.
Another advantage is that coding efficiency can be improved. A code that uses a finite amount of state information cannot perfectly meet the Shannon entropy limit for every probability. Hardware-implemented codes known in the art that allow a single bit-stream generator to handle multiple probabilities or contexts have constraints that reduce coding efficiency. Removing the constraints needed for multiple contexts or probability classes allows the use of codes that come closer to meeting the Shannon entropy limit.
R-Codes

The code (and coder) employed by one example system is referred to as an R-code. R-codes are adaptive codes that convert a variable number of identical input symbols into a codeword. In an embodiment, the R-codes are parameterized so that many different probabilities can be handled by a single decoder design. Moreover, the R-codes of the present invention can be decoded by simple, high-speed hardware.
In the present examples, R-codes are used by an R-coder to perform encoding or decoding. In one example, an R-coder is a combined bit-stream generator and probability estimation module. For instance, in Figure 1, an R-coder could include the combination of probability estimation module 102 and bit-stream generator 103, and the combination of probability estimation module 105 with bit-stream generator 106.
Codewords represent runs of the most probable symbol (MPS). An MPS represents the outcome of a binary decision with more than 50% probability. On the other hand, the least probable symbol (LPS) represents the outcome in a binary decision with less than 50% probability. Note that when two outcomes are equally probable, it is not important which is designated MPS or LPS as long as both the encoder and decoder make the same designation. The resulting bit sequence in the compressed file is shown in Table 1, for a given parameter referred to as MAXRUN.
Table 1 - Bit-generation Encoding

  Codeword   Meaning
  0          MAXRUN consecutive MPSs
  1N         N consecutive MPSs followed by an LPS, N < MAXRUN

To encode, the number of MPSs in a run is counted by a simple counter. If that count equals the MAXRUN count value, a 0 codeword is emitted into the code stream and the counter is reset. If an LPS is encountered, then a 1 followed by the bits N, which uniquely describe the number of MPS symbols before the LPS, is emitted into the code stream. (Note that there are many ways to assign the N bits to describe the run length.) Again the counter is reset. Note that the number of bits needed for N is dependent on the value of MAXRUN. Also note that the 1's complement of the codewords could be used.
To decode, if the first bit in the code stream is 0, then the value of MAXRUN is put in the MPS counter and the LPS indication is cleared. Then the 0 bit is discarded. If the first bit is a 1, then the following bits are examined to extract the bits N, the appropriate count (N) is put in the MPS counter, and the LPS indicator is set. Then the code stream bits containing the 1N codeword are discarded.
R-codes are generated by the rules in Table 1. Note that a given R-code Rx(k) is defined by its MAXRUN. For instance:
MAXRUN for Rx(k) = x · 2^(k-1)

thus,

MAXRUN for R2(k) = 2 · 2^(k-1), MAXRUN for R3(k) = 3 · 2^(k-1), etc.
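The rules of Table 1 and the MAXRUN definition can be sketched in software as follows. This is a hedged illustration: the particular assignment of run counts to the N bits below is one arbitrary prefix-free choice (the patent notes that many assignments are possible, and its preferred code tables use a different one), and no flush of a partial final run is implemented.

```python
# Hedged sketch of Rn(k) run-length coding per Table 1.
# Symbols: 0 = MPS, 1 = LPS.  The N-bit assignment is an assumption.

def maxrun(n, k):
    # MAXRUN for Rn(k) = n * 2**(k-1), e.g. R2(2) -> 4, R3(2) -> 6
    return n * 2**k // 2

def _fixed(value, width):
    # value as a fixed-width binary string; width may be zero
    return format(value, '0%db' % width) if width > 0 else ''

def encode(symbols, n, k):
    """Encode MPS/LPS symbols into a list of codeword strings.
    The input must end on a codeword boundary (no partial-run flush)."""
    mr, run, out = maxrun(n, k), 0, []
    for s in symbols:
        if s:                            # LPS -> '1N' codeword, reset counter
            if n == 2:                   # R2: N is a k-bit count
                nbits = _fixed(run, k)
            elif run < 2**(k - 1):       # R3 short form: flag + (k-1)-bit count
                nbits = '0' + _fixed(run, k - 1)
            else:                        # R3 long form: flag + k-bit count
                nbits = '1' + _fixed(run - 2**(k - 1), k)
            out.append('1' + nbits)
            run = 0
        else:
            run += 1
            if run == mr:                # full run of MPSs -> '0' codeword
                out.append('0')
                run = 0
    assert run == 0, "input ended mid-run"
    return out

def decode(bits, n, k):
    """Decode a concatenated codeword string back into MPS/LPS symbols."""
    mr, pos, out = maxrun(n, k), 0, []
    while pos < len(bits):
        if bits[pos] == '0':             # '0' codeword: MAXRUN MPSs
            out += [0] * mr
            pos += 1
            continue
        pos += 1                         # consume the leading '1'
        if n == 2:
            w, base = k, 0
        elif bits[pos] == '0':           # R3 short-count flag
            pos += 1
            w, base = k - 1, 0
        else:                            # R3 long-count flag
            pos += 1
            w, base = k, 2**(k - 1)
        run = base + (int(bits[pos:pos + w], 2) if w else 0)
        pos += w
        out += [0] * run + [1]
    return out
```

A round trip through encode and decode recovers the original symbols for any Rn(k) supported here.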
Note that R-codes are a subset of Golomb codes. Also note that Rice codes use R2(·) codes only. The R-codes of the present invention allow the use of both R2(k) and R3(k) codes, and other Rn(k) codes if desired. In one embodiment, R2(k) and R3(k) codes are used. Note that Rn exists for n = 2 and n equal to any odd number (e.g., R2, R3, R5, R7, R9, R11, R13, R15). In one embodiment, for an R2(k) code, the run count, r, is encoded in N; the run count, r, is described in k bits, such that 1N is represented with k+1 bits. Also in one embodiment, for an R3(k) code, the bits N can contain 1 bit to indicate whether n < 2^(k-1) or n ≥ 2^(k-1), and either k-1 or k bits to indicate the run count, r, such that the variable N is represented by a total of k or k+1 bits respectively. In other embodiments, the 1's complement of N could be used in the codeword. In this case, the MPS tends to produce code streams with many 0s and the LPS tends to produce code streams with many 1s.
Tables 2, 3, 4 and 5 depict some efficient R-codes utilized in one embodiment of the present invention. It should be noted that other run-length codes may also be used in the present invention. An example of an alternative run-length code for R2(2) is shown in Table 6. Tables 7 and 8 show examples of the codes used in an embodiment.
Table 2 - R2(0)

  uncoded data   codeword
  0              0
  1              1

Table 3 - R2(1)

  uncoded data   codeword
  00             0
  01             10
  1              11

Table 4 - R3(1)

  uncoded data   codeword
  000            0
  001            100
  01             101
  1              11

Table 5 - R2(2)

  uncoded data   codeword
  0000           0
  0001           100
  001            101
  01             110
  1              111

Table 6 - Alternative R2(2) Code

  uncoded data   codeword
  0000           0
  0001           111
  001            101
  01             110
  1              100

Table 7 - Alternative R3(2) Code

  uncoded data   codeword
  000000         0
  000001         1000
  00001          1010
  0001           1001
  001            1011
  01             110
  1              111

Table 8 - Another Alternative R2(2) Code

  uncoded data   codeword
  0000           0
  0001           100
  001            110
  01             101
  1              111

Probability Estimation Model for R-Codes

In one example, the R2(0) code performs no coding: an input of 0 is encoded into a 0 and an input of 1 is encoded into a 1 (or vice versa), and it is optimal for probabilities equal to 50%. The R2(1) code of the currently preferred embodiment is optimal for probabilities close to 0.707 (i.e., 70.7%) and the R3(1) is optimal for the 0.794 probability (79.4%). The R2(2) code is optimal for the 0.841 probability (84.1%). Table 9 below depicts the near-optimal run-length codes, where the probability skew is defined by the following equation:
Probability skew = -log2(LPS).
Table 9

  probability   probability skew   Best Golomb Code
  .500          1.00               R2(0)
  .707          1.77               R2(1)
  .841          2.65               R2(2)
  .917          3.59               R2(3)
  .953          4.56               R2(4)
  .979          5.54               R2(5)
  .989          6.54               R2(6)
  .995          7.53               R2(7)
  .997          8.53               R2(8)
  .999          9.53               R2(9)

Note that the codes are near-optimal in that the probability range, as indicated by the probability skew, covers the space relatively evenly, even though the optimal probabilities do not differentiate as much at the higher k values as at the lower k values.
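The skew values in Table 9 follow directly from the definition above; a minimal check (where the "probability" column is read as the MPS probability):

```python
import math

# Probability skew = -log2(Pr(LPS)), with the LPS probability being
# one minus the MPS probability from Table 9's first column.

def probability_skew(p_mps):
    return -math.log2(1.0 - p_mps)
```

For example, an MPS probability of 0.707 gives a skew of about 1.77, matching the R2(1) row.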
Reference is made to the probability at which an R-code is optimal. In fact, only R2(0) meets the entropy curve. The real consideration is for what range of probabilities a particular R-code is better than all other R-codes in a given class. The following tables provide the probability ranges for the class of R2 codes and the class of R2 and R3 codes.
For the class of R2 codes from 0 to 12, the ranges are in Table 10 below. For example, when only R2 codes are used, R2(0) is best when 0.50 ≤ probability ≤ 0.6180. Similarly, R2(1) is best when 0.6180 ≤ probability ≤ 0.7862.

Table 10 - R2 Codes from 0 to 12

  Code      Probabilities
  R2(0)     0.50   - 0.6180
  R2(1)     0.6180 - 0.7862
  R2(2)     0.7862 - 0.8867
  R2(3)     0.8867 - 0.9416
  R2(4)     0.9416 - 0.9704
  R2(5)     0.9704 - 0.9851
  R2(6)     0.9851 - 0.9925
  R2(7)     0.9925 - 0.9962
  R2(8)     0.9962 - 0.9981
  R2(9)     0.9981 - 0.9991
  R2(10)    0.9991 - 0.9995
  R2(11)    0.9995 - 0.9998
  R2(12)    0.9998 -

For the class of R2 and R3 codes, the solutions are in Table 11 below. For example, when R2 and R3 codes are used, R2(1) is best when 0.6180 ≤ probability ≤ 0.7549.
Table 11 - R2 and R3 Codes with Lengths Less Than or Equal to 13 Bits

  Code      Probabilities
  R2(0)     0.50   - 0.6180
  R2(1)     0.6180 - 0.7549
  R3(1)     0.7549 - 0.8192
  R2(2)     0.8192 - 0.8688
  R3(2)     0.8688 - 0.9051
  R2(3)     0.9051 - 0.9321
  R3(3)     0.9321 - 0.9514
  R2(4)     0.9514 - 0.9655
  R3(4)     0.9655 - 0.9754
  R2(5)     0.9754 - 0.9826
  R3(5)     0.9826 - 0.9876
  R2(6)     0.9876 - 0.9913
  R3(6)     0.9913 - 0.9938
  R2(7)     0.9938 - 0.9956
  R3(7)     0.9956 - 0.9969
  R2(8)     0.9969 - 0.9978
  R3(8)     0.9978 - 0.9984
  R2(9)     0.9984 - 0.9989
  R3(9)     0.9989 - 0.9992
  R2(10)    0.9992 - 0.9995
  R3(10)    0.9995 - 0.9996
  R2(11)    0.9996 - 0.9997
  R3(11)    0.9997 - 0.9998
  R2(12)    0.9998 -

An R2(k) code for a fixed k is called a run-length code. However, a fixed k is only best for a range of probabilities. It is noted that when coding near an optimal probability, an R-code according to the present example uses 0 and 1N codewords with roughly equal frequency. In other words, half the time the R-coder of the present example outputs one codeword, and the other half of the time it outputs the other. By examining the number of 0 and 1N codewords, a determination can be made as to whether the best code is being used. That is, if too many 1N codewords are being output, then the run length is too long; on the other hand, if too many 0 codewords are being output, then the run length is too short.
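The crossover probabilities in Tables 10 and 11 can be checked numerically. The sketch below computes the expected coded bits per input symbol for an Rn(k) code under an idealized i.i.d. MPS probability p (an assumption for illustration); the R2(0)/R2(1) boundary falls at the golden-ratio probability 0.6180, as Table 10 states.

```python
def r_code_rate(p, n, k):
    """Expected coded bits per input symbol for Rn(k) with MPS probability p.
    Uses MAXRUN = n * 2**(k-1) and the codeword lengths described above:
    R2(k) '1N' codewords have k+1 bits; R3(k) ones have k+1 or k+2 bits."""
    mr = n * 2**k // 2
    e_bits = p**mr * 1              # '0' codeword: 1 bit covers mr MPSs
    e_syms = p**mr * mr
    for r in range(mr):             # '1N' codeword: r MPSs then an LPS
        prob = (1 - p) * p**r
        if n == 2:
            length = 1 + k
        else:                       # R3: short or long count form
            length = 1 + k if r < 2**(k - 1) else 2 + k
        e_bits += prob * length
        e_syms += prob * (r + 1)
    return e_bits / e_syms
```

At p = (√5 - 1)/2 ≈ 0.6180, R2(1) codes at exactly 1 bit per symbol, tying R2(0); above that probability R2(1) wins, and past about 0.7549 R3(1) wins in turn.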
The probability estimation model used by Langdon examines the first bit of each codeword to determine whether the source probability is above or below the current estimate. See G.G. Langdon, "An Adaptive Run-Length Coding Algorithm," IBM Technical Disclosure Bulletin, Vol. 26, No. 7B, Dec. 1983. Based on this determination, k is increased or decreased. For example, if a codeword indicating MPS is seen, the probability estimate is too low. Therefore, according to Langdon, k is increased by 1 for each 0 codeword. If a codeword indicating less than MAXRUN MPSs followed by an LPS (e.g., a 1N codeword) is seen, the probability estimate is too high. Therefore, according to Langdon, k is decreased by 1 for each 1N codeword.
The present examples allow more complex probability estimation than the simple increase or decrease of k by 1 for every codeword. The present examples include a probability estimation module state that determines the code to use. Many states may use the same code. Codes are assigned to states using a state table or state machine.
In one example, the probability estimate changes state with every codeword output. Thus, the probability estimation module increases or decreases the probability estimate depending on whether a codeword begins with a 0 or a 1. For instance, if a "0" codeword is output, an increase of the estimate of the MPS probability occurs. On the other hand, if a "1N" codeword is output, the estimate of the MPS probability is decreased.
The Langdon coder of the prior art only used R2(k) codes and increased or decreased k for each codeword. The present example, alternatively, uses R2(k) and R3(k) codes, in conjunction with the state table or state machine, to allow the adaptation rate to be tuned to the application.
That is, if there is a small amount of stationary data, adaptation must be quicker to result in more optimal coding, and where there is a larger amount of stationary data, the adaptation time can be longer so that the coding can be chosen to achieve better compression on the remainder of the data. Note that where variable numbers of state changes can occur, application-specific characteristics may also influence the adaptation rate. Because of the nature of the R-codes, the estimation for R-codes is simple and requires little hardware, while being very powerful. Figure 33 illustrates a graph of coding efficiency (codelength normalized with respect to entropy) versus MPS probability. Figure 33 shows how some of the R-codes cover the probability space. As an example, Figure 33 shows that for an MPS probability of approximately 0.55, the efficiency of the R2(0) code is 1.01 (or 1% worse than the entropy limit). In contrast, the R2(1) code has an efficiency of 1.09 (or 9% worse than the entropy limit). This example shows that using the wrong code for this particular low-probability case causes an 8% loss in coding efficiency.
The incorporation of the R3(k) codes allows more probability space to be covered with greater efficiency. An example probability estimation state table is shown in Figure 5. Referring to Figure 5, the probability estimation state table shows both a state counter and the code associated with each of the separate states in the table. Note that the table includes both positive and negative states. The table is shown having 37 positive states and 37 negative states, including the zero states.
The negative states signify a different MPS than the positive states. In one example, the negative states can be used when the MPS is 1 and the positive states can be used when the MPS is 0, or vice versa. Note that the table shown in Figure 5 is an example only and that other tables might have more or fewer states and a different state allocation.
Initially, the coder is in state 0, which is the R2(0) code (i.e., no code) for a probability estimate equal to 0.50. After each codeword is processed, the state counter is incremented or decremented depending on the first bit of the codeword. In one example, a codeword of 0 increases the magnitude of the state counter; a codeword starting with 1 decreases the magnitude of the state counter. Therefore, every codeword causes a change to be made in the state by the state counter. In other words, the probability estimation module changes state. However, consecutive states could be associated with the same code. In this case, the probability estimation is accomplished without changing codes every codeword. In other words, the state is changed for every codeword; however, the state is mapped into the same probabilities at certain times. For instance, states 5 to -5 all use the R2(0) code, while states 6 through 11 and -6 through -11 use the R2(1) code. Using the state table of the present example, probability estimation is allowed to stay with the same coder in a non-linear manner.
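A minimal software sketch of this state-counter adaptation follows. The R2(0) and R2(1) ranges come from the text above; the assignment beyond |state| = 11, the clamping, and the zero-state handling are illustrative assumptions (the actual allocation is Figure 5's table, which is not reproduced here).

```python
# Hedged sketch of the state-based probability estimation module.

MAX_STATE = 36   # 37 positive and 37 negative states, counting the zero state

def code_for_state(state):
    m = abs(state)
    if m <= 5:
        return ('R2', 0)     # states -5..5: R2(0), i.e. no coding
    if m <= 11:
        return ('R2', 1)     # states 6..11 and -6..-11: R2(1)
    # Placeholder for the remaining allocation of Figure 5 (hypothetical).
    return ('R2', min(2 + (m - 12) // 4, 12))

def next_state(state, first_codeword_bit):
    """'0' codeword: raise the MPS probability estimate (magnitude up).
    '1N' codeword: lower it (magnitude down). The sign of the state
    encodes which symbol is the MPS; flipping at zero is omitted here."""
    if first_codeword_bit == 0:
        step = 1 if state >= 0 else -1
        return max(-MAX_STATE, min(MAX_STATE, state + step))
    if state == 0:
        return 0             # a full implementation flips the MPS here
    return state - 1 if state > 0 else state + 1
```

Because consecutive states map to the same code, several codewords in a row can change the state without changing the code in use.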
It should be noted that more states with the same R-code are included for the lower probabilities. This is done because the loss of efficiency when using the wrong code at low probabilities is great. The nature of the run-length code state table is to transfer between states after each codeword. In a state table designed to change codes with every change in state, when toggling between states at the lower probabilities, the code toggles between a code which is very close to the entropy efficiency limit and a code which is far from the entropy efficiency limit. Thus, a penalty (in terms of the number of coded data bits) can result from the transition between states. Prior art probability estimation modules, such as Langdon's probability estimation module, lose performance because of this penalty.
In the higher-probability run-length codes, the penalty for being in the wrong code is not as great. Therefore, in the present example, additional states are added at the lower probabilities, so that the chances of toggling between the two correct states are increased, thereby reducing the coding inefficiency.
Note that in certain embodiments, the coder may have an initial probability estimate state. In other words, the coder could start in a predetermined one of the states, such as state 18. In one embodiment, a different state table is used so that some states are used for the first few symbols to allow for quick adaptation, and a second state table is used for the remaining symbols for slow adaptation to allow fine-tuning of the probability estimate. In this manner, the coder may be able to use a more efficient code sooner in the coding process. In another embodiment, the code stream specifies an initial probability estimate for each context. In one embodiment, the increments and decrements are not made according to a fixed number (e.g., 1). Instead, the probability estimate state is incremented by a variable number according to the amount of data already encountered or the amount of change in the data (stability). Examples of such tables are Tables 21-25 described below.
If the state table is symmetric, as the example table of Figure 5 shows, only half of it (including the zero state) needs to be stored or implemented in hardware. In one embodiment, the state number is stored in sign-magnitude (one's complement) form to take advantage of the symmetry. In this manner, the table can be utilized by taking the absolute value of the one's complement number to determine the state and examining the sign to determine whether the MPS is a 1 or 0. This allows the hardware needed for incrementing and decrementing the state to be reduced, because the absolute value of the state is used to index the table and the computation of the absolute value of a one's complement number is trivial. In another embodiment, for greater hardware efficiency, a state table can be replaced by a hardwired or programmable state machine. A hardwired state-to-code converter is one implementation of the state table.
Overview of the Balanced Parallel Entropy Coding System

The present invention provides a balanced parallel entropy coding system. The parallel entropy coding system includes both real-time encoding and real-time decoding performed in high-speed/low-cost hardware. The present invention may be used in numerous lossless coding applications, including, but not limited to, real-time compression/decompression of writeable optical disk or magnetic disk data, real-time compression/decompression of computer network data, real-time compression/decompression of image data in a compressed framestore in a multi-function (e.g., copier, facsimile, scanner, printer, etc.) machine, and real-time compression/decompression of audio data.
Specifying the performance of the encoder requires some attention. It is straightforward to design an encoder that achieves a certain rate for the original data given a sufficiently fast coded data channel. In many cases, however, the goal is for the encoder to utilize the coded data channel efficiently. Coded data channel utilization is impacted by the maximum burst rate on the original data interface, the encoder speed, and the compression achieved on the data. The impact of these effects must be considered over some local amount of data, which is dependent on the amount of buffering in the encoder. It is desirable to have an encoder that utilizes the coded data channel efficiently while maintaining encoder speed and high compression, and still accommodating the maximum burst rate.
The following describes an example of such an encoder.
A decoder that may be used with the encoder is also described.
Real-time Encoding

Figure 6 is a block diagram of the encoding system.
In one example, the encoder performs real-time encoding. Referring to Figure 6, the encoding system 600 includes an encoder 602 coupled to a context model (CM) & state memory 603 for generating coded information in the form of codeword information 604 in response to original data 601. Codeword information 604 is received by a reorder unit 606, which is coupled to a reorder memory 607. In response to codeword information 604, reorder unit 606, in cooperation with reorder memory 607, generates coded data stream 608. It should be noted that the encoding system 600 is not limited to operating on codewords, and may, in other examples, operate on discrete analog waveforms, variable-length bit patterns, channel symbols, alphabets, events, etc.
Encoder 602 includes a context model (CM), a probability estimation machine (PEM) and a bitstream generator (BG). The context model and PEM (probability estimation machine) in encoder 602 are essentially identical to those in the decoder (except for the direction of data flow). The bit generator of encoder 602 is similar to the decoder bit generator, and is described below.
The result of the coding by encoder 602 is the output of zero or more bits that represent the original data. The output of the bitstream generator also includes one or more control signals. These control signals provide a control path to the data in the bit stream. In one example, the codeword information may comprise a start-of-run indication, an end-of-run indication, a codeword and an index identifying the run count (whether it be by context or probability class) for the codeword. One example of the bitstream generator is described below.
Reorder unit 606 receives the bits and control signals generated by the bit stream generator (if any) of coder 602 and generates coded data. In one example, the coded data output by reorder unit 606 comprises a stream of interleaved words.
In one example, reorder unit 606 performs two functions. Reorder unit 606 moves codewords from the end of runs, as created by the encoder, to the beginning of runs, as needed by the decoder, and combines variable-length codewords into fixed-length interleaved words and outputs them in the proper order required by the decoder.
The reorder unit 606 uses a temporary reordering memory 607. In one example, where encoding is performed on a workstation, temporary reordering memory 607 can be over 100 Megabytes in size. In the balanced system of the present examples, the temporary reordering memory 607 is much smaller (e.g., approximately 1 Kbyte) and fixed. Thus, in one example, real-time encoding is performed using a fixed amount of memory, even if this increases the memory required by the decoder or the bitstream (such as when an output is made prior to the completion of a run). The decoder is able to determine the effects of the reorder unit's limited memory using, for instance, implicit, explicit or instream signaling (as described below). Reorder unit 606 has finite memory available for reordering, but the memory "needed" is unbounded. Both the effect of limited memory on the end-of-run to beginning-of-run queue and on interleaved word reordering must be considered.
In one example, the encoding system (and corresponding decoding system) of the present invention performs the encoding (or decoding) using a single integrated circuit chip. In another example, a single integrated circuit contains the encoder system, including its encoder and decoder, and memory. A separate external memory may be added to aid in encoding. A multi-chip module or integrated circuit may contain both the encoding/decoding hardware and the memory.
The encoding system may attempt to increase the effective bandwidth by up to a factor of N. If the compression achieved is less than N:1, then the coded data channel will be fully utilized but the effective bandwidth increase achieved is only equal to the compression rate. If the compression achieved is greater than N:1, then the effective bandwidth increase of N is achieved, with the extra compression being wasted. In both cases, the compression achieved must be over a local region of the data defined by the amount of buffering present in the encoding system.
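The bandwidth accounting above reduces to a one-line rule (a simplification that assumes steady-state operation over the buffered region):

```python
def effective_bandwidth_gain(n, compression_ratio):
    """With N parallel coders, the effective bandwidth gain is capped at N;
    below N:1 compression it is limited by the compression itself."""
    return min(n, compression_ratio)
```

For instance, four parallel coders with 2:1 compression yield a 2x gain; with 6:1 compression the gain saturates at 4x.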
Bit Generator for the Encoder

Figure 7 shows one example of the encoder bit generator.
Bit generator 701 is coupled to receive a probability class and an uncoded bit (e.g., an MPS or LPS indication) as inputs. In response to the inputs, bit generator 701 outputs multiple signals. Two of the outputs are control signals that indicate the start of a run and the end of a run (each codeword represents a run): start signal 711 and end signal 712, respectively.
It is possible for a run to start and end at the same time. When a run starts or ends, "index" output 713 comprises an indication of the probability class (or context) for the uncoded bit. In one example, index output 713 represents a combination of the probability class for the bit and a bank identification for systems in which each probability class is replicated in several banks of memory. Codeword output 714 is used to output a codeword from bit generator 701 when a run ends.

A memory 702 is coupled to bit generator 701 and contains the run count for a given probability class. During bit generation, bit generator 701 reads from memory 702 using the index (e.g., probability class). After reading from memory 702, bit generator 701 performs bit generation as follows. First, if the run count equals zero, then start signal 711 is asserted, indicating the start of a run. Then, if the uncoded bit is equal to the LPS, end signal 712 is asserted, indicating the end of the run. Also, if the uncoded bit equals an LPS, codeword output 714 is set to indicate that the codeword is a 1N codeword and the run count is cleared, e.g., set to zero (since it is the end of the run). If the uncoded bit does not equal the LPS, then the run count is incremented and a test determines whether the run count equals the maximum run count for the code. If so, then end signal 712 is asserted, codeword output 714 is set to zero and the run count is cleared (e.g., the run count is set to zero). If the test determines that the run count does not equal the maximum for the code, then the incremented run count is stored. Note that index signal 713 represents the probability class received as an input.
In the present examples, the generation of 1N codewords is performed such that their length can be determined without any additional information.
Table 12 illustrates the 1N codeword representations of R3(2) codewords for the decoder and encoder. The decoder expects that the "1" bit in a 1N codeword be the LSB and that the "N" count portion be in the proper MSB...LSB order. In decoder order, the variable-length codeword cannot be distinguished from zero padding without knowing which particular code is used. In encoder order, the codeword is reversed and the position of the most significant "1" bit indicates the length of 1N codewords. To generate codewords in encoder order, the complement of the count value must be reversed. This can be accomplished by reversing the 13-bit count and then shifting it so that it is aligned to the LSB. As described in detail below, the bit pack unit reverses the codewords back into decoder order. However, this reversal of codewords causes no increased complexity in bit pack unit 605, since it must perform shifting anyway.
Table 12 - 1N Codeword Representations for R3(2) Codewords

  uncoded data   codeword   reverse of count value   decoder order    encoder order
  000000         0                                   0000000000000    0000000000000
  000001         1000       00                       0000000000001    0000000001000
  00001          1010       01                       0000000000101    0000000001010
  0001           1001       10                       0000000001001    0000000001001
  001            1011       11                       0000000001101    0000000001011
  01             110        0                        0000000000011    0000000000110
  1              111        1                        0000000000111    0000000000111

For R3 codes, generating 1N codewords also requires that the bit following the "1" indicate whether a short or long count is present.
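The two orders in Table 12 can be reproduced with simple bit reversal. The sketch below (the field width constant and helper names are illustrative) shows how, in encoder order, the length of a 1N codeword is recovered from the position of its top set bit alone, with no knowledge of the particular R-code.

```python
# Sketch of the encoder-order / decoder-order trick from Table 12.

WIDTH = 13   # field width used in Table 12

def encoder_order(codeword):
    """Codeword (MSB first) aligned to the LSB of a 13-bit field. Since a
    '1N' codeword starts with 1, its top set bit marks its length."""
    return format(int(codeword, 2), '0%db' % WIDTH)

def decoder_order(codeword):
    """Bit-reversed codeword, LSB-aligned: the leading '1' lands at the
    LSB, as the decoder expects."""
    return format(int(codeword[::-1], 2), '0%db' % WIDTH)

def length_in_encoder_order(word):
    # Recoverable without knowing which R-code produced the codeword.
    return int(word, 2).bit_length()
```

In decoder order the same word is indistinguishable from zero padding, which is exactly the problem the encoder-order representation avoids.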
By using multiple banks of memory, the present example allows pipelining. For instance, in the case of a multi-ported memory, a read operation occurs to memory for an uncoded bit while a write operation occurs to the memory for the previous uncoded bit.
The design comprises multiple parts, as shown in Figure 13. First, ENCBG 13D1 is the main part of the design which has the logic to handle the start, end and continuation of runs. Second, "KEXPANW 1302 is used to expand the probability class into the maximum run length, a variable length 10 mask, and the length of the first long codeword for R3 codes. KEXPAND 1302 is identical to the decoder function with the same name. Third, the "LPSCW 1303 part takes a count value and information about the probability class as inputs and generates the proper "1 W codeword.
The design uses two pipeline stages. During the first pipeline stage, the count is incremented, the probabMity class is expanded, and a subtracion and comparison for long R3 CDdeWOrds is performed. All of the other operations are perforr-ned during the second pipeline stage.
encbg.tdf

TITLE "Bit Generator for the encoder";
INCLUDE "kexpand.inc";
INCLUDE "lpscw.inc";

SUBDESIGN encbg
(
    k[3..0], r3, bit, count_in[12..0], clk          : INPUT;
    start_run, end_run, index[4..0],
    count_out[12..0], codeword[12..0]               : OUTPUT;
)
VARIABLE
    k_q[3..0], r3_q, k_qq[3..0], r3_qq, bit_q, bit_qq,
    count_in_q[12..0], start_run, end_run, start_run_q,
    index[4..0], count_out[12..0], count_plus[12..0],
    max_rl[12..0], codeword[12..0]                  : DFF;
    kexpand_ : kexpand;
    lpscw_   : lpscw;
BEGIN
    lpscw_.clk = clk;
    k_q[].clk = clk;       r3_q.clk = clk;
    k_qq[].clk = clk;      r3_qq.clk = clk;
    bit_q.clk = clk;       bit_qq.clk = clk;
    count_in_q[].clk = clk;
    start_run.clk = clk;   end_run.clk = clk;   start_run_q.clk = clk;
    index[].clk = clk;     count_out[].clk = clk;
    count_plus[].clk = clk;
    max_rl[].clk = clk;    codeword[].clk = clk;

    k_q[] = k[];           r3_q = r3;
    k_qq[] = k_q[];        r3_qq = r3_q;
    bit_q = bit;           bit_qq = bit_q;
    count_in_q[] = count_in[];
    count_plus[] = count_in_q[] + 1;
    start_run = start_run_q;
    start_run_q = (count_in_q[] == 0);
    index[0] = r3_qq;
    index[4..1] = k_qq[];
    kexpand_.k_reg[] = k_q[];
    kexpand_.r3_reg = r3_q;
    lpscw_.r3 = r3;
    lpscw_.k_q[] = k_q[];
    lpscw_.r3_q = r3_qq;
    lpscw_.count[] = count_in_q[];
    lpscw_.mask[] = kexpand_.mask[];
    lpscw_.r3_split[] = kexpand_.r3_split[];
    lpscw_.maxrl_q[] = max_rl[];
    max_rl[] = kexpand_.maxrl[];

    IF (bit_qq) THEN                        % LPS %
        end_run = VCC;
        count_out[] = 0;
        codeword[] = lpscw_.cw[];
    ELSIF (count_plus[] == max_rl[]) THEN
        end_run = VCC;
        count_out[] = 0;
        codeword[] = 0;
    ELSE
        end_run = GND;
        count_out[] = count_plus[];
        codeword[] = 0;
    END IF;
END;

lpscw.tdf

SUBDESIGN lpscw
(
    r3, k_q[3..0], r3_q, count[12..0], mask[11..0],
    r3_split[10..0], maxrl_q[12..0], clk            : INPUT;
    cw[12..0]                                       : OUTPUT;
)
VARIABLE
    temp[12..0]         : NODE;
    temp_rev[12..0]     : NODE;
    temp_sh[12..0]      : NODE;
    split[11..0]        : NODE;
    r3_long             : DFF;
    count_minus[11..0]  : DFF;
    mask_q[11..0]       : DFF;
    count_q[12..0]      : DFF;
BEGIN
    r3_long.clk = clk;
    count_minus[].clk = clk;
    mask_q[].clk = clk;
    count_q[].clk = clk;

    split[10..0] = r3_split[];
    split[11] = GND;
    r3_long = (r3) AND (count[11..0] >= split[]);
    count_minus[] = count[11..0] - split[];
    mask_q[] = mask[];
    count_q[] = count[];

    % pipeline stage %

    IF (r3_long) THEN
        temp[11..0] = (count_minus[]) XOR mask_q[];
    ELSE
        temp[11..0] = count_q[11..0] XOR mask_q[];
    END IF;
    temp[12] = GND;

    temp_rev[0] = temp[12];   temp_rev[1] = temp[11];
    temp_rev[2] = temp[10];   temp_rev[3] = temp[9];
    temp_rev[4] = temp[8];    temp_rev[5] = temp[7];
    temp_rev[6] = temp[6];    temp_rev[7] = temp[5];
    temp_rev[8] = temp[4];    temp_rev[9] = temp[3];
    temp_rev[10] = temp[2];   temp_rev[11] = temp[1];
    temp_rev[12] = temp[0];

    CASE k_q[] IS
        WHEN 0  => temp_sh[] = 0;
        WHEN 1  => temp_sh[0] = temp_rev[12];        temp_sh[12..1] = 0;
        WHEN 2  => temp_sh[1..0] = temp_rev[12..11]; temp_sh[12..2] = 0;
        WHEN 3  => temp_sh[2..0] = temp_rev[12..10]; temp_sh[12..3] = 0;
        WHEN 4  => temp_sh[3..0] = temp_rev[12..9];  temp_sh[12..4] = 0;
        WHEN 5  => temp_sh[4..0] = temp_rev[12..8];  temp_sh[12..5] = 0;
        WHEN 6  => temp_sh[5..0] = temp_rev[12..7];  temp_sh[12..6] = 0;
        WHEN 7  => temp_sh[6..0] = temp_rev[12..6];  temp_sh[12..7] = 0;
        WHEN 8  => temp_sh[7..0] = temp_rev[12..5];  temp_sh[12..8] = 0;
        WHEN 9  => temp_sh[8..0] = temp_rev[12..4];  temp_sh[12..9] = 0;
        WHEN 10 => temp_sh[9..0] = temp_rev[12..3];  temp_sh[12..10] = 0;
        WHEN 11 => temp_sh[10..0] = temp_rev[12..2]; temp_sh[12..11] = 0;
        WHEN 12 => temp_sh[11..0] = temp_rev[12..1]; temp_sh[12] = GND;
    END CASE;

    IF (NOT r3_q) THEN                      % R2 %
        cw[] = temp_sh[] OR maxrl_q[];
    ELSIF (NOT r3_long) THEN                % R3 SHORT %
        cw[12] = GND;
        cw[11..0] = temp_sh[12..1] OR maxrl_q[11..0];
    ELSE                                    % R3 LONG %
        cw[12..1] = temp_sh[12..1] OR (maxrl_q[11..0] AND NOT mask_q[11..0]);
        cw[0] = temp_sh[0];
    END IF;
END;

kexpand.tdf

TITLE "decoder k expand logic";

SUBDESIGN kexpand
(
    k_reg[3..0], r3_reg                             : INPUT;
    maxrl[12..0], mask[11..0], r3_split[10..0]      : OUTPUT;
)
BEGIN
    TABLE
        k_reg[], r3_reg  =>  maxrl[], mask[], r3_split[];
        0,  0  =>  1,    0,    X;
        1,  0  =>  2,    1,    X;
        1,  1  =>  3,    1,    1;
        2,  0  =>  4,    3,    X;
        2,  1  =>  6,    3,    2;
        3,  0  =>  8,    7,    X;
        3,  1  =>  12,   7,    4;
        4,  0  =>  16,   15,   X;
        4,  1  =>  24,   15,   8;
        5,  0  =>  32,   31,   X;
        5,  1  =>  48,   31,   16;
        6,  0  =>  64,   63,   X;
        6,  1  =>  96,   63,   32;
        7,  0  =>  128,  127,  X;
        7,  1  =>  192,  127,  64;
        8,  0  =>  256,  255,  X;
        8,  1  =>  384,  255,  128;
        9,  0  =>  512,  511,  X;
        9,  1  =>  768,  511,  256;
        10, 0  =>  1024, 1023, X;
        10, 1  =>  1536, 1023, 512;
        11, 0  =>  2048, 2047, X;
        11, 1  =>  3072, 2047, 1024;
        12, 0  =>  4096, 4095, X;
    END TABLE;
END;

The Reorder Unit of the Present Invention

Figure 8 is a block diagram of one example of the reorder unit. Referring to Figure 8, reorder unit 606 comprises a run count reorder unit 801 and a bit packing unit 802. Run count reorder unit 801 moves codewords from the end of runs as created by the encoder to the beginning of runs as needed by the decoder, while bit packing unit 802 combines variable length codewords into fixed length interleaved words and outputs them in the proper order required by the decoder.
A "snooper" decoder can be used to reorder for any decoder, in which a decoder is included in the encoder and provides requests for data in the order in which the codewords will be needed by the real decoder. To support a snooper decoder, reordering of run counts might have to be done independently for each stream. For decoders that can be modeled easily, multiple time stamped queues or a single merged queue may be used to allow reordering. In one example, reordering each codeword can be accomplished using a queue-like data structure and is independent of the use of multiple coded data streams. A description of how the reordering may be performed is given below.
The first reordering operation that is performed in the encoder is to reorder each of the run counts so that the run count is specified at the beginning of the run (as the decoder requires for decoding). This reordering is required because the encoder does not determine what a run count (and codeword) is until the end of a run. Thus, the resulting run count produced from coding the data is reordered so that the decoder is able to properly decode the run counts back into the data stream.
Referring back to Figure 8, reorder unit 606 comprises run count reorder unit 801 and bit pack unit 802. Run count reorder unit 801 is coupled to receive multiple inputs that include start signal 711, end signal 712, index signal 713 and codeword 714. These signals will be described in more detail in conjunction with the run count reorder unit of Figure 9. In response to the inputs, the run count reorder unit 801 generates codeword 803 and signal 804. Signal 804 indicates when to reset the run count. Codeword 803 is received by bit pack unit 802. In response to codeword 803, bit pack unit 802 generates interleaved words 805.
Run count reorder unit 801 and bit pack unit 802 are described in further detail below.
Run Count Reorder Unit

As described above, the decoder receives codewords at the time the beginning of the data coded by the codeword is needed. However, the encoder does not know the identity of the codeword until the end of the data coded by the codeword.
A block diagram of one example of the run count reorder unit 801 is described in Figure 9. The described embodiment accommodates four streams, where each interleaved word is 16 bits, and the codewords vary in length from one to thirteen bits. In such a case, the reorder unit 606 may be pipelined to handle all streams. Furthermore, an encoder that associates run counts with probability classes is used such that the maximum number of run counts that can be active at any time is small, and is assumed to be 25 for this embodiment. Note that the present example is not limited to four interleaved streams, interleaved words of 16 bits or codeword lengths of 1 to 13 bits, and may be used for more or fewer streams with interleaved words of more or less than 16 bits and codeword lengths that extend from 1 bit to over 13 bits.
Referring to Figure 9, a pointer memory 901 is coupled to receive index input 713 and produces an address output that is coupled to one input of multiplexer (MUX) 902. Two other inputs of MUX 902 are coupled to receive an address in the form of a head pointer from head counter 903 and an address in the form of a tail pointer from tail counter 904. The output of MUX 902 is an address coupled to and used to access a codeword memory 908.
Index input 713 is also coupled as an input to MUX 905. Another input of MUX 905 is coupled to the codeword input 714. The output of MUX 905 is coupled to an input of valid detection module 906 and to a data bus 907. Data bus 907 is coupled to codeword memory 908 and an input of MUX 905. Also coupled to data bus 907 is an output of control module 909. Start input 711 and end input 712 are coupled to separate inputs of control module 909. The outputs of valid detection module 906 comprise the codeword output 803 and the signal 804 (Figure 8). Run count reorder unit 801 also comprises controller logic (not shown to avoid obscuring the present invention) to coordinate the operations of the various components of run count reorder unit 801.

To reiterate, index input 713 identifies a run. In one example, the index indicates one of 25 probability classes. In such a case, five bits are needed to represent the index. Note that if multiple banks of probability classes are used, then extra bits might be required to specify the particular bank. In one example, the index input identifies the probability class for the run count. Codeword input 714 is the codeword when the end of a run occurs and is a "don't care" otherwise. Start input 711 and end input 712 are control signals that indicate whether a run is beginning, ending, or both. A run begins and ends at the same time when the run consists of a single uncoded bit.
Run count reorder unit 801 reorders the run counts generated by the bit generator in response to its input signals. Codeword memory 908 stores codewords during reordering. In one example, codeword memory 908 is larger than the number of run counts that can be active at one time. This leads to better compression. If the codeword memory were smaller than the number of run counts that can be active at one time, this would actually limit the number of active run counts to the number that could be held in memory. In a system that provides good compression, it often occurs that while data for one codeword with a long run count is being accumulated, many codewords with short run counts will start (and perhaps end also). This requires having a large memory to avoid forcing out the long run before it is completed.
Pointer memory 901 stores addresses of codeword memory locations for probability classes that are in the middle of a run and addresses codeword memory 908 in a random access fashion. Pointer memory 901 has a storage location for the address in codeword memory 908 for each probability class that may be in the middle of a run. Once a run has completed for a particular probability class, the address stored in pointer memory 901 for that probability class is used to access codeword memory 908, and the completed codeword is written into codeword memory 908 at that location. Until that time, that location in codeword memory 908 contained an invalid entry. Thus, pointer memory 901 stores the location of the invalid codeword for each run count.
Head counter 903 and tail counter 904 also provide addresses to access codeword memory 908. Using head counter 903 and tail counter 904 allows codeword memory 908 to be addressed as a queue or circular buffer (e.g., a first in, first out [FIFO] memory). Tail counter 904 contains the address of the next available location in codeword memory 908 to permit the insertion of a codeword into codeword memory 908. Head counter 903 contains the address in codeword memory 908 of the next codeword to be output. In other words, head counter 903 contains the codeword memory address of the next codeword to be deleted from codeword memory 908. A location for each possible index (e.g., probability class) in pointer memory 901 is used to remember where tail pointer 904 was when a run was started so that the proper codeword can be placed in that location of codeword memory 908 when the run ends.
Control module 909 generates a valid signal as part of the data stored in codeword memory 908 to indicate whether or not an entry stores valid codeword data. For instance, if the valid bit is at a logical 1, then the codeword memory location contains valid data. However, if the valid bit is at a logical 0, then the codeword memory location contains invalid data. Valid detection module 906 determines if a memory location contains a valid codeword each time a codeword is read out from codeword memory 908. In one example, the valid detection module 906 detects whether the memory location has a valid codeword or a special invalid code.
When starting a new run, an invalid data entry is put in codeword memory 908. The invalid data entry acts as a space holder in the stream of data stored in codeword memory 908, such that the codeword for the run may be stored in the memory in the correct location (to ensure proper ordering to model the decoder) when the run has completed. In one example, the invalid data entry includes the index via MUX 905 and an invalid indication (e.g., an invalid bit) from control module 909. The address in codeword memory 908 at which the invalid entry is stored is given by tail pointer 904, and subsequently stored in pointer memory 901 as a reminder of the location for the run count in codeword memory 908. The remainder of the data that appears between head pointer 903 and tail pointer 904 in codeword memory 908 is completed run counts (e.g., reordered run counts). The number of invalid memory locations ranges from 0 to I-1, where I is the number of run counts. When a codeword is complete at the end of a run, the run count is filled in codeword memory 908 using the address stored in pointer memory 901.
When a run starts, the index for the run is stored in codeword memory 908, so that if codeword memory 908 is full but the run is not yet complete, the index is used in conjunction with signal 804 to reset the corresponding run counter. In addition to storing codewords or indices in codeword memory 908, one bit, referred to herein as the "valid" bit, is used to indicate which of these two types of data is stored.
If not starting or ending a run, the run count reorder unit is idle. If starting a run and not ending a run and if the memory is full, then a codeword is output from codeword memory 908. The codeword that is output is the codeword stored at the address contained in head pointer 903. Then, if starting a run and not ending a run (irrespective of whether the memory is full), index input 713 is written into codeword memory 908 via MUX 905 at the address designated by tail pointer 904. Tail pointer 904 is then written into pointer memory 901 at an address designated by the data on index input 713 (e.g., at the location in pointer memory 901 for the probability class). After writing tail pointer 904, tail pointer 904 is incremented.
If ending a run and not starting a run, then the address stored in pointer memory 901 corresponding to the index (probability class) is read out and used as the location in the codeword memory to store the completed codeword on codeword input 714.
If starting a run and ending a run (i.e., a run both begins and ends at the same time), and the memory is full, then a codeword is output from codeword memory 908. Then, if starting a run and ending a run (irrespective of whether the memory is full), codeword input 714 is written into codeword memory 908 at the address specified by tail pointer 904. Tail pointer 904 is then incremented to contain the next available location (e.g., incremented by 1).
In the present examples, run count reorder unit 801 may output codewords at different times. In one example, codewords may be output when they are valid or invalid. Codewords may be output when invalid if a memory full condition exists and a run has not completed. Invalid codewords may also be output to maintain a minimum rate (i.e., for rate control). Also, invalid codewords may be output to flush codeword memory 908 when all of the data has undergone run count reordering or when the run count reorder unit jumps to the middle of codeword memory 908 as a result of a reset operation. Note that in such a case, the decoder must be aware that the encoder is operating in this way.
As described above, a codeword is output whenever the codeword memory 908 is full. Once the memory is full, whenever an input (i.e., starting a new codeword) to the codeword memory 908 is made, an output from the codeword memory 908 is made. Note that an update to an entry does not cause an output from the codeword memory 908 when a memory full condition exists. That is, the completion of a run followed by the writing of the resulting codeword into its previously assigned memory location does not cause a memory full output to occur. Similarly, when a run ends and the corresponding address in pointer memory 901 and the address in the head counter 903 are the same, the codeword can be output immediately and the head counter 903 can then be incremented without accessing the codeword memory 908. In one example, a memory full condition occurs when the tail pointer 904 is equal to the head pointer 903 after the tail pointer has been incremented. Therefore, once the tail pointer 904 has been incremented, the controller logic in the run count reorder unit 801 compares the tail pointer 904 and the head pointer 903 and, if they are the same, the controller logic determines that the codeword memory 908 is full and that a codeword should be output. In another example, codewords may be output prior to the memory being full. For instance, if the portion of the queue addressed by the head contains valid codewords, it may be output. This requires that the beginning of the queue be repeatedly examined to determine the status of the codewords therein. Note that the codeword memory 908 is emptied at the end of coding of a file.
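The memory full test described here reduces to a modular comparison of the two pointers (an illustrative Python fragment; the circular buffer arithmetic is implied by the head/tail description rather than stated in this form):

```python
def memory_full(head, tail, size):
    # The queue is full when incrementing the tail pointer (modulo the
    # codeword memory size) would make it equal to the head pointer.
    return (tail + 1) % size == head
```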
Using run count reorder unit 801, a codeword is output by first reading a value (e.g., data) from codeword memory 908 at an address specified by head pointer 903. The outputting of codewords is controlled and coordinated using controller logic. Valid detection module 906 performs a test to determine if the value is a codeword. In other words, valid detection module 906 determines if the codeword is valid. In one example, valid detection module 906 determines the validity of any entry by checking the validity bit stored with each entry. If the value is a codeword (i.e., the codeword is valid), then the value is output as a codeword. On the other hand, if the value is not a codeword (i.e., the codeword is invalid), then any codeword may be output which has a run of MPSs at least as long as the current run count. The "0" codeword is one codeword that correctly represents the current run thus far, and may be output. After the output has been made, head pointer 903 is incremented to the next location in codeword memory 908. Alternatively, using the "1N" codeword with the shortest allowable run length allows the decoder to check only whether a codeword has been forced out before emitting an LPS.
In one example, run count reorder unit 801 operates with a two clock cycle time. In the first clock cycle, inputs are received into run count reorder unit 801. In the second clock cycle, an output from codeword memory 908 occurs.
While codewords may be output whenever head pointer 903 addresses a valid codeword, it may be desirable in some implementations to only output a codeword when the buffer is full. This causes the system to have a fixed delay in terms of a number of codewords, instead of a variable delay. If memory 908 is able to hold a predetermined number of codewords between the time when a run is started and its codeword is input and the time when it is output, the delay is that number of codewords, since an output is not made until the memory is full. Thus, there is a constant delay in codewords. Note that the reordering delay is still variable in other measures, for example, the amount of coded or original data. By allowing memory 908 to fill up prior to producing an output, the output generates a codeword per cycle.
Note that if a codeword memory location is marked as invalid, the unused bits may be used to store an identification of what run count it is for (i.e., the context bin or probability class that must fill the location is stored therein). This information is useful for handling the case where the memory is full. Specifically, the information may be used to indicate to the bit generator that a codeword for this particular run length was not finished and that it must be finished now. In such a case, a decision has been made to output an invalid codeword, which may have occurred due to a memory full condition.
Thus, when the system resets the run counter, the information indicates when, in terms of bit generators and run counts, the system is to begin again.
With respect to the index input, for pipelining reasons when banks of probability classes are used, the index may include a bank identifier. That is, there may be multiple run counts for a particular probability class. For
Since the codewords are variable length, they must be stored in codeword memory 908 in a manner that allows their length to be determined.
While it would be possible to store the size explicitly, this would not minimize memory usage. For R-codes, storing a value of zero in memory can indicate a one bit "0" codeword, and the "1N" codewords can be stored such that a priority encoder can be used to determine the length from the first "1" bit.
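Under this convention, the length of a stored R-code codeword can be recovered with the software equivalent of a priority encoder (a hypothetical sketch, assuming codewords are stored with their first "1" bit as the most significant stored bit):

```python
def stored_codeword_length(stored_value):
    # A stored value of zero denotes the one-bit "0" codeword;
    # otherwise the position of the most significant "1" bit gives
    # the codeword length, as a priority encoder would find it.
    if stored_value == 0:
        return 1
    return stored_value.bit_length()
```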
If codeword memory 908 is multi-ported (e.g., dual ported), this design can be pipelined to handle one codeword per clock cycle. Because any location in codeword memory 908 could be accessed from multiple ports, one location in codeword memory 908 may be written, such as when an invalid entry or codeword is being stored, while another location may be read, such as when a codeword is being output. Note that in such a case, the multiplexers may have to be modified to support the multiple data and address buses.
Whenever the encoder outputs a "0" codeword and resets a run counter because the codeword memory is full, the decoder must do the same.
This requires the decoder to model the encoder's codeword memory queue.
How this is accomplished will be discussed below.
Note that to save power in CMOS implementations, counters can be disabled for "1N" codewords when "0" codewords are output for invalid runs.
This is because a "1N" codeword being decoded is valid, while only a "0" codeword may be invalid.
Alternative Example Based on Context

Figure 10 is a block diagram for another example of a run count reorder unit that reorders data received according to context (as opposed to probability class). The run count reorder unit 1000 performs reordering using the R-codes. Referring to Figure 10, the reorder unit 1000 includes a pointer memory 1001, a head counter 1002, a tail counter 1003, a data multiplexer (MUX) 1004, an address MUX 1005, a compute length block 1006, a valid detect block 1007, and a codeword memory 1008. Codeword memory 1008 stores codewords during reordering. Pointer memory 1001 stores addresses of codeword memory locations for context bins that are in the middle of a run.
Head counter 1002 and tail counter 1003 allow codeword memory 1008 to be addressed as a queue or circular buffer in addition to being addressed in random access fashion by the pointer memory 1001. For R-codes, storing a value of zero in memory can indicate a one bit "0" codeword and the "1N" codewords can be stored such that a priority encoder can be used to determine the length from the first "1" bit. Compute length module 1006 operates like a priority encoder. (If other variable length codes were used, it would be more memory efficient to add a "1" bit to mark the start of the codeword than to add log2 bits to explicitly store the length.) Run count reorder unit 1000 also includes controller logic to coordinate and control the operation of the components 1001-1008.
The operation of the run count reorder unit 1000 is very similar to the run count reorder unit that is based on probability estimates. If starting a new run, then an invalid entry including the context bin is written into codeword memory 1008 at the address indicated by tail pointer 1003. The tail pointer 1003 address is then stored in pointer memory 1001 at the address of the context bin of the current run count. Tail pointer 1003 is then incremented. When completing a run, the pointer in pointer memory 1001 corresponding to the run count is read from pointer memory 1001 and the codeword is written in parallel into codeword memory 1008 at the location designated by that pointer. If neither starting nor ending a run, and if the location in codeword memory 1008 designated by the address of head pointer 1002 does not contain invalid data, then the codeword addressed by the head is read and output. Head pointer 1002 is then incremented. For the case when a run both begins and ends at the same time, the codeword is written into codeword memory 1008 at the address designated by tail pointer 1003 and then tail pointer 1003 is incremented.
Similarly, when a run ends and the corresponding address in pointer memory 1001 and the address in head counter 1002 are the same, the codeword can be output immediately and the value in head counter 1002 can be incremented without accessing codeword memory 1008.
For run count "by context" systems, every context requires a memory location in pointer memory 1001, so the width of the BG and PEM state memory can be extended to implement this memory. The width of pointer memory 1001 is equal to the size needed for a codeword memory address.
The number of locations in codeword memory 1008 can be chosen by the designer in a particular implementation. The limited size of this memory reduces compression efficiency, so there is a cost/compression trade-off. The width of the codeword memory is equal to the size of the largest codeword plus one bit for a valid/invalid indication.
An example using the R2(2) code, shown in Table 13 below, will be used to illustrate reordering. Table 14 shows the data to be reordered (0 = MPS, more probable symbol; 1 = LPS, less probable symbol), labeled by context. There are only two contexts. The uncoded bit number indicates time in uncoded bit clock cycles. Start and end of runs are indicated, and codewords are shown at the end of runs.
Table 13 - R2(2) Code

    Original    Codeword
    0000        0
    0001        100
    001         110
    01          101
    1           111

Table 14 - Example Data to be Encoded

    Uncoded      Data    Context    Start/End    Codeword
    bit number                      of Run
    1            0       0          S
    2            0       1          S
    3            0       0
    4            1       1          E            101
    5            0       0
    6            0       1          S
    7            0       0          E            0
    8            1       1          E            101
    9            0       0          S
    10           0       1          S
    11           0       0
    12           0       1
    13           0       0
    14           0       1
    15           1       0          E            100
    16           0       1          E            0
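The end-of-run codewords in Table 14 can be reproduced with a small per-context run counter (an illustrative Python sketch; the table `R2_2` is taken from Table 13, and the helper names are hypothetical):

```python
# R2(2) codewords from Table 13, keyed by the number of MPSs (0s)
# that precede the LPS; a full run of four MPSs emits "0".
R2_2 = {0: "111", 1: "101", 2: "110", 3: "100"}

def encode_by_context(bits_with_context, max_rl=4):
    # bits_with_context: sequence of (bit, context) pairs in uncoded
    # bit order; returns the codewords in the order in which the
    # encoder completes them (i.e., at the end of each run).
    counts = {}
    codewords = []
    for bit, ctx in bits_with_context:
        n = counts.get(ctx, 0)
        if bit == 1:                  # LPS: run ends with a "1N" codeword
            codewords.append(R2_2[n])
            counts[ctx] = 0
        elif n + 1 == max_rl:         # full MPS run: emit the "0" codeword
            codewords.append("0")
            counts[ctx] = 0
        else:
            counts[ctx] = n + 1
    return codewords
```

Feeding it the sixteen (data, context) pairs of Table 14 yields the codewords 101, 0, 101, 100, 0 in encoder-completion order.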
The reordering operation for the example data is shown in Table 15. A codeword memory with four locations, 0-3, is used, which is large enough not to overflow in this example. Each row shows the state of the system after an operation, which is either the start or end of a run for a certain context or the output of a codeword. An "x" is used to indicate memory locations that are "don't care". For some uncoded bits, a run neither starts nor ends, so the run count reorder unit is idle. For uncoded bits that end runs, one or more codewords can be output, which may cause several changes to the system state.

Table 15 - Example of Reordering Operations

    Uncoded    Input        Pointers    Pointer     Codeword memory                     Output
    bit no.                 head tail   memory
                                        0   1       0        1        2        3
    1          start 0      0    1      0   x       invalid  x        x        x
    2          start 1      0    2      0   1       invalid  invalid  x        x
    3          (reordering unit idle)
    4          end 1, 101   0    2      0   x       invalid  101      x        x
    5          (reordering unit idle)
    6          start 1      0    3      0   2       invalid  101      invalid  x
    7          end 0, 0     0    3      x   2       0        101      invalid  x
                            1    3      x   2       x        101      invalid  x       0
                            2    3      x   2       x        x        invalid  x       101
    8          end 1, 101   2    3      x   x       x        x        101      x
                            3    3      x   x       x        x        x        x       101
    9          start 0      3    0      3   x       x        x        x        invalid
    10         start 1      3    1      3   0       invalid  x        x        invalid
    11-14      (reordering unit idle)
    15         end 0, 100   3    1      x   0       invalid  x        x        100
                            0    1      x   0       invalid  x        x        x       100
    16         end 1, 0     0    1      x   x       0        x        x        x
                            1    1      x   x       x        x        x        x       0

Referring to Table 15, the head and tail pointers are initialized to zero, indicating that nothing is contained in the codeword memory (e.g., the queue).
The pointer memory is shown having two storage locations, one for each context. Each location has "don't care" values prior to bit number one. The codeword memory is shown with a depth of four codewords, all initially "don't care" values.
In response to the data received for bit number 1, the head pointer remains pointing to codeword memory location 0. Since the decoder will expect data, the next available codeword memory location, 0, is assigned to the codeword and an invalid value is written into memory location 0. Because the context is zero, the address of the codeword memory location assigned to the codeword is stored in the pointer memory location for the zero context (pointer memory location 0). Thus, a "0" is stored in pointer memory location 0. The tail pointer is incremented to the next codeword memory location, 1.
In response to the data corresponding to bit number 2, the head counter remains pointing to the first memory location (since there has not been an output causing it to increment). Since the data corresponds to the second context, context 1, the next codeword memory location is assigned to the codeword as codeword memory location 1 as indicated by the tail pointer and an invalid value is written into the location. The address, codeword location 1, is written into the pointer memory location corresponding to context 1. That is, the address of the second codeword memory location is written into the pointer memory location 1. The tail pointer is then incremented.
In response to the data corresponding to bit number 3, the reorder unit is idle since a run is not starting or ending.
In response to the data corresponding to bit number 4, an end of a run is indicated for context 1. Therefore, the codeword "101" is written into the codeword memory location assigned to context 1 (codeword memory location 1) as indicated by the pointer memory location for context 1. The head and tail pointers remain the same, and the value in the pointer memory location for context 1 will not be used again, so it is "don't care".
In response to the data corresponding to bit number 5, the reorder unit is idle since a run is not starting or ending.
In response to the data corresponding to bit number 6, the same type of operations as described above for bit 2 occur.
In response to the data corresponding to bit number 7, the end of the run for the codeword for context 0 occurs. In this case, the codeword "0" is written into the codeword memory location (codeword memory location 0) as indicated by the pointer memory location for context 0 (pointer memory location 0). The value in the pointer memory location will not be used again, such that it is a "don't care". Also, the codeword memory location designated by the head pointer contains valid data. Therefore, the valid data is output and the head pointer is incremented. Incrementing the head pointer causes it to point at another codeword memory location containing a valid codeword. Therefore, this codeword is output and the head pointer is incremented again. Note that in this example, codewords are output when they are able to be, as opposed to when the codeword memory is completely full.
Processing through the uncoded bits continues to occur according to the description above. Note that the codeword memory locations are not dedicated for use with particular contexts, such that codewords from any of
ERAD ORIGMAL the contexts may be stored in a particular codeword memory location throughout the coding of a data file.
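The queue discipline in the walkthrough above (memory slots claimed in run-start order by the tail pointer, filled at run end via the per-context pointer memory, drained in order by the head pointer) can be sketched as follows. This is an illustrative Python model with invented names, and the codeword values "0" and "1N" are merely examples; it is not the hardware itself:

```python
INVALID = None   # placeholder written when a slot is claimed at run start

class RunCountReorder:
    """Sketch of the codeword reorder queue: slots are claimed in run-start
    order, filled at run end, and output in order once they hold valid data."""
    def __init__(self, size=8, num_contexts=4):
        self.mem = [INVALID] * size        # codeword memory
        self.pointer = [0] * num_contexts  # pointer memory, one per context
        self.head = 0
        self.tail = 0
        self.out = []                      # codewords emitted downstream

    def start_run(self, context):
        self.pointer[context] = self.tail  # remember where this run's codeword goes
        self.mem[self.tail] = INVALID      # invalid value written into the location
        self.tail += 1

    def end_run(self, context, codeword):
        self.mem[self.pointer[context]] = codeword
        # output every leading valid entry, in run-start order
        while self.head < self.tail and self.mem[self.head] is not INVALID:
            self.out.append(self.mem[self.head])
            self.head += 1

q = RunCountReorder()
q.start_run(0)        # bit 1: context 0 claims codeword memory location 0
q.start_run(1)        # bit 2: context 1 claims codeword memory location 1
q.end_run(1, "1N")    # bit 4: context 1's run ends; location 0 still invalid
q.end_run(0, "0")     # bit 7: context 0's run ends; both locations drain
print(q.out)          # -> ['0', '1N']: decoder order, not completion order
```

Note how the codeword completed first ("1N") is held until the earlier-started run completes, reproducing the bit-7 behavior in the walkthrough.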
The Bit Pack Unit

Bit packing is illustrated in Figure 4, where data processed by the reorder unit before and after bit packing is shown. Referring back to Figure 4, sixteen variable length codewords are shown, numbered 1 through 16 to indicate the order of use by the decoder. Every codeword is assigned to one of three coded streams. The data in each coded stream is broken into fixed length words called interleaved words. (Note that a single variable length codeword may be broken into two interleaved words.) In this example, the interleaved words are ordered in a single interleaved stream such that the order of the first variable length codeword (or partial codeword) in a particular interleaved word determines the order of the interleaved word. Other types of ordering criteria may be used. The advantage of interleaving the multiple coded streams is that a single coded data channel can be used to transfer data and that variable length shifting can be performed for each stream in parallel or in a pipeline.
The bit pack unit 802 receives the variable length codewords from the run count reorder unit 801 and packs them into interleaved words. The bit pack unit 802 comprises logic to perform the handling of variable length codewords and a merged-queue-type reordering unit to output fixed length interleaved words in the correct order. In one example, the codewords are received from the run count reorder unit at a rate of up to one codeword per clock cycle. A block diagram of one example of the bit pack unit 802 is shown in Figure 11. In the following example, four interleaved streams are used, each interleaved word is 16 bits, and codewords vary in length from one to thirteen bits. In one example, a single bit pack unit is pipelined to handle all streams. If the bit pack unit 802 uses a dual-ported memory (or register file), it can output one interleaved word per clock cycle. This may be faster than required to keep up with the rest of the encoder.
Referring again to Figure 11, the bit pack unit 802 includes packing logic 1101, a stream counter 1102, memory 1103, tail pointers 1104 and a head counter 1105. Packing logic 1101 is coupled to receive the codewords and is coupled to stream counter 1102. Stream counter 1102 is also coupled to the memory 1103. Also coupled to memory 1103 are tail pointers 1104 and head counter 1105. Stream counter 1102 keeps track of the interleaved stream with which the current input codeword is associated. In one example, stream counter 1102 repeatedly counts the streams from 0 to N-1, where N is the number of streams. Once stream counter 1102 reaches N-1, it begins counting from 0 again. In one example, stream counter 1102 is a two-bit counter and counts from 0 to 3 (for four interleaved streams). In one example, stream counter 1102 is initialized to zero (e.g., through global reset).
Packing logic 1101 merges the current input codeword with previously input codewords to form interleaved words. The length of each of the codewords may vary. Therefore, packing logic 1101 packs these variable length codewords into fixed length words. The interleaved words created by packing logic 1101 are output to memory 1103 in order and are stored in memory 1103 until the proper time to output them. In one example, memory 1103 is a static random access memory (SRAM) or a register file with sixty-four 16-bit words.
The interleaved words are stored in memory 1103. In the present example, the size of memory 1103 is large enough to handle two cases. One case is the normal operation case, where one interleaved stream has minimum length codewords and the other interleaved streams have maximum length codewords. This first case requires 3x13=39 memory locations. The other case is the initialization case, where again one stream has minimum length, or short, codewords and the others have maximum length, or long, codewords. For the second case, while 2x3x13=78 memory locations are sufficient, the operation of the PEM allows a tighter bound of 56.
Memory 1103, in cooperation with stream counter 1102 and the tail pointers 1104, performs reordering. Stream counter 1102 indicates the current stream of a codeword being received by memory 1103. Each interleaved stream is associated with at least one tail pointer. Tail pointers 1104 and head counter 1105 perform a reordering of the codewords. The reason for having two tail pointers per stream follows from interleaved word N being requested by the decoder when data in interleaved word N-1 contains the start of the next codeword. One tail pointer determines the location in the memory 1103 to store the next interleaved word from a given interleaved stream. The other tail pointer determines the location in memory to store the interleaved word after the next one. This allows the location of interleaved word N to be specified when the decoder request time of interleaved word N-1 is known. In one example, the pointers are eight 6-bit registers (two tail pointers per stream).
In one example, at the start of encoding, the tail pointers 1104 are set such that the first eight interleaved words (two from each stream) are stored in the memory 1103 in sequence, one from each stream. After initialization, whenever the packing logic 1101 begins a new interleaved word for a particular code stream, the "next" tail pointer is set to the value of the "after next" tail pointer, and the "after next" tail pointer for the code stream is set to the next available memory location. Thus, there are two tail pointers for each stream. In another example, only one tail pointer is used for each stream and indicates where the next interleaved word is to be stored in the memory 1103.
The head counter 1105 is used to determine the memory location of the next interleaved word to output from the bit pack unit 802. In the described example, the head counter 1105 comprises a 6-bit counter that is incremented to output an entire interleaved word at a time.
The memory 1103, in addition to being used for reordering, can also be used as a FIFO buffer between the encoder and the channel. It may be desirable to have this memory bigger than what is required for reordering, so a FIFO-almost-full signal can be used to stall the encoder when the channel cannot keep up with the encoder. A one-bit-per-cycle encoder cannot generate one interleaved word per cycle. When an encoder is well matched to a channel, the channel will not accept an interleaved word every cycle, and some FIFO buffering is necessary. For example, a channel that can accept a 16-bit interleaved word every 32 clock cycles would be a well matched design for 2:1 effective bandwidth expansion when compression was 2:1 or greater.
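The matching condition in this example can be verified with a small calculation. The sketch below is illustrative only (the function name and parameters are ours, not the patent's): a one-bit-per-clock encoder consumes one uncompressed bit per cycle, so in the 32 cycles the channel takes to accept one 16-bit word, the encoder produces 32/r coded bits at compression ratio r:

```python
def channel_keeps_up(compression_ratio, word_bits=16, cycles_per_word=32):
    """A one-bit-per-clock encoder produces cycles_per_word/compression_ratio
    coded bits in the time the channel accepts one interleaved word; the
    design is well matched when that never exceeds word_bits."""
    coded_bits = cycles_per_word / compression_ratio
    return coded_bits <= word_bits

# the text's example: 16-bit words, one accepted per 32 cycles
print(channel_keeps_up(2.0))   # True  -- exactly matched at 2:1 compression
print(channel_keeps_up(4.0))   # True  -- channel has slack; the FIFO drains
print(channel_keeps_up(1.5))   # False -- FIFO-almost-full must stall the encoder
```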
The Packing Logic of the Present Examples

A block diagram of the packing logic is shown in Figure 12. Referring to Figure 12, the packing logic 1101 comprises a size unit 1201, a set of accumulators 1202, a shifter 1203, a MUX 1204, a set of registers 1205, and OR gate logic 1206. Size unit 1201 is coupled to receive codewords and is coupled to accumulators 1202. The accumulators as well as the codewords are coupled to shifter 1203. Shifter 1203 is coupled to MUX 1204 and OR gate logic 1206. MUX 1204 is also coupled to registers 1205 and an output of OR gate logic 1206. The registers are also coupled to OR gate logic 1206.
In one example, codewords are input on a 13-bit bus with unused bits zeroed. These zeroed unused bits are adjacent to the "1" in "1N" codewords, so a priority encoder in size unit 1201 can be used to determine the length of the "1N" codewords and generate a size for "0" codewords.
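The size determination can be sketched as follows. This assumes the "1" of a "1N" codeword is the most significant nonzero bit on the bus (so that a priority encoder, modeled here by Python's bit_length, recovers the length) and that an all-zero bus carries the one-bit "0" codeword; the bit orientation is our assumption, not stated in the text:

```python
def codeword_size(bus13):
    """Length of a codeword on a 13-bit bus whose unused bits are zeroed.
    A priority encoder finds the top nonzero bit of a '1N' codeword; an
    all-zero bus is taken to be the one-bit '0' codeword (assumption)."""
    assert 0 <= bus13 < (1 << 13)
    return bus13.bit_length() if bus13 else 1

print(codeword_size(0))        # -> 1: the '0' codeword
print(codeword_size(0b1011))   # -> 4: a 4-bit '1N' codeword
print(codeword_size(1 << 12))  # -> 13: maximum-length codeword
```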
Accumulators 1202 comprise multiple accumulators, one for each interleaved stream. The accumulator for each interleaved stream maintains a record of the number of bits already in the current interleaved word. In one embodiment, each accumulator comprises a 4-bit adder (with carry out) and a 4-bit register used for each stream. In one example, the output of the adder is the output of the accumulator. In another embodiment, the output of the register is the output of the accumulator. Using the size of the codewords as received from size unit 1201, the accumulators determine the number of bits to shift to concatenate the current codeword into the register containing the current interleaved word for that stream.
Based on the current value of the accumulator, the shifter 1203 aligns the current codeword so it properly follows any previous codewords in that interleaved word. Thus, data in the encoder is shifted into decoder order.
The output of shifter 1203 is 28 bits, which handles the case where a 13-bit codeword must be appended to 15 bits in the current interleaved word, such that bits from the current codeword end up in the higher 12 bits of the 28 bits being output. Note that shifter 1203 operates without feedback and, thus, can be pipelined. In one example, shifter 1203 comprises a barrel shifter.
Registers 1205 store bits in the current interleaved words. In one example, a 16-bit register for each interleaved stream holds previous bits in the current interleaved word.
Initially, a codeword of a stream is received by shifter 1203, while size unit 1201 indicates the size of the codeword to the accumulator corresponding to the stream. The accumulator has an initial value of zero set through a global reset. Since the accumulator value is zero, the codeword is not shifted and is then ORed using OR logic 1206 with the contents of the register corresponding to the stream. However, in some examples, "1N" codewords must be shifted to be properly aligned even at the start of an interleaved word. This register has been initialized to zero and, therefore, the result of the ORing operation is to put the codeword into the right-most bit positions of the output of OR logic 1206, which is fed back through MUX 1204 to the register for storage until the next codeword from the stream. Thus, initially shifter 1203 operates as a pass-through. Note that the number of bits in the first codeword is now stored in the accumulator. Upon receiving the next codeword for that stream, the value in the accumulator is sent to the shifter 1203 and the codeword is shifted to the left that number of bits for combining with any previously input bits in the interleaved word. Zeros are placed in the other bit positions in the shifted word. Bits from the register corresponding to the stream are combined with bits from shifter 1203 using OR logic 1206. If the accumulator does not produce a carry out indication (e.g., signal), then more bits are required to complete the current interleaved word and the data resulting from the ORing operation is saved back into the register through MUX 1204. In one example, MUX 1204 comprises a 2:1 multiplexer. When the accumulator generates a carry out, the 16 bits of ORed data from OR logic 1206 are a complete interleaved word and are then output. MUX 1204 causes the register to be loaded with any additional bits (e.g., the upper 12 bits of the 28 bits output from the shifter 1203) after the first 16 and fills the rest with zeros.
The control for both MUX 1204 and the outputting of the interleaved word comprises the carry out signal from the accumulator. In one example, the multiplexer 1204 comprises sixteen 2:1 multiplexers with 4 of these having one input that is always zero.
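The accumulator/shifter/OR path described above can be modeled for one stream as follows. This Python sketch is illustrative (the function name is invented): each (value, size) codeword is shifted left by the accumulator value and ORed into the current word, and an accumulator "carry out" past 16 bits emits a completed interleaved word:

```python
def pack_stream(codewords, word_bits=16):
    """Model of one stream's packing path: 'acc' plays the accumulator,
    'register' the per-stream 16-bit register, '<<' and '|' the shifter
    and OR logic. Codewords fill the word from the least significant bit
    upward (matching 'shifted to the left' in the text)."""
    words = []
    register = 0      # bits of the current interleaved word so far
    acc = 0           # number of bits already in the register
    for value, size in codewords:
        register |= value << acc          # shifter + OR logic
        acc += size
        if acc >= word_bits:              # accumulator carry out
            words.append(register & ((1 << word_bits) - 1))
            register >>= word_bits        # leftover upper bits start next word
            acc -= word_bits
    return words, (register, acc)         # completed words + partial state

# 1 + 3 + 13 = 17 bits: one full 16-bit word plus 1 leftover bit
words, partial = pack_stream([(0b1, 1), (0b101, 3), (0x1FFF, 13)])
print(hex(words[0]), partial)   # -> 0xfffb (1, 1)
```

The leftover bit carried in `partial` corresponds to the upper bits the MUX loads back into the register after a carry out.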
Reordering Options

Multiple options are possible for performing reordering on the data. For instance, in a system with multiple code streams, the code streams must be reordered into interleaved words as shown in Figure 4. There are numerous ways to accomplish reordering into interleaved words.
One method for reordering data into interleaved words is to use a snooper decoder as shown in Figure 25. Referring to Figure 25, multiple run count reorder units 2501A-n are coupled to receive codeword information along with the codeword stream. Each generates a codeword output and a size output. A separate bit packing logic (1101) unit, such as bit packing units 2502A-n, is coupled to receive the codeword and size outputs from one of the run count reorder units 2501A-n. Bit packing logic units 2502A-n output interleaved words that are coupled to both MUX 2503 and snooper decoder 2504. Decoder 2504 provides a select control signal that is received by MUX 2503 and indicates to MUX 2503 which interleaved word to output into the code stream.
Each coded data stream has a run count reorder unit, comprising run count reorder unit 801 in Figure 8. Each bit pack unit combines variable length codewords into fixed size interleaved words, perhaps 8, 16 or 32 bits per word. Each bit pack unit contains registers and shifting circuitry, as described above. Decoder 2504 comprises a fully operational decoder (including SG, PEM and CM) that has access to interleaved words from all bit pack units (either on separate buses as shown in Figure 25 or via a common bus). Whenever decoder 2504 selects an interleaved word from one of the bit pack units, that word is transmitted in the code stream. Since the decoder at the receiving end will request the data in the same order as the identical snooper decoder, the interleaved words are transmitted in the proper order.
An encoder with a snooper decoder may be attractive in a half duplex system, since the snooper decoder can also be used as a normal decoder. An advantage of the snooper decoder approach is its applicability for any deterministic decoder. Alternative solutions, discussed below, without dependence on a snooper decoder, use simpler models of the decoder in order to reduce hardware cost. For decoders that decode multiple codewords in the same clock cycle, modeling the decoder with less hardware than a decoder itself may not be possible, necessitating the use of a snooper decoder. As will be described below, for decoders that only decode at most one codeword per cycle, simpler models exist.
Another technique for reordering data for pipelined decoder systems that decode at most one codeword per clock cycle is based on the fact that the only information needed to model the decoder's requests for coded data is the order of the codewords (considering all codewords, not the codewords for each coded data stream independently). If a time stamp is associated with each codeword when it enters the run count reorder unit, whichever bit packed interleaved word has the oldest time stamp associated with it is the next interleaved word to be output.
An exemplary encoder reordering unit is shown in block diagram form in Figure 26. Referring to Figure 26, the encoding system is the same as described in Figure 25, except that time stamp information is received by each run count reorder unit 2501A-n as well. This time stamp information is also forwarded to bit pack units 2502A-n. Bit pack units 2502A-n provide interleaved words to MUX 2503 and their associated time stamps to logic 2601. Logic 2601 provides a control signal to MUX 2503 to select the interleaved word to be output to the code stream. In this example, the snooper decoder is replaced by a simple comparison which determines which of bit pack units 2502A-n has a codeword (or part of a codeword) with the oldest time stamp. Such a system appears to MUX 2503 as multiple queues with time stamps. Logic 2601 simply selects between the various queues. The logic of each of run count reorder units 2501A-n only changes slightly (from run count reorder unit 801) to write a time stamp when a run is started. Each run count reorder unit 2501A-n is equipped to store the time stamp in the codeword memory.
Storing time stamps with enough bits to enumerate every codeword in the coded data stream is sufficient, but in some examples, fewer bits may be used.
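The oldest-time-stamp selection performed by logic 2601 can be modeled as follows. This is an illustrative Python sketch (stream contents, names and time stamp values are invented): each stream is a queue of (time stamp, interleaved word) pairs, and each step transmits the head word whose first codeword entered the reorder unit earliest:

```python
from collections import deque

def interleave_by_timestamp(streams):
    """streams: per-stream lists of (timestamp, word), each already in
    stream order. At each step the comparison logic picks the stream whose
    head word carries the oldest time stamp -- the simple model that
    replaces the snooper decoder for one-codeword-per-cycle decoders."""
    queues = [deque(s) for s in streams]
    out = []
    while any(queues):
        # logic 2601's job: compare head-of-queue time stamps
        sid = min((s for s in range(len(queues)) if queues[s]),
                  key=lambda s: queues[s][0][0])
        out.append(queues[sid].popleft()[1])
    return out

# words tagged with the time their first codeword entered the reorder unit
print(interleave_by_timestamp([[(0, "A0"), (5, "A1")],
                               [(2, "B0")],
                               [(1, "C0"), (3, "C1")]]))
# -> ['A0', 'C0', 'B0', 'C1', 'A1']
```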
A short description of the steps used with multiple queues with time stamps appears below. The description is discernible to one skilled in the art. These are the encoder operations. No simplification has been done for the cases where a run is both started and ended by the same codeword. The operations can be checked for each symbol encoded (although in practice not all checks need to be made). Interleaved words are assumed to be 32 bits in size.
if (no current codeword for context) {
    place time in queue (used to determine next queue)
    place context pointer in queue
    place invalid data in queue
    point context to queue entry
    increment queue tail
}
if (already a codeword and MPS) {
    increment context runcount
}
if (MAXRUN or LPS) {
    place correct data in queue (context pointer unneeded)
    zero pointer & runcount in context memory
    update probability estimate in context memory
}
if (valid data at head of next queue) {
    place 32 bits of data on output
    clear queue entry
    increment queue head
}
while (any queue is almost full) {
    find the next queue which must place data on the output
    while (less than 32 bits of valid data) {
        use context pointer to find context
        zero pointer & runcount in context memory
        place MAXRUN codeword in queue data
    }
}

The decoder operations are similar, although the codewords need not be saved in the queue. It is still necessary to save the time stamp of the codewords in the queue.
The function of the time stamps discussed above is to store the order information of the codewords. An equivalent manner of expressing the same concept is through the use of a single queue for all codewords, i.e., a merged queue. In a merged queue system, as shown in Figure 27, a single run count reorder unit 2701 is used for all interleaved streams. Run count reorder unit 2701 generates codeword, size and stream outputs to bit pack units 2502A-n, which output interleaved words to MUX 2503 and position information to logic 2702, which signals MUX 2503 to output interleaved words as part of the code stream.
For arbitrary streams, the run count reorder memory stores an interleaved stream ID for each codeword. Each interleaved stream has its own head pointer. When a bit pack unit needs more data, the corresponding head pointer is used to fetch as many codewords as are needed to form a new interleaved word. This may involve looking at many codeword memory locations to determine which ones are part of the proper stream.
Alternatively, this may involve looking to the codeword memory for additional fields to implement a linked list.
Another method of interleaving streams uses a merged queue with fixed stream assignment. This method uses a single tail pointer as in the merged queue case, so no time stamps are required. Also, multiple head pointers are used as in the previous case, so there is no overhead in outputting the data from a particular stream. To accomplish this, the assignment of codewords to interleaved streams is performed according to the following rule, for N streams: codeword M is assigned to stream M modulo (mod) N. Note that interleaved streams can have codewords from any context bin or probability class according to this method. If the number of streams is a power of two, M mod N can be computed by discarding some of the more significant bits. For example, assume that the codeword reorder memory is addressed with 12 bits and that four interleaved streams are used. The tail pointer is 12 bits long, and the two least significant bits identify the coded stream for the next codeword. Four head pointers with 10 bits each are implicitly assigned to the four possible combinations of the two least significant bits. Both the tail and head pointers are incremented as normal binary counters.
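The fixed stream assignment and the pointer bit-slicing can be illustrated as follows; a minimal sketch using the 12-bit/4-stream constants from the example above (the function name is ours):

```python
N = 4            # number of interleaved streams (a power of two)

def split_tail(tail):
    """The 12-bit tail pointer's two least significant bits identify the
    coded stream for codeword M (i.e., M mod 4); the remaining 10 bits are
    the per-stream position, matching the implicit 10-bit head pointers."""
    assert 0 <= tail < (1 << 12)
    return tail & (N - 1), tail >> 2

# codeword M is assigned to stream M mod N; a plain binary counter suffices
for m in range(16):
    assert split_tail(m)[0] == m % N
print(split_tail(7))   # -> (3, 1): codeword 7 goes to stream 3, position 1
```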
In the decoder, the shifter has registers to store interleaved words.
The shifter presents properly aligned coded data to the bit generator. When the bit generator uses some coded data, it informs the shifter. The shifter presents properly aligned data from the next interleaved stream. If the number of coded data streams is N, the shifter has N-1 clock cycles to shift out the used data and perhaps request another interleaved word before that particular interleaved stream will be used again.
The Decoder

The present examples include a decoder that supports the real-time encoder with limited reorder memory. In one example, the decoder also includes reduced memory requirements and complexity by maintaining a run count for each probability class instead of each context bin.
One Example of the Decoder System

Figure 14A illustrates a block diagram of one example of a decoder hardware system. Referring to Figure 14A, the decoder system 1400 includes first-in/first-out (FIFO) structure 1401, decoders 1402, memory 1403, and context model 1404. Decoders 1402 include multiple decoders. Coded data 1410 is coupled to be received by FIFO structure 1401. FIFO structure 1401 is coupled to supply the coded data to decoders 1402. Decoders 1402 are coupled to memory 1403 and context model 1404. Context model 1404 is also coupled to memory 1403.
One output of context model 1404 comprises the decoded data 1411.
In system 1400, the coded data 1410 input into FIFO structure 1401 is ordered and interleaved. FIFO structure 1401 contains data in proper order. The streams are delivered to decoders 1402. Decoders 1402 require data from these streams in a serial and deterministic order. Although the order in which decoders 1402 require the coded data is non-trivial, it is not random.
By ordering the codewords in this order at the encoder instead of the decoder, the coded data can be interleaved into a single stream. In another example, coded data 1410 may comprise a single stream of non-interleaved data, where data for each context bin, context class or probability class is appended onto the data stream. In this case, FIFO 1401 is replaced by a storage area to receive all of the coded data prior to forwarding the data to decoders 1402 so that the data may be segmented properly.
As the coded data 1410 is received by FIFO 1401, context model 1404 determines the current context bin. In one example, context model 1404 determines the current context bin based on previous pixels and/or bits. Although not shown, line buffering may be included for context model 1404. The line buffering provides the necessary data, or template, by which context model 1404 determines the current context bin. For example, where the context is based on pixel values in the vicinity of the current pixel, line buffering may be used to store the pixel values of those pixels in the vicinity that are used to provide the specific context.
In response to the context bin, the decoder system 1400 fetches the decoder state from memory 1403 for the current context bin. In one example, the decoder state includes the probability estimation module (PEM) state and the bit generator state. The PEM state determines which code to use to decode new codewords. The bit generator state maintains a record of the bits in the current run. The state is provided to decoders 1402 from memory 1403 in response to an address provided by context model 1404. The address accesses a location in memory 1403 that stores the information corresponding to the context bin.
Once the decoder state for the current context bin has been fetched from memory 1403, system 1400 determines the next uncompressed bit and processes the decoder state. Decoders 1402 then decode the new codeword, if needed, and/or update the run count. The PEM state is updated, if needed, as well as the bit generation state. Decoders 1402 then write the new coder state into memory 1403.
Figure 14B illustrates one example of a decoder.
Referring to Figure 14B, the decoder includes shifting logic 1431, bit generator logic 1432, "New-k" logic 1433, PEM update logic 1434, new codeword logic 1435, PEM state-to-code logic 1436, code-to-mask logic 1437, code-to-MaxRL, Mask, and R3Split expansion logic 1438, decode logic 1439, multiplexer 1440, and run count update logic 1441. Shifting logic 1431 is coupled to receive the coded data input 1443, as well as the state input 1442 (from memory). The output of shifting logic 1431 is also coupled as an input to bit generation logic 1432, "new-k" generation logic 1433 and PEM update logic 1434. Bit generation logic 1432 is also coupled to receive the state input 1442 and generates the decoded data output to the context model.
New-k logic 1433 generates an output that is coupled to an input of code-to-mask logic 1437. PEM update logic 1434 is also coupled to state input 1442 and generates the state output (to memory). State input 1442 is also coupled to inputs of new codeword logic 1435 and PEM state-to-code logic 1436. The output of PEM state-to-code logic 1436 is coupled to be received by expansion logic 1438. The output of expansion logic 1438 is coupled to decode logic 1439 and run count update logic 1441. Another input to decode logic 1439 is coupled to the output of code-to-mask logic 1437. The output of decode logic 1439 is coupled to one input of MUX 1440. The other input of MUX 1440 is coupled to state input 1442. The selection input of MUX 1440 is coupled to the output of new codeword logic 1435. The outputs of MUX 1440 and expansion logic 1438 are coupled to two inputs of run count update logic 1441, along with the output of code-to-mask logic 1437. The output of run count update logic 1441 is included in the state output to memory.
Shifting logic 1431 shifts in data from the coded data stream. Based on the coded data input and state input, bit generation logic 1432 generates decoded data to the context model. New-k logic 1433 also uses the shifted-in data and the state input to generate a new value of k. In one example, new-k logic 1433 uses the PEM state and the first bit of coded data to generate the new value of k. Based on the new k value, code-to-mask logic 1437 generates a RLZ mask for the next codeword. The RLZ mask for the next codeword is sent to decode logic 1439 and the run count update logic 1441.
The PEM update logic 1434 updates the PEM state. In one example, the PEM state is updated using the present state. The updated state is sent to memory. New codeword logic 1435 determines if a new codeword is needed. PEM state-to-code logic 1436 determines the code for decoding using the state input 1442. The code is input to expansion logic 1438 to generate the maximum run length, the current mask and an R3 split value. Decode logic 1439 decodes the codeword to produce a run count output. MUX 1440 selects either the output from decode logic 1439 or the state input 1442 to the run count update logic 1441. Run count update logic 1441 updates the run count.
The decoding system 1400, including decoders 1402, operates in a pipelined manner. In one example, the decoding system 1400 of the present invention determines context bins, estimates probabilities, decodes codewords, and generates bits from run counts all in a pipelined manner. One example of the pipeline structure of the decoding system is depicted in Figure 15A. Referring to Figure 15A, an example of the pipelined decoding process of the present invention is shown in six stages, numbered 1-6.
In the first stage, the current context bin is determined (1501). In the second stage, after the context bin has been determined, a memory read occurs (1502) in which the current decoder state for the context bin is fetched from memory. As stated above, the decoder state includes the PEM state and the bit generator state.
In the third stage of the pipelined decoding process, a decompressed bit is generated (1503). This allows for a bit to be available to the context model. Two other operations also occur in the third stage: the PEM state is converted into a code type (1504) and a determination is made as to whether a new codeword must be decoded (1505).
During the fourth stage, the decoding system processes a codeword and/or updates the run count (1506). Several sub-operations are involved in processing a codeword and updating the run count. For instance, a codeword is decoded to determine the next run count or the run count is updated for the current codeword (1506). If needed when decoding new codewords, more coded data is fetched from the input FIFO. Another sub-operation that occurs in the fourth stage is the updating of the PEM state (1507). Lastly, in the fourth stage of the decoding pipeline, the new PEM state is used to determine what the run length zero codeword (described later) is for the next code if the run count of the current codeword is zero (1508).
During the fifth stage of the decoding pipeline, the decoder state with an updated PEM state is written into memory (1509) and the shifting begins for the next codeword (1510). In the sixth stage, the shifting to the next codeword is completed (1510).
The pipelined decoding actually begins with a decision as to whether to start the decoding process. This determination is based on whether there is enough data to present to the decoder. If there is not enough data from the FIFO, the decoding system is stalled. In another case, the decoding system may be stalled when outputting decoded data to a peripheral device that is not capable of receiving all of the data output from the decoder as it is being generated. For instance, when the decoder is providing output to a video display interface and its associated video circuitry, the video may be too slow, such that the decoder needs to be stalled to allow the video to catch up.
Once the decision has been made to start the decoding process, the current context bin is determined by the context model. In the present invention, the current context bin is ascertained by examining previous data. Such previous data may be stored in line buffers and may include data from the current line and/or previous lines. For instance, for a given bit, bits from line buffer(s) may be selected using a template with respect to the previous data, such that the context bin for the current data is selected according to whether the previous data being examined matches the template. These line buffers may include bit shift registers. A template may be used for each bit plane of an n-bit image.
In one example, the context bin is selected by outputting an address to memory during the next pipeline stage. The address may include a predetermined number of bits, such as three bits, to identify the bit plane. By using three bits, the bit position in pixel data may be identified. The template used to determine the context may also be represented as a portion of the address. The bits used to identify the bit plane and the bits identifying the template may be combined to create an address for a specific location in memory that contains the state information for the context bin defined by these bits. For example, by utilizing three bits to determine the bit position in a particular pixel and the ten previous bits in the same position in each of the previous pixels in the template, a 13-bit context address may be generated.
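The 13-bit address formation in this example can be sketched as follows. Whether the three bit-plane bits occupy the high or low end of the address is our assumption (either packing yields a unique address per context bin):

```python
def context_address(bit_plane, template_bits):
    """13-bit context address: 3 bits select the bit plane of the pixel,
    10 bits are the same-position bits from the ten previous pixels in
    the template. Plane bits are placed in the high end here (assumption)."""
    assert 0 <= bit_plane < 8          # 3 bits identify the bit plane
    assert 0 <= template_bits < 1024   # 10 template bits
    return (bit_plane << 10) | template_bits

addr = context_address(5, 0b1100110011)
print(f"{addr:013b}")   # -> 1011100110011: plane 101, template 1100110011
```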
Using the address created by the context model, the memory (e.g., RAM) is accessed to obtain the state information. The state includes the PEM state. The PEM state includes the current probability estimate.
Because more than one state uses the same code, the PEM state does not include a probability class or code designation, but rather an index into a table, such as the table shown in Figure 5. Also, when using a table such as that shown in Figure 5, the PEM state provides the most probable symbol (MPS) as a means for identifying whether the current PEM state is located on the positive or negative side of the table. The bit generation state
In one embodiment, the MPS value for the current run is also included for decoding the next codeword. In the present invention, the bit generator state is stored in memory to reduce the space required for run counters. If the cost of space in the system for counters for each context is low, the bit generation state does not have to be stored in memory.
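The PEM state described above can be sketched as a table index plus an MPS bit; the table contents below are invented placeholders, not the actual Figure 5 table:

```python
# Hypothetical probability table standing in for Figure 5; each entry
# names the R-code used at that probability estimate. Several states
# can share one code, which is why the state stores only an index.
PROB_TABLE = ["R2(0)", "R2(0)", "R2(1)", "R2(2)", "R3(2)", "R2(3)"]

def code_for_state(pem_state):
    """pem_state is (table_index, mps); look up the code designation
    rather than storing it with the state."""
    index, mps = pem_state
    return PROB_TABLE[index], mps
```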
Once the fourth stage has been completed, the new bit generator state and PEM state are written to memory. Also in the fifth stage, the coded data stream is shifted to the next codeword. The shifting operation is completed in the sixth stage.
Figure 14C is a block diagram of one example of FIFO structure 1401 illustrating interleave word buffering for two decoders. Note that any number of decoders may be supported using the teachings of the present invention. As shown, the input data and the FIFO are wide enough to hold two interleave words. FIFO 1401 comprises FIFO 1460, registers 1461-1462, MUXs 1463-1464 and control block 1465. The two input codewords are coupled as the input interleaved words. The outputs of FIFO 1460 are coupled to inputs of registers 1461-1462. Inputs to MUX 1463 are coupled to the outputs of registers 1461 and 1462. Control block 1465 is coupled to provide control signals to FIFO 1460, registers 1461 and 1462 and MUXs 1463 and 1464. Interleave words are the output data (output data 1 and 2) provided to two decoders. Each decoder uses a request signal to indicate that the current word has been used and a new word will be needed next. The request signals from the decoders are coupled to inputs of control block 1465. Control block 1465 also outputs a FIFO request signal to request more data from memory.
Initially, the FIFO and registers 1461 and 1462 are filled with data and a valid flip flop in the control unit 1465 is set. Whenever a request occurs, the control block 1465 provides the data according to the logic shown in Table 16.
Table 16
Valid  Request 1  Request 2  Multiplexer 1  Multiplexer 2  Next Valid  FIFO and Register Enable
0      0          0          X              X              0           0
0      0          1          X              REG 1462       1           1
0      1          0          REG 1462       X              1           1
0      1          1          REG 1462       FIFO           0           1
1      0          0          X              X              1           0
1      0          1          X              REG 1461       0           0
1      1          0          REG 1461       X              0           1
1      1          1          REG 1461       REG 1462       1           1

X means "don't care".
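The request handling performed by control block 1465 can be sketched at a behavioral level. This is a simplification under the assumption that interleave words are served to the requesting decoders in arrival order; it does not reproduce the exact register-level logic of Table 16:

```python
class InterleaveBuffer:
    """Behavioral model of Figure 14C: interleave words are handed out
    in arrival order to whichever of two decoders asserts a request."""

    def __init__(self, words):
        self.words = list(words)   # words already read from the wide FIFO

    def request(self, req1, req2):
        """Serve decoder 1 before decoder 2 when both request in one cycle."""
        out1 = self.words.pop(0) if req1 else None
        out2 = self.words.pop(0) if req2 else None
        return out1, out2
```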
Figure 15B illustrates a different conceptual view of the decoder.
Referring to Figure 15B, variable length (coded) data is input into a decoder. The decoder outputs fixed length (decoded) data. The output is also fed back, with a delay, as an input into the decoder. In the decoder of the present invention, variable length shifting used in decoding is based on decoded data that is available after some delay. The feedback delay does not reduce the throughput in the delay tolerant decoders.
The input variable length data is divided into fixed length interleaved words such as described in conjunction with Figure 4. The decoder uses the fixed length words as described in Figure 16A below. The decoder and delay model a pipeline decoder as described in conjunction with Figures 15 and 32 or multiple parallel decoders, such as described in conjunction with Figures 2A-2D. Thus, the present example provides a delay tolerant decoder. The delay tolerant decoders of the present invention allow handling of variable length data in parallel.
Prior art decoders (e.g., Huffman decoders) are not delay tolerant. Information determined from decoding all previous codewords is required to perform the variable length shifting needed to decode the next codeword.
In contrast, the present examples are delay tolerant decoders.
Shifting in the Decoding System

The decoder has shifting logic to shift the interleaved words to the proper bit generator for decoding. The shifter does not require any particular type of "by context" or "by probability" parallelism. An encoder which assigns codeword M to stream M mod N (M%N in the C language), where N is the number of streams, is assumed. In the present example, coded data from the current stream is presented until a codeword is requested. Only then is the data switched to the next stream.
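The M mod N stream assignment can be sketched directly; the function name is illustrative:

```python
def assign_streams(num_codewords, n_streams=4):
    """Return, for each stream, the codeword indices assigned to it
    under the codeword-M-to-stream-(M mod N) rule."""
    streams = [[] for _ in range(n_streams)]
    for m in range(num_codewords):
        streams[m % n_streams].append(m)   # codeword M -> stream M mod N
    return streams
```

With four streams, consecutive codewords rotate through the streams, which is what lets the shifter spend up to four clock cycles on each shifting operation.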
Figure 16A illustrates one example of the shifter for the decoder. Shifter 1600 is designed for four data streams. This allows four clock cycles for each shifting operation. The interleaved words are 16 bits and the longest codeword is 13 bits. Referring to Figure 16A, shifter 1600 comprises four registers 1601-1604 coupled to receive inputs from the interleaved coded data. The output of each of registers 1601-1604 is coupled as an input to MUX 1605. The output of MUX 1605 is coupled to the input of a barrel shifter 1606. The output of barrel shifter 1606 is coupled as inputs to a register 1607, MUX & registers 1608-1610, and a size unit 1611.
The output of size unit 1611 is coupled to an accumulator 1612. An output of accumulator 1612 is fed back and coupled to barrel shifter 1606. An output of register 1607 is coupled as an input to MUX & register 1608. An output of MUX & register 1608 is coupled as an input to MUX & register 1609. An output of MUX & register 1609 is coupled as an input to MUX & register 1610.
The output of MUX & register 1610 is the aligned coded data. In one embodiment, registers 1601-1604 are 16-bit registers, barrel shifter 1606 is a 32-bit to 13-bit barrel shifter and accumulator 1612 is a 4-bit accumulator.
Registers 1601-1604 accept 16-bit words from the FIFO and input them into barrel shifter 1606. At least 32 bits of the undecoded data is provided to barrel shifter 1606 at all times. The four registers 1601-1604 are initialized with two 16-bit words of coded data to begin. This allows there to always be at least one new codeword available for each stream.
For R-codes, codeword size unit 1611 determines if a "0" or "1N" codeword is present and, if it is a "1N" codeword, how many bits after the "1" are part of the current codeword. The size unit, providing the same function, was described in conjunction with Figure 12. For other codes, determining the size of a codeword is well-known in the art.
Shifter 1600 comprises a FIFO consisting of four registers, three of which have multiplexed inputs. Each register of registers 1607-1610 holds at least one codeword, so the width of the registers and the multiplexers is 13 bits to accommodate the longest possible codeword. Each register also has one control flip-flop associated with it (not shown) that indicates if a particular register contains a codeword or if it is waiting for barrel shifter 1606 to provide a codeword.
The FIFO will never empty. Only one codeword can be used per clock cycle and one codeword can be shifted per clock cycle. The delay to perform the shifting is compensated for since the system starts out four codewords ahead. As each codeword is shifted into being the aligned coded data output, the other codewords in registers 1607-1610 shift down. When the only codeword left in the FIFO is stored in register 1610, the barrel shifter 1606 causes codewords to be read out from registers 1601-1604 through MUX 1605 in order to fill registers 1607-1609. Note that the FIFO may be designed to refill register 1607 with the next codeword as soon as its codeword is shifted into register 1608.
Barrel shifter 1606, codeword size calculator 1611 and accumulator 1612 handle the variable length shifting. Accumulator 1612 has four registers, one for each coded data stream, that contain the alignment of the current codeword for each data stream. Accumulator 1612 is a four bit accumulator used to control barrel shifter 1606. Accumulator 1612 increases its value by the value input from the codeword size unit 1611. When accumulator 1612 overflows (e.g., every time the shift count is 16 or greater), registers 1601-1604 are clocked to shift. Every other 16 bit shift causes a new 32 bit word to be requested from the FIFO. The input to accumulator 1612 is the size of the codeword, which is determined by the current code and the first one or two bits of the current codeword. Note that in some examples, registers 1601-1604 must be initialized with coded data before the decoding can begin.
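The accumulator's behavior for one stream can be sketched as modular addition with an overflow flag; this is an illustrative model, not the circuit itself:

```python
def consume_codeword(accumulator, codeword_size):
    """Advance the 4-bit shift accumulator by one codeword's size.
    Returns (new_accumulator, overflow); an overflow means the shift
    count passed 16, so registers 1601-1604 would be clocked to bring
    in a fresh 16-bit word."""
    total = accumulator + codeword_size
    return total % 16, total >= 16
```

Two such overflows (32 bits consumed) would correspond to requesting a new 32-bit word from the FIFO.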
When a codeword is requested by the system, the registers in the FIFO are clocked so that codewords are moved towards the output. When the barrel shifter 1606 is ready to deliver a new codeword, it is multiplexed into the first empty register in the FIFO.
In this example, a next codeword signal from the bit generator is received before the decision to switch streams is made.
If the next codeword signal from the bit generator cannot be received before the decision to switch streams, a look-ahead system such as the one shown in Figure 16B can be used. Referring to Figure 16B, a shifter 1620 using look-ahead is shown in block diagram form. Shifter 1620 includes a shifter 1600 that produces outputs of the current coded data and the next coded data. The current coded data is coupled to an input of codeword preprocessing logic unit 1621 and an input of a codeword processing unit 1624. The next coded data is coupled to an input of codeword preprocessing logic unit 1622. Outputs from both preprocessing logic units 1621 and 1622 are coupled to inputs of a MUX 1623. The output of MUX 1623 is coupled to another input of codeword processing logic 1624.
The logic that uses the codeword is divided into two parts, codeword preprocessing logic and codeword processing logic. Two identical codeword preprocessing units 1621-1622 operate before the interleaved stream can be shifted. One of preprocessing units 1621-1622 generates the proper information if the stream is switched and the other generates the information if the stream is not switched. When the stream is switched, the output of the proper codeword preprocessing unit is multiplexed by MUX 1623 to codeword processing logic 1624, which completes the operation with the proper codeword.
Off-Chip Memory and Context Models

In one example, it may be desirable to use multiple chips for external memory or external context models. In these examples, it is desirable to reduce the delay between generating a bit and having the bit be available to the context model where multiple integrated circuits are used.
Figure 17 illustrates a block diagram of one example of a system with both an external context model chip 1701 and a coder chip 1702 with memory for each context. Note that only the units relevant to the context model in the coder chip are shown; it is apparent to those skilled in the art that the coder chip 1702 contains bit generation, probability estimation, etc.
Referring to Figure 17, the coder chip 1702 comprises a zero order context model 1703, context models 1704 and 1705, a select logic 1706, a memory control 1707 and a memory 1708. Zero order context model 1703 and context models 1704-1705 generate outputs that are coupled to inputs of the select logic 1706. Another input of select logic 1706 is coupled to an output of external context model chip 1701. The output of select logic 1706 is coupled to an input of memory 1708. Also coupled to an input of memory 1708 is an output of memory control 1707.
Select logic 1706 allows either an external context model or an internal context model (e.g., zero order context model 1703, context model 1704, context model 1705) to be used. Select logic 1706 allows the internal zero order context model 1703 to be used even when the external context model 1701 is used. Zero order context model 1703 provides one bit or more while the external context model chip 1701 provides the remainder. For instance, the immediately previous bits may be fed back and retrieved from zero order context model 1703, while earlier bits go to the external context model 1701. In this manner, the time critical information remains on-chip. This eliminates the off-chip communication delay for recently generated bits.
Figure 18 is a block diagram of one system with an external context model 1801, an external memory 1803 and a coder chip 1802. Referring to Figure 18, some memory address lines are driven by the external context model 1801, while others are driven by the "zero order" context model in the decoder chip 1802. That is, the context from the immediately past decoding cycle is driven by the zero order context model. This allows the decoder chip to provide the context information from the immediate past with minimum communication delay. The context model chip 1801 provides the rest of the context information using bits decoded further in the past only, therefore allowing for communication delay. In many cases, the context information from the immediate past is zero order Markov state, and the context information from further in the past is higher order Markov state. The example shown in Figure 18 eliminates the communication delay inherent in implementing the zero order model in the external context model chip 1801.
However, there may still be a delay from context bin determination to bit generation due to the decoder chip 1802 and the memory 1803.
It should be noted that other memory architectures could be used. For instance, a system with the context model and memory in one chip and the coder in another chip may be used. Also, a system may include a coder chip with an internal memory that is used for some contexts and an external memory that is used with other contexts.
Bit Generators Using a Memory

Figure 19 shows a decoder with a pipelined bit generator using memory. Referring to Figure 19, the decoder 1900 comprises a context model 1901, memory 1902, PEM state-to-code block 1903, pipelined bit generator 1905, memory 1904 and shifter 1906. The input of context model 1901 comprises the decoded data from pipelined bit generator 1905.
The inputs of shifter 1906 are coupled to receive the coded data. The output of context model 1901 is coupled to an input of memory 1902. The output of memory 1902 is coupled to PEM state-to-code block 1903. The output of PEM state-to-code block 1903 and the aligned coded data output from shifter 1906 are coupled to inputs of bit generator 1905. Memory 1904 is also coupled to bit generator 1905 using a bi-directional bus. The output of bit generator 1905 is the decoded data.
Context model 1901 outputs a context bin in response to the decoded data on its inputs. The context bin is used as an address to access memory 1902 to obtain a probability state. The probability state is received by PEM state-to-code module 1903, which generates the probability class in response to the probability state. The probability class is then used as an address to access memory 1904 to obtain the run count. The run count is then used by bit generator 1905 to produce the decoded data.
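The chain of lookups just described can be sketched with dictionaries standing in for the two memories and the state-to-code block; all table contents here are placeholders:

```python
def decode_lookup(context_bin, pem_memory, state_to_class, run_counts):
    """Model of Figure 19's data path: context bin -> PEM state
    (memory 1902) -> probability class (state-to-code 1903) -> run
    count (memory 1904)."""
    pem_state = pem_memory[context_bin]
    prob_class = state_to_class[pem_state]
    return run_counts[prob_class]
```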
In one example, memory 1902 comprises a 1024x7 bit memory (where 1024 is the number of different contexts), while memory 1904 comprises a 25x14 bit memory (where 25 is the number of different run counts).
Since bit generator states (run counts, etc.) are associated with probability classes, not context bins, there is additional pipeline delay before a bit is available to the context model. Because updating a bit generator state takes multiple clock cycles (the bit generator state memory revisit delay), multiple bit generator states will be used for each probability class. For example, if the pipeline is six clock cycles, then the bit generator state memory will have six entries per probability class. A counter is used to select the proper memory location. Even with multiple entries per probability class, the size of the memory will typically be less than the number of contexts. The memory can be implemented with either multiple banks of SRAM or a multi-ported register file.
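The counter-based selection among the per-class entries can be sketched as a simple address computation; the layout (class-major) is an assumption for illustration:

```python
PIPELINE_DEPTH = 6   # entries per probability class for a six-cycle pipeline

def bg_state_address(prob_class, bit_counter):
    """Address into the bit generator state memory; the counter cycles
    through the entries so a state written back several cycles ago is
    never re-read before its update completes."""
    return prob_class * PIPELINE_DEPTH + (bit_counter % PIPELINE_DEPTH)
```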
Since one run count may be associated with multiple contexts, a system must update the probability estimation state of one or more contexts. In one example, the PEM state of the context which causes a run to end is updated.
Instead of requiring a read, modify and write of a run count before it can be read again, a run count can be used again as soon as the modify is complete.
Figure 32 illustrates a timing diagram of a decode operation in one example of a decoder. Referring to Figure 32, a three cycle decode operation is depicted. Signal names are listed in the left hand column of the timing diagram. The validity of a signal during any one cycle is indicated with a bar during the cycle (or portion thereof). In certain cases, the unit or logic responsible for generating the signal or supplying the valid signal is shown adjacent to the valid signal indication in a dotted-lined box. At times, examples of specific elements and units disclosed herein are provided as well. Note that any portion of the signal that extends into another cycle indicates the validity of the signal only for that period of time in which the signal is shown extending into the other cycle. Also, certain signals are shown as being separately valid for more than one cycle. An example of such is the temp run count signal, which is valid at one point at the end of the second cycle and then again during the third cycle. Note that this indicates that the signal is merely being registered at the end of the cycle. A list of dependencies is also shown in Table 17 below, setting forth the dependencies from the same or previous clock cycle to the current time at which the signal is specified to be valid.
Name                              Unit  Dependencies
register file 1                   CM    [previous bit, CM shift register]
state to code                     PEM   register file 1
barrel shift                      SH    [accumulator register, unaligned coded data registers]
size                              SH    barrel shifter output (aligned coded data), [K, R3 registered]
(accumulator)                     SH    size, [previous accumulator register value]
register file 2                   BG    [K, R3 registered, codeword needed]
code to (mask, maxRL, R3split)    BG    [K, R3 registered]
bit (generated bit)               BG    register file 2, barrel shifter output (aligned coded data), code to (mask, maxRL, R3split), [register file 1, registered MPS]
decode                            BG    barrel shifter output (aligned coded data), code to (mask, maxRL, R3split)
PEM table                         PEM   [K, R3 registered]
(PEM update)                      PEM   [registered PEM table output, LPS present, continue]
(run count, update)               BG    [registered: codeword needed, run count, LPS present, continue]
(continue, LPS present)           BG    [registered: codeword needed, run count, LPS present, continue]

CM = context model, SH = shifter, BG = bit generator, PEM = probability estimation machine. Entries in brackets denote dependencies from the previous clock cycle.
In one embodiment, most combinational logic for updating the PEM state is performed in the "PEM table" step; "PEM update" is simply a multiplex operation.
Implicit Signalling

In some examples, the decoder must model the finite reordering buffer of the encoder. In one example, this modeling is accomplished with implicit signalling.
As explained previously with regard to the encoder, when a codeword is started in the encoder, space is reserved in the appropriate buffer for the codeword in the order the codewords should be placed on the channel. When the last space in a buffer is reserved for a new codeword, then some codewords are placed in the compressed bit stream whether or not they have been completely determined.
When a partial codeword must be completed, a codeword may be chosen which is short and correctly specifies the symbols received so far. For example, in an R-coder system, if it is necessary to prematurely complete a codeword for a series of 100 MPSs in a run code with 128 maximum runlength, then the codeword for 128 MPSs can be used, since this correctly specifies the first 100 symbols.
Alternatively, a codeword that specifies 100 MPSs followed by an LPS can be used. When the codeword has been completed, it can be removed from the reordering buffer and added to the code stream. This may allow previously completed codewords to be placed in the code stream as well. If forcing the completion of one partial codeword results in the removal of a codeword from the full buffer, then encoding can continue. If one buffer is still full, then the next codeword must again be completed and added to the code stream. This process continues until the buffer which was full is no longer full. The decoder may model the encoder for implicit signaling using a counter for each bit generator state information memory location.
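The force-and-flush loop described above can be sketched as follows; the entry representation is invented for illustration:

```python
def flush_until_room(buffer, capacity):
    """buffer holds entries in channel order, each {'done': bool, 'bits': str}.
    Force-complete head entries and emit the completed prefix of the
    buffer until it drops below capacity again."""
    emitted = []
    while len(buffer) >= capacity:
        buffer[0]['done'] = True                 # prematurely complete head
        while buffer and buffer[0]['done']:      # emit completed codewords
            emitted.append(buffer.pop(0)['bits'])
    return emitted
```

Forcing the head entry may release a run of already-completed codewords behind it, which is why a single forced completion can drain several buffer slots at once.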
In one example, each run counter (probability class in this example) has a counter which is the same size as the head or tail counters in the encoders (e.g., 6 bits). Every time a new run is started (a new codeword is fetched), the corresponding count is loaded with the size of the codeword memory. Every time a run is started (a new codeword is fetched), all counters are decremented. Any counter that reaches zero causes the corresponding run count to be cleared (and the LPS present flag is cleared).
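This per-probability-class counter model can be sketched as follows, with dictionaries standing in for the counter and run count memories:

```python
def start_new_run(counters, run_counts, prob_class, memory_size):
    """Decoder-side model of implicit signalling: every fetched codeword
    decrements all active counters; a counter hitting zero clears its
    run count (the implicit signal that the encoder forced that run out).
    The counter for the class starting a run is reloaded."""
    for pc in counters:
        if counters[pc] > 0:
            counters[pc] -= 1
            if counters[pc] == 0:
                run_counts[pc] = 0   # run was forced out of encoder buffer
    counters[prob_class] = memory_size
```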
Signalling for Finite Memory

Real-time encoding in the present examples requires the decoder to handle runs of MPSs that are not followed by an LPS and are not the maximum run length. This occurs when the encoder begins a run of MPSs
but does not have enough limited re-ordering memory to wait until the run is complete. This condition requires a new codeword to be decoded the next time this context bin is used, and this condition must be signaled to the decoder. Three potential ways of modifying the decoder are described below.
When the buffer is full, the run count for the context bin or probability class that is forced out must be reset. To implement this efficiently, storing the context bin or probability class in the codeword memory is useful. Since this is only needed for runs that do not yet have an associated codeword, the memory used to store the codeword can be shared. Note that in some systems, instead of forcing out an incomplete codeword, bits can be forced into the context/probability class of the (or any) codeword that is pending in the buffer when the buffer is full. The decoder detects this and uses the corresponding (wrong) context bin or probability class.
Instream signaling uses codewords to signal the decoder. In one example, the R2(k) and R3(k) code definitions are changed to include non-maximum length runs of MPSs that are not followed by an LPS. This can be implemented by adding one bit to the codeword that should occur with the lowest probability. This allows a uniquely decodable prefix for the non-maximum length run counts. Table 18 shows a replacement for R2(2) codes that allows instream signaling. The disadvantages of this method are that the R-code decoding logic must be changed and that there is a compression cost every time the codeword with the lowest probability occurs.
Table 18
Original Data    Codeword
0000             0
0001             1000
001              101
01               110
1                111
000              100100
00               100101
0                10011

In some examples, the decoder performs implicit signaling using time stamps. A counter keeps track of the current "time" by incrementing every time a codeword is requested. Also, whenever a codeword is started, the current "time" is saved in memory associated with the codeword. Any time after the first time a codeword is used, the corresponding stored "time" value plus the size of the encoder's reordering buffer is compared with the current "time". If the current "time" is greater, an implicit signal is generated so that a new codeword is requested. Thus, the limited reorder memory in the encoder has been simulated. In one example, enough bits for "time" values are used to allow all codewords to be enumerated.
such that time values are reused, care must be taken that all old time stamps are noted before the counter starts reusing times. Let N be the greater of the number of address bits for the queue or the bit generator state memory. Time stamps with N+1 bits can be used. The bit generator state memory must support multiple accesses, perhaps two reads and two writes per decoded bit. 10 A counter is used to cycle through the bit generator state memory, incrementing once for each bit decoded. Any memory location that is too old is cleared so a new codeword is fetched when its used in the future. This cuarantees all time stamps are checked before any time value is reused.
If the bit generator state memory is smaller than the queue, the rate of counting (the time stamp counter) and the memory bandwidth required can be reduced. This is because each time stamp (one per bit generator state memory) must be checked only once per the number of cycles required to use the entire queue. Also storing the time stamps in a different memory might reduce the memory bandwidth required. In a system that uses "0" codewDrds 20 for partial runs, time stamps do not have to be checked for 1 W codewDrds. In a system that uses 01 W codewords for partial runs, the time stamp only has to be checked before,generating a LPS.
In some examples, implicit signaling is implemented with a queue during decoding. This method might be useful in a half duplex system where the hardware for encoding is available during decoding. The operation of the queue is almost the same as during encoding. When a new codeword is requested, its index is placed in the queue and marked as invalid. When the data from a codeword is completed, its queue entry is marked as valid. As data is taken out of the queue to make room for new codewords, if the data taken out is marked as invalid, the bit generator state information from that index is cleared. This clearing operation may require that the bit generator state memory be able to support an additional write operation.
Explicit signaling, in contrast, communicates buffer overflow as compressed data. One example is to have an auxiliary context bin that is used once for every normal context bin decode or once for every codeword that is decoded. Bits decoded from the auxiliary context bin indicate if the new-codeword-needed condition occurs and a new codeword must be decoded for the corresponding normal context bin. In this case, the codewords for this special context must be reordered properly. Since the utilization of this context is a function of something known to the reorder unit (typically, it is used once for each codeword), the memory required to reorder the auxiliary context can be bounded or modeled implicitly. Also, the possible codes allowed for this auxiliary context can be limited.
Implicit signaling models the encoder's limited buffer when decoding to generate a signal that indicates that a new codeword must be decoded. In one example, a time stamp is maintained for each context. In one example, the encoder's finite size reordering buffer is modeled directly. In a half duplex system, since the encoder's reordering circuitry is available during decoding, it might be used to generate the signals for the decoder.
Exactly how implicit signaling is accomplished depends on the details of how the encoder recognizes and handles the full buffer condition. For a system using a merged queue with fixed allocation, the use of multiple head pointers allows choices of what "buffer full" means. Given a design for the encoder, an appropriate model can be designed.
The following provides encoder operation and a model for use by the decoder for a merged queue with fixed stream assignment, parallel-by-probability system. For this example, assume that the reordering buffer has 256 locations, 4 interleaved streams are used, and each interleaved word is 16 bits. When the buffer contains 256 entries, an entry must be sent out to a bit packer (e.g., bit pack unit) before the entry for the 257th codeword can be placed in the queue. Entries can be forced out earlier if necessary.
In some systems, removing the first entry in the buffer requires removing enough bits to complete an entire interleaved codeword. Therefore, if 1-bit codewords are possible, removing codeword 0 might require also removing codewords 4, 8, 12, ..., 52, 56, 60 for 16-bit interleaved words. To ensure that all of these buffer entries have valid entries, forcing an entry to be completed because the memory is full can be performed 192 locations from the location where a new codeword is entered (256 - 16 x 4 = 192).
In the decoder there is a counter for each probability. When a new codeword is used to start a run, the counter is loaded with 192. Any time a new codeword is used by any probability, all counters are decremented. If any counter reaches zero, the run length for that probability is set to zero (and the LPS present flag is cleared).
It may be convenient to use multiple RAM banks (multi-ported memory, simulation with fast memory, etc.), one bank for each coded data stream. This permits all bit pack units to receive data simultaneously, so reading multiple codewords for a particular stream does not prohibit reading by other streams.
In other systems, multiple bit pack units must arbitrate for a single memory based on the codeword order as stored in the buffer. In these systems, removing an entry from a buffer may not complete an interleaved word. Each bit pack unit typically receives some fraction of an interleaved word in sequence. Each bit pack unit receives at least a number of bits equal to the shortest codeword length (e.g., 1 bit) and at most a number of bits equal to the longest codeword length (e.g., 13 bits). Interleave words cannot be emitted until they are complete, and must be emitted in the order of initialization. In this example, a bit pack unit might have to buffer 13 interleave words; this is the maximum number of interleave words that can be completed with maximal length codewords while another stream has an interleaved word pending that is receiving minimal length codewords.
A system where every codeword requires two writes and one read of memory may be less desirable for hardware implementation than a system that performs two writes and two reads. If this were desired for the example system with four streams, bit pack units 1 and 2 could share one memory read cycle and bit pack units 1 and 3 could share the other read cycle (or any other arbitrary combination). While this would not reduce the size of the buffering needed, it would allow a higher transfer rate into the bit pack unit.
This may allow the bit pack units to better utilize the capacity of the coded data channel.
Systems With Fixed Size Memory

One advantage of a system that has multiple bit generator states per probability class is that the system can support lossy coding when a fixed size memory overflows. This might be useful for image compression for a frame buffer and other applications that can only store a limited amount of coded data.
For systems with fixed size memory, the multiple bit generator states for each probability are each assigned to a part of the data. For example, each of eight states could be assigned to a particular bitplane for eight-bit data. In this case, a shifter is also assigned to each part of the data, in contrast to shifters sequentially providing the next codeword. It should be noted that the data need not be divided by bitplane. Also, in the encoder, no interleaving is performed; each part of the data is simply bitpacked. Memory is allocated to each part of the data.
Memory management for coded data is presented for systems that store all of the data in a fixed size memory and for systems that transmit data in a channel with a maximum allowable bandwidth. In both of these systems, graceful degradation to a lossy system is desired. Different streams of data are used for data with different importance so that less important streams can be dropped from storage or not transmitted when sufficient storage or bandwidth is not available.
When using memory, the coded data must be stored so that it can be accessed such that less important data streams can be discarded without losing the ability to decode important data streams. Since coded data is variable length, dynamic memory allocation can be used. Figure 31 shows an example dynamic memory allocation unit for three coded data streams. A register file 3100 (or other storage) holds a pointer for each stream plus another pointer for indicating the next free memory location. Memory 3101 is divided into fixed size pages.
Initially, each pointer assigned to a stream points to the start of a page of memory, and the free pointer points to the next available page of memory. Coded data from a particular stream is stored at the memory location addressed by the corresponding pointer. The pointer is then incremented to the next memory location.
When the pointer reaches the maximum for the current page, the following occurs. The address of the start of the next free page (stored in the free pointer) is stored with the current page. (Either part of the coded data memory or a separate memory or register file could be used.) The current pointer is set to the next free page. The free pointer is incremented. These actions allocate a new page of memory to a particular stream and provide links so that the order of allocation can be determined during decoding.
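The page-based allocation described above can be sketched in software as follows. This is a hypothetical model, not the hardware of Figure 31; the page size, page count, and the side table holding page links are assumed values for illustration.

```python
# Sketch of dynamic page allocation for multiple coded data streams.
# Pages are linked in allocation order so the decoder can follow them.

PAGE_SIZE = 256   # assumed words per page
NUM_PAGES = 16    # assumed total pages

class PageAllocator:
    def __init__(self, num_streams):
        self.memory = [None] * (PAGE_SIZE * NUM_PAGES)
        # each stream starts at the beginning of its own page
        self.pointer = [s * PAGE_SIZE for s in range(num_streams)]
        self.free_page = num_streams      # next unallocated page
        self.links = {}                   # page -> next page for that stream

    def write(self, stream, word):
        ptr = self.pointer[stream]
        self.memory[ptr] = word
        ptr += 1
        if ptr % PAGE_SIZE == 0:          # reached the end of the current page
            if self.free_page >= NUM_PAGES:
                raise MemoryError("all pages in use")
            cur_page = ptr // PAGE_SIZE - 1
            self.links[cur_page] = self.free_page  # record allocation order
            ptr = self.free_page * PAGE_SIZE       # move to the new page
            self.free_page += 1
        self.pointer[stream] = ptr
```

When a page fills, the allocator links it to the next free page and advances the free pointer, mirroring the sequence of actions described above.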
When all pages in the memory are in use and there is more data from a stream that is more important than the least important data in memory, one of three things may be done. In all three cases, memory assigned to the least important data stream is reassigned to a more important data stream, and no more data from the least important data stream is stored.
First, the page currently being used by the least important stream is simply assigned to the more important data. Since most typical entropy coders use internal state information, all of the least important data stored previously in that page is lost.
Second, the page currently being used by the least important stream is simply assigned to the more important data stream. Unlike the previous case, the pointer is set to the end of the page and as more important data is written to the page, the corresponding pointer is decremented. This has the advantage of preserving the least important data at the start of the page if the more important data stream does not require the entire page.
Third, instead of the current page of least important data being reassigned, any page of least important data may be reassigned. This requires that the coded data for all pages be coded independently, which may reduce the compression achieved. It also requires that the uncoded data corresponding to the start of all pages be identified. Since any page of least important data can be discarded, greater flexibility in graceful degradation to lossy coding is available.
The third alternative might be especially attractive in a system that achieves a fixed rate of compression over regions of the image. A specified number of memory pages can be allocated to a region of the image. Whether less important data is retained or not can depend on the compression achieved in a particular region. (The memory assigned to a region might not be fully utilized if lossless compression required less than the amount of memory assigned.) Achieving a fixed rate of compression on a region of the image can support random access to the image regions.
The ability to write data into each page from both ends can be used to better utilize the total amount of memory available in the system. When all pages are allocated, any page that has sufficient free space at the end can be allocated for use from the end. The ability to use both ends of a page must be balanced against the cost of keeping track of the location where the two types of data meet. (This is different from the case where one of the data types was not important and could simply be overwritten.)

Now consider a system where data is transmitted in a channel instead of being stored in a memory. Fixed size pages of memory are used, but only one page per stream is needed. (Or perhaps two if ping-ponging is needed to provide buffering for the channel, such that while writing to one, the other may be read for output.) When a page of memory is full, it is transmitted in the channel, and the memory location can be reused as soon as the page is transmitted. In some applications, the page size of the memory can be the size of data packets used in the channel or a multiple of the packet size.
In some communications systems, for example ATM (Asynchronous Transfer Mode), priorities can be assigned to packets. ATM has two priority levels, priority and secondary. Secondary packets are only transmitted if sufficient bandwidth is available. A threshold can be used to determine which streams are priority and which are secondary. Another method would be to use a threshold at the encoder so that streams less important than the threshold are not transmitted at all.
Separate Bit Generators for Each Code

Figure 20 is a block diagram of a system with separate bit generators for each code. Referring to Figure 20, decoding system 2000 comprises context model 2001, memory 2002, PEM state-to-code block 2003, decoder 2004, bit generators 2005A-n, and shifter 2006. The output of context model 2001 is coupled to an input of memory 2002. The output of memory 2002 is coupled to an input of PEM state-to-code block 2003. The output of PEM state-to-code block 2003 is coupled to an input of decoder 2004. The output of decoder 2004 is coupled as an enable for bit generators 2005A-n. Bit generators 2005A-n are also coupled to receive coded data output from shifter 2006.
Context model 2001, memory 2002, and PEM state-to-code block 2003 operate like their counterparts in Figure 19. Context model 2001 generates a context bin. Memory 2002 outputs a probability state based on the context bin. The probability state is received by the PEM state-to-code block 2003, which generates a probability class for each probability state. Decoder 2004 enables one of the bit generators 2005A-n upon decoding the probability class. (Note that decoder 2004 is an M to 2M decoder circuit similar to a 74x138 3:8 decoder, which is well-known -- it is not an entropy coding decoder.) Note that since each code has a separate bit generator, some bit generators may use codes other than R-codes. Particularly, a code for probabilities near 60% might be used to better tile the probability space between R2(0) and R2(1). For instance, Table 19 depicts such a code.
Table 19

uncoded data   codeword
0 0 0 0 0 0 0 1 0 1 0 1 1 0 1 1 1

If needed to achieve the desired speed, pre-decoding of one or more bits may be done to guarantee that decoded data is available quickly. In some examples, to avoid the need to be able to update a large run count every clock cycle, both codeword decoding and run counting for long codes are partitioned.
The bit generator for R2(0) codes is uncomplicated. A codeword is requested every time a bit is requested. The bit generated is simply the codeword (XORed with the MPS).
Codes for short run lengths, for example, R2(1), R3(1), R2(2) and R3(2), are handled in the following manner. All of the bits in a codeword are decoded and stored in a state machine that comprises a small counter (1, 2, or 3 bits respectively) and an LPS present bit. The counter and LPS present bit operate as an R-code decoder.
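The counter-plus-LPS-present-bit decoding described above can be sketched as follows. This is a hypothetical software model; the codeword convention (a '0' codeword for a maximal run of MPSs, a '1N' codeword for N MPSs followed by an LPS) follows the R-code description used in this document, and the bit ordering of N and all names are assumptions.

```python
# Sketch of a short-run R-code bit generator (e.g. R2(2)): a small counter
# and an "LPS present" bit together decode one codeword into a run of MPS
# bits, optionally terminated by an LPS.

class ShortRunBitGen:
    def __init__(self, k, mps=0):
        self.k = k                 # R2(k): maximum run length is 2**k
        self.mps = mps
        self.count = 0
        self.lps_present = False

    def load(self, codeword):
        """Decode a whole codeword into the counter and LPS present bit."""
        if codeword[0] == '0':     # '0' codeword: run of 2**k MPSs, no LPS
            self.count = 2 ** self.k
            self.lps_present = False
        else:                      # '1N' codeword: N MPSs followed by an LPS
            self.count = int(codeword[1:], 2)
            self.lps_present = True

    def next_bit(self):
        """Generate one decoded bit; also report if a new codeword is needed."""
        if self.count > 0:
            self.count -= 1
            bit = self.mps
        else:
            bit = 1 - self.mps     # emit the LPS
            self.lps_present = False
        need_new_codeword = (self.count == 0 and not self.lps_present)
        return bit, need_new_codeword
```

For example, with k=2 the codeword '101' decodes to one MPS followed by an LPS, while the codeword '0' decodes to a run of four MPSs.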
For longer codes, such as R2(k) and R3(k) for k>2, bit generators are partitioned into two units as shown in Figure 21. Referring to Figure 21, a bit generator structure for R2(k) codes for k>2 is shown having a short run unit 2101 and a long run unit 2102. Note that although the structure is for use with R2(k>2) codes, its operation will be similar for R3(k>2) codes (and is apparent to one skilled in the art).
Short run unit 2101 is coupled to receive an enable signal and a codeword[2...0] as inputs into the bit generator, and an "all ones" signal and a count zero signal (indicating a count of zero), both from long run unit 2102.
In response to these inputs, short run unit 2101 outputs a decoded bit and a next signal indication, which signals that a new codeword is needed. Short run unit 2101 also generates a count enable signal, a count load signal and a count max signal to long run unit 2102. Long run unit 2102 is also coupled to receive codeword[k...3] as an input to the bit generator.
Short run unit 2101 handles runs of up to length 4 and is similar to an R2(2) bit generator. In one example, short run unit 2101 is the same for all R2(k>2) codes. The purpose of long run unit 2102 is to determine when the last 1-4 bits of the run are to be output. Long run unit 2102 has inputs, AND logic and a counter that vary in size with k.
One example of the long run count unit 2102 is shown in Figure 22.
Referring to Figure 22, the long run unit 2102 comprises AND logic 2201 coupled to receive the codeword[k...3]; it outputs an "all ones" signal as a logical 1 if all of the bits in the codeword are 1's, thereby indicating that the current codeword is a '1N' codeword and that the run count is less than 4.
NOT logic 2202 is also coupled to receive the codeword and inverts it. The output of NOT logic 2202 is coupled to one input of a bit counter 2203. The bit counter 2203 is also coupled to receive the count enable signal, the count load signal and the count max signal. In response to the inputs, the bit counter 2203 generates a count zero signal.
In one example, the counter 2203 is a k-2 bit counter and is used to break long run counts into runs of four MPSs and possibly some remainder. The count enable signal indicates that four MPSs have been output and the counter should be decremented. The count load signal is used when decoding '1N' codewords and causes the counter to be loaded with the complement of codeword bits k through 3. The count max signal is used when decoding '0' codewords and loads the counter with its maximum value. A count zero output signal indicates when the counter is zero.
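The "all ones" detection and complement-load behavior described above can be sketched as follows. This is a hypothetical model; the bit ordering of the high codeword bits is an assumption.

```python
# Sketch of the combinational part of the long run unit of Figure 22:
# AND logic detects when the high count bits are all ones (remaining run
# shorter than 4), and the k-2 bit counter's load value is the bitwise
# complement of those bits ("count load" for a '1N' codeword).

def long_run_unit(codeword_high_bits):
    """codeword_high_bits: bits [k...3] of the codeword, as a bit string."""
    all_ones = all(b == '1' for b in codeword_high_bits)
    # counter load value: bitwise complement of the high codeword bits
    load_value = int(''.join('1' if b == '0' else '0'
                             for b in codeword_high_bits), 2)
    return all_ones, load_value
```

For example, high bits '111' assert the all ones signal with a load value of zero, while '101' leaves it deasserted and loads the complement value 2.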
One example of the short run count unit 2101 is shown in Figure 23. Referring to Figure 23, the short run count unit contains a control module 2301, a two-bit counter 2302 and a three-bit counter 2303. The control module 2301 receives the enable signal, the codeword[2...0], and the all ones and count zero signals from the long run count unit. The two-bit counter is used to count four-bit runs of MPSs that are part of longer runs. An R2(2) counter and LPS bit (three bits total) 2303 is used to generate the 1-4 bits at the end of a run. The enable input indicates that a bit should be generated on the bit output. The count zero input, when not asserted, indicates that a run of four MPSs should be output. Whenever the MPS counter 2302 reaches zero, the count enable output is asserted. When the count zero input is asserted, either the R2(2) counter and LPS is used or a new codeword is decoded and the next output is asserted.
When the new codeword is decoded, the actions performed are determined by the codeword input. If the input is a '0' codeword, the MPS counter 2302 is used and the count max output is asserted. For '1N' codewords, the first three bits of the codeword are loaded into the R2(2) counter and LPS 2303, and the count load output is asserted. If the all ones input is asserted then the R2(2) counter and LPS 2303 are used to generate bits; otherwise the MPS counter is used until the count zero input is asserted.
From a system perspective, the number of codes must be small for the system to work well, typically 25 or less. The size of the multiplexer needed for bit and next codeword outputs and the decoder for enabling a particular bit generator must be limited for fast operation. Also, the fan-out of the codeword from the shifter must not be too high for high speed operation.
Separate bit generators for each code allow pipelining. If all codewords resulted in at least two bits, processing of codewords could be pipelined in two cycles instead of one. This might double the speed of the decoder if the bit generators were a limiting portion of the system. One way to accomplish this is for the run length zero codeword (the codeword that indicates just an LPS) to be followed by one bit which is the next uncoded bit. These might be called RN(k)+1 codes and would always code at least two bits. Note that R2(0) codewords and perhaps some of the other short codewords do not need to be pipelined for speed.
Separate bit generators lend themselves to use with implicit signaling. Implicit signaling for encoding with finite memory can be accomplished in the following manner. Each bit generator has a counter that is the size of a queue address, for example, 9 bits when a size 512 queue is used. Every time a new codeword is used by a bit generator, the counter is loaded with the maximum value. Any time any bit generator requests a codeword, the counters for all bit generators are decremented. Any time a counter reaches zero, the corresponding bit generator's state is cleared (for example, the MPS counter, the R2(2) counter and LPS, and the long run count counter are cleared). Because clearing can occur even if a particular bit generator is not enabled, there is no problem with stale counts.
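The counter-per-bit-generator scheme described above can be sketched as follows. This is a hypothetical software model with illustrative names; the queue size of 512 matches the example in the text.

```python
# Sketch of implicit-signaling stale-state clearing: every codeword request
# decrements all counters; the requesting generator reloads its counter to
# the maximum; a generator whose counter hits zero clears its run state.

QUEUE_SIZE = 512   # size of the codeword queue (9-bit counters)

class BitGenState:
    def __init__(self):
        self.counter = QUEUE_SIZE
        self.run_state_valid = False   # stands in for MPS/R2(2)/long-run state

def on_codeword_request(generators, requester):
    for g in generators:
        if g.counter > 0:
            g.counter -= 1
            if g.counter == 0:
                g.run_state_valid = False   # clear the stale run state
    # the requester consumed a fresh codeword: reload its counter
    generators[requester].counter = QUEUE_SIZE
    generators[requester].run_state_valid = True
```

After 512 codeword requests by other generators, a generator's counter reaches zero and its state is cleared, whether or not that generator is currently enabled.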
Initialization of Memory for Each Context Bin

In cases where memory for each context bin holds probability estimation information, additional memory bandwidth may be required to initialize the decoder (e.g., the memory) very quickly. Initializing the decoder quickly can be a problem when the decoder has many contexts and they all need to be cleared. When the decoder supports many contexts (1K or more) and the memory cannot be globally cleared, an unacceptably large number of clock cycles would be required to clear the memory.
In order to clear contexts quickly, some examples use an extra bit, referred to herein as the initialized status bit, that is stored with each context. Thus, an extra bit is stored with the PEM state (e.g., 8 bits) for each context.
The memory for each context bin and the initialization control logic are shown in Figure 24. Referring to Figure 24, a context memory 2401 is shown coupled to a register 2402. In one example, the register 2402 comprises a one-bit register that indicates the current proper state for the initialized status bit. The register 2402 is coupled to one input of XOR logic 2403.
Another input to XOR logic 2403 is coupled to an output of the memory 2401.
The output of XOR logic 2403 is the valid signal and is coupled to an input of control logic 2404. Other inputs of control logic 2404 are coupled to the output of counter 2405 and the context bin signal. An output of control logic 2404 is coupled to the select inputs of MUXs 2406-2407 and to an input of counter 2405. Another output of control logic 2404 is coupled to the select input of MUX 2408. The inputs of MUX 2406 are coupled to the output of counter 2405 and the context bin indication. The output of MUX 2406 is coupled to the memory 2401. The inputs of MUX 2407 are coupled to the new PEM state and zero. The output of MUX 2407 is coupled to one input of the memory 2401. The output of memory 2401 and the initial PEM state are coupled to inputs of MUX 2408. The output of MUX 2408 is the PEM state out.
The value in register 2402 is complemented on every occurrence of a decode operation (i.e., each data set, not each decoded bit). XOR logic 2403 compares the validity of the accessed memory location with the register value to determine whether the accessed memory location is valid for this decode operation. This is accomplished using XOR logic 2403 to check if the initialized status bit matches the proper state in register 2402. If the data in memory 2401 is not valid, then control logic 2404 causes the data to be ignored by the state-to-code logic and the initial PEM state to be used instead.
This is accomplished using MUX 2408. When a new PEM state is written to memory, the initialized bit is set to the current value of the register so that it will be considered valid when accessed again.
Every context bin memory entry must have its initialized status bit set to the current value of the register before another decode operation can begin. Counter 2405 steps through all memory locations to assure that they are initialized. Whenever a context bin is used but its PEM state is not updated, the unused write cycle can be used to test or update the memory location pointed to by counter 2405. After a decode operation is complete, if counter 2405 has not reached the maximum value, the remaining contexts are initialized before beginning the next operation. The following logic is used to control operation.
write_it = false;
counter = 0;
all_initialized = false;
while (counter < maximum context bin + 1)
    read PEM state from context memory
    if (counter == context bin read) and (write_it)
        write_it = false
        counter = counter + 1
    if (PEM state changed)
        write new PEM state
    else if (write_it)
        write initial PEM state to memory location "counter"
        counter = counter + 1
        write_it = false
    else
        read memory location "counter"
        if (initialized bit in read location is in wrong state)
            write_it = true
        else
            counter = counter + 1
all_initialized = true;
while (decoding)
    read PEM state from context memory
    if (PEM state changed)
        write new PEM state

PEM with Fast Adaptation

The PEM used in the present invention may include an adaptation scheme to allow faster adaptation regardless of the amount of data available. By doing so, the present invention allows the decoding to adapt more quickly initially, and to adapt more slowly as more data is available, as a means for providing a more accurate estimate. Furthermore, the PEM may be fixed in a field programmable gate array (FPGA) or ASIC implementation of a PEM state table/machine.
Tables 20-25 below describe a number of probability estimation state machines. Some tables do not use R3 codes or do not use long codes, for reduced hardware cost. All tables except for Table 20 use "fast adapting" special states to quickly adapt at the start of coding until the first LPS occurs. These fast adaptation states are shown italicized in the tables. For instance, referring to Table 21, when decoding begins, the current state is state 0. If an MPS occurs, then the decoder transitions to state 35. As long as MPSs occur, the decoder transitions upward from state 35, eventually transitioning to state 28. If an LPS occurs at any time, the decoder transitions out of the fast adapting states to a state that represents the correct probability state for the data that has been received thus far.
Note that for each table, after a certain number of MPSs have been received, the decoder transitions out of the fast adapting states. In the desired embodiment, once the fast adapting states have been exited, there is no mechanism to return to them, aside from restarting the decoding process. In other embodiments, the state table may be designed to reenter these fast adapting states. By allowing faster adaptation, the present invention allows the decoder to arrive at the more skewed codes faster, thereby possibly benefiting from improved compression. Note that the fast adaptation can be eliminated for a particular table by changing the table entry for current state 0 such that the table transitions only one state up or down depending on the data input.
For all the tables, the data for each state is the code for that state, the next state on a positive update (up) and the next state on a negative update (down). Asterisks indicate states where the MPS must be changed on a negative update.
Table 20
Current state   Code     Up next state   Down next state
 0              r2(0)     1               0*
 1              r2(0)     2               0
 2              r2(0)     3               1
 3              r2(0)     4               2
 4              r2(0)     5               3
 5              r2(0)     6               4
 6              r2(1)     7               5
 7              r2(1)     8               6
 8              r2(1)     9               7
 9              r2(1)    10               8
10              r2(1)    11               9
11              r2(1)    12              10
12              r3(1)    13              11
13              r3(1)    14              12
14              r3(1)    15              13
15              r2(2)    16              14
16              r3(2)    17              15
17              r2(3)    18              16
18              r3(3)    19              17
19              r2(4)    20              18
20              r3(4)    21              19
21              r2(5)    22              20
22              r3(5)    23              21
23              r2(6)    24              22
24              r3(6)    25              23
25              r2(7)    26              24
26              r3(7)    27              25
27              r2(8)    28              26
28              r3(8)    29              27
29              r2(9)    30              28
30              r3(9)    31              29
31              r2(10)   32              30
32              r3(10)   33              31
33              r2(11)   34              32
34              r3(11)   34              33
* Switch MPS

Table 21
Current state   Code     Up next state   Down next state
 0              r2(0)    35              35
 1              r2(0)     2               1*
 2              r2(0)     3               1
 3              r2(0)     4               2
 4              r2(0)     5               3
 5              r2(0)     6               4
 6              r2(1)     7               5
 7              r2(1)     8               6
 8              r2(1)     9               7
 9              r2(1)    10               8
10              r2(1)    11               9
11              r2(1)    12              10
12              r3(1)    13              11
13              r3(1)    14              12
14              r3(1)    15              13
15              r2(2)    16              14
16              r3(2)    17              15
17              r2(3)    18              16
18              r3(3)    19              17
19              r2(4)    20              18
20              r3(4)    21              19
21              r2(5)    22              20
22              r3(5)    23              21
23              r2(6)    24              22
24              r3(6)    25              23
25              r2(7)    26              24
26              r3(7)    27              25
27              r2(8)    28              26
28              r3(8)    29              27
29              r2(9)    30              28
30              r3(9)    31              29
31              r2(10)   32              30
32              r3(10)   33              31
33              r2(11)   34              32
34              r3(11)   34              33
35              r2(0)    36               1
36              r2(1)    37               2
37              r2(2)    38               4
38              r2(3)    39               6
39              r2(4)    40              10
40              r2(5)    41              16
41              r2(6)    42              19
42              r2(7)    43              22
43              r2(8)    28              25
* Switch MPS

Table 22
Current Code Up next Do State state next state 0 r2(0) 35 35 1 r2(0) 2 1 2 r2(0) 3 1 3 r2(0) 4 2 4 r2(0) 5 3 r2(0) 6 4 6 r2M 7 5 7 r2M 8 6 8 r2M 9 7 9 r2M 10 8 r2M 11 9 11 r2M 12 10 12 r2(1) 13 11 13 r2(2) 14 12 14 r2(2) 15 13 r2(2) 16 14 16 r2(2) 17 15 17 r2(3) is 16 is r2(3) 19 17 19 r2(4) 20 18 r2(4) 21 19 r2(5) 22 20 Current Code UP next Doi%m State state next state 22 r2(5) 23 21 23 r2(6) 24 22 24" r2(6) 25 23 r2(7) 26 24 26 r2(7) 27 25 27 r2(8) 28 26 28 r2(8) 2-9 27 29 r2(9) 30 28 r2(9) 31 29 31 r2(10) 32 30 32 r20 0) 33 31 33 r2(1 1) 33 32 r2(0) 36 1 56 r2M 37 2 57 r2(2) 38 4 58 r2(3) 39 6 59 r2(4) 40 10 r23) 41 16 41 r2(6) 42 19 42 r2(7) 43 22 r2(8) 28 25 Switch MPS -122- Table 23
Current Code Up next Down State state next state 0 r2(0) 35 35 1 r2(0) 2 1 2 r2(0) 3 1 3 r2(0) 4 2 4 r2(0) 5 3 r2(0) 6 4 6 r2M 7 5 7 r2M 8 6 8 r2M 9 7 9 r2M 10 8 r2M 11 9 11 r2M 12 10 12 r3M 13 11 13 r3M 14 12 14 r3M 15 13 15. r2(2) 16 14 16 r3(2) 17 15 17 r2(3) is 16 18 r3(3) 19 17 19 r2(4) 20 18 r3W 21 19 21 22 20 Current Code Up next Down State state next state 22 r2(5) 23 21 23 r2(6) 24 22 24 r2(6) 25 23 2-5 r2(7) 26 24 26 r2(7) 27 25 27 r2(8) 28 26 28 r2M 29 27 29 r2(9) 30 28 r2(9) 31 29 31 r2(10) 32 30 32 r2(1 0) 33 31 33 r2(1 1) 34 32 34 r2(11) 34 33 i2(0) 36 1 56 r2M 37 2 57 r2(2) 38 4 58 r2(3) 39 6 59 r2(4) 40 10 r2(5) 41 16 41 r2(6) 42 19 42 r2M 43 22 r2(8) 28 25 0 51%itch mils -123- Table 24
Current Code Up next Down State state next state 0 r2(0) 35 35 1 r2(0) 2 1 2 r2(0) 3 1 3 r2(0) 4 2 4 r2(0) 5 3 r2(0) 6 4 6 r2M 7 5 7 r2M 8 6 8 r2M 9 7 9 r2M 10 8 r2M 11 9 11 r2M 12 10 12 r3M 13 11 13 r3M 14 12 14 r3M 15 13 r2(2) 16 14 16 r3(2) 17 15 17 r2(3) is 16 18 r3(3) 19 17 19 r2(4) 20 is r3W 21 19 21 r25) 2-2 0 Switch MPS Current Code Up next Down State state next state 22 r3(5) 23 21 23 r2(6) 24 22 24 r3(6) 25 23 r2(7) 26 24 26 r2(7) 27 25 27 r2(7) 27 26 r2(0) 36 1 56 r2M 37 2 57 r2(2) 38 4 58 r2(3) 39 6 59 r2M 40 10 r2(5) 41 16 41 r2(6) 42 19 42 r2(7) 25 22 -124- Table 25
Current Code Up next Down State state next state 0 r2(0) 35 35 1 r2(0) 2 1 2 r2(0) 3 1 3 r2(0) 4 2 4 r2(0) 5 3 r2(0) 6 4 6 r2M 7 5 7 r2M 8 6 8 r2M 9 7 9 r2(1) 10 8 r2M 11 9 11 r2M 12 10 12 r2(1) 13 11 13 r2(2) 14 12 14 r2(2) 15 13 r2(2) 16 14 16 r2(2) 17 15 17 r2(3) 18 16 is r2(3) 19 17 19 r2(4) 20 18 r2(4) 21 19 r23) 22 0 0 5%itch mils Current Code Up next Down State state next state 22 r2(5) 23 21 23 r2(6) 24 22 24 r2(6) 25 23 r2(7) 26 24 26 r2(7) 27 27 r2(7) 28 26 r2(7) 28 27 -55 r2(0) 36 1 56 r2M 37 2 37 r2(2) 38 4 38 r2(3) 39 6 39 r2(4) 40 10 r2(5) 41 16 41 r2(6) 42 19 4 r2(7) 23 22 4.5 r2(8) 28 25 -125- Adding a fast adaptation to probability estimation only helps at the start of coding. Other methods can be used to improve adaptation during coding when the statistics of a context bin change more rapidly than the previously described PEM state tables can track.
One method of maintaining fast adaptation throughout coding is to add an acceleration term to the PEM state update. This acceleration could be incorporated into a PEM state table by repeating every code a constant number of times (e.g., 8). Then an acceleration term M (e.g., a positive integer) can be added or subtracted from the current state when updating.
When M is 1, the system operates the same as one without acceleration and the slowest adaptation occurs. When M is greater than 1, faster adaptation occurs. Initially, M may be set to some value greater than 1 to provide an initial fast adaptation.
One method of the present invention for updating the value of M is based on the number of consecutive codewords. For instance, if a predetermined number of codewords occurred consecutively, then the value of M is increased. For instance, if four consecutive codewords are V V V M' or M W M W M W M W, then the value of M is increased. On the other hand, a pattern of switching between '7 and M W codewords may be used to decrease the value of M. For instance, if four consecutive codewDrds are V 01 W V 1 NH or 1 N- '0' -1 M' M' then the value of M is decreased.
Another method of acceleration uses state tables in which each code is repeated S times, where S is a positive integer. S is an inverse acceleration parameter. When S is one, adaptation is fast, and when S is larger, adaptation is slower. The value of S can be initially setto 1 to provide initial BAD ORIGINAL - -126- fast adaptation. Using a similar method to the one described above, the value of S may be updated when four consecutive codewords are V '0' now now or al NR wl Nwe l NO col Nw. In such a case, the value of S is decreased. In contrast, if four consecutive codewords are'nOw 1 NO Ow Nw or 1 N- '0' -1 N-.0% then the value of S is increased.
The definition of consecutive codewords can have several meanings.
In a 'by contextw system. consecutive codewords may refer to consecutive cDdewords in one context bin. In a "by probability' system, consecutive codewords may refer to consecutive codewords in one probability class.
Alternatively, in either system consecutive codewords may refer to consecutive codewords globally (without regard to context bin or probability class). For these three examples, the bits of storage required to maintain a history of codewords is 3 x number of context bins, 3 x number-of_probability_classes and 3 respectively. Maintaining acceleration for each context bin might provide the best adaptation. Since poor tracking often due to a global change in the uncoded data, determining acceleration globally might also provide good adaptation.
Syster-n Applications One virtue of any compression system is to reduce storage requirements for a set of data. The parallel system of the present invention may be substituted for any applicatior currently fulfilled by a lossless coding system, and may be applied to systems operating on audio, 1e4 databases, computer executable, or other digital data, signals or symbols. Exemplary lossless coding systems include facsimile compression, database 13XD OR1G1t4AL is -127.
compression, compression of bitmap graphic images, and compression of transform coefficients In image compression standards such as JPEG and MPEG. The presentinvenlion allows small efficient hardware implementation and relatively fast software implementations making it a good choice even for 5 applications that do not require high speed.
The real virtue that the present invention has over the prior art is the
Ibility of operation at very high speeds, especially for cl 1 poss 1 ecoding. In this manner. the present invention can make full use of expensive high speed channels. such as high speed computer networks, satellite and terrestrial broadcast channels. Figure 26 illustrates such a system. wherein broadcast data or a high speed computer network supplies data to decoding system 2801 which decodes the data in parallel to produce output data. Current hardware entropy (such as the Q-Coder) would slow the throughput of these systems. All of these systems are designed. at great cost, to have high bandwidth. 11 is counter productive to have a decoder slow the throughput.
The present invention not only accommodates these high bandwidths, it actual!y increases the effective bandwidth because the data can be transmitted in a compressed form.
The present invention is also applicable to obtaining more effective bandwidth out of moderately fast channels like ISDN, CD-ROM, and SCSI. Such a bandvvidoh matching system is shown in Figure 29. whereas data from sources, such as a CD-ROM, Ethernet, Small Computer Standard Interface (SCSI), or other similar source, is coupled to decoding system 2901. whicn receives and decodes to,he data to produce an output. These channels are still faster than some current coders. OCen these BAD ORGINAL -128channels are used to service a data source that requires more bandwidth than the channel has, such as real-time video or computer based multimedia. The system of the present invention can perform the role of bandwidth matching.
The system of the present invention is an excellent choice for an entropy coder part of a real-time video system like the High Definition Television (HDTV) and the MPEG video standards. Such a system is shown in Figure 30. Referring to Figure 30, the real-time video system includes decoding system 3001 which is coupled to compressed image data. System 3001 decodes the data and outputs it to lossy decoder 3002. Lossy decoder 3002 could be the transform, color conversion and subsampling portion of an HDTV or MPEG decoder. Monitor 3DO3 may be a television or video monitor.
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that the particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of the preferred embodiment are not intended to limit the scope of the claims which in themselves recite only those features regarded as essential to the invention.
Thus, a method and apparatus for parallel decoding and encoding of data has been described.
BPO C)BIGINAL - 129 - Attention is drawn to the following UK patent applications:
Patent Application Number 9518375.2 (Publication Number GB 2 293 735), from which the present application is divided, which claims aspects of an encoding method and encoding system using an encoder, context model and state memory, reorder unit and reorder memory; Patent Application Number RG2-'-i'j5-1) (Publication Number GB), a further divisional of Application Number 9518375.2, which claims aspects of a decoder for decoding a plurality of interleaved words, comprising a variable length shifting mechanism, a run length decoder, a probability estimation machine and a plurality of registers; is Patent Application Number (Publication Number GB a further divisional of Application Number 9518375.2, which claims aspects of a decoding method employing a counter associated with each run counter which is loaded with the count value corresponding to the size of codeword memory used during encoding; and Patent Application Number (Publication Number GB a further divisional of Application Number 9518375.2, which claims aspects of a decoding system using a context modelling mechanism having a plurality of integrated circuits, a memory and a plurality of decoders.
BAD ORIGINAL - 130 - C L A 1 M S 1 ' 1. A coding system foy processing data comprising: an index generator for generating indices based on the data; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a first plurality of states and a second plurality of states, wherein each of the states corresponds to a Code, and further wherein transitions between different codes corresponding to the fimt plurality of states occurs faster when transitioning between states in the first plurality of states than transitions between different codes Corresponding to the second plurality of states when transitioning in the second plurality of states.

Claims (1)

  2. The coding system defined in Claim 1 wherein the first plurality of states are used only for a predetermined number of indices.
    3. The coding system defined in Claim 1 wherein the first plurality of states are used only for a predetermined number of indices that initially index the state table.
    4. The coding system defined in Claim 1 wherein each of the first plurality of states is associated with an R2 code.
    5. The coding system defined in Claim 2 wherein the first plurality of states includes at least one transition to the second plurality of states, such that the state table transitions to the second plurality of states from the first plurality of states after the predetermined number of indices.
    6. The coding system defined in Claim 1 wherein each of the first plurality of states is associated with a different code.
    7. The coding system defined in Claim 1 wherein the state table transitions from one of the first plurality of states to one of the second plurality of states in response to a least probable symbol.
    8. The coding system defined in Claim 1 wherein the state machine increases state in response to a most probable symbol.
    9. A coding system for processing data comprising: an index generator for generating indices based on the data; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a plurality of states, wherein each of the states corresponds to a code, and every code in the state table is repeated a predetermined number of times; wherein transitioning between states of the state table occurs based on an acceleration term that is modifiable, such that a first rate of transitioning between states during a first time period is different than a second rate of transitioning during a second time period.
    10. The coding system defined in Claim 9 wherein updates to the state table comprise modifying the PEM state by incrementing or decrementing the acceleration term.
    11. The coding system defined in Claim 10 wherein no adaptive acceleration occurs when the acceleration term comprises a predetermined number.
    12. The coding system defined in Claim 10 wherein the acceleration term is updated based on the number of consecutive codewords.
    13. The coding system defined in Claim 12 wherein consecutive codewords comprises consecutive codewords in a context.
    14. The coding system defined in Claim 12 wherein consecutive codewords comprises consecutive codewords in a probability class.
    15. The coding system defined in Claim 10 wherein the acceleration term is updated based on the number of alternating codewords.
    16. An entropy decoder for decoding a data stream of a plurality of codewords comprising:
    a plurality of bit stream generators for receiving the data stream; and a state table coupled to the plurality of bit stream generators to provide a probability estimate to the plurality of bit stream generators, wherein the plurality of bit stream generators generates a decoded result for each codeword in the data stream in response to the probability estimate using an Rn(k) code for multiple values of n, and further wherein the state table includes a first plurality of states and a second plurality of states, wherein transitions between different codes in the first plurality of states occur faster when transitioning in the first plurality of states than transitions between codes when transitioning in the second plurality of states.
    17. The entropy decoder defined in Claim 16 wherein the first plurality of states each contain an R2(k) code.
    18. The entropy decoder defined in Claim 16 wherein the first plurality of states are only used during initialization.
    19. An entropy decoder for decoding a data stream of a plurality of codewords comprising: a plurality of bit stream generators for receiving the data stream; and a state table coupled to provide a probability estimate based on the indices, wherein the state table includes a plurality of states, wherein each of the states corresponds to a code, and every code in the state table is repeated a predetermined number of times; wherein transitioning between states of the state table occurs based on an acceleration term that is modifiable, such that a first rate of transitioning between states during a first time period is different than a second rate of transitioning during a second time period.
    20. The entropy decoder defined in Claim 16 wherein every code in the state table is repeated a constant number of times.
    21. The entropy decoder defined in Claim 20, wherein updates to the state table comprise modifying the PEM state by an acceleration term.
    22. The entropy decoder defined in Claim 21, wherein no adaptive acceleration occurs when the acceleration term comprises a predetermined number.
    23. The entropy decoder defined in Claim 21 wherein the acceleration term is updated based on the number of consecutive codewords.
    24. The entropy decoder defined in Claim 21, wherein the acceleration term is updated based on the number of alternating codewords.
    25. A coding system, according to any one of Claims 1 to 15 and substantially as described herein.
    26. An entropy decoder, according to any one of Claims 16 to 24 and substantially as described herein.
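Claims 9 to 15 and 19 to 24 describe a probability estimation machine (PEM) whose adaptation speed is governed by a modifiable acceleration term. The following sketch is a hypothetical illustration of that idea only (the class, its names, and the specific update policy are assumptions, not the patent's state table): the PEM state index moves by a variable step on each update, and the step itself grows when outcomes repeat (consecutive codewords) and shrinks when they alternate, yielding different transition rates at different times.

```python
# Hypothetical sketch of an adaptive-speed PEM. Higher state indices
# would map to more skewed probability estimates (more aggressive codes);
# the acceleration term scales how far the state moves per update.

class AdaptivePEM:
    def __init__(self, num_states=16):
        self.num_states = num_states
        self.state = 0           # current PEM state index
        self.accel = 1           # acceleration term (step size per update)
        self.last_was_mps = None  # outcome of the previous update

    def update(self, was_mps):
        """Update the PEM state after one codeword.

        was_mps: True if the codeword indicated the most probable symbol.
        """
        if was_mps:
            # Most probable symbol: move toward more skewed estimates.
            self.state = min(self.state + self.accel, self.num_states - 1)
        else:
            # Least probable symbol: move toward less skewed estimates.
            self.state = max(self.state - self.accel, 0)

        # Adapt the acceleration term itself: consecutive identical
        # outcomes speed adaptation up; alternating outcomes slow it down.
        if self.last_was_mps is None or was_mps == self.last_was_mps:
            self.accel = min(self.accel + 1, self.num_states // 2)
        else:
            self.accel = max(self.accel - 1, 1)
        self.last_was_mps = was_mps
```

With this policy, three consecutive MPS outcomes move the state by 1, then 2, then 3 positions (to state 6) while the step grows to 4; a following LPS pulls the state back by that larger step, so early or bursty data adapts quickly while alternating data settles down.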
GB9624358A 1994-09-30 1995-09-07 A coding system and entropy decoder Expired - Fee Related GB2306280B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US31611694A 1994-09-30 1994-09-30
GB9518375A GB2293735B (en) 1994-09-30 1995-09-07 Method and apparatus for encoding data

Publications (3)

Publication Number Publication Date
GB9624358D0 GB9624358D0 (en) 1997-01-08
GB2306280A true GB2306280A (en) 1997-04-30
GB2306280B GB2306280B (en) 1997-10-22

Family

ID=26307711

Family Applications (4)

Application Number Title Priority Date Filing Date
GB9624358A Expired - Fee Related GB2306280B (en) 1994-09-30 1995-09-07 A coding system and entropy decoder
GB9624357A Expired - Fee Related GB2306279B (en) 1994-09-30 1995-09-07 Apparatus for decoding data
GB9624754A Expired - Fee Related GB2306868B (en) 1994-09-30 1995-09-07 Apparatus for decoding data
GB9624640A Expired - Fee Related GB2306281B (en) 1994-09-30 1995-09-07 Method for decoding data

Family Applications After (3)

Application Number Title Priority Date Filing Date
GB9624357A Expired - Fee Related GB2306279B (en) 1994-09-30 1995-09-07 Apparatus for decoding data
GB9624754A Expired - Fee Related GB2306868B (en) 1994-09-30 1995-09-07 Apparatus for decoding data
GB9624640A Expired - Fee Related GB2306281B (en) 1994-09-30 1995-09-07 Method for decoding data

Country Status (1)

Country Link
GB (4) GB2306280B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2340703A (en) * 1998-06-04 2000-02-23 Ricoh Kk Adaptive coder which adjusts the speed with which it switches between possible code schemes based on its output

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2356508B (en) * 1999-11-16 2004-03-17 Sony Uk Ltd Data processor and data processing method
US11113054B2 (en) 2013-09-10 2021-09-07 Oracle International Corporation Efficient hardware instructions for single instruction multiple data processors: fast fixed-length value compression
US9430390B2 (en) 2013-09-21 2016-08-30 Oracle International Corporation Core in-memory space and object management architecture in a traditional RDBMS supporting DW and OLTP applications
US10025822B2 (en) 2015-05-29 2018-07-17 Oracle International Corporation Optimizing execution plans for in-memory-aware joins
US10067954B2 (en) 2015-07-22 2018-09-04 Oracle International Corporation Use of dynamic dictionary encoding with an associated hash table to support many-to-many joins and aggregations
US10419772B2 (en) 2015-10-28 2019-09-17 Qualcomm Incorporated Parallel arithmetic coding techniques
US10061714B2 (en) 2016-03-18 2018-08-28 Oracle International Corporation Tuple encoding aware direct memory access engine for scratchpad enabled multicore processors
US10055358B2 (en) * 2016-03-18 2018-08-21 Oracle International Corporation Run length encoding aware direct memory access filtering engine for scratchpad enabled multicore processors
US10402425B2 (en) 2016-03-18 2019-09-03 Oracle International Corporation Tuple encoding aware direct memory access engine for scratchpad enabled multi-core processors
US10061832B2 (en) 2016-11-28 2018-08-28 Oracle International Corporation Database tuple-encoding-aware data partitioning in a direct memory access engine
US10599488B2 (en) 2016-06-29 2020-03-24 Oracle International Corporation Multi-purpose events for notification and sequence control in multi-core processor systems
US10380058B2 (en) 2016-09-06 2019-08-13 Oracle International Corporation Processor core to coprocessor interface with FIFO semantics
US10783102B2 (en) 2016-10-11 2020-09-22 Oracle International Corporation Dynamically configurable high performance database-aware hash engine
US10176114B2 (en) 2016-11-28 2019-01-08 Oracle International Corporation Row identification number generation in database direct memory access engine
US10459859B2 (en) 2016-11-28 2019-10-29 Oracle International Corporation Multicast copy ring for database direct memory access filtering engine
US10725947B2 (en) 2016-11-29 2020-07-28 Oracle International Corporation Bit vector gather row count calculation and handling in direct memory access engine
CN116192154B (en) * 2023-04-28 2023-06-27 北京爱芯科技有限公司 Data compression and data decompression method and device, electronic equipment and chip

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0260461A2 (en) * 1986-09-15 1988-03-23 International Business Machines Corporation Arithmetic coding encoding and decoding method
EP0260460A2 (en) * 1986-09-15 1988-03-23 International Business Machines Corporation Arithmetic coding with probability estimation based on decision history
US5274478A (en) * 1990-03-31 1993-12-28 Goldstar Co., Ltd. Displayer with holograms
US5363099A (en) * 1992-08-17 1994-11-08 Ricoh Corporation Method and apparatus for entropy coding
US5381145A (en) * 1993-02-10 1995-01-10 Ricoh Corporation Method and apparatus for parallel decoding and encoding of data
GB2285374A (en) * 1993-12-23 1995-07-05 Ricoh Kk Parallel encoding and decoding of data
US5475388A (en) * 1992-08-17 1995-12-12 Ricoh Corporation Method and apparatus for using finite state machines to perform channel modulation and error correction and entropy coding
US5539401A (en) * 1994-08-31 1996-07-23 Mitsubishi Denki Kabushiki Kaisha Variable-length code table and variable-length coding device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0260461A2 (en) * 1986-09-15 1988-03-23 International Business Machines Corporation Arithmetic coding encoding and decoding method
EP0260460A2 (en) * 1986-09-15 1988-03-23 International Business Machines Corporation Arithmetic coding with probability estimation based on decision history
US5274478A (en) * 1990-03-31 1993-12-28 Goldstar Co., Ltd. Displayer with holograms
US5363099A (en) * 1992-08-17 1994-11-08 Ricoh Corporation Method and apparatus for entropy coding
US5475388A (en) * 1992-08-17 1995-12-12 Ricoh Corporation Method and apparatus for using finite state machines to perform channel modulation and error correction and entropy coding
US5381145A (en) * 1993-02-10 1995-01-10 Ricoh Corporation Method and apparatus for parallel decoding and encoding of data
US5471206A (en) * 1993-02-10 1995-11-28 Ricoh Corporation Method and apparatus for parallel decoding and encoding of data
US5583500A (en) * 1993-02-10 1996-12-10 Ricoh Corporation Method and apparatus for parallel encoding and decoding of data
GB2285374A (en) * 1993-12-23 1995-07-05 Ricoh Kk Parallel encoding and decoding of data
US5539401A (en) * 1994-08-31 1996-07-23 Mitsubishi Denki Kabushiki Kaisha Variable-length code table and variable-length coding device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2340703A (en) * 1998-06-04 2000-02-23 Ricoh Kk Adaptive coder which adjusts the speed with which it switches between possible code schemes based on its output
GB2340703B (en) * 1998-06-04 2000-10-11 Ricoh Kk Adaptive coding with adaptive speed
US6222468B1 (en) 1998-06-04 2001-04-24 Ricoh Company, Ltd. Adaptive coding with adaptive speed

Also Published As

Publication number Publication date
GB9624357D0 (en) 1997-01-08
GB2306868A (en) 1997-05-07
GB2306279B (en) 1997-10-22
GB9624754D0 (en) 1997-01-15
GB2306281B (en) 1997-10-22
GB2306868B (en) 1997-10-22
GB2306279A (en) 1997-04-30
GB2306280B (en) 1997-10-22
GB2306281A (en) 1997-04-30
GB9624358D0 (en) 1997-01-08
GB9624640D0 (en) 1997-01-15

Similar Documents

Publication Publication Date Title
US5717394A (en) Method and apparatus for encoding and decoding data
CA2156889C (en) Method and apparatus for encoding and decoding data
GB2306280A (en) A coding system and entropy decoder
US11705924B2 (en) Low-latency encoding using a bypass sub-stream and an entropy encoded sub-stream
US9698818B2 (en) Entropy encoding and decoding scheme
US5583500A (en) Method and apparatus for parallel encoding and decoding of data
US8907823B2 (en) Entropy coding
EP1912443B1 (en) Context-based adaptive arithmetic decoding system and apparatus
US5381145A (en) Method and apparatus for parallel decoding and encoding of data
US5973626A (en) Byte-based prefix encoding
US5912636A (en) Apparatus and method for performing m-ary finite state machine entropy coding
US20140210652A1 (en) Entropy coding
US5808570A (en) Device and method for pair-match Huffman transcoding and high-performance variable length decoder with two-word bit stream segmentation which utilizes the same
US20050007263A1 (en) Video coding
GB2333000A (en) Finite state machine coding of information
JP2022504604A (en) Methods and equipment for image compression
US5835033A (en) Decoding apparatus and method for coded data
KR100450753B1 (en) Programmable variable length decoder including interface of CPU processor
JP3230933B2 (en) Data decompression device, data decompression method, decoding device, decoding method, encoding device, and entropy decoder
CA2273144C (en) Apparatus and system for decoding data
Howard Interleaving entropy codes
US20240338855A1 (en) Split runlength encoding compression and decompression of multi-planar image data
JP2002152756A (en) Moving picture coder
KR19990038968A (en) Data variable device of variable length decoder
An et al. A video encoder/decoder architecture for consumer-use HD-DVCRs

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20070907