WO2000041396A1 - Method and device for robust decoding of header information in macroblock-based compressed video data - Google Patents

Method and device for robust decoding of header information in macroblock-based compressed video data

Info

Publication number
WO2000041396A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
optimal sequence
determining
codewords
encoded
Prior art date
Application number
PCT/US2000/000373
Other languages
French (fr)
Inventor
Jiangtao Wen
Original Assignee
Packetvideo Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Packetvideo Corporation filed Critical Packetvideo Corporation
Priority to EP00906881A priority Critical patent/EP1142344A1/en
Priority to KR1020017008651A priority patent/KR20010108077A/en
Priority to JP2000593025A priority patent/JP2002534920A/en
Publication of WO2000041396A1 publication Critical patent/WO2000041396A1/en

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Error Detection And Correction (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

A method and apparatus for decoding encoded parameter data included within a packet containing encoded video data transmitted over an error prone channel is disclosed herein. The method contemplates determining a bit length L of the encoded parameter data. Candidate sequences of codewords are then compared to the encoded parameter data in accordance with a predetermined distortion metric. An optimized sequence is selected from the candidate sequences based upon predefined criteria related to the distortion metric. The optimized sequence collectively has a number of bits equivalent to the bit length L and is usable in decoding the encoded video data.

Description

METHOD AND DEVICE FOR ROBUST DECODING OF
HEADER INFORMATION IN MACROBLOCK-BASED
COMPRESSED VIDEO DATA
FIELD OF THE INVENTION
The present invention relates to the recovery of compressed digital data, and more particularly, to a device and method for decoding header information in macroblock-based encoded digital signals transmitted over error-prone channels.
BACKGROUND OF THE INVENTION
Recently, demands for full motion video in such applications as video telephony, video conferencing, and multimedia applications have required that standards be introduced for motion video on computer and related systems. Such applications have required the development of compression techniques which can reduce the amount of data required to represent a moving image and corresponding sound to manageable levels in order to, for example, facilitate data transmission using conventional communications hardware.
Variable-length coding is a coding technique often used for lossless data compression. In accordance with this technique, an 8x8 block of pixels of the video data is converted into discrete cosine transform ("DCT") coefficients. The DCT coefficients are then quantized by quantization factors. The quantized DCT coefficients are Huffman encoded to form Huffman codewords. Such an encoding of the video data contained in the bitstreams is commonly used to construct a minimum redundant variable-length code for a known data statistic. One set of standards using Huffman encoding for compression of motion picture video images for transmission or storage is known as the Motion Picture Experts Group ("MPEG") set of standards. Each MPEG standard is an international standard for the compression of motion video pictures and audio. The MPEG standards allow motion picture video to be compressed along with the corresponding high quality sound and provide other features such as single frame advance, reverse motion, and still frame video.
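To make the pipeline above concrete, the following sketch (not part of the patent) transforms and quantizes a single 8x8 block with SciPy's DCT; the flat quantization step and the omitted run-length/Huffman stage are illustrative assumptions only.

```python
import numpy as np
from scipy.fftpack import dct

def encode_block(block_8x8, qstep=16):
    """Apply a 2-D DCT to an 8x8 pixel block and quantize the coefficients.

    The flat quantization step `qstep` is a placeholder, not a value taken
    from any particular coding standard.
    """
    coeffs = dct(dct(block_8x8.astype(float), axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    quantized = np.round(coeffs / qstep).astype(int)
    # The quantized coefficients would then be run-length and Huffman coded
    # into variable-length codewords for the bitstream (omitted here).
    return quantized
```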
The decoding and processing of the MPEG video bitstreams are critical to the performance of any MPEG decoding system. The compressed MPEG video bitstreams contain the various parameters needed in the reconstruction of the audio and video data. The MPEG bitstream can easily be divided into two bitstreams, audio and video. The MPEG video bitstream consists of the video parameters, as well as the actual compressed video data. Two versions of the MPEG video standard which have received widespread adoption are commonly known as the MPEG-1 and MPEG-2 standards. In general, the MPEG-2 standard has higher resolution than the MPEG-1 standard and enables broadcast transmission at a rate of 4-6 Mbps. In addition to the MPEG-1 and MPEG-2 standards, a proposed MPEG-4 standard is currently being standardized by the ISO/IEC. The MPEG-4 standard is intended to facilitate, for example, content-based interactivity and certain wireless applications.
The video codecs specified by the standards provide compression of a digital video sequence by utilizing a block motion-compensated DCT. In a first block-matching step of the DCT process, an algorithm estimates and compensates for the motion that occurs between two temporally adjacent frames. The frames are then compensated for the estimated motion and compared to form a difference image. By taking the difference between the two temporally adjacent frames, all existing temporal redundancy is removed. The only information that remains is new information that could not be compensated for in the motion estimation and compensation algorithm.
In a second step, this new information is transformed into the frequency domain using the DCT. The DCT has the property of compacting the energy of this new information into a few low frequency components. Further compression of the video sequence is obtained by limiting the amount of high frequency information encoded.
The majority of the compression provided by this approach to video encoding is obtained by the motion estimation and compensation algorithm. That is, it has been found to be more efficient to transmit information regarding the motion that exists in a video sequence, as opposed to information about the intensity and color. The motion information is represented using vectors which point from a particular location in the current intensity frame to where that same location originated in the previous intensity frame. For block-matching, the locations are predetermined non-overlapping blocks of equal size called macroblocks ("MBs"). All pixels contained in a MB are assumed to have the same motion. The motion vector associated with a particular MB in the present frame of a video sequence is found by searching over a predetermined search area in the previous temporally adjacent frame for a best match. The motion vector points from the center of the MB in the current frame to the center of the block which provides the best match in the previous frame. Utilizing the estimated motion vectors, a copy of the previous frame is altered by each vector to produce a prediction of the current frame. This operation is referred to as motion compensation. As described above, each predicted MB is subtracted from the current MB to produce a differential MB which is transformed into the spatial frequency domain by the DCT. These spatial frequency coefficients are quantized and entropy encoded, providing further compression of the original video sequence. The motion vectors are compressed using differential pulse code modulation ("DPCM") and entropy encoding. Both the motion vectors and the DCT coefficients are transmitted to the decoder, where the inverse operations are performed to produce the decoded video sequence. Because the video codecs specified by the standards are very efficient at removing all but the most essential information, any errors in the reconstruction process effected by the decoder result in a portion of the video being reconstructed incorrectly.
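As an illustration of the block-matching search just described (again, a sketch rather than code from the patent), the following performs an exhaustive sum-of-absolute-differences search; the 16x16 macroblock size and the +/-8 pixel search range are assumptions chosen for the example.

```python
import numpy as np

def find_motion_vector(prev_frame, curr_frame, top, left, mb=16, rng=8):
    """Return (dy, dx) pointing from the macroblock at (top, left) in the
    current frame to its best match in the previous frame (numpy arrays)."""
    target = curr_frame[top:top + mb, left:left + mb].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + mb > prev_frame.shape[0] or x + mb > prev_frame.shape[1]:
                continue                      # candidate block falls outside the frame
            cand = prev_frame[y:y + mb, x:x + mb].astype(int)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```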
Efforts have been made to design the MPEG-4 standard to be particularly robust in its ability to accommodate transmission errors in order to allow accessing of image or video information over a wide range of storage and transmission media. In this regard a number of different types of tools have been developed to enhance the error resiliency of the MPEG-4 standard. These tools may be characterized as relating to resynchronization, data recovery and error concealment. In a particular error resilient mode of the MPEG-4 standard, fixed length packets separated by resynchronization markers are used to transmit the video data. Within each packet, header information for the packet is placed in an initial packet segment and the actual encoded video data occupies the remainder of the packet. Information contained in the header portion of the packet includes an index to the first MB in the packet, quantization information, information concerning macroblock type and coded block pattern for chrominance ("MCBPC"), and motion information.
Detection, location and correction of any errors present in the header information are crucial to ensuring that the decoded video information is of sufficient quality. This is particularly important in the context of wireless communication systems, which operate in particularly error-prone environments.
SUMMARY OF THE INVENTION
Briefly, therefore, this invention provides for a method and apparatus for decoding encoded parameter data included within a packet containing encoded video data. The inventive method contemplates determining a bit length L of the encoded parameter data. Candidate sequences of codewords are then compared to the encoded parameter data in accordance with a predetermined distortion metric. An optimized sequence is selected from the candidate sequences based upon predefined criteria related to the distortion metric. The optimized sequence collectively has a number of bits equivalent to the bit length L and is usable in decoding the encoded video data.
A first of the candidate sequences is preferably generated by selecting a first codeword hypothesis and determining a first conditionally optimal sequence of N-1 codewords associated with the first codeword hypothesis. Other candidate sequences are then generated by selecting different codeword hypotheses and determining associated conditionally optimal sequences of N-1 codewords.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
FIG. 1 is a block diagram of a video decoder operative to decode packet header data in accordance with the dynamic soft decoding technique of the present invention.
FIG. 2 provides a diagrammatic representation of an exemplary encoded video packet included within the bitstream provided to the video decoder of FIG. 1.
FIG. 3 is a generalized flow diagram of a preferred embodiment of a method for dynamic soft decoding of header information included within an encoded video packet.
FIGS. 4(a)-4(c) illustratively provide an example of the dynamic soft decoding of a packet header within the context of a simplified encoding system.
FIG. 4(d) is a code table containing three codewords referred to in the example represented by FIG. 4(a)-4(c).
FIG. 5 is a flow diagram representative of a preferred recursive routine disposed to implement the dynamic soft decoding procedure of the present invention.
FIG. 6 is a flow diagram representative of a preferred non-recursive routine disposed to implement the dynamic soft decoding procedure of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is more fully described with reference to FIGS. 1-6. FIG. 1 is a block diagram of a data transmission system which includes a video decoder 100 operative in the manner described herein. The video decoder 100 functions to decode packet header data within encoded video packets contained within a received bitstream 104 using the dynamic soft decoding technique of the present invention. As is indicated by FIG. 1, a multiplexed audio and video bitstream 105 generated by an audio/video encoder 106 is provided via transmission channel 108 to the video decoder 100. Due to the unreliable nature of the transmission channel 108, errors are introduced into the bitstream 105, which results in particular bits of the received bitstream 104 differing from corresponding bits in the transmitted bitstream 105. If the video decoder 100 were to blindly employ a hard-decision based decoding algorithm, these errors could have a disastrous effect on the visual quality of the resulting video. In order to avoid such a result, header data within each packet of the received bitstream 104 is decoded in the manner described below.
The video decoder 100 includes a demultiplexer 114 for separating encoded audio information from encoded video information included within the received bitstream 104. The encoded audio bitstream is provided to an audio decoder 118, while the encoded video bitstream is provided to a video bitstream decoder 120. Within the video bitstream decoder 120, the header of each packet of encoded video information is decoded in accordance with the present invention. Once the header of a given packet has been decoded, the resultant decoding parameters are used to decode the encoded video information within the macroblocks included in such packet. The decoded video data is then provided to a conventional inverse quantizer and inverse DCT module 124. When motion compensation is desired to be effected, a controller (not shown) sets switch 125 such that the output of the inverse DCT module 124 is modified by a motion compensation module 128 at difference block 130. The motion compensated video bitstream produced by the difference block 130 is fed back to the motion compensation module 128, and is provided to a standard postprocessor unit 134. When motion compensation is not desired, the controller sets switch 125 such that the output of the inverse DCT module 124 is provided directly to the postprocessor 134.
FIG. 2 provides a diagrammatic representation of an exemplary encoded video packet 140 included within the bitstream 104 provided to the video decoder 100. In an exemplary implementation, the encoder 106 places resynchronization markers 142a, 142b at approximately evenly spaced (in terms of bits) locations within the bitstream 104. Each resynchronization marker 142 defines the beginning of an individual video packet. Within encoder 106, successive macroblocks within a given video packet are encoded until the number of bits included in such packet exceeds a predetermined threshold. At this point a new video packet is created, and a resynchronization marker inserted, upon beginning encoding of the next macroblock. The video packet 140 includes header information 144, which consists of the resynchronization marker 142a and other packet control information 146 necessary for restarting the decoding process. The packet control information 146 is separated by a motion marker 148 from the remainder of the packet 140, which contains texture information 150 in the form of encoded macroblocks. The packet control information 146 includes an index 152 to the location of the first macroblock in the packet 140, absolute quantization information 154, and interleaved MCBPC parameter and absolute motion vector information 156. The quantization information 154 enables the differential decoding process to be restarted at the location of the first macroblock specified by the index 152. The partitioning of texture information 150 and MCBPC/motion information 156 allows such information to be used in concealing errors which would otherwise arise as a result of loss of any texture information 150.
As mentioned in the Background of the Invention, differential encoding is used to represent motion vectors associated with particular macroblocks. Upon resynchronization of the decoder 100 at each resynchronization marker 142, absolute values of motion vectors and of other information (e.g., quantization factors) associated with the immediately preceding video packet are extracted from the header information 144. Accordingly, if the video packet immediately preceding the packet 140 is lost, the values of the motion vectors and quantization factors for the first macroblock of the packet 140 can be obtained by finding the sum of the absolute values of these parameters for the immediately preceding packet and the differential values for these parameters associated with the first macroblock.
FIG. 3 is a generalized flow diagram of a preferred embodiment of a method for dynamic soft decoding of header information included within the encoded video packet 140. In an initial step 160 the locations of the resynchronization markers 142a and 142b, and of the motion marker 148, are identified. Next, in step 164 the length "L" (in bits) of the header information 144 is determined by comparing the locations of the motion marker 148 and the resynchronization marker 142a. The number "N" of codewords to be used in decoding the texture information 150 is then determined by examining the indices to the first MBs in the current and subsequent packets. In step 166, a sequence of "N" codewords which contains L bits and which corresponds to the decoded header information for the packet 140 is found in accordance with the dynamic soft decoding technique described below.
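A minimal sketch of steps 160-164 follows, assuming the decoder already knows the bit offsets of the markers and the first-macroblock indices of the current and following packets; the helper name and the assumption that one header codeword is decoded per macroblock are illustrative, not taken from the patent.

```python
def header_length_and_count(control_info_start_bit, motion_marker_bit,
                            first_mb_current, first_mb_next):
    """Derive the header bit length L and the codeword count N.

    `control_info_start_bit` is the bit offset just after resynchronization
    marker 142a; `motion_marker_bit` is the bit offset of motion marker 148.
    """
    L = motion_marker_bit - control_info_start_bit   # bits of packet control information
    N = first_mb_next - first_mb_current             # one codeword per macroblock (assumed)
    return L, N
```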
In accordance with the present invention, an optimal decoding of the header information 144 into "N" codewords spanning "L" bits is given by D*(N,L), where
D*(N,L) = D*(N-1, L - lH*(1)) + H*(1),    (1)
H*(1) = arg min i=1,...,K { D*(N-1, L - lH(i)) + Dist(MCBPC = i) },    (2)
wherein H*(1) is the first codeword within an optimum sequence of codewords defined by D*(N,L), lH*(1) is the number of bits included within H*(1), K is the number of codewords in the encoding system being utilized, and Dist(MCBPC = i) is a measure of the distance or distortion between the bitstream 104 and the most closely matching concatenation of codewords when the ith codeword is assumed to be the first codeword in the video packet 140. The distance or distortion metric Dist(.) can be a hard-decision based metric (e.g., Hamming distance), or can be a soft-decision based metric in cases where the bitstream decoder 120 is provided with some indication of the reliability of particular bits in the bitstream 104. Such an indication could be provided by, for example, a channel decoder having access to channel quality information. It follows that the optimal decoding result, D*(N,L), is the sequence of available codewords defining a bit pattern which minimizes a predefined distance metric when compared to the bitstream 104.
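Two illustrative realizations of the Dist(.) metric are sketched below: the hard-decision version is the Hamming distance, while the soft-decision version weights each mismatched bit by a reliability value assumed to come from a channel decoder. The weighting shown is one plausible choice, not a formula given in the patent.

```python
def dist_hard(codeword_bits, received_bits):
    """Hamming distance between a candidate codeword and the received bits."""
    return sum(c != r for c, r in zip(codeword_bits, received_bits))

def dist_soft(codeword_bits, received_bits, reliabilities):
    """Mismatches weighted by per-bit reliabilities (assumed to be supplied
    by a channel decoder with access to channel quality information)."""
    return sum(rel for c, r, rel in zip(codeword_bits, received_bits, reliabilities)
               if c != r)
```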
FIGS. 4(a)-4(c) illustratively provide an example of the dynamic soft decoding of packet header data in accordance with the present invention in the context of a simplified three-codeword encoding system. FIG. 4(d) is a code table containing the three codewords, {"A"="0", "B"="10", "C"="11"}, of the simplified encoding system. Referring to FIG. 4(a), consider a transmitted bitstream 105 which includes a 3-bit packet "011" corresponding to the message "AC" (L=3, N=2). Also suppose that this packet is corrupted by the transmission channel 108, such that the packet "001" is received at the decoder 100 (FIG. 4(b)). In this case a conventional hard-decision, look-up-based decoder will output "AA" and then detect an error upon encountering a single "1" at the end of the packet. As is indicated by FIG. 4(b), such an error is detected because a single "1" is not a codeword included within the code table of FIG. 4(d).
FIG. 4(c) illustratively represents the inventive dynamic soft decoding process and corresponding results in the context of the present example. In FIG. 4(c), the oval-shaped closed curves 167 indicate that a choice is to be made among the operations represented by the arrowed lines encircled thereby. The thin arrowed lines 168 designate possible decompositions of the original optimization problem D*(N,L). Optimal results returned by lower-level operations performed during the process of determining D*(N,L) are represented by thicker arrowed lines 169.
As is indicated by FIG. 4(c), applying equations (1) and (2) to the present example results in
D*(N,L) = D*(2,3) = D*(1, 3 - lH*(1)) + H*(1),    (3)
H*(1) = arg min i=1,2,3 { D*(1, 3 - lH(i)) + Dist(codeword = i) }.    (4)
If it is assumed that the distance metric Dist(.) in equation (4) is the Hamming distance (i.e. the total number of different bits in the decoded and received bit sequence), then for the received packet "001" equation (4) involves finding
Min {D*(1,2) + 0, D*(1,1) + 1, D*(1,1) + 2}. (5) where (5) corresponds to the optimization involved when the first codeword in the packet is assumed to be A, B and C respectively. With regard to the first term in (5), Dist(A) = 0 since the first received bit in the packet "001" is "0", and from FIG. 4(d) the value of "A" is also "0". The values for Dist(B) = 1 and Dist(C) = 2 may be obtained similarly. Since D*(1,1) + 1 is clearly less than D*(1,1) + 2, (5) is equivalent to finding
Min {D*(1,2) + 0, D*(1,1) + 1}, (6)
That is, the reduction of expression (5) into expression (6) indicates that if the first codeword in the packet is a 2-bit codeword it should be "B" rather than "C". The bitstream decoder 120 is operative to decompose the problem posed by (6) by finding D*(1,2) and D*(1,1), with respect to the last two bits in the received packet "001", using equations (1) and (2). By comparing the last two bits in the received packet "001" to the code table of FIG. 4(d), it is clear that D*(1,2)=1 (i.e. the Dist(.) metric is minimized and equal to "1" when the last two bits "01" in the received packet "001" are assumed to be the 2-bit codeword "C"="11"). Performing the same type of comparison yields D*(1,1)=1 (i.e. the last bit in the received packet "001" has a value of "1" and is a distance of "1" away from the only 1-bit codeword in the code table of FIG. 4(d), "A"="0"). Inserting D*(1,2) = 1 and D*(1,1) = 1 into (6) gives
D*(2,3) = Min {D*(1,2) + 0, D*(1,1) + 1} = Min {1, 2}, (7)
It follows that in the present example the optimal decoding result, D*(2,3), is equivalent to D*(1,2) + 0, which has been shown to minimize (7) and be of value "1". Since the term D*(1,2) + 0 arises in equations (6) and (7) under the assumption that the first codeword is "A", and since (7) is minimized when the last codeword in the received packet is "C", the optimal decoding result in the present example is "AC".
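The arithmetic of equations (3) through (7) can be checked in a few lines; the helper below is illustrative and uses the Hamming metric of the example.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

table = {"A": "0", "B": "10", "C": "11"}
received = "001"

# D*(1,2): one codeword covering the last two bits "01"
d_1_2 = min(hamming(bits, received[1:]) for bits in table.values()
            if len(bits) == 2)                 # = 1, achieved by codeword "C"
# D*(1,1): one codeword covering the last bit "1"
d_1_1 = min(hamming(bits, received[2:]) for bits in table.values()
            if len(bits) == 1)                 # = 1, achieved by codeword "A"
# D*(2,3) per equation (7): first codeword "A" (cost 0) or "B" (cost 1)
d_2_3 = min(d_1_2 + 0, d_1_1 + 1)              # = 1, giving the decoding "AC"
print(d_1_2, d_1_1, d_2_3)                     # 1 1 1
```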
FIG. 5 is a flow diagram representative of a preferred recursive routine 170 disposed to implement the dynamic soft decoding procedure of the present invention. In FIG. 5 it is assumed that all operations in FIG. 3 have been performed on the received packet header information except for step 166. That is, the procedure of
FIG. 5 is used to find the optimal decoding result D*(currN=N, currL=L) once the bit length L of the packet header information 144 and the number of codewords N to be included in the decoded header information have been determined as described above.
Referring to FIG. 5, in step 180 it is determined whether D*(currN, currL) has already been obtained when the recursive routine 170 was initially called (with a reduced currN and currL) in connection with solution of the original problem D*(N,L) corresponding to the entirety of the packet being decoded. If so, the saved result D*(currN, currL) is returned (step 184) and decoding of the header information within the next received video packet is commenced. If not, a parameter Best is set to an infinite value in a step 186 and a codeword Ci is selected from a table of available codewords (e.g., see FIG. 4(d)) in step 188.
In step 190, a difference Di is calculated by comparing the bits of the selected codeword Ci to the first li bits in the header information of the received packet in accordance with the applicable distance metric. This bit length li is recorded in step 194, and corresponds to the bit length of the selected codeword Ci. If the number N of codewords is one (step 198), then a temporary variable TmpDist is assigned the value of the difference Di (step 202). If the number N of codewords is not equal to one, then TmpDist is assigned the value of Di + D*(currN-1, currL-li) (step 204). In this case the recursive routine 170 is again called to evaluate D*(currN-1, currL-li) in the manner contemplated by FIG. 5 (step 205). Once this called instance of routine 170 has evaluated D*(currN-1, currL-li), which may involve making one or more further calls to the recursive routine 170, a corresponding value of TmpDist is returned. The originally calling instance of routine 170 then determines whether the returned value for TmpDist is less than the current value of Best (step 206). If so, the value of Best is set to the current value of TmpDist (step 208), and the index "i" of Ci is saved along with the bit length li of Ci (step 210).
As is indicated by FIG. 5, in step 214 it is determined whether the routine 170 has evaluated D*(N, L) using each of the K available codewords (C1, C2, ..., CK) as the first codeword in the decoded sequence. If so, a flag is set and the routine 170 is terminated upon returning a value of Best as the distortion value associated with the optimal sequence of N codewords (step 216). If all K codewords have not been used as Ci, then the value of the index i is incremented by one (step 218). Processing then continues at step 188 using the next codeword Ci.
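A hedged sketch in the spirit of the recursive routine 170 follows: previously solved sub-problems D*(currN, currL) are saved for reuse, each available codeword Ci is hypothesized as the first codeword of the remaining bits, and the smallest accumulated distortion (Best) is kept together with the corresponding sequence. Names and structure are illustrative, not the patent's own listing.

```python
def decode_packet(received, n_codewords, n_bits, code_table, dist):
    """Recursively evaluate D*(N, L) for one packet header, saving solved
    sub-problems so that each (currN, currL) pair is evaluated only once."""
    saved = {}

    def d_star(curr_n, curr_l):
        if (curr_n, curr_l) in saved:                 # steps 180/184: reuse saved result
            return saved[(curr_n, curr_l)]
        if curr_n == 0:                               # no codewords left to place
            return (0, []) if curr_l == 0 else (float("inf"), [])
        best = (float("inf"), [])                     # step 186: Best = infinity
        offset = len(received) - curr_l
        for name, bits in code_table.items():         # steps 188/218: hypothesize each Ci
            li = len(bits)                            # step 194: bit length li of Ci
            if li > curr_l:
                continue
            di = dist(bits, received[offset:offset + li])        # step 190: difference Di
            sub_dist, sub_seq = d_star(curr_n - 1, curr_l - li)  # step 205: recursive call
            tmp_dist = di + sub_dist                  # steps 202/204: TmpDist
            if tmp_dist < best[0]:                    # steps 206-210: keep the best Ci
                best = (tmp_dist, [name] + sub_seq)
        saved[(curr_n, curr_l)] = best
        return best                                   # step 216: Best for this sub-problem

    return d_star(n_codewords, n_bits)

# The example of FIGS. 4(a)-4(c): the corrupted packet "001" decodes to "AC".
print(decode_packet("001", 2, 3, {"A": "0", "B": "10", "C": "11"},
                    lambda a, b: sum(x != y for x, y in zip(a, b))))   # -> (1, ['A', 'C'])
```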
FIG. 6 is a flow diagram representative of a preferred non-recursive routine 250 disposed to implement the dynamic soft decoding procedure of the present invention. As in the case of FIG. 5, upon entering routine 250 it is assumed that all operations in FIG. 3 have been performed on the received packet header information except for step 166. The non-recursive routine 250 for determining D*(N, L) is executed using a memory stack having a plurality of stack elements. Each stack element includes four fields for holding corresponding values of the parameters (currDist, currN, currL, currS). As will be further described with reference to FIG. 6, the parameter currDist reflects an accumulated value of the applicable distance metric for a partially decoded sequence in which a particular codeword has been selected as the first codeword in the sequence. That is, the value of currDist is incremented in accordance with the applicable distance metric each time a new codeword is added to the partially decoded sequence. The parameter currN specifies the number of additional codewords remaining to be added to the partially decoded sequence, and the parameter currL reflects the remaining number of bits available to encode such additional currN codewords. In addition, the parameter currS consists of an aggregation of the N-currN codewords (consuming L-currL bits) which have been decoded as of the time when currN codewords remain to be decoded.
Since the routine 250 is non-recursive, it does not call itself (as does the recursive routine 170) to solve "intermediate" problems of the form D*(N-n, L-l). Rather, these intermediate problems are saved in the stack in a way facilitating evaluation of the expression of ultimate interest, D*(N, L). In this regard the stack is loaded in a "first-in, last-out" manner such that the (currDist, currN, currL, currS) parameters corresponding to the problem of ultimate interest, i.e., (0, N, L, φ) wherein φ denotes the empty string, are pushed into the stack first and popped from the stack last. This reflects an intention to decompose the problem of ultimate interest into a set of smaller, intermediate problems, each of which hypothesizes a different codeword as the first codeword in the decoded sequence.
As is indicated by FIG. 6, in an initialization step 254 the parameters (0, N, L, φ) are pushed into the stack and the parameter Best is assigned an infinite value. If the stack is determined not to be empty (step 258), then the parameters (currDist, currN, currL, currS) are popped from the stack (step 262) and are used as described below. If currN is not equal to zero (step 264), then the value of an index "i" is set to one (step 268) and a codeword Ci is selected from a table of available codewords (step 270) to be the first codeword in a potentially optimal decoded sequence.
A value for the parameter TmpDist is then determined by comparing the bits of the selected codeword Ci to the first li bits in the header information of the received packet in accordance with the applicable distance metric, where li is the bit length of the selected codeword Ci (step 274). This bit length li is also recorded in step 274. In step 275 it is determined whether the value of the expression currDist + TmpDist is less than the value of currDistE associated with any other element E=(currDistE, currNE, currLE, currSE) in the stack with currNE=currN-1 and currLE=currL-li. After appending Ci to currS (step 276), the parameters (currDist + TmpDist, currN-1, currL-li, currS) are then pushed into the stack (step 278). In a step 280 it is determined whether all K available codewords have been used as the initial codeword Ci in a potentially optimal decoded sequence (i.e., whether i is less than K). If not, the index i is incremented by one (step 282), and a new codeword Ci is selected (step 284). Steps 274 and 278 are then repeated for each available codeword Ci, i=1, 2, ..., K, at which point the routine 250 returns to step 258. If it is found that the stack is not empty (step 258) and if currN=0, then it is determined whether currDist is less than the current value of the parameter Best (step 294). If not, the routine 250 returns to step 258. If so, the value of the parameter Best is made equal to currDist, currS is saved as the current optimal decoding result BestS, and processing returns to step 258. If the stack is found to be empty at step 258, then the optimal codeword sequence BestS is returned together with the associated value D*(N,L) (represented by the parameter Best) of the applicable distance metric and the routine 250 terminates (step 262).
Although the above application has been described primarily in the context of a system in which the header information included within received video packets is decoded and then used to decode associated macroblock-based encoded video information, one skilled in the art can readily appreciate that the teachings of the present invention may be applied to the decoding of other packet formats. Thus the application is meant only to be limited by the scope of the appended claims.
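To tie the steps of FIG. 6 together, the following simplified sketch drives the same computation from an explicit last-in, first-out stack of (currDist, currN, currL, currS) hypotheses; the pruning comparison of step 275 is omitted for brevity, and the names are illustrative rather than taken from the patent.

```python
def decode_packet_stack(received, n_codewords, n_bits, code_table, dist):
    """Stack-driven (non-recursive) evaluation of D*(N, L) for one packet header."""
    stack = [(0, n_codewords, n_bits, [])]            # step 254: push (0, N, L, empty sequence)
    best, best_seq = float("inf"), None               # Best initialized to infinity
    while stack:                                      # step 258: loop until the stack empties
        curr_dist, curr_n, curr_l, curr_s = stack.pop()          # step 262: pop a hypothesis
        if curr_n == 0:                               # steps 264/294: completed hypothesis
            if curr_l == 0 and curr_dist < best:
                best, best_seq = curr_dist, curr_s    # save BestS
            continue
        offset = len(received) - curr_l
        for name, bits in code_table.items():         # steps 268-284: try each codeword Ci
            li = len(bits)
            if li > curr_l:
                continue
            tmp_dist = dist(bits, received[offset:offset + li])  # step 274: TmpDist
            stack.append((curr_dist + tmp_dist, curr_n - 1,
                          curr_l - li, curr_s + [name]))         # steps 276-278: push
    return best, best_seq                             # BestS and the associated D*(N, L)

# Agrees with the recursive sketch: the corrupted packet "001" decodes to "AC".
print(decode_packet_stack("001", 2, 3, {"A": "0", "B": "10", "C": "11"},
                          lambda a, b: sum(x != y for x, y in zip(a, b))))  # -> (1, ['A', 'C'])
```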

Claims

What is claimed is:
1. A method for decoding encoded parameter data included within a packet containing encoded video data, said method comprising the steps of: determining a bit length L of said encoded parameter data; comparing candidate sequences of codewords to said encoded parameter data in accordance with a distortion metric; and selecting an optimal sequence from said candidate sequences based upon predefined criteria related to said distortion metric.
2. The method of claim 1 wherein said optimal sequence collectively has a number of bits equivalent to said bit length L and is usable to decode said encoded video data.
3. The method of claim 1 wherein said step of determining said bit length L includes the step of calculating a number of bits between first and second markers included within said packet, and said step of comparing includes the step of selecting a first codeword hypothesis and determining a first conditionally optimal sequence of N-1 of said codewords associated with said first codeword hypothesis.
4. The method of claim 3 wherein said step of comparing includes the step of selecting a second codeword hypothesis and determining a second conditionally optimal sequence of N-1 of said codewords associated with said second codeword hypothesis.
5. The method of claim 4 wherein said step of comparing includes the steps of determining a first error associated with said first conditionally optimal sequence and a second error associated with said second conditionally optimal sequence, and comparing said first error to said second error.
6. A device for decoding encoded parameter data included within a packet containing encoded video data, said device comprising: means for determining a bit length L of said encoded parameter data; means for comparing candidate sequences of codewords to said encoded parameter data in accordance with a distortion metric; and means for selecting an optimal sequence from said candidate sequences based upon predefined criteria related to said distortion metric.
7. The device of claim 6 wherein said optimal sequence collectively has a number of bits equivalent to said bit length L and is usable to decode said encoded video data, and wherein said means for determining said bit length L includes means for calculating a number of bits between first and second markers included within said packet.
8. The device of claim 6 wherein said means for comparing includes means for selecting a first codeword hypothesis and for determining a first conditionally optimal sequence of N-1 of said codewords associated with said first codeword hypothesis.
9. The device of claim 8 wherein said means for comparing includes means for selecting a second codeword hypothesis and for determining a second conditionally optimal sequence of N-1 of said codewords associated with said second codeword hypothesis.
10. The device of claim 9 wherein said means for comparing includes: means for determining a first error associated with said first conditionally optimal sequence and a second error associated with said second conditionally optimal sequence, and means for comparing said first error to said second error.
11. A decoder for decoding a packet containing macroblocks of encoded video data wherein said packet includes encoded parameter data, said decoder comprising: means for determining a bit length L of said encoded parameter data; means for comparing candidate sequences of codewords to said encoded parameter data in accordance with a distortion metric; means for selecting an optimal sequence from said candidate sequences based upon predefined criteria related to said distortion metric; and a decoding unit for decoding said macroblocks of encoded video data using said optimal sequence.
12. The decoder of claim 11 wherein said optimal sequence collectively has a number of bits equivalent to said bit length L and is usable to decode said macroblocks of encoded video data, and wherein said means for determining said bit length L includes means for calculating a number of bits between first and second markers included within said packet.
13. The decoder of claim 11 wherein said means for comparing includes means for selecting a first codeword hypothesis and for determining a first conditionally optimal sequence of N-1 of said codewords associated with said first codeword hypothesis.
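As a toy illustration of the marker-based bit-length determination recited in claims 3, 7 and 12, the following Python fragment counts the bits lying between two markers in a packet bit string; the 16-bit marker pattern used here is a hypothetical placeholder rather than the marker of any particular coding standard.

MARKER = "0000000000000001"  # hypothetical resync-marker bit pattern

def bits_between_markers(packet_bits: str) -> int:
    """Return the number of bits between the end of the first marker and
    the start of the second marker in the packet bit string."""
    first = packet_bits.index(MARKER)                          # first marker
    second = packet_bits.index(MARKER, first + len(MARKER))    # second marker
    return second - (first + len(MARKER))

# Example: two markers enclosing a 7-bit parameter field gives L = 7.
pkt = MARKER + "1101000" + MARKER
print(bits_between_markers(pkt))  # 7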
PCT/US2000/000373 1999-01-07 2000-01-07 Method and device for robust decoding of header information in macroblock-based compressed video data WO2000041396A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP00906881A EP1142344A1 (en) 1999-01-07 2000-01-07 Method and device for robust decoding of header information in macroblock-based compressed video data
KR1020017008651A KR20010108077A (en) 1999-01-07 2000-01-07 Method and device for robust decoding of header information in macroblock-based compressed video data
JP2000593025A JP2002534920A (en) 1999-01-07 2000-01-07 Robust decoding method and apparatus for header information in macroblock-based compressed video data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/226,227 US6356661B1 (en) 1999-01-07 1999-01-07 Method and device for robust decoding of header information in macroblock-based compressed video data
US09/226,227 1999-01-07

Publications (1)

Publication Number Publication Date
WO2000041396A1 true WO2000041396A1 (en) 2000-07-13

Family

ID=22848076

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/000373 WO2000041396A1 (en) 1999-01-07 2000-01-07 Method and device for robust decoding of header information in macroblock-based compressed video data

Country Status (6)

Country Link
US (1) US6356661B1 (en)
EP (1) EP1142344A1 (en)
JP (1) JP2002534920A (en)
KR (1) KR20010108077A (en)
CN (1) CN1342371A (en)
WO (1) WO2000041396A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100454501B1 (en) * 2001-12-26 2004-10-28 브이케이 주식회사 Apparatus for prediction to code or decode image signal and method therefor

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3593883B2 (en) * 1998-05-15 2004-11-24 株式会社日立製作所 Video stream transmission / reception system
US7010737B2 (en) * 1999-02-12 2006-03-07 Sony Corporation Method and apparatus for error data recovery
US7812856B2 (en) 2000-10-26 2010-10-12 Front Row Technologies, Llc Providing multiple perspectives of a venue activity to electronic wireless hand held devices
US7630721B2 (en) 2000-06-27 2009-12-08 Ortiz & Associates Consulting, Llc Systems, methods and apparatuses for brokering data between wireless devices and data rendering devices
US20060007201A1 (en) * 2004-07-06 2006-01-12 Her-Ming Jong Image display controller with processing data protection
US9172952B2 (en) * 2012-06-25 2015-10-27 Cisco Technology, Inc. Method and system for analyzing video stream accuracy in a network environment
CN107667495B (en) * 2015-04-08 2021-03-12 瑞典爱立信有限公司 Method and apparatus for decoding a message
WO2016173922A1 (en) * 2015-04-30 2016-11-03 Telefonaktiebolaget Lm Ericsson (Publ) Decoding of messages

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5642437A (en) * 1992-02-22 1997-06-24 Texas Instruments Incorporated System decoder circuit with temporary bit storage and method of operation
US5729556A (en) * 1993-02-22 1998-03-17 Texas Instruments System decoder circuit with temporary bit storage and method of operation
US5668925A (en) 1995-06-01 1997-09-16 Martin Marietta Corporation Low data rate speech encoder with mixed excitation
US5724369A (en) * 1995-10-26 1998-03-03 Motorola Inc. Method and device for concealment and containment of errors in a macroblock-based video codec
US5778191A (en) * 1995-10-26 1998-07-07 Motorola, Inc. Method and device for error control of a macroblock-based video compression technique
US5767799A (en) 1995-12-05 1998-06-16 Mitsubishi Semiconductor America, Inc. Low power high speed MPEG video variable length decoder
US5867221A (en) * 1996-03-29 1999-02-02 Iterated Systems, Inc. Method and system for the fractal compression of data using an integrated circuit for discrete cosine transform compression/decompression
JP3823275B2 (en) 1996-06-10 2006-09-20 富士通株式会社 Video encoding device
US6141453A (en) * 1998-02-11 2000-10-31 Motorola, Inc. Method, device and digital camera for error control and region of interest localization of a wavelet based image compression system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5440572A (en) * 1993-09-20 1995-08-08 Kabushiki Kaisha Toshiba Digital signal decoding apparatus and a method thereof having a function of initializing a pass metric for at least one compression block
US5629958A (en) * 1994-07-08 1997-05-13 Zenith Electronics Corporation Data frame structure and synchronization system for digital television signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FORNEY G D: "THE VITERBI ALGORITHM", PROCEEDINGS OF THE IEEE, IEEE, New York, US, vol. 61, no. 3, March 1973 (1973-03-01), pages 268-278, XP000760425, ISSN: 0018-9219 *

Also Published As

Publication number Publication date
JP2002534920A (en) 2002-10-15
CN1342371A (en) 2002-03-27
EP1142344A1 (en) 2001-10-10
KR20010108077A (en) 2001-12-07
US6356661B1 (en) 2002-03-12

Similar Documents

Publication Publication Date Title
JP5007012B2 (en) Video encoding method
US7212576B2 (en) Picture encoding method and apparatus and picture decoding method and apparatus
JP4362259B2 (en) Video encoding method
US20050123056A1 (en) Encoding and decoding of redundant pictures
US10484688B2 (en) Method and apparatus for encoding processing blocks of a frame of a sequence of video frames using skip scheme
US6356661B1 (en) Method and device for robust decoding of header information in macroblock-based compressed video data
JP3756897B2 (en) Moving picture coding apparatus and moving picture coding method
WO2002019709A1 (en) Dual priority video transmission for mobile applications
JP4302093B2 (en) Moving picture coding apparatus and moving picture coding method
JP3756901B2 (en) Moving picture decoding apparatus and moving picture decoding method
JP3756902B2 (en) Moving picture decoding apparatus and moving picture decoding method
KR100557047B1 (en) Method for moving picture decoding
JP3756900B2 (en) Moving picture decoding apparatus and moving picture decoding method
JP4302094B2 (en) Moving picture decoding apparatus and moving picture decoding method
KR100312418B1 (en) Intra mode code selection method in video coder
JP3756898B2 (en) Moving picture coding apparatus and moving picture coding method
JP3756899B2 (en) Moving picture decoding apparatus and moving picture decoding method
WO2001015458A2 (en) Dual priority video transmission for mobile applications
JP2001251633A (en) Moving picture coder and moving picture coding method

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 00804653.0

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): CA CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1020017008651

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2000 593025

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2000906881

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2000906881

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020017008651

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 2000906881

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1020017008651

Country of ref document: KR