WO2006112643A1 - Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same - Google Patents


Info

Publication number
WO2006112643A1
WO2006112643A1 (PCT/KR2006/001420)
Authority
WO
WIPO (PCT)
Prior art keywords
slice
context model
given slice
given
data symbol
Prior art date
Application number
PCT/KR2006/001420
Other languages
French (fr)
Inventor
Sang-Chang Cha
Woo-Jin Han
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from KR1020050059369A external-priority patent/KR100703776B1/en
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to EP06757477A priority Critical patent/EP1878253A1/en
Publication of WO2006112643A1 publication Critical patent/WO2006112643A1/en

Classifications

    All within H04N19/00 (HELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; methods or arrangements for coding, decoding, compressing or decompressing digital video signals):
    • H04N19/615: transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]
    • H04N19/174: adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/30: hierarchical techniques, e.g. scalability
    • H04N19/53: multi-resolution motion estimation; hierarchical motion estimation
    • H04N19/577: motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/63: transform coding using sub-band based transform, e.g. wavelets
    • H04N19/91: entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • Apparatuses and methods consistent with the present invention relate to context-based adaptive arithmetic coding and decoding with improved coding efficiency and, more particularly, to context-based adaptive arithmetic coding and decoding methods and apparatuses that provide improved coding efficiency by initializing a context model for a given slice of an input video to a context model for a base layer slice at the same temporal position as the given slice for arithmetic coding and decoding.
  • a video encoder performs entropy coding to convert data symbols representing video input elements into bitstreams suitably compressed for transmission or storage.
  • the data symbols may include quantized transform coefficients, motion vectors, various headers, and the like.
  • Examples of the entropy coding include predictive coding, variable length coding, arithmetic coding, and so on. Particularly, arithmetic coding offers the highest compression efficiency.
  • context-based adaptive arithmetic coding utilizes local, spatial or temporal features.
  • a Joint Video Team (JVT) scalable video model utilizes context-based adaptive arithmetic coding, in which probability models are adaptively updated using the symbols to be coded.

Disclosure of Invention
  • the context-based adaptive arithmetic coding method requires an increased number of coded blocks and accumulation of information.
  • the conventional context-based adaptive arithmetic coding method has a drawback in that when a context model is intended to be initialized to a predefined probability model for each slice, unnecessary bits may be consumed to reach a predetermined coding efficiency after initialization.
  • The present invention provides video coding and decoding methods and apparatuses that improve coding efficiency and reduce error propagation by initializing a context model for a given slice to a context model for a base layer slice at the same temporal position as the given slice.
  • a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
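The reset/code/update cycle described above can be sketched with a toy count-based probability model. The class and names below are illustrative assumptions, not the patent's actual CABAC state representation, and the arithmetic coding engine itself is elided:

```python
import copy

class ContextModel:
    """Toy count-based probability model for one bin: p1() estimates P(bin = 1)."""
    def __init__(self, ones=1, zeros=1):
        self.ones, self.zeros = ones, zeros

    def p1(self):
        return self.ones / (self.ones + self.zeros)

    def update(self, bin_value):
        if bin_value == 1:
            self.ones += 1
        else:
            self.zeros += 1

def code_slice(bins, base_layer_model):
    # Reset: copy the base-layer slice's model so the base layer's statistics
    # seed the enhancement-layer slice without being modified by it.
    ctx = copy.deepcopy(base_layer_model)
    coded = []
    for b in bins:
        # Arithmetically code b with probability ctx.p1() (coder itself omitted).
        coded.append((b, ctx.p1()))
        # Update the model with the actually coded value.
        ctx.update(b)
    return coded, ctx

base = ContextModel(ones=8, zeros=2)   # base-layer slice saw mostly 1s
coded, ctx = code_slice([1, 1, 0], base)
```

Note that the first bin is already coded with the base layer's probability estimate (0.8 here) rather than a flat default, which is the efficiency gain the claim is after.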
  • a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
  • a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the given slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
  • a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
  • a video coding method including a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame having a multi-layered structure, the video coding method including: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream containing the arithmetically coded data symbol; and transmitting the bitstream.
  • a video decoding method including a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame having a multi-layered structure, the video decoding method including: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice according to the data; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
  • a method for coding a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream containing the arithmetically coded data symbol; and transmitting the bitstream.
  • a method for decoding a given slice in an enhancement layer frame of a video signal having a multi-layered structure including: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value according to the data; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
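A property worth noting in the coding and decoding methods above: because the decoder applies the same reset (a copy of the reference context model) and the same update rule as the encoder, both sides walk through identical model states. A minimal sketch of this synchronization, using an illustrative count-based model rather than the patent's actual state representation:

```python
import copy

class Ctx:
    """Toy count-based context model shared by the encoder and decoder sketches."""
    def __init__(self, ones=1, zeros=1):
        self.ones, self.zeros = ones, zeros

    def update(self, b):
        if b == 1:
            self.ones += 1
        else:
            self.zeros += 1

    def state(self):
        return (self.ones, self.zeros)

def model_states_seen(bins, initial):
    """Return the model state in effect for each bin, applying the same reset
    (a deep copy of the initial model) and the same update rule."""
    ctx = copy.deepcopy(initial)
    states = []
    for b in bins:
        states.append(ctx.state())   # state used when this bin is (de)coded
        ctx.update(b)
    return states

# The encoder codes bins, the decoder reproduces the same bins, so both
# traverse identical model states and remain synchronized.
encoder_states = model_states_seen([1, 0, 1, 1], Ctx(2, 3))
decoder_states = model_states_seen([1, 0, 1, 1], Ctx(2, 3))
```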
  • a video encoder for compressing a given slice in an enhancement layer frame having a multi-layered structure, the encoder including: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on a value of the arithmetically coded data symbol; a unit which generates a bitstream containing the arithmetically coded data symbol; and a unit which transmits the bitstream.
  • a video encoder for compressing a given slice in an enhancement layer frame having a multi-layered structure, the encoder including: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on the value of the arithmetically coded data symbol; a unit which generates a bitstream containing the arithmetically coded data symbol; and a unit which transmits the bitstream.
  • a video decoder for reconstructing a given slice in an enhancement layer frame having a multi-layered structure, the decoder including: a unit which parses a bitstream and extracts data about the given slice to be reconstructed; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value, according to the data; a unit which arithmetically decodes a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; a unit which updates the context model based on a value of the data symbol; a unit which dequantizes the data symbol to generate a transform coefficient; a unit which performs inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and a unit which adds the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructs the given slice.
  • FIG. 1 illustrates a context-based adaptive arithmetic coding method according to a first exemplary embodiment of the present invention
  • FIG. 5 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention
  • FIG. 6 is a flowchart illustrating a video coding method including a context-based adaptive arithmetic coding method according to an exemplary embodiment of the present invention
  • FIG. 7 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention
  • FIG. 8 is a block diagram of a video encoder according to an exemplary embodiment of the present invention.
  • FIG. 9 is a block diagram of a video decoder according to an exemplary embodiment of the present invention.
  • Context-based Adaptive Binary Arithmetic Coding achieves high compression performance by selecting a probability model for each symbol based on a symbol context, adapting probability estimates corresponding to the probability model based on local statistics and performing arithmetic coding on the symbol.
  • the coding process of the data symbol consists of at most four elementary steps: 1. Binarization; 2. Context modeling; 3. Arithmetic coding; and 4. Probability updating.
  • CABAC: Context-based Adaptive Binary Arithmetic Coding
  • A context model, which is a probability model for one or more bins of the binarized symbols and is chosen based on recently coded data symbol statistics, stores a probability for each bin to be '1' or '0'.
  • An arithmetic encoder codes each bin based on the chosen probability model. Each bin has only two probability sub-ranges, corresponding to the values '1' and '0', respectively.
  • The chosen probability model is updated using the actually coded values. That is to say, if the bin value is '1', the frequency count of 1's is incremented by one.
  • In CABAC, since context modeling is performed in units of slices, probability values of context models are initialized using fixed tables at the start of each slice.
  • VLC: variable length coding
  • The CABAC technique requires that a predetermined amount of information accumulate so that context models are constantly updated using the statistics of the recently coded data symbols.
  • Thus, initializing context models for each slice using predefined probability models may result in unnecessary consumption of bits, because coding performance is degraded immediately after initialization and recovers only as the number of coded blocks increases.
  • The present invention proposes an improved CABAC technique that reduces the drop in coding efficiency immediately after initialization of context models by using the statistical characteristics of the base layer slice, or of a slice coded temporally before the given slice, as an initial value of the context model for the given slice.
  • a context model for a given slice in an enhancement layer high-pass frame is initialized to a context model for a corresponding slice in a base layer high-pass frame at the same temporal position for multi-layered video coding.
  • a context model for an enhancement layer high-pass frame 111 can be initialized to a context model for a base layer high-pass frame 121 at the same temporal position.
  • the context models for enhancement layer low-pass frames 118 and 119 can also be initialized to context models for base layer low-pass frames 128 and 129 at the same temporal positions, respectively, thereby preventing degradation in coding efficiency during an initial stage of frame coding.
  • FIG. 2 illustrates a context-based adaptive arithmetic coding method according to a second exemplary embodiment of the present invention in which a context model for a previously coded frame is used as a context model for an enhancement layer frame having no corresponding base layer frame.
  • Context models for enhancement layer frames 111 through 113 having corresponding base layer frames at the same temporal positions are initialized to context models for their corresponding base layer frames 121 through 123, as described above with reference to FIG. 1.
  • context models for previously coded frames can be used as context models for enhancement layer frames 114 through 117 having no corresponding base layer frames.
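The fallback order implied by FIGS. 1 and 2 could be sketched as follows; the dictionary-based model representation and the function name are assumptions for illustration, not syntax from the patent:

```python
import copy

def pick_initial_model(t, base_models, prev_coded_models, default_model):
    """Choose the initial context model for the enhancement-layer frame at
    temporal position t."""
    if t in base_models:
        # FIG. 1 case: a base-layer frame exists at the same temporal position.
        return copy.deepcopy(base_models[t])
    if prev_coded_models:
        # FIG. 2 case: fall back to the most recently coded frame's model.
        return copy.deepcopy(prev_coded_models[-1])
    # Last resort: the predefined initial value.
    return copy.deepcopy(default_model)

base_models = {0: {'p1': 0.7}, 4: {'p1': 0.6}}
m_base = pick_initial_model(0, base_models, [], {'p1': 0.5})
m_prev = pick_initial_model(2, base_models, [{'p1': 0.65}], {'p1': 0.5})
m_default = pick_initial_model(2, base_models, [], {'p1': 0.5})
```

The deep copies matter: the enhancement layer's subsequent updates must not disturb the base layer's own model.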
  • a context model for a slice coded immediately before the given slice may be used as an initial value of a context model for the given high-pass frame slice.
  • the high-pass frames are coded in the order from the lowest level to the highest level consecutively using a context model for a slice coded immediately before the given slice as an initial value of a context model for the given slice.
  • the slice coded immediately before the given slice may indicate a corresponding slice of a neighboring high-pass frame in the same temporal level or a slice coded immediately before the given slice in the same high-pass frame.
  • the method of coding the given slice using the context model for the slice that has been coded immediately before the given slice may not provide high coding efficiency.
  • the present invention can provide for high coding efficiency by using statistical information on a slice in the lower level that is temporally closest to the given slice. Further, the method using the statistical information on a slice in the lower level that is temporally closest to the given slice can reduce error propagation compared to the methods of the first and second exemplary embodiments because an error occurring within a slice can propagate to only a slice at a higher level that uses the slice as a reference.
  • the context model for the slice that has been coded immediately before the given slice or the context model for the slice in the lower level that is temporally closest to the given slice may be selectively referred to.
  • probability models constituting a context model of a slice may be selectively referred to.
  • information about whether or not the respective probability models have been referred to may be inserted into a bitstream for transmission to a decoder part.
  • a context model that enhances the coding efficiency of a given slice most is selected for performing arithmetic coding of the given slice.
  • This procedure may consist of determining whether or not a slice (e.g., slice 113) is to be arithmetically coded using an empirically predefined initial value; determining whether or not a context model for a corresponding base layer slice, as indicated by an arrow labeled 131, is to be referred to; determining whether or not a context model of a slice coded immediately before the given slice 113, as indicated by an arrow labeled 132, is to be referred to; and determining whether or not a context model for a slice that is temporally closest to the given slice is to be referred to, as indicated by an arrow labeled 133.
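One way to realize this selection, sketched under the assumption that candidates can be ranked by the estimated ideal code length of the slice's bins (a real encoder might measure actual coded bits instead; the candidate labels are illustrative):

```python
import math

def estimated_bits(bins, p1):
    """Ideal code length (in bits) of a bin string under a fixed P(bin=1) = p1."""
    return sum(-math.log2(p1 if b == 1 else 1.0 - p1) for b in bins)

def select_context_model(bins, candidates):
    """candidates maps a label ('predefined', 'base_layer', ...) to its P(1).
    Returns the label minimizing the estimated code length; the encoder
    would signal the chosen label in the bitstream."""
    return min(candidates, key=lambda k: estimated_bits(bins, candidates[k]))

# A slice whose bins are mostly 1s is coded most cheaply by the candidate
# whose statistics match, here the base-layer model.
choice = select_context_model([1] * 9 + [0],
                              {'predefined': 0.5, 'base_layer': 0.9, 'prev_slice': 0.3})
```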
  • Probability models constituting one context model selected for initialization can be selectively used as described above with reference to FIGS. 2 and 3.
  • Data is inserted into a bitstream and transmitted to a decoder, the data including information about whether or not a predefined value has been used, information about whether or not a context model of a corresponding base layer slice has been used, and information about whether or not a context model for a slice coded temporally before the given slice has been used. If the given slice is arithmetically coded using the context model for a base layer slice or for a slice coded temporally before the given slice, the data may also include information about whether or not each of the probability models constituting that context model has been used as a reference model.
  • FIG. 4 is a flowchart illustrating a video coding method including a context-based adaptive coding method according to an exemplary embodiment of the present invention.
  • The video coding method includes subtracting a predicted image for a given slice to be compressed from the given slice to generate a residual signal (step S410), performing spatial transform on the residual signal to generate a transform coefficient (step S420), quantizing data symbols containing the transform coefficient and a motion vector obtained during generation of the predicted image (step S430), entropy coding the quantized data symbols (steps S440 through S470), and generating a bitstream for transmission to a decoder (steps S480 and S490).
  • The entropy coding process includes binarization (step S440), resetting of a context model (step S454 or S456), arithmetic coding (step S460), and updating of the context model (step S470).
  • For a data symbol that already has a binary value, the binarization step S440 may be skipped.
  • a data symbol having a non-binary value is converted or binarized into a binary value.
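As one concrete example of such a mapping, a simple unary binarization is sketched below. CABAC also uses other binarization schemes (truncated unary, Exp-Golomb, fixed-length), so this is illustrative only:

```python
def unary_binarize(value):
    """Unary binarization: map non-negative integer N to N ones and a final zero."""
    return [1] * value + [0]

def unary_debinarize(bins):
    """Inverse mapping: the symbol value is the count of ones before the first zero."""
    return bins.index(0)
```

For example, the value 3 maps to the bin string 1, 1, 1, 0, and each of those bins is then coded with its own context model.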
  • a context model for the slice is reset in steps S452 through S456.
  • the entropy coding is performed in units of blocks and a context model is reset in units of slices to ensure independence of slices.
  • the context model is reset for symbols of the first block in the slice.
  • context models corresponding thereto are adaptively updated.
  • A selected context model is reset by referring to a context model for a slice coded temporally before the given slice, as described above with reference to FIGS. 2 and 3.
  • Alternatively, only some of the probability models constituting that context model may be referred to.
  • In this case, the transmitted bitstream may contain information indicating whether each probability model has been referred to.
  • Examples of a slice that may be used to reset a context model for a given slice are shown in FIGS. 2 and 3.
  • A video coding method including the arithmetic coding method according to the second or third exemplary embodiment of the present invention, as shown in FIG. 2 or 3, may further include selecting one of the context models available for reference. Criteria for selecting one of the context models available for reference include coding efficiency, error propagation probability, and so on. In other words, the context model having the highest coding efficiency or the lowest error propagation probability may be selected from among the context model candidates.
  • In step S460, the binarized symbol is subjected to arithmetic coding according to a probability model that has the context model for the previously selected slice as an initial value.
  • FIG. 5 is a flowchart illustrating a video decoding method including a context- based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention.
  • a decoder parses a received bitstream in order to extract data for reconstructing a video frame in step S510.
  • the data may include information about a selected context model, for example, slice information of the selected context model when one of context models of a slice coded temporally before the given slice is selected for initialization of a context model of the given slice during arithmetic coding performed by an encoder.
  • A context model for the given slice is reset in steps S522 through S526.
  • The context model for the given slice is reset to a context model for the base layer slice in step S524.
  • The context model for the given slice is reset to a context model for a slice decoded temporally before the given slice in step S526.
  • In step S530, a bitstream corresponding to the slice is arithmetically decoded according to the context model.
  • In step S540, the context model is updated based on the actual value of the decoded data symbol.
  • The arithmetically decoded data symbol is converted, or debinarized, into a non-binary value in step S550.
  • In step S560, dequantization is performed on the debinarized data symbol to generate a transform coefficient and, in step S570, inverse spatial transform is performed on the transform coefficient to reconstruct a residual signal for the given slice.
  • In step S580, a predicted image for the given block reconstructed by motion compensation is added to the residual signal, thereby reconstructing the given slice.
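This final reconstruction step amounts to a clipped addition of prediction and residual. A minimal sketch, assuming 8-bit samples and list-of-lists images (both assumptions for illustration):

```python
def reconstruct_slice(residual, predicted, max_val=255):
    """Add the motion-compensated prediction to the decoded residual and clip
    each sample to the valid range (8-bit samples assumed here)."""
    return [[max(0, min(max_val, r + p)) for r, p in zip(res_row, pred_row)]
            for res_row, pred_row in zip(residual, predicted)]

recon = reconstruct_slice([[-5, 10], [300, -50]],
                          [[10, 20], [30, 40]])
```

Residuals may be negative and sums may overshoot the sample range, so the clip is what keeps reconstructed samples valid.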
  • FIG. 6 is a flowchart illustrating a video coding method including a context-based adaptive arithmetic coding method according to an exemplary embodiment of the present invention.
  • The video coding method includes subtracting a predicted image for a given slice from the given slice to generate a residual image (step S610), performing spatial transform on the residual image to generate a transform coefficient (step S620), quantizing the transform coefficient (step S630), entropy coding the quantized transform coefficient (steps S640 through S670), generating a bitstream (step S680), and transmitting the bitstream to a decoder (step S690).
  • entropy coding is performed in the following manner.
  • a context model for the given slice is reset to a context model for a corresponding base layer slice, a context model for a slice coded temporally before the given slice, or a predetermined initial value provided by a video encoder.
  • the video coding method may further comprise selecting one of a context model for a base layer slice corresponding to the given slice in an enhancement layer, a context model for a slice coded temporally before the given slice in the same enhancement layer, and a predetermined initial value provided by a video encoder.
  • the video coding method according to the illustrative embodiment may further comprise selecting a probability model to be used as a reference model.
  • A context model selected from among two or more context models is initialized in step S655, and the data symbol is then arithmetically coded in step S660.
  • In step S670, the context model is updated using the arithmetically coded data symbol value.
  • a bitstream generated through the above steps may contain information about a slice used in resetting a context model for the given slice, or information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
  • FIG. 7 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention.
  • A video decoder parses a bitstream in order to extract data about a given slice to be reconstructed in step S710.
  • The data about the given slice may include information about a slice used for initializing a context model for the given slice, information about the context model for the given slice, or information about whether or not each of the probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
  • A context model for the given slice is reset to either a context model for a corresponding base layer slice or a context model for a slice decoded temporally before the given slice, according to the information about an initial value of the context model extracted from the bitstream in step S725.
  • In step S730, a bitstream corresponding to the given slice is arithmetically decoded using the context model.
  • In step S740, the context model is updated based on the value of the arithmetically decoded data symbol.
  • In step S750, the arithmetically decoded value is converted, or debinarized, into a non-binary value.
  • In step S760, dequantization is performed on the debinarized value and a transform coefficient is generated.
  • The video decoder performs inverse spatial transform on the transform coefficient to reconstruct a residual image in step S770, and adds a predicted image reconstructed by motion compensation to the residual image in order to reconstruct the given slice in step S780.
  • FIG. 8 is a block diagram of a video encoder 800 according to an exemplary embodiment of the present invention.
  • The video encoder 800 includes a spatial transformer 840, a quantizer 850, an entropy coding unit 860, a motion estimator 810, and a motion compensator 820.
  • The motion estimator 810 performs motion estimation on a given frame among input video frames using a reference frame to obtain motion vectors.
  • A block matching algorithm is widely used for the motion estimation.
  • A given motion block is moved in units of pixels within a particular search area in the reference frame, and the displacement giving a minimum error is estimated as a motion vector.
  • Hierarchical variable size block matching (HVSBM) or simple fixed block size motion estimation may be used.
  • The motion estimator 810 transmits motion data, such as the motion vectors obtained as a result of motion estimation, a motion block size, and a reference frame number, to the entropy coding unit 860.
  • The motion compensator 820 performs motion compensation on the reference frame using the motion vectors calculated by the motion estimator 810 and generates a predicted frame for the given frame.
  • A subtractor 830 calculates a difference between the given frame and the predicted frame in order to remove temporal redundancy within the input video frame.
  • The spatial transformer 840 uses a spatial transform technique supporting spatial scalability to remove spatial redundancy from the frame in which temporal redundancy has been removed by the subtractor 830.
  • The spatial transform method may include a Discrete Cosine Transform (DCT) or wavelet transform. Spatially transformed values are referred to as transform coefficients.
  • The entropy coding unit 860 losslessly codes data symbols including the quantized transform coefficient obtained by the quantizer 850 and the motion data received from the motion estimator 810.
  • The entropy coding unit 860 includes a binarizer 861, a context model selector 862, an arithmetic encoder 863, and a context model updater 864.
  • The context model selector 862 selects either an initial value predefined as an initial value of a context model for a given slice or a context model for a slice coded temporally before the given slice. Information about the selected initial value of the context model is sent to a bitstream generator 870 and inserted into a bitstream for transmission. Meanwhile, when a method of referring to slices coded temporally before the given slice in order to initialize a context model for a given slice is predefined between an encoder part and a decoder part, the context model selector 862 may not be provided.
  • The context model updater 864 updates the context model based on the value of the arithmetically coded data symbol.
  • The video encoder 800 may further include a dequantizer and an inverse spatial transformer.
  • FIG. 9 is a block diagram of a video decoder 900 according to an exemplary embodiment of the present invention.
  • The bitstream parser 910 parses a bitstream received from an encoder to extract information needed for the entropy decoding unit 920 to decode the bitstream.
  • The entropy decoding unit 920 performs lossless decoding, which is the inverse operation of entropy coding, to extract motion data that are then fed to the motion compensator 950 and texture data that are then fed to the dequantizer 930.
  • The entropy decoding unit 920 includes a context model setter 921, an arithmetic decoder 922, a context model updater 923, and a debinarizer 924.
  • The context model setter 921 initializes a context model for a slice to be decoded according to the information extracted by the bitstream parser 910.
  • The information extracted by the bitstream parser 910 may contain information about a slice having a context model to be used as an initial value of a context model for a given slice and information about a probability model to be used as the initial value of the context model for the given slice.
  • Context models may be initialized independently of the type of the blocks in a slice.
  • The arithmetic decoder 922 performs context-based adaptive arithmetic decoding on a bitstream corresponding to data symbols of the given slice according to the context model set by the context model setter 921.
  • The dequantizer 930 dequantizes texture information received from the entropy decoding unit 920.
  • The dequantization is a process of obtaining quantized coefficients from matched quantization indices received from the encoder.
  • An adder 960 adds a motion-compensated image received from the motion compensator 950 to the residual image in order to reconstruct a video frame.
  • The context-based adaptive arithmetic coding and decoding methods and apparatuses according to the exemplary embodiments of the present invention provide at least the following advantages.
  • The video coding and decoding methods and apparatuses can improve overall coding efficiency and reduce error propagation by initializing a context model for a given slice to a context model for a base layer slice.
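The block matching described for the motion estimator 810 above can be sketched as a full search over a search window. The sketch below is illustrative only and not part of the original disclosure: the frame representation (lists of pixel rows) and function names are assumptions.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur_block, ref_frame, top, left, search_range):
    """Move the block pixel by pixel within the search area of the reference
    frame; the displacement giving the minimum SAD is taken as the motion vector."""
    n = len(cur_block)
    best_mv, best_cost = (0, 0), float('inf')
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref_frame) or x + n > len(ref_frame[0]):
                continue  # candidate block falls outside the reference frame
            candidate = [row[x:x + n] for row in ref_frame[y:y + n]]
            cost = sad(cur_block, candidate)
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

A real encoder would weigh the matching cost against the bits needed to code the motion vector, and HVSBM would additionally vary the block size; this sketch shows only the fixed-size search.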

Abstract

Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same are provided. The method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure includes steps of resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice, arithmetically coding a data symbol of the given slice using the reset context model, and updating the context model based on the value of the arithmetically coded data symbol.

Description

CONTEXT-BASED ADAPTIVE ARITHMETIC CODING AND
DECODING METHODS AND APPARATUSES WITH
IMPROVED CODING EFFICIENCY AND VIDEO CODING AND
DECODING METHODS AND APPARATUSES USING THE
SAME
Technical Field
[1] Apparatuses and methods consistent with the present invention relate to context- based adaptive arithmetic coding and decoding with improved coding efficiency, and more particularly, to context-based adaptive arithmetic coding and decoding methods and apparatuses providing improved coding efficiency by initializing a context model for a given slice of an input video to a context model for a base layer slice at the same temporal position as the given slice for arithmetic coding and decoding.
Background Art
[2] A video encoder performs entropy coding to convert data symbols representing video input elements into bitstreams suitably compressed for transmission or storage. The data symbols may include quantized transform coefficients, motion vectors, various headers, and the like. Examples of the entropy coding include predictive coding, variable length coding, arithmetic coding, and so on. Particularly, arithmetic coding offers the highest compression efficiency.
[3] Successful entropy coding depends upon accurate probability models of symbols.
In order to estimate a probability of symbols to be coded, context-based adaptive arithmetic coding utilizes local, spatial, or temporal features. A Joint Video Team (JVT) scalable video model utilizes context-based adaptive arithmetic coding in which probability models are adaptively updated using the symbols to be coded.
Disclosure of Invention
Technical Problem
[4] However, in order to provide adequate coding efficiency, the context-based adaptive arithmetic coding method requires an increased number of coded blocks and accumulation of information. Thus, the conventional context-based adaptive arithmetic coding method has a drawback in that, when a context model is initialized to a predefined probability model for each slice, unnecessary bits may be consumed before a predetermined coding efficiency is reached after initialization.
Technical Solution
[5] The present invention provides video coding and decoding methods and apparatuses that improve coding efficiency and reduce error propagation by initializing a context model for a given slice to a context model for a base layer slice at the same temporal position as the given slice.
[6] The above stated aspect as well as other aspects of the present invention will become clear to those skilled in the art upon review of the following description.
[7] According to an aspect of the present invention, there is provided a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
[8] According to another aspect of the present invention, there is provided a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
[9] According to still another aspect of the present invention, there is provided a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the given slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
[10] According to yet another aspect of the present invention, there is provided a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
[11] According to a further aspect of the present invention, there is provided a video coding method including a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame having a multi-layered structure, the video coding method including: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream containing the arithmetically coded data symbol; and transmitting the bitstream.
[12] According to yet a further aspect of the present invention, there is provided a video decoding method including a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame having a multi-layered structure, the video de coding method including: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice according to the data; arithmetically decoding a data symbol corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
[13] According to still yet another aspect of the present invention, there is provided a method for coding a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream containing the arithmetically coded data symbol; and transmitting the bitstream.
[14] According to still yet a further aspect of the present invention, there is provided a method for decoding a given slice in an enhancement layer frame of a video signal having a multi-layered structure, the method including: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value according to the data; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
[15] According to an aspect of the present invention, there is provided a video encoder for compressing a given slice in an enhancement layer frame having a multi-layered structure, the encoder including: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on a value of the arithmetically coded data symbol; a unit which generates a bitstream containing the arithmetically coded data symbol; and a unit which transmits the bitstream.
[16] According to another aspect of the present invention, there is provided a video decoder for reconstructing a given slice in an enhancement layer frame having a multi- layered structure, the decoder including: a unit which parses a bitstream and extracts data about the given slice to be reconstructed; a unit which resets a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice according to the data; a unit which arithmetically decodes a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; a unit which updates the context model based on a value of the data symbol; a unit which dequantizes the data symbol to generate a transform coefficient; a unit which performs inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and a unit which adds the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructs the given slice. 
[17] According to yet another aspect of the present invention, there is provided a video encoder for compressing a given slice in an enhancement layer frame having a multi-layered structure, the encoder including: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on the value of the arithmetically coded data symbol; a unit which generates a bitstream containing the arithmetically coded data symbol; and a unit which transmits the bitstream.
[18] According to still yet another aspect of the present invention, there is provided a video decoder for reconstructing a given slice in an enhancement layer frame having a multi-layered structure, the decoder including: a unit which parses a bitstream and extracts data about the given slice to be reconstructed; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value, according to the data; a unit which arithmetically decodes a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; a unit which updates the context model based on a value of the data symbol; a unit which dequantizes the data symbol to generate a transform coefficient; a unit which performs inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and a unit which adds the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructs the given slice.
Description of Drawings
[19] The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
[20] FIG. 1 illustrates a context-based adaptive arithmetic coding method according to a first exemplary embodiment of the present invention;
[21] FIG. 2 illustrates a context-based adaptive arithmetic coding method according to a second exemplary embodiment of the present invention;
[22] FIG. 3 illustrates a context-based adaptive arithmetic coding method according to a third exemplary embodiment of the present invention;
[23] FIG. 4 is a flowchart illustrating a video coding method including a context-based adaptive arithmetic coding method according to an exemplary embodiment of the present invention;
[24] FIG. 5 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention;
[25] FIG. 6 is a flowchart illustrating a video coding method including a context-based adaptive arithmetic coding method according to an exemplary embodiment of the present invention;
[26] FIG. 7 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention;
[27] FIG. 8 is a block diagram of a video encoder according to an exemplary embodiment of the present invention; and
[28] FIG. 9 is a block diagram of a video decoder according to an exemplary embodiment of the present invention.
Mode for Invention
[29] Aspects and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of exemplary embodiments and the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims. Like reference numerals refer to like elements throughout the specification.
[30] The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.
[31] Context-based Adaptive Binary Arithmetic Coding (CABAC) achieves high compression performance by selecting a probability model for each symbol based on a symbol context, adapting probability estimates corresponding to the probability model based on local statistics and performing arithmetic coding on the symbol. The coding process of the data symbol consists of at most four elementary steps: 1. Binarization; 2. Context modeling; 3. Arithmetic coding; and 4. Probability updating.
[32] 1. Binarization
[33] In CABAC, binarization uniquely maps a given non-binary valued symbol to a binary sequence, so that only binary decisions enter the coding process. Non-binary valued symbols, such as transform coefficients or motion vectors, are converted into binary codes prior to the actual arithmetic coding process. This process is similar to converting data symbols to variable length codes, except that the binary codes are further coded by an arithmetic encoder prior to transmission.
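CABAC in fact defines several binarization schemes (unary, truncated unary, Exp-Golomb, fixed-length). As a minimal sketch, not taken from the disclosure, plain unary binarization illustrates the mapping; the function names are assumed:

```python
def unary_binarize(value):
    """Unary binarization: a non-negative integer maps to `value` '1' bins
    followed by a terminating '0' bin."""
    return [1] * value + [0]

def unary_debinarize(bins):
    """Inverse mapping: count the '1' bins before the terminating '0'."""
    return bins.index(0)
```

Each bin produced this way is what the context modeling and arithmetic coding stages described next actually operate on.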
[34] For brevity, the present invention will now be discussed with details on CABAC set forth, but the invention is not limited thereto.
[35] The following elementary operations of context modeling, arithmetic coding, and probability updating are recursively performed on the respective bits of the binarized codes, i.e., bins.
[36] 2. Context modeling
[37] A context model, which is a probability model for one or more bins of binarized symbols and is chosen based on recently coded data symbol statistics, stores the probability for each bin to be '1' or '0'.
[38] 3. Arithmetic coding
[39] An arithmetic encoder codes each bin based on the chosen probability model. Each bin has only two probability sub-ranges, corresponding to the values '1' and '0', respectively.
[40] 4. Probability updating
[41] The chosen probability model is updated using the actually coded values. That is to say, if the bin value is '1', the frequency count of 1's is incremented by one.
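Steps 2 through 4 can be sketched together as a toy adaptive binary arithmetic coder. This sketch uses exact fractions and returns a single number inside the final interval; a real CABAC engine instead uses renormalized fixed-precision integer arithmetic and emits bits incrementally. The class and function names are illustrative assumptions:

```python
from fractions import Fraction

class ContextModel:
    """Adaptive probability model: frequency counts of '0' and '1' bins."""
    def __init__(self):
        self.counts = [1, 1]          # start from a uniform estimate

    def p_zero(self):
        return Fraction(self.counts[0], sum(self.counts))

    def update(self, bin_val):        # step 4: probability updating
        self.counts[bin_val] += 1

def arithmetic_encode(bins, ctx):
    """Shrink the unit interval around the coded bin sequence (step 3)."""
    low, width = Fraction(0), Fraction(1)
    for b in bins:
        split = width * ctx.p_zero()  # sub-range assigned to a '0' bin
        if b == 0:
            width = split
        else:
            low, width = low + split, width - split
        ctx.update(b)
    return low + width / 2            # any value inside the final interval works

def arithmetic_decode(code, n_bins, ctx):
    """Replay the same interval subdivisions to recover the bins."""
    low, width, out = Fraction(0), Fraction(1), []
    for _ in range(n_bins):
        split = width * ctx.p_zero()
        if code < low + split:
            b, width = 0, split
        else:
            b, low, width = 1, low + split, width - split
        out.append(b)
        ctx.update(b)
    return out
```

Because encoder and decoder start from identical context states and apply identical updates, the decoder reproduces the same interval subdivisions; this is precisely why the initial context state, the subject of this invention, must be agreed between the two sides.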
[42] In the above-described CABAC technique, since context modeling is performed in units of slices, probability values of context models are initialized using fixed tables at the start of each slice. To offer a higher coding efficiency than the conventional variable length coding (VLC) technique, the CABAC technique requires that a predetermined amount of information accumulate so that context models are constantly updated using the statistics of recently coded data symbols. Thus, initializing context models for each slice using predefined probability models may result in unnecessary consumption of bits until the performance degradation caused by initialization is offset as the number of coded blocks increases.
[43] In multi-layered video coding, data symbols of a given slice to be currently coded tend to have a statistical distribution similar to that of data symbols of the corresponding base layer slice. Thus, the present invention proposes an improved CABAC technique that reduces the loss in coding efficiency immediately after initialization by using the statistical characteristics of the base layer slice, or of a slice coded temporally before the given slice, as an initial value of a context model for the given slice.
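The proposed initialization amounts to copying the already adapted base layer context state instead of loading the fixed table. A minimal sketch follows; the dictionary representation of context models and the hypothetical syntax element names are assumptions for illustration:

```python
import copy

# Hypothetical syntax elements with uniform starting counts, standing in for
# the predefined initialization tables of conventional CABAC.
DEFAULT_INIT_TABLE = {'sig_coeff_flag': [1, 1], 'mvd_bin0': [1, 1]}

def reset_contexts_for_slice(base_layer_contexts):
    """Reset the context models of a given enhancement layer slice.

    If a base layer slice exists at the same temporal position, inherit its
    adapted frequency counts; otherwise fall back to the predefined table,
    as in conventional CABAC."""
    source = base_layer_contexts if base_layer_contexts is not None else DEFAULT_INIT_TABLE
    return copy.deepcopy(source)  # deep copy so later updates stay per-slice
```

The deep copy matters: the inherited counts seed the new slice, but subsequent probability updates must not feed back into the base layer's models.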
[44] Using the statistical characteristics of the base layer slice improves coding performance and reduces error propagation in a temporally filtered hierarchical structure, for example, a motion-compensated temporally filtered (MCTF) structure.
[45] FIG. 1 illustrates a context-based adaptive arithmetic coding method according to a first exemplary embodiment of the present invention.
[46] A context model for a given slice in an enhancement layer high-pass frame is initialized to a context model for a corresponding slice in a base layer high-pass frame at the same temporal position for multi-layered video coding. For example, a context model for an enhancement layer high-pass frame 111 can be initialized to a context model for a base layer high-pass frame 121 at the same temporal position.
[47] In a video coding method for implementing each layer of a multi-layered structure by MCTF, predetermined frames are referred to for the purpose of maintaining the structure implemented by MCTF. On this account, in a case where an error occurs in a given frame, the error may propagate to a temporally higher level. In the video coding method using MCTF, however, a robust system demonstrating a low error propagation rate can be realized by referring to a context model for a high-pass frame of a base layer for initialization of context models for low-pass frames of an enhancement layer.
[48] Meanwhile, when referring to context models for base layer low-pass frames, some of the probability models constituting the context models may be selectively referred to. In this instance, information about whether or not the respective probability models have been referred to may be inserted into a bitstream for transmission to a decoder part.
[49] In the present exemplary embodiment, the context models for enhancement layer low-pass frames 118 and 119 can also be initialized to context models for base layer low-pass frames 128 and 129 at the same temporal positions, respectively, thereby preventing degradation in coding efficiency during an initial stage of frame coding.
[50] When resolution levels of the base layer and the enhancement layer are different from each other, there may not be base layer frames at the same temporal positions as enhancement layer high-pass frames. Referring to FIG. 1, since the frame 114 may not have a corresponding base layer frame, the context model corresponding thereto is initialized using a predefined initialization value as in the conventional CABAC coding technique. In addition, the frame 114 may be initialized using a context model for a frame coded temporally before the frame 114, which will now be described with reference to FIG. 2.
[51] FIG. 2 illustrates a context-based adaptive arithmetic coding method according to a second exemplary embodiment of the present invention in which a context model for a previously coded frame is used as a context model for an enhancement layer frame having no corresponding base layer frame.
[52] Context models for enhancement layer frames 111 through 113 having corresponding base layer frames at the same temporal positions are initialized to context models for their corresponding base layer frames 121 through 123, as described above with reference to FIG. 1. On the other hand, context models for previously coded frames can be used as context models for enhancement layer frames 114 through 117 having no corresponding base layer frames.
[53] In the temporally filtered hierarchical structure, high-pass frames tend to have similar statistical characteristics to one another. Thus, a context model for a slice coded immediately before the given slice may be used as an initial value of a context model for the given high-pass frame slice. Here, the high-pass frames are coded in order from the lowest level to the highest level, consecutively using a context model for a slice coded immediately before the given slice as an initial value of a context model for the given slice. When a frame is divided into two or more slices, the slice coded immediately before the given slice may indicate a corresponding slice of a neighboring high-pass frame in the same temporal level or a slice coded immediately before the given slice in the same high-pass frame.
[54] Therefore, in a temporally filtered hierarchical structure as shown in FIG. 1, slices in a high-pass frame are coded in order from the lowest level to the highest level, consecutively using a context model for a slice coded immediately before the given slice as an initial value of a context model for the given slice. Arrows in FIGS. 1-3 indicate the direction in which a context model is referred to. In those exemplary embodiments, the context model for a slice coded temporally before the given slice is also used as an initial value of the context model for the given slice indicated by the arrow.
[55] When a sharp scene change is detected in the video inputs, the statistical characteristics of the slice that has been coded immediately before a given slice differ from those of the given slice. Thus, coding the given slice using the context model for the slice coded immediately before it may not provide high coding efficiency. In this regard, the present invention can provide high coding efficiency by using statistical information on a slice in the lower level that is temporally closest to the given slice. Further, the method using this statistical information can reduce error propagation compared to the methods of the first and second exemplary embodiments, because an error occurring within a slice can propagate only to a slice at a higher level that uses that slice as a reference.
[56] In another exemplary embodiment, the context model for the slice that has been coded immediately before the given slice or the context model for the slice in the lower level that is temporally closest to the given slice may be selectively referred to.
[57] In an alternative exemplary embodiment, only some of the probability models constituting a context model of a slice may be selectively referred to. In this case, as described above, information about whether or not the respective probability models have been referred to may be inserted into a bitstream for transmission to a decoder part.
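By way of illustration, this selective reference to individual probability models can be sketched in Python. This is an explanatory sketch, not part of the claimed method; the list representation of a context model and the name `use_flags` are assumptions made for the example:

```python
def init_context_selective(current_models, reference_models, use_flags):
    """Initialize a context model by copying only the flagged probability
    models from a reference context; the rest keep their current values."""
    return [ref if use else cur
            for cur, ref, use in zip(current_models, reference_models, use_flags)]
```

In such a scheme, the `use_flags` would be the per-model reference information inserted into the bitstream so that the decoder can mirror the encoder's choice.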
[58] FIG. 3 illustrates a context-based adaptive arithmetic coding method according to a third exemplary embodiment of the present invention.
[59] A context model for a given slice in an enhancement layer of every high-pass frame is initialized to one selected from among context models for a base layer slice at the same temporal position as the given slice and slices coded temporally before the given slice in the same enhancement layer. That is, for slices 114 through 117 without corresponding base layer slices at the same temporal position, the context model for the given slice is initialized to one of the context models for slices coded temporally before the given slice in the same enhancement layer. For slices 111 through 113 with corresponding base layer slices at the same temporal position, the context model for the given slice is initialized to a context model for the corresponding base layer slice or to one of the context models for slices coded temporally before the given slice.
[60] Among context models for slices having similar statistical characteristics, the context model that most enhances the coding efficiency of a given slice is selected for performing arithmetic coding of the given slice. This procedure may consist of determining whether or not a slice (e.g., slice 113) is to be arithmetically coded using an empirically predefined initial value, determining whether or not a context model for a corresponding base layer slice is to be referred to, as indicated by an arrow labeled 131, determining whether or not a context model for a slice coded immediately before the given slice 113 is to be referred to, as indicated by an arrow labeled 132, and determining whether or not a context model for a slice that is temporally closest to the given slice is to be referred to, as indicated by an arrow labeled 133.
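The selection among candidate context models can be illustrated by a sketch that scores each candidate by the ideal arithmetic-code length it would yield for a bin string. This is an explanatory sketch only; representing each candidate context by a single probability of a '0' bin is a simplification, and the names are illustrative:

```python
import math

def estimated_bits(bins, p_zero):
    """Ideal arithmetic-code length of a bin string under a binary model
    assigning probability p_zero to bin value 0."""
    return sum(-math.log2(p_zero if b == 0 else 1.0 - p_zero) for b in bins)

def pick_initial_context(bins, candidates):
    """candidates: mapping of candidate name -> p_zero.  Return the name
    of the candidate context that would code the bins most efficiently."""
    return min(candidates, key=lambda name: estimated_bits(bins, candidates[name]))
```

For bins that are mostly zeros, a candidate whose model already expects zeros (for example, one taken from a statistically similar base layer slice) wins the comparison.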
[61] Probability models constituting one context model selected for initialization can be selectively used as described above with reference to FIGS. 2 and 3.
[62] In the second and third exemplary embodiments, data is inserted into a bitstream and transmitted to a decoder, the data including information about whether or not a predefined value has been used, information about whether or not a context model for a corresponding base layer slice has been used, and information about whether or not a context model for a slice coded temporally before the given slice has been used. If the given slice is arithmetically coded using the context model for a base layer slice or for a slice coded temporally before the given slice, the data may include information about whether or not each of the probability models constituting that context model has been used as a reference model.
[63] FIG. 4 is a flowchart illustrating a video coding method including a context-based adaptive coding method according to an exemplary embodiment of the present invention.
[64] The video coding method includes subtracting a predicted image for a given slice to be compressed from the given slice to generate a residual signal (step S410), performing spatial transform on the residual signal to generate a transform coefficient (step S420), quantizing data symbols containing the transform coefficient and a motion vector obtained during generation of the predicted image (step S430), entropy coding the quantized data symbols (steps S440 through S470), and generating a bitstream for transmission to a decoder (steps S480 and S490).
[65] The entropy coding process includes binarization (step S440), resetting of a context model (step S454 or S456), arithmetic coding (step S460), and updating of the context model (step S470). However, when context-based adaptive arithmetic coding is used instead of CABAC, the binarization step S440 may be skipped.
[66] In the binarization step S440, a data symbol having a non-binary value is converted or binarized into a binary value.
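As an illustration of binarization, one simple scheme, unary binarization, maps a non-negative integer to a run of ones terminated by a zero. The scheme and function name are chosen for illustration; the patent does not prescribe a particular binarization:

```python
def unary_binarize(value):
    """Binarize a non-negative integer as `value` ones followed by a
    terminating zero, e.g. 3 -> [1, 1, 1, 0]."""
    return [1] * value + [0]
```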
[67] When the block currently being compressed is a first block in the slice (step S450), a context model for the slice is reset in steps S452 through S456. Entropy coding is performed in units of blocks, while a context model is reset in units of slices to ensure the independence of slices. In other words, the context model is reset for the symbols of the first block in the slice. As the number of coded blocks increases, the corresponding context models are adaptively updated. In the present exemplary embodiments, a selected context model is reset by referring to a context model for a slice coded temporally before the given slice, as described above with reference to FIGS. 2 and 3. Also, when one context model is used, only a part of the probability models of that context model may be referred to. In this case, a bitstream containing information about the reference of each probability model may be transmitted.
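The relationship between per-slice resetting and per-block coding described above can be sketched as follows. This is an explanatory sketch; representing a context model as a dictionary of frequency counts is an assumption made for the example:

```python
def code_slice(blocks, init_counts, encode_block):
    """Entropy-code the blocks of one slice.  The context (here, frequency
    counts) is reset once, at the first block of the slice, and then adapts
    from block to block as encode_block updates it in place."""
    counts = dict(init_counts)  # reset in units of slices
    return [encode_block(block, counts) for block in blocks]  # code in units of blocks
```

Because `code_slice` copies the initializer, the reference context (e.g., the one taken from a previously coded slice) is left intact for other slices to reuse.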
[68] Examples of a slice that may be used to reset a context model for a given slice are shown in FIGS. 2 and 3. A video coding method including the arithmetic coding method according to the second or third exemplary embodiment of the present invention, as shown in FIG. 2 or 3, may further include selecting one of the context models available for reference. Criteria for selecting one of the context models available for reference include coding efficiency, error propagation probability, and so on. In other words, the context model having the highest coding efficiency or the context model having the lowest error propagation probability may be selected from among the context model candidates.
[69] In step S460, the binarized symbol is subjected to arithmetic coding according to a probability model whose initial value is the context model for the previously selected slice.
[70] In step S470, the context model is updated based on the actual value of the binarized symbol. For example, if one bin of the data symbol has a value of '0', the frequency count of '0's is increased. Thus, the next time this model is selected, the probability of a '0' will be slightly higher.
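The frequency-count update described in this step can be sketched as follows. This is illustrative only; practical CABAC implementations use finite-state probability estimators rather than raw counts:

```python
def update_model(counts, bin_value):
    """After coding a bin, bump the frequency count of the observed value so
    that the model's probability estimate tracks the observed statistics."""
    counts[bin_value] += 1

def prob_of_zero(counts):
    """Current probability estimate for a '0' bin under the count model."""
    return counts[0] / (counts[0] + counts[1])
```

After observing a '0', `prob_of_zero` rises, which is exactly the behavior the paragraph describes.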
[71] FIG. 5 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention.
[72] A decoder parses a received bitstream in order to extract data for reconstructing a video frame in step S510. The data may include information about a selected context model, for example, slice information of the selected context model when one of context models of a slice coded temporally before the given slice is selected for initialization of a context model of the given slice during arithmetic coding performed by an encoder.
[73] When the given block is a first block in a slice (YES in step S520), a context model for the given slice is reset in steps S522 through S526. When there is a base layer slice having the same temporal position as the given slice (YES in step S522), the context model for the given slice is reset to a context model for the base layer slice in step S524. Conversely, when there is no base layer slice having the same temporal position as the given slice (NO in step S522), the context model for the given slice is reset to a context model for a slice decoded temporally before the given slice in step S526.
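The reset decision of steps S522 through S526 can be sketched as follows. This is an explanatory sketch; the dictionary representation of a context model is an assumption made for the example:

```python
def reset_context(has_base_layer_slice, base_ctx, prev_decoded_ctx):
    """Mirror the encoder's rule: use the base layer slice's context when one
    exists at the same temporal position (S524); otherwise fall back to a
    previously decoded slice's context (S526)."""
    source = base_ctx if has_base_layer_slice else prev_decoded_ctx
    return dict(source)  # copy, so later updates don't alter the reference
```

Returning a copy matters: the decoder keeps updating the slice's context model as blocks are decoded, and the reference context must stay untouched for other slices.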
[74] The context models for previously decoded slices that can be referred to are as described above with reference to FIGS. 2 and 3.
[75] In step S530, a bitstream corresponding to the slice is arithmetically decoded according to the context model. In step S540, the context model is updated based on the actual value of the decoded data symbol. When context-based adaptive binary arithmetic decoding is used, the arithmetically decoded data symbol is converted, or debinarized, into a non-binary value in step S550.
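For illustration, a debinarizer for a simple unary binarization (value n coded as n ones followed by a terminating zero) counts the ones before the zero. The scheme is chosen for illustration only; the patent does not prescribe a particular binarization:

```python
def unary_debinarize(bins):
    """Invert unary binarization: count the ones before the terminating zero."""
    value = 0
    for b in bins:
        if b == 0:
            return value
        value += 1
    raise ValueError("missing terminating zero")
```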
[76] In step S560, dequantization is performed on the debinarized data symbol to generate a transform coefficient and, in step S570, inverse spatial transform is performed on the transform coefficient to reconstruct a residual signal for the given slice. In step S580, a predicted image for the given block reconstructed by motion compensation is added to the residual signal, thereby reconstructing the given slice.
[77] FIG. 6 is a flowchart illustrating a video coding method including a context-based adaptive arithmetic coding method according to an exemplary embodiment of the present invention.
[78] The video coding method includes subtracting a predicted image for a given slice from the given slice to generate a residual image (step S610), performing spatial transform on the residual image to generate a transform coefficient (step S620), quantizing the transform coefficient (step S630), entropy coding the quantized transform coefficient (steps S640 through S670), generating a bitstream (step S680), and transmitting the bitstream to a decoder (step S690).
[79] In the illustrative embodiment, entropy coding is performed in the following manner. In a video coding method in which a CABAC-based context model is initialized for each slice, if the given block is a first block in a given slice (YES in step S650), a context model for the given slice is reset to a context model for a corresponding base layer slice, a context model for a slice coded temporally before the given slice, or a predetermined initial value provided by a video encoder. In other words, the video coding method according to the illustrative embodiment may further comprise selecting one of a context model for a base layer slice corresponding to the given slice in an enhancement layer, a context model for a slice coded temporally before the given slice in the same enhancement layer, and a predetermined initial value provided by a video encoder.
[80] Meanwhile, when only some of probability models constituting a context model of a slice are referred to, the video coding method according to the illustrative embodiment may further comprise selecting a probability model to be used as a reference model.
[81] A context model selected from among two or more context models is initialized in step S655, and arithmetic coding is then performed in step S660. In step S670, the context model is updated using the arithmetically coded data symbol value.
[82] A bitstream generated through the above steps may contain information about a slice used in resetting a context model for the given slice, or information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
[83] FIG. 7 is a flowchart illustrating a video decoding method including a context-based adaptive arithmetic decoding method according to an exemplary embodiment of the present invention.
[84] A video decoder parses a bitstream in order to extract data about a given slice to be reconstructed in step S710. The data about the given slice may include information about a slice used for initializing a context model for the given slice, information about the context model for the given slice, or information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
[85] When a currently decoded block is a first block in the given slice (YES in step S720), a context model for the given slice is reset to either a context model for a corresponding base layer slice or a context model for a slice decoded temporally before the given slice, according to the information about the initial value of the context model extracted from the bitstream in step S725.
[86] In step S730, a bitstream corresponding to the given slice is arithmetically decoded using the context model. In step S740, the context model is updated based on the value of the arithmetically decoded data symbol. In step S750, the arithmetically decoded value is converted, or debinarized, into a non-binary value. In step S760, dequantization is performed on the debinarized value and a transform coefficient is generated. However, when context-based adaptive binary arithmetic coding (CABAC) is not used, the debinarization step S750 may be skipped.
[87] The video decoder performs inverse spatial transform on the transform coefficient to reconstruct a residual image in step S770 and adds a predicted image reconstructed by motion compensation to the residual image in order to reconstruct the given slice in step S780.
[88] FIG. 8 is a block diagram of a video encoder 800 according to an exemplary embodiment of the present invention.
[89] The video encoder 800 includes a spatial transformer 840, a quantizer 850, an entropy coding unit 860, a motion estimator 810, and a motion compensator 820.
[90] The motion estimator 810 performs motion estimation on a given frame among input video frames using a reference frame to obtain motion vectors. A block matching algorithm is widely used for motion estimation. In detail, a given motion block is moved in units of pixels within a particular search area in the reference frame, and the displacement giving a minimum error is estimated as a motion vector. For motion estimation, hierarchical variable size block matching (HVSBM) may be used. However, in exemplary embodiments of the present invention, simple fixed-block-size motion estimation is used. The motion estimator 810 transmits motion data, such as motion vectors obtained as a result of motion estimation, a motion block size, and a reference frame number, to the entropy coding unit 860.
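The block matching described above can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD) over a search area. This is an explanatory sketch with illustrative names; practical encoders use faster search strategies and other error measures:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block_match(ref, block, top, left, search):
    """Slide `block` (at position top/left in the current frame) over all
    displacements within +/- search pixels in the reference frame and return
    the displacement with minimum SAD, i.e. the estimated motion vector."""
    n = len(block)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate window falls outside the reference frame
            cost = sad([row[x:x + n] for row in ref[y:y + n]], block)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```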
[91] The motion compensator 820 performs motion compensation on the reference frame using the motion vectors calculated by the motion estimator 810 and generates a predicted frame for the given frame.
[92] A subtractor 830 calculates a difference between the given frame and the predicted frame in order to remove temporal redundancy within the input video frame.
[93] The spatial transformer 840 uses a spatial transform technique supporting spatial scalability to remove spatial redundancy within the frame from which temporal redundancy has been removed by the subtractor 830. The spatial transform method may include the Discrete Cosine Transform (DCT) or the wavelet transform. Spatially transformed values are referred to as transform coefficients.
[94] The quantizer 850 applies quantization to the transform coefficients obtained by the spatial transformer 840. Quantization is the process of expressing transform coefficients, which take arbitrary real values, as discrete values, and matching the discrete values with indices according to a predetermined quantization table. The quantized result values are referred to as quantized coefficients.
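Uniform scalar quantization, one simple instance of the process described above, can be sketched as follows. This is illustrative; practical codecs use quantization tables and dead zones rather than plain rounding:

```python
def quantize(coefficients, step):
    """Map real-valued transform coefficients to integer indices by
    dividing by the quantization step and rounding."""
    return [round(c / step) for c in coefficients]

def dequantize(indices, step):
    """Recover approximate coefficient values from the indices."""
    return [i * step for i in indices]
```

Dequantizing the indices recovers only an approximation of the original coefficients; the difference is the quantization error that the step size trades against bitrate.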
[95] The entropy coding unit 860 losslessly codes data symbols including the quantized transform coefficient obtained by the quantizer 850 and the motion data received from the motion estimator 810. The entropy coding unit 860 includes a binarizer 861, a context model selector 862, an arithmetic encoder 863, and a context model updater 864.
[96] The binarizer 861 converts the data symbols into a binary value that is then sent to the context model selector 862. The binarizer 861 may be omitted when CABAC is not used.
[97] The context model selector 862 selects either a predefined initial value of a context model for a given slice or a context model for a slice coded temporally before the given slice. Information about the selected initial value of the context model is sent to a bitstream generator 870 and inserted into a bitstream for transmission. Meanwhile, when a method of referring to slices coded temporally before the given slice in order to initialize a context model for a given slice is predefined between the encoder part and the decoder part, the context model selector 862 may be omitted.
[98] The arithmetic encoder 863 performs context-based adaptive arithmetic coding on data symbols of a given block using the context model.
[99] The context model updater 864 updates the context model based on the value of the arithmetically coded data symbol.
[100] To support closed-loop coding in order to reduce a drifting error caused by a mismatch between an encoder and a decoder, the video encoder 800 may further include a dequantizer and an inverse spatial transformer.
[101] FIG. 9 is a block diagram of a video decoder 900 according to an exemplary embodiment of the present invention.
[102] The video decoder 900 includes a bitstream parser 910, an entropy decoding unit 920, a dequantizer 930, an inverse spatial transformer 940, and a motion compensator 950.
[103] The bitstream parser 910 parses a bitstream received from an encoder to extract information needed for the entropy decoding unit 920 to decode the bitstream.
[104] The entropy decoding unit 920 performs lossless decoding that is the inverse operation of entropy coding to extract motion data that are then fed to the motion compensator 950 and texture data that are then fed to the dequantizer 930. The entropy decoding unit 920 includes a context model setter 921, an arithmetic decoder 922, a context model updater 923, and a debinarizer 924.
[105] The context model setter 921 initializes a context model for a slice to be decoded according to the information extracted by the bitstream parser 910. The information extracted by the bitstream parser 910 may contain information about a slice having a context model to be used as an initial value of a context model for a given slice and information about a probability model to be used as the initial value of the context model for the given slice. In the exemplary embodiments of the present invention, context models independent of the type of block in a slice may be initialized.
[106] The arithmetic decoder 922 performs context-based adaptive arithmetic decoding on a bitstream corresponding to data symbols of the given slice according to the context model set by the context model setter 921.
[107] The context model updater 923 updates the given context model based on the value of the arithmetically decoded data symbol. When context-based adaptive binary arithmetic decoding is used, the debinarizer 924 converts the decoded binary values obtained by the arithmetic decoder 922 into non-binary values. That is, the debinarizer 924 performs the inverse of binarization.
[108] The dequantizer 930 dequantizes texture information received from the entropy decoding unit 920. The dequantization is a process of obtaining quantized coefficients from matched quantization indices received from the encoder.
[109] The inverse spatial transformer 940 performs inverse spatial transform on coefficients obtained after the dequantization to reconstruct a residual image in a spatial domain. The motion compensator 950 performs motion compensation on the previously reconstructed video frame using the motion data from the entropy decoding unit 920 and generates a motion-compensated frame.
[110] When the residual image reconstructed by the inverse spatial transformer 940 is generated using temporal prediction, an adder 960 adds a motion-compensated image received from the motion compensator 950 to the residual image in order to reconstruct a video frame.
[111] In FIGS. 8 and 9, the various components mean, but are not limited to, software or hardware components, such as Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs), which perform certain tasks. The components may advantageously be configured to reside on addressable storage media and configured to execute on one or more processors. The functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
[112] While the invention has been shown and described with reference to context models initialized in units of slices by way of several exemplary embodiments, it should be understood by those skilled in the art that the present invention is not limited to the above-mentioned exemplary embodiments, and context models can also be initialized in other units within the scope of the invention.
Industrial Applicability
[113] As described above, the context-based adaptive arithmetic coding and decoding methods and apparatuses according to the exemplary embodiments of the present invention provide at least the following advantages.
[114] First, the video coding and decoding methods and apparatuses can improve overall coding efficiency and reduce error propagation by initializing a context model for a given slice to a context model for a base layer slice.
[115] Second, the video coding and decoding methods and apparatuses also provide improved coding performance by initializing a context model for a given slice to one of context models for two or more previously coded slices.
[116] In concluding the detailed description, those skilled in the art will appreciate that many variations and modifications can be made to the exemplary embodiments without substantially departing from the principles of the present invention. Therefore, the disclosed exemplary embodiments of the invention are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

[1] A method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
[2] The method of claim 1, further comprising binarizing the data symbol, wherein the data symbol of the given slice is the binarized data symbol.
[3] The method of claim 1, wherein the resetting of the context model for the given slice comprises: if there is the base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to the context model for the base layer slice; and if there is no base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to a context model for a slice coded temporally before the given slice.
[4] The method of claim 3, wherein the context model for the slice coded temporally before the given slice is one selected among context models for at least two slices coded temporally before the given slice.
[5] The method of claim 3, wherein the context model for the slice coded temporally before the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the slice coded temporally before the given slice.
[6] The method of claim 1, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[7] The method of claim 1, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[8] A method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
[9] The method of claim 8, further comprising debinarizing the data symbol.
[10] The method of claim 8, wherein the resetting of the context model for the given slice comprises: if there is the base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to the context model for the base layer slice; and if there is no base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to a context model for a slice decoded temporally before the given slice.
[11] The method of claim 10, wherein the context model for the slice decoded temporally before the given slice is one selected among context models for at least two slices decoded temporally before the given slice.
[12] The method of claim 10, wherein the context model for the slice decoded temporally before the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the slice decoded temporally before the given slice.
[13] The method of claim 8, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[14] The method of claim 8, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[15] A method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the slice using the reset context model; and updating the context model based on a value of the arithmetically coded data symbol.
[16] The method of claim 15, further comprising binarizing the data symbol, wherein the data symbol of the given slice is the binarized data symbol.
[17] The method of claim 15, wherein the base layer slice at the same temporal position as the slice is a slice of a low-pass frame or high-pass frame in the base layer.
[18] The method of claim 15, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[19] The method of claim 15, further comprising determining whether the data symbol is a symbol for a first block among a plurality of blocks if the given slice comprises the plurality of blocks, wherein if the data symbol is not the symbol for the first block, the resetting does not occur and the arithmetic coding is performed using the updated context model.
[20] A method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; and updating the context model based on a value of the data symbol.
[21] The method of claim 20, further comprising debinarizing the data symbol, wherein the data symbol of the slice is a binarized data symbol.
[22] The method of claim 20, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[23] The method of claim 20, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[24] The method of claim 20, further comprising determining whether the bitstream comprises a data symbol for a first block among a plurality of blocks if the given slice comprises the plurality of blocks, wherein if the bitstream does not comprise the data symbol for the first block of the plurality of blocks, the resetting does not occur and the arithmetic decoding is performed using the updated context model.
[25] A video coding method comprising a method for performing context-based adaptive arithmetic coding on a given slice in an enhancement layer frame comprising a multi-layered structure, the video coding method comprising: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream comprising the arithmetically coded data symbol; and transmitting the bitstream.
[26] The method of claim 25, further comprising binarizing the data symbol, wherein the data symbol of the given slice is the binarized data symbol.
[27] The method of claim 25, wherein the resetting of the context model for the given slice comprises: if there is the base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to the context model for the base layer slice; and if there is no base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to a context model for a slice coded temporally before the given slice.
[28] The method of claim 27, wherein the context model for the slice coded temporally before the given slice is one selected among context models for at least two slices coded temporally before the given slice.
[29] The method of claim 27, wherein the context model for the slice coded temporally before the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the slice coded temporally before the given slice.
[30] The method of claim 27, wherein the bitstream comprises information about whether each of probability models constituting a context model for a slice coded temporally before the given slice has been used.
[31] The method of claim 25, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[32] The method of claim 25, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[33] The method of claim 25, wherein the bitstream comprises information about whether or not each of probability models constituting a context model for the base layer slice at the same temporal position as the given slice has been used as a reference model.
[34] A video decoding method comprising a method for performing context-based adaptive arithmetic decoding on a given slice in an enhancement layer frame comprising a multi-layered structure, the video decoding method comprising: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice according to the data; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient and reconstructing a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
[35] The method of claim 34, further comprising debinarizing the data symbol.
[36] The method of claim 34, wherein the resetting of the context model for the given slice comprises: if there is the base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to the context model for the base layer slice; and if there is no base layer slice at the same temporal position as the given slice, resetting the context model for the given slice to a context model for a slice decoded temporally before the given slice.
[37] The method of claim 36, wherein the data comprises information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
[38] The method of claim 34, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[39] The method of claim 34, wherein the data comprises information about whether or not each of probability models constituting a context model for the base layer slice at the same temporal position as the given slice has been used as a reference model.
[40] A method for coding a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: subtracting a predicted image for the given slice from the given slice and generating a residual image; performing spatial transform on the residual image and generating a transform coefficient; quantizing the transform coefficient; resetting a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; arithmetically coding a data symbol of the given slice using the reset context model; updating the context model based on a value of the arithmetically coded data symbol; generating a bitstream comprising the arithmetically coded data symbol; and transmitting the bitstream.
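The reset step of claim 40 — initializing the current slice's context model from the base-layer slice at the same temporal position, from a previously coded slice, or from a predetermined value — can be sketched as follows. This is an illustrative toy, not the patented codec: `ContextModel`, its count-based probability estimate, and `reset_context_model` are all hypothetical names and simplifications (a real CABAC-style coder uses state-machine probability models, not raw counts).

```python
# Illustrative sketch (assumed names, not the patent's implementation) of
# claim 40's context-model reset and per-symbol update.

from dataclasses import dataclass, field

@dataclass
class ContextModel:
    # One adaptive probability estimate per binary context:
    # counts[ctx] = [count_of_0, count_of_1], with a +1 prior.
    counts: dict = field(default_factory=dict)

    def prob_of_one(self, ctx):
        c0, c1 = self.counts.get(ctx, [1, 1])
        return c1 / (c0 + c1)

    def update(self, ctx, bit):
        # Claim step: update the model based on the value of each coded symbol.
        c = self.counts.setdefault(ctx, [1, 1])
        c[bit] += 1

    def copy(self):
        return ContextModel({k: v[:] for k, v in self.counts.items()})

def reset_context_model(base_layer_model, previous_slice_model):
    """Pick the starting context model for the current enhancement-layer slice."""
    if base_layer_model is not None:      # base-layer slice at same temporal position
        return base_layer_model.copy()
    if previous_slice_model is not None:  # slice coded temporally before this one
        return previous_slice_model.copy()
    return ContextModel()                 # predetermined (uniform) initial value

def code_slice(bits, ctx_of, model):
    # For each binarized symbol: look up its context, read the probability the
    # arithmetic coder would use, then update the model with the symbol value.
    coded = []
    for i, bit in enumerate(bits):
        ctx = ctx_of(i)
        coded.append((bit, model.prob_of_one(ctx)))
        model.update(ctx, bit)
    return coded, model
```

The `copy()` in the reset matters: the enhancement-layer slice adapts its own statistics from the borrowed starting point without disturbing the base-layer model it was seeded from.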
[41] The method of claim 40, further comprising binarizing the data symbol, wherein the data symbol of the given slice is the binarized data symbol.
[42] The method of claim 40, wherein the base layer slice at the same temporal position as the given slice is a slice of a low-pass frame or high-pass frame in the base layer.
[43] The method of claim 40, wherein the context model for the base layer slice at the same temporal position as the given slice is a context model selectively comprising at least one among a plurality of probability models contained in the context model for the base layer slice.
[44] The method of claim 40, wherein the bitstream comprises information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
[45] A method for decoding a given slice in an enhancement layer frame of a video signal comprising a multi-layered structure, the method comprising: parsing a bitstream and extracting data about the given slice to be reconstructed; resetting a context model for the slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value according to the data; arithmetically decoding a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; updating the context model based on a value of the data symbol; dequantizing the data symbol to generate a transform coefficient; performing inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and adding the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructing the given slice.
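Claim 45 mirrors claim 40 for a reason that a self-contained toy makes concrete: the decoder must apply the same context-model reset and the same per-symbol update as the encoder, or the two sides' probability estimates diverge and arithmetic decoding fails. All names below are hypothetical and the count-based model is a simplification, not the patented codec.

```python
# Self-contained toy (assumed names) showing encoder/decoder probability
# synchronization: both sides seed their model from the same "reset" source
# (base-layer slice, previously processed slice, or a default) and update it
# identically after every symbol, so their probability traces match exactly.

def make_model(seed_counts=None):
    # seed_counts plays the role of the reset source's model state;
    # [1, 1] is the "predetermined value" fallback (a uniform prior).
    return list(seed_counts) if seed_counts else [1, 1]

def p_one(model):
    return model[1] / (model[0] + model[1])

def encoder_pass(bits, seed):
    model = make_model(seed)
    trace = []
    for b in bits:
        trace.append(p_one(model))  # probability the arithmetic coder uses
        model[b] += 1               # update after coding, as in claim 40
    return trace

def decoder_pass(bits, seed):
    model = make_model(seed)
    trace = []
    for b in bits:
        trace.append(p_one(model))  # must equal the encoder's probability
        model[b] += 1               # update after decoding, as in claim 45
    return trace
```

Given the same seed and symbol sequence, `encoder_pass` and `decoder_pass` produce identical probability traces; this lockstep is what the symmetric reset-and-update wording of the two claims guarantees.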
[46] The method of claim 45, further comprising debinarizing the data symbol, wherein the data symbol of the given slice is a binarized data symbol.
[47] The method of claim 45, wherein the base layer slice at the same temporal position as the given slice is a slice in a base layer low-pass frame.
[48] The method of claim 45, wherein the data comprises information about whether or not each of probability models constituting a context model for a slice coded temporally before the given slice has been used as a reference model.
[49] A video encoder for compressing a given slice in an enhancement layer frame comprising a multi-layered structure, the encoder comprising: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on a value of the arithmetically coded data symbol; a unit which generates a bitstream comprising the arithmetically coded data symbol; and a unit which transmits the bitstream.
[50] A video decoder for reconstructing a given slice in an enhancement layer frame comprising a multi-layered structure, the decoder comprising: a unit which parses a bitstream and extracts data about the given slice to be reconstructed; a unit which resets a context model for the given slice to a context model for a base layer slice at the same temporal position as the given slice according to the data; a unit which arithmetically decodes a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; a unit which updates the context model based on a value of the data symbol; a unit which dequantizes the data symbol to generate a transform coefficient; a unit which performs inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and a unit which adds the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructs the given slice.
[51] A video encoder for compressing a given slice in an enhancement layer frame comprising a multi-layered structure, the encoder comprising: a unit which subtracts a predicted image for the given slice from the given slice and generates a residual image; a unit which performs spatial transform on the residual image and generates a transform coefficient; a unit which quantizes the transform coefficient; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice coded temporally before the given slice, and a predetermined value; a unit which arithmetically codes a data symbol of the given slice using the reset context model; a unit which updates the context model based on a value of the arithmetically coded data symbol; a unit which generates a bitstream comprising the arithmetically coded data symbol; and a unit which transmits the bitstream.
[52] A video decoder for reconstructing a given slice in an enhancement layer frame comprising a multi-layered structure, the decoder comprising: a unit which parses a bitstream and extracts data about the given slice to be reconstructed; a unit which resets a context model for the given slice to at least one of a context model for a base layer slice at the same temporal position as the given slice, a context model for a slice decoded temporally before the given slice, and a predetermined value, according to the data; a unit which arithmetically decodes a bitstream corresponding to the given slice using the reset context model to generate a data symbol of the given slice; a unit which updates the context model based on a value of the data symbol; a unit which dequantizes the data symbol to generate a transform coefficient; a unit which performs inverse spatial transform on the transform coefficient to reconstruct a residual image obtained by subtracting a predicted image from the given slice; and a unit which adds the predicted image reconstructed by motion compensation to the reconstructed residual image and reconstructs the given slice.
[53] A computer-readable recording medium having a program for implementing the method of claim 1.
[54] A computer-readable recording medium having a program for implementing the method of claim 8.
[55] A computer-readable recording medium having a program for implementing the method of claim 15.
[56] A computer-readable recording medium having a program for implementing the method of claim 20.
[57] A computer-readable recording medium having a program for implementing the method of claim 25.
[58] A computer-readable recording medium having a program for implementing the method of claim 34.
[59] A computer-readable recording medium having a program for implementing the method of claim 40.
[60] A computer-readable recording medium having a program for implementing the method of claim 45.
PCT/KR2006/001420 2005-04-19 2006-04-18 Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same WO2006112643A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06757477A EP1878253A1 (en) 2005-04-19 2006-04-18 Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US67254805P 2005-04-19 2005-04-19
US60/672,548 2005-04-19
KR10-2005-0059369 2005-07-01
KR1020050059369A KR100703776B1 (en) 2005-04-19 2005-07-01 Method and apparatus of context-based adaptive arithmetic coding and decoding with improved coding efficiency, and method and apparatus for video coding and decoding including the same

Publications (1)

Publication Number Publication Date
WO2006112643A1 (en)

Family

ID=37115329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/001420 WO2006112643A1 (en) 2005-04-19 2006-04-18 Context-based adaptive arithmetic coding and decoding methods and apparatuses with improved coding efficiency and video coding and decoding methods and apparatuses using the same

Country Status (2)

Country Link
EP (1) EP1878253A1 (en)
WO (1) WO2006112643A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10218541A1 (en) * 2001-09-14 2003-04-24 Siemens Ag Context-adaptive binary arithmetic video coding, e.g. for prediction error matrix spectral coefficients, uses specifically matched context sets based on previously encoded level values
JP2003319391A (en) * 2002-04-26 2003-11-07 Sony Corp Encoding apparatus and method, decoding apparatus and method, recording medium, and program
JP2004135251A (en) * 2002-10-10 2004-04-30 Sony Corp Method of encoding image information, and method of decoding the image information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARPE D. ET AL.: "Context-based adaptive binary arithmetic coding in JVT/H.26L", IMAGE PROCESSING. 2002. PROCEEDINGS. 2002 INTERNATIONAL CONFERENCE, vol. 2, 22 September 2002 (2002-09-22) - 25 September 2002 (2002-09-25), pages II-513 - II-516, XP010608021 *
MARPE D. ET AL.: "Video compression using context-based adaptive arithmetic coding", IMAGE PROCESSING, 2001. PROCEEDINGS. 2001 INTERNATIONAL CONFERENCE, vol. 3, 7 October 2001 (2001-10-07) - 10 October 2001 (2001-10-10), pages 558 - 561, XP001110199 *
WEN-HSIAO P. ET AL.: "Context-based binary arithmetic coding for fine granularity scalability", SIGNAL PROCESSING AND ITS APPLICATIONS, 2003. PROCEEDINGS. SEVENTH INTERNATIONAL SYMPOSIUM, vol. 1, 1 July 2003 (2003-07-01) - 4 July 2003 (2003-07-04), pages 105 - 108, XP010653140 *

Also Published As

Publication number Publication date
EP1878253A1 (en) 2008-01-16


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 200680020062.2; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 2006757477; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWP Wipo information: published in national office (Ref document number: 2006757477; Country of ref document: EP)