CN107465930B - Method and apparatus for encoding video, and computer-readable storage medium - Google Patents

Method and apparatus for encoding video, and computer-readable storage medium

Info

Publication number
CN107465930B
CN107465930B (application number CN201710854232.7A)
Authority
CN
China
Prior art keywords
previous
current
transform coefficient
binarization parameter
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710854232.7A
Other languages
Chinese (zh)
Other versions
CN107465930A (en)
Inventor
金赞烈
金宰贤
朴正辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CN107465930A publication Critical patent/CN107465930A/en
Application granted granted Critical
Publication of CN107465930B publication Critical patent/CN107465930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006Conversion to or from arithmetic code
    • H03M7/4012Binary arithmetic codes
    • H03M7/4018Context adapative binary arithmetic codes [CABAC]
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/60General implementation details not specific to a particular type of compression
    • H03M7/6035Handling of unkown probabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115Selection of the code volume for a coding unit prior to coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/18Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4031Fixed length to variable length coding
    • H03M7/4037Prefix coding
    • H03M7/4043Adaptive prefix coding
    • H03M7/4068Parameterized codes
    • H03M7/4075Golomb codes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Error Detection And Correction (AREA)

Abstract

A method and apparatus for encoding video and a computer-readable storage medium are provided. Syntax elements indicating transform coefficient levels are binarized by using parameters. A parameter is updated or maintained based on the result of comparing the size of the previous transform coefficient with a predetermined critical value obtained based on the previous parameter used in inverse binarization of the previous transform coefficient level syntax element. The predetermined critical value is set to have a value proportional to the previous parameter, and when the previous parameter is updated, the updated parameter has a gradually increasing value compared to the previous parameter.

Description

Method and apparatus for encoding video, and computer-readable storage medium
The present application is a divisional application of the invention patent application with an application date of April 15, 2013, application number 201380031658.2, and title "Parameter updating method for entropy encoding and entropy decoding of transform coefficient levels, and entropy encoding device and entropy decoding device of transform coefficient levels using the same".
Technical Field
The present application relates to video encoding and decoding, and more particularly, to a method and apparatus for updating parameters used in entropy encoding and entropy decoding of size information of transform coefficients.
Background
An image is divided into a plurality of blocks having a predetermined size according to an image compression method, such as MPEG-1, MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), and then residual data of the blocks is obtained through inter prediction or intra prediction. The residual data is compressed through transformation, quantization, scanning, run-length coding, and entropy coding. In entropy coding, a syntax element such as a transform coefficient or a motion vector is entropy encoded to output a bitstream. On the decoder side, syntax elements are extracted from the bitstream, and decoding is performed based on the extracted syntax elements.
Disclosure of Invention
Technical problem
The present invention provides a method of updating parameters, by which parameters used in entropy encoding and entropy decoding of transform coefficients are gradually changed while preventing abrupt changes of the parameters.
The present invention also provides a method of updating parameters used in binarization of syntax elements such as transform coefficient levels by using a binarization method such as the Golomb-Rice method or a concatenated code method.
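For reference, the sketch below shows a minimal (non-truncated) Golomb-Rice binarizer of the kind mentioned above; the function and variable names are illustrative assumptions rather than terms taken from the patent.

```cpp
#include <string>

// Minimal sketch of Golomb-Rice binarization with parameter k (assumption:
// the non-truncated form). The value is split into a quotient (value >> k),
// written as a unary prefix of '1's terminated by '0', followed by the k
// least significant bits of the value as a fixed-length suffix.
std::string golombRiceBinarize(unsigned value, unsigned k) {
    std::string bits(value >> k, '1');            // unary prefix
    bits += '0';                                  // prefix terminator
    for (int b = static_cast<int>(k) - 1; b >= 0; --b)
        bits += ((value >> b) & 1u) ? '1' : '0';  // k-bit suffix, MSB first
    return bits;
}
```

For example, with k = 1 the value 5 maps to the prefix "110" followed by the suffix "1". A larger k shortens the prefix for large values at the cost of a longer suffix, which is why adapting the parameter to the magnitude of previously coded coefficients matters.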
Solution scheme
According to an embodiment of the present invention, there is provided a parameter updating method of gradually updating a parameter used in binarization for a transform coefficient level.
Advantageous effects
According to an embodiment of the present invention, by gradually changing parameters used in entropy encoding of level information of transform coefficients, the amount of bits generated during encoding can be reduced and the gain of an image can be increased.
Drawings
Fig. 1 is a block diagram of an apparatus for encoding video according to an embodiment of the present invention;
fig. 2 is a block diagram of an apparatus for decoding video according to an embodiment of the present invention;
fig. 3 is a diagram for describing a concept of a coding unit according to an embodiment of the present invention;
FIG. 4 is a block diagram of a video encoder based on coding units having a hierarchical structure according to an embodiment of the present invention;
FIG. 5 is a block diagram of a video decoder based on coding units having a hierarchical structure according to an embodiment of the present invention;
fig. 6 is a diagram illustrating deeper coding units according to depths, and partitions, according to an embodiment of the present invention;
FIG. 7 is a diagram for describing a relationship between a coding unit and a transform unit according to an embodiment of the present invention;
fig. 8 is a diagram for describing encoding information of a coding unit corresponding to a coded depth according to an embodiment of the present invention;
FIG. 9 is a diagram of a deeper coding unit according to depth according to an embodiment of the present invention;
fig. 10 to 12 are diagrams for describing a relationship between a coding unit, a prediction unit, and a frequency transform unit according to an embodiment of the present invention;
fig. 13 is a diagram for describing a relationship among a coding unit, a prediction unit, and a transform unit according to the coding mode information of table 1;
fig. 14 is a flowchart illustrating an operation of entropy-encoding and entropy-decoding transform coefficient information included in a transform unit according to an embodiment of the present invention;
FIG. 15 illustrates a transform unit being entropy encoded according to an embodiment of the present invention;
FIG. 16 illustrates a significance map corresponding to the transform unit of FIG. 15, according to an embodiment of the present invention;
fig. 17 illustrates coeff_abs_level_greater1_flag corresponding to the 4 × 4 transform unit of fig. 15;
fig. 18 illustrates coeff_abs_level_greater2_flag corresponding to the 4 × 4 transform unit of fig. 15;
fig. 19 illustrates coeff_abs_level_remaining corresponding to the 4 × 4 transform unit of fig. 15;
fig. 20 illustrates a table showing syntax elements related to the transform unit illustrated in figs. 15 to 19;
FIG. 21 illustrates another example of binarizing coeff_abs_level_remaining according to an embodiment of the present invention;
fig. 22 is a block diagram showing the structure of an entropy encoding apparatus according to an embodiment of the present invention;
fig. 23 is a block diagram showing the structure of a binarization device according to an embodiment of the present invention;
fig. 24 is a flowchart illustrating a method of entropy-encoding a syntax element indicating a transform coefficient level according to an embodiment of the present invention;
FIG. 25 is a block diagram illustrating an entropy decoding apparatus according to an embodiment of the present invention;
fig. 26 is a block diagram showing the structure of an inverse binarization device according to an embodiment of the present invention;
fig. 27 is a flowchart illustrating a method of entropy-decoding transform coefficient levels according to an embodiment of the present invention.
Best mode
According to an aspect of the present invention, there is provided a method of updating parameters for entropy decoding a transform coefficient level, the method including: parsing a transform coefficient level syntax element indicating a size of a transform coefficient included in a transform unit from a bitstream; determining whether to update a previous parameter by comparing a size of a previous transform coefficient restored before a current transform coefficient with a predetermined critical value obtained based on the previous parameter, wherein the previous parameter is used in inverse binarization of a previous transform coefficient level syntax element indicating the size of the previous transform coefficient; obtaining a parameter used in inverse binarization for a current transform coefficient level syntax element indicating a size of a current transform coefficient by updating or maintaining a previous parameter based on a result of the determination; the size of the current transform coefficient is obtained by inverse-binarizing the current transform coefficient level syntax element using the obtained parameter, wherein the predetermined critical value is set to have a value proportional to a previous parameter, and when the previous parameter is updated, the updated parameter has a gradually increasing value compared to the previous parameter.
According to another aspect of the present invention, there is provided an apparatus for entropy decoding a transform coefficient level, the apparatus comprising: a parsing unit that parses, from the bitstream, a transform coefficient level syntax element indicating a size of a transform coefficient included in the transform unit; a parameter determination unit that determines whether to update a previous parameter by comparing a size of a previous transform coefficient restored before the current transform coefficient with a predetermined critical value obtained based on the previous parameter, and obtains a parameter used in inverse binarization of a current transform coefficient level syntax element indicating the size of the current transform coefficient by updating or maintaining the previous parameter based on a result of the determination, wherein the previous parameter is used in inverse binarization of a previous transform coefficient level syntax element indicating the size of the previous transform coefficient; a syntax element restoration unit obtaining a size of the current transform coefficient by inverse-binarizing the current transform coefficient level syntax element using the obtained parameter, wherein the predetermined critical value is set to have a value proportional to a previous parameter, and when the previous parameter is updated, the updated parameter has a gradually increasing value compared to the previous parameter.
According to another aspect of the present invention, there is provided a method of updating parameters for entropy coding of transform coefficient levels, the method comprising: obtaining a transform coefficient level syntax element indicating a size of a transform coefficient included in a transform unit in a predetermined scan order; determining whether to update a previous parameter by comparing a size of a previous transform coefficient encoded before a current transform coefficient with a predetermined critical value obtained based on the previous parameter, wherein the previous parameter is used in binarization of a previous transform coefficient level syntax element indicating the size of the previous transform coefficient; obtaining a parameter used in binarization for a current transform coefficient level syntax element indicating a size of a current transform coefficient by updating or maintaining a previous parameter based on a result of the determination; outputting a bit string corresponding to a transform coefficient level syntax element of the current transform coefficient by binarizing the transform coefficient level syntax element of the current transform coefficient using the obtained parameter, wherein the predetermined critical value is set to have a value proportional to a previous parameter, and when the previous parameter is updated, the updated parameter has a gradually increasing value compared to the previous parameter.
According to another aspect of the present invention, there is provided an apparatus for entropy encoding transform coefficient levels, the apparatus comprising: a parameter determination unit obtaining a transform coefficient level syntax element indicating a size of a transform coefficient included in the transform unit in a predetermined scan order, determining whether to update a previous parameter by comparing a size of a previous transform coefficient encoded before the current transform coefficient with a predetermined critical value obtained based on the previous parameter, and obtaining a parameter used in binarization of the current transform coefficient level syntax element indicating the size of the current transform coefficient by updating or maintaining the previous parameter based on a result of the determination, wherein the previous parameter is used in binarization of the previous transform coefficient level syntax element indicating the size of the previous transform coefficient; and a bit string generation unit outputting a bit string corresponding to the transform coefficient level syntax element of the current transform coefficient by binarizing the transform coefficient level syntax element of the current transform coefficient using the obtained parameter, wherein the predetermined critical value is set to have a value proportional to the previous parameter, and when the previous parameter is updated, the updated parameter has a gradually increasing value compared to the previous parameter.
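To make the update rule shared by the above aspects concrete, the following sketch keeps or increases the binarization parameter by comparing the previously processed coefficient with a critical value proportional to the previous parameter. The threshold factor 3 and the cap of 4 follow the HEVC-style convention and are assumptions here, not values quoted from the claims.

```cpp
#include <algorithm>

// Sketch of the gradual update of the binarization parameter (e.g. a
// Golomb-Rice parameter). The critical value 3 * 2^prevParam is proportional
// to the previous parameter; when the previous coefficient exceeds it, the
// parameter grows by exactly one step, so it never changes abruptly.
// The constants 3 and 4 are illustrative assumptions, not claim text.
unsigned updateLevelParam(unsigned prevParam, unsigned prevAbsCoeff) {
    const unsigned criticalValue = 3u * (1u << prevParam);
    if (prevAbsCoeff > criticalValue)
        return std::min(prevParam + 1u, 4u);   // gradual increase, bounded above
    return prevParam;                          // otherwise the parameter is maintained
}
```

On the encoder side, coeff_abs_level_remaining of the current coefficient would then be binarized with the returned parameter; the decoder applies the same rule before inverse binarization, so both sides stay synchronized without explicitly signaling the parameter.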
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Hereinafter, a method and apparatus for encoding and decoding video according to an embodiment of the present invention will be described with reference to Figs. 1 to 13. In addition, a method of entropy-encoding and entropy-decoding syntax elements obtained by the method of encoding and decoding video described with reference to Figs. 1 to 13, using gradually updated parameters for the size information of transform coefficients, will be described in detail with reference to Figs. 14 to 27. When an expression such as "at least one of …" follows a list of elements, it modifies the entire list of elements rather than the individual elements listed.
Fig. 1 is a block diagram of a video encoding apparatus 100 according to an embodiment of the present invention.
The video encoding apparatus 100 includes a layered encoder 110 and an entropy encoder 120.
In detail, the layered encoder 110 may divide a current picture to be encoded in units of maximum coding units, each maximum coding unit being a coding unit of a maximum size, and perform encoding on each maximum coding unit. The maximum coding unit according to an embodiment of the present invention may be a data unit having a size of 32 × 32, 64 × 64, 128 × 128, 256 × 256, etc., wherein the shape of the data unit is a square whose width and height are each a power of 2 greater than 8.
A coding unit according to an embodiment of the present invention may be characterized by a maximum size and depth. The depth represents the number of times the coding unit is spatially divided from the maximum coding unit, and as the depth deepens, the deeper coding units according to the depth may be divided from the maximum coding unit to the minimum coding unit. The depth of the maximum coding unit is the highest depth, and the depth of the minimum coding unit is the lowest depth. Since the size of the coding unit corresponding to each depth is reduced as the depth of the maximum coding unit is deepened, the coding unit corresponding to the higher depth may include a plurality of coding units corresponding to the lower depth.
As described above, the image data of the current picture is divided into maximum coding units according to the maximum size of the coding units, and each maximum coding unit may include deeper coding units divided according to depths. Since the maximum coding unit according to an embodiment of the present invention is divided according to depths, image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.
A maximum depth and a maximum size of the coding unit may be predetermined, wherein the maximum depth and the maximum size limit a total number of times the height and width of the maximum coding unit are hierarchically divided.
The layered encoder 110 encodes at least one divided region obtained by dividing a region of a maximum coding unit according to depths, and determines a depth for outputting finally encoded image data according to the at least one divided region. In other words, the layered encoder 110 determines the coded depth by encoding image data in deeper coding units according to depths according to the maximum coding unit of the current picture and selecting a depth having the smallest coding error. The determined coded depth and the encoded image data according to the maximum coding unit are output to the entropy encoder 120.
Encoding image data in the maximum coding unit based on deeper coding units corresponding to at least one depth equal to or lower than the maximum depth, and comparing results of encoding the image data based on each of the deeper coding units. The depth having the smallest coding error may be selected after comparing the coding errors of the deeper coding units. At least one coded depth may be selected for each maximum coding unit.
As the coding units are hierarchically divided according to depths and as the number of coding units increases, the size of the maximum coding unit is divided. In addition, even if the coding units correspond to the same depth in one maximum coding unit, whether to divide each coding unit corresponding to the same depth into lower depths is determined by separately measuring a coding error of image data of each coding unit. Accordingly, even when image data is included in one maximum coding unit, the image data is divided into regions according to depths, and coding errors are different according to the regions in the one maximum coding unit, and thus coded depths may be different according to the regions in the image data. Accordingly, one or more coded depths may be determined in one maximum coding unit, and image data of the maximum coding unit may be divided according to coding units of at least one coded depth.
Accordingly, the layered encoder 110 may determine the coding units having the tree structure included in the maximum coding unit. The "coding units having a tree structure" according to an embodiment of the present invention includes coding units corresponding to depths determined as coded depths among all deeper coding units included in a maximum coding unit. The coding units having coded depths may be hierarchically determined according to depths in the same region of the maximum coding unit, and the coding units having coded depths may be independently determined in different regions. Similarly, the coded depth in the current region may be determined independently of the coded depth in another region.
The maximum depth according to an embodiment of the present invention is an index related to the number of times of performing division from the maximum coding unit to the minimum coding unit. The first maximum depth according to an embodiment of the present invention may represent a total number of partitions from a maximum coding unit to a minimum coding unit. The second maximum depth according to an embodiment of the present invention may represent a total number of depth levels from a maximum coding unit to a minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit in which the maximum coding unit is divided once may be set to 1, and the depth of a coding unit in which the maximum coding unit is divided twice may be set to 2. Here, if the minimum coding unit is a coding unit in which the maximum coding unit is divided four times, there are 5 depth levels of depth 0, depth 1, depth 2, depth 3, and depth 4, and thus, the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
The prediction encoding and the transformation may be performed according to a maximum coding unit. Also according to the maximum coding unit, prediction coding and transformation are performed based on a deeper coding unit according to a depth equal to or less than the maximum depth.
Since the number of deeper coding units increases each time the maximum coding unit is divided according to the depth, encoding including prediction encoding and transformation is performed on all deeper coding units generated as the depth deepens. For convenience of description, prediction coding and transformation will now be described based on a coding unit of a current depth in a maximum coding unit.
The video encoding apparatus 100 may variously select the size or shape of a data unit for encoding image data. In order to encode image data, operations such as predictive coding, transformation, and entropy coding are performed, and at this time, the same data unit may be used for all operations, or a different data unit may be used for each operation.
For example, the video encoding apparatus 100 may select not only a coding unit for encoding image data but also a data unit different from the coding unit to perform predictive encoding on the image data in the coding unit.
In order to perform prediction encoding in the maximum coding unit, prediction encoding may be performed based on a coding unit corresponding to a coded depth (i.e., based on a coding unit that is no longer divided into coding units corresponding to lower depths). Hereinafter, a coding unit that is no longer divided and becomes a basic unit for prediction coding will now be referred to as a "prediction unit". The partition obtained by dividing the prediction unit may include the prediction unit or a data unit obtained by dividing at least one of a height and a width of the prediction unit.
For example, when a coding unit of 2N × 2N (where N is a positive integer) is no longer divided and becomes a prediction unit of 2N × 2N, the size of a partition may be 2N × 2N, 2N × N, N × 2N, or N × N. Examples of the partition type include symmetric partitions obtained by symmetrically dividing the height or width of the prediction unit, partitions obtained by asymmetrically dividing the height or width of the prediction unit (such as 1:n or n:1), partitions obtained by geometrically dividing the prediction unit, and partitions having arbitrary shapes.
For example, the intra mode or the inter mode may be performed on partitions of 2N × 2N, 2N × N, N × 2N, or N × N. In addition, the skip mode may be performed only on partitions of 2N × 2N.
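As an illustration of the partition sizes and the prediction modes allowed on them, the sketch below enumerates the symmetric partitions of a 2N × 2N prediction unit and checks which modes apply; the type and function names are hypothetical, and the asymmetric and geometric partition types mentioned above are omitted for brevity.

```cpp
#include <array>
#include <utility>

// Symmetric partition sizes (width, height) obtained from a 2N x 2N
// prediction unit for a given N.
std::array<std::pair<unsigned, unsigned>, 4> symmetricPartitions(unsigned N) {
    return {{ {2 * N, 2 * N}, {2 * N, N}, {N, 2 * N}, {N, N} }};
}

enum class PredMode { kIntra, kInter, kSkip };

// Intra and inter modes may be performed on any of the partitions above,
// while the skip mode is allowed only on the 2N x 2N partition.
bool modeAllowed(PredMode mode, unsigned width, unsigned height, unsigned N) {
    if (mode == PredMode::kSkip)
        return width == 2 * N && height == 2 * N;
    return true;
}
```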
The video encoding apparatus 100 may also perform transformation on image data in a coding unit based on not only the coding unit used to encode the image data but also a data unit different from the coding unit.
In order to perform the transformation in the coding unit, the transformation may be performed based on a data unit having a size less than or equal to the coding unit. For example, the data units for the transform may include data units for intra mode and data units for inter mode.
The data units used as basis for the transformation will now be referred to as "transformation units". Similar to the coding unit, the transform unit in the coding unit may be recursively divided into regions of smaller sizes, so that the transform unit may be independently determined in units of regions. Accordingly, residual data in a coding unit may be divided according to a transform unit having a tree structure according to a transform depth.
For example, in a current coding unit of 2N × 2N, when the size of the transform unit is 2N × 2N, the transform depth may be 0, when the size of the transform unit is N × N, the transform depth may be 1, and when the size of the transform unit is N/2 × N/2, the transform depth may be 2.
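A small helper, given only as an illustration (its name is not from the patent), makes the transform-depth convention above concrete: the transform depth counts how many times the side of the coding unit must be halved to reach the side of the transform unit.

```cpp
// Assumes both sizes are powers of two and the transform unit fits inside the
// coding unit, e.g. codingUnitSize = 2N and transformUnitSize in {2N, N, N/2}.
unsigned transformDepth(unsigned codingUnitSize, unsigned transformUnitSize) {
    unsigned depth = 0;
    while (transformUnitSize < codingUnitSize) {
        transformUnitSize *= 2;   // undo one halving of the side length
        ++depth;
    }
    return depth;
}
```

For a 32 × 32 coding unit, transform units of 32 × 32, 16 × 16, and 8 × 8 yield transform depths of 0, 1, and 2, matching the 2N × 2N, N × N, and N/2 × N/2 cases above.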
Encoding information according to a coding unit corresponding to a coded depth requires not only information on the coded depth but also information related to prediction coding and transformation. Accordingly, the layered encoder 110 determines not only a coded depth having a minimum coding error but also a partition type in a prediction unit, a prediction mode according to the prediction unit, and a size of a transform unit used for transformation.
A coding unit according to a tree structure and a method of determining partitions in a maximum coding unit according to an embodiment of the present invention will be described in detail later with reference to fig. 3 to 12.
The layered encoder 110 may measure an encoding error of deeper coding units according to depths by using rate-distortion optimization based on Lagrangian multipliers.
The entropy encoder 120 outputs, in a bitstream, image data of a maximum coding unit, which is encoded based on at least one coded depth determined by the layered encoder 110, and information about an encoding mode according to the coded depth. The encoded image data may be an encoding result of residual data of the image. The information about the encoding mode according to the coded depth may include information about the coded depth, information about a partition type in a prediction unit, prediction mode information, and size information of a transform unit. Specifically, as will be described later, when entropy encoding a syntax element indicating the size of a transform coefficient, the entropy encoder 120 binarizes the syntax element into a bit string by using a parameter that is gradually updated. The operation of entropy encoding a transform unit by the entropy encoder 120 will be described in detail later.
Information on the coded depth may be defined by using depth-dependent partition information indicating whether encoding is performed on coding units of lower depths rather than the current depth. If the current depth of the current coding unit is a coded depth, image data in the current coding unit is encoded and output, and thus the partition information may be defined not to partition the current coding unit to a lower depth. Alternatively, if the current depth of the current coding unit is not the coded depth, encoding is performed on coding units of lower depths, and thus, the partition information may be defined as partitioning the current coding unit to obtain coding units of lower depths.
If the current depth is not the coded depth, encoding is performed on the coding units divided into coding units of lower depths. Since at least one coding unit of a lower depth exists in one coding unit of the current depth, encoding is repeatedly performed on each coding unit of the lower depth, and thus, encoding can be recursively performed with respect to coding units having the same depth.
Since the coding units having the tree structure are determined for one maximum coding unit and the information on the at least one coding mode is determined for the coding units of the coded depth, the information on the at least one coding mode may be determined for one maximum coding unit. In addition, since image data is hierarchically divided according to depths, a coded depth of image data of a maximum coding unit may be different according to a location, and thus, information on a coded depth and a coding mode may be set for image data.
Accordingly, the entropy encoder 120 may allocate encoding information regarding the corresponding coded depth and encoding mode to at least one of a coding unit, a prediction unit, and a minimum unit included in the maximum coding unit.
The minimum unit according to an embodiment of the present invention is a data unit of a square shape obtained by dividing the minimum coding unit constituting the lowest depth into 4 pieces. Alternatively, the minimum unit may be a maximum square-shaped data unit that may be included in all coding units, prediction units, partition units, and transform units included in the maximum coding unit.
For example, the encoding information output by the entropy encoder 120 may be classified into encoding information according to a coding unit and encoding information according to a prediction unit. The encoding information according to the coding unit may include information on a prediction mode and information on a size of a partition. The encoding information according to the prediction unit may include information on an estimated direction of the inter mode, information on a reference picture index of the inter mode, information on a motion vector, information on a chrominance component of the intra mode, and information on an interpolation method of the intra mode. Also, information on a maximum size of a coding unit defined according to a picture, slice, or GOP and information on a maximum depth may be inserted into a header of a bitstream.
In the video encoding apparatus 100, a deeper coding unit may be a coding unit obtained by dividing the height or width of a coding unit of a higher depth, which is one layer above, by two. In other words, when the size of a coding unit of the current depth is 2N × 2N, the size of a coding unit of a lower depth is N × N. In addition, a coding unit of the current depth having a size of 2N × 2N may include a maximum of four coding units of the lower depth.
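A minimal sketch of the quad-split described above (the structure and function names are illustrative only): a coding unit of size 2N × 2N at position (x, y) yields at most four lower-depth coding units of size N × N occupying its quadrants.

```cpp
#include <array>

struct CodingUnitPos { unsigned x, y, size; };

// Split a coding unit into the four lower-depth coding units occupying its
// quadrants; each child has half the side length of the parent.
std::array<CodingUnitPos, 4> splitToLowerDepth(const CodingUnitPos& cu) {
    const unsigned n = cu.size / 2;   // side length N of each lower-depth unit
    return {{ {cu.x, cu.y, n},
              {cu.x + n, cu.y, n},
              {cu.x, cu.y + n, n},
              {cu.x + n, cu.y + n, n} }};
}
```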
Accordingly, based on the size and maximum depth of the maximum coding unit determined in consideration of the characteristics of the current picture, the video encoding apparatus 100 may form the coding units having a tree structure by determining a coding unit having an optimal shape and an optimal size for each maximum coding unit. In addition, since encoding can be performed for each maximum coding unit by using any of various prediction modes and transforms, an optimal encoding mode can be determined in consideration of characteristics of coding units of various image sizes.
If an image having a high resolution or a large amount of data is encoded in units of conventional macroblocks, the number of macroblocks per picture increases sharply. Accordingly, the number of pieces of compression information generated for each macroblock increases, making it difficult to transmit the compression information, and data compression efficiency decreases. However, by using the video encoding apparatus 100, image compression efficiency can be increased, since the coding unit is adjusted while taking the characteristics of the image into consideration and, at the same time, the maximum size of the coding unit is increased while taking the size of the image into consideration.
Fig. 2 is a block diagram of a video decoding apparatus 200 according to an embodiment of the present invention.
The video decoding apparatus 200 includes a parser 210, an entropy decoder 220, and a layered decoder 230. Definitions of various terms (such as a coding unit, a depth, a prediction unit, a transform unit, and information regarding various coding modes) used for various operations of the video decoding apparatus 200 are the same as those described with reference to fig. 1 and the video encoding apparatus 100.
Parser 210 receives a bitstream of encoded video and parses syntax elements. The entropy decoder 220 extracts a syntax element indicating encoded image data based on coding units having a tree structure by performing entropy decoding on the parsed syntax element, and outputs the extracted syntax element to the hierarchical decoder 230. That is, the entropy decoder 220 performs entropy decoding on a syntax element received in the form of a bit string of 0 and 1, thereby restoring the syntax element.
In addition, the entropy decoder 220 extracts information on coded depth, an encoding mode, color component information, prediction mode information, and the like according to coding units having a tree structure of each maximum coding unit from the parsed bitstream. The extracted information on the coded depth and the coding mode is output to the layered decoder 230. The image data in the bitstream is divided into maximum coding units so that the layered decoder 230 can decode the image data for each maximum coding unit.
Information on a coded depth and a coding mode according to a maximum coding unit may be set for information on at least one coding unit corresponding to the coded depth, and the information on the coding mode may include information on a partition type of a corresponding coding unit corresponding to the coded depth, information on a prediction mode, and information on a size of a transform unit. In addition, the division information according to the depth may be extracted as information on the coded depth.
The information about the coded depth and the encoding mode according to each maximum coding unit extracted by the entropy decoder 220 is information about a coded depth and an encoding mode determined to generate a minimum encoding error when an encoder, such as the video encoding apparatus 100, repeatedly performs encoding on each deeper coding unit according to depths for each maximum coding unit. Accordingly, the video decoding apparatus 200 may restore an image by decoding the image data according to the coded depth and the encoding mode that generate the minimum encoding error.
Since encoding information regarding a coded depth and an encoding mode may be allocated to a predetermined data unit among a corresponding coding unit, prediction unit, and minimum unit, the entropy decoder 220 may extract information regarding a coded depth and an encoding mode according to the predetermined data unit. When information on the coded depth and the coding mode of the corresponding maximum coding unit is allocated to each predetermined data unit, the predetermined data units allocated with the same information on the coded depth and the coding mode may be inferred as data units included in the same maximum coding unit.
Also, as will be described later, the entropy decoder 220 inverse-binarizes a syntax element indicating the size of a transform coefficient by using a parameter that is gradually updated. The operation of obtaining the size information of a transform coefficient by inverse-binarizing, with the entropy decoder 220, a bit string corresponding to a syntax element indicating the size of the transform coefficient will be described in detail later.
The layered decoder 230 may restore the current picture by decoding the image data in each maximum coding unit based on the information on the coded depth and the coding mode according to the maximum coding unit. In other words, the hierarchical decoder 230 may decode the encoded image data based on the extracted information on the partition type, the prediction mode, and the transform unit for each of the coding units having the tree structure included in each maximum coding unit. The decoding process may include prediction and inverse transform, wherein the prediction includes intra prediction and motion compensation.
The hierarchical decoder 230 may perform intra prediction or motion compensation according to a partition and a prediction mode of each coding unit based on information about a partition type and a prediction mode of a prediction unit of a coding unit according to a coded depth.
Also, the hierarchical decoder 230 may perform inverse transformation according to each transform unit in the coding unit based on the information regarding the size of the transform unit of the coding unit according to the coded depth, so as to perform inverse transformation according to the maximum coding unit.
The hierarchical decoder 230 may determine at least one coded depth of a current maximum coding unit by using division information according to depths. The current depth is a coded depth if the partitioning information indicates that the image data is no longer partitioned in the current depth. Accordingly, the hierarchical decoder 230 may decode a coding unit of a current depth with respect to image data of a current maximum coding unit by using information on a partition type of a prediction unit, a prediction mode, and a size of a transform unit.
In other words, a data unit containing coding information including the same division information may be collected by observing a set of coding information allocated for a predetermined data unit among a coding unit, a prediction unit, and a minimum unit, and the collected data unit may be regarded as one data unit to be decoded in the same coding mode by the layered decoder 230.
The video decoding apparatus 200 may obtain information on at least one coding unit that generates a minimum coding error when encoding is recursively performed on each maximum coding unit, and the video decoding apparatus 200 may use the information to decode the current picture. In other words, the encoded image data of the coding unit having the tree structure determined as the optimal coding unit in each maximum coding unit can be decoded.
Accordingly, even if image data has high resolution and a large data amount, the image data can be efficiently decoded and restored by using the size of the coding unit and the coding mode adaptively determined according to the characteristics of the image data by using information on the optimal coding mode received from the encoder.
A method of determining a coding unit, a prediction unit, and a transform unit having a tree structure according to an embodiment of the present invention will now be described with reference to fig. 3 to 13.
Fig. 3 is a diagram for describing a concept of a coding unit according to an embodiment of the present invention.
The size of a coding unit may be expressed as width × height, and the size of the coding unit may be 64 × 64, 32 × 32, 16 × 16, or 8 × 8. A coding unit of 64 × 64 may be divided into partitions of 64 × 64, 64 × 32, 32 × 64, or 32 × 32; a coding unit of 32 × 32 may be divided into partitions of 32 × 32, 32 × 16, 16 × 32, or 16 × 16; a coding unit of 16 × 16 may be divided into partitions of 16 × 16, 16 × 8, 8 × 16, or 8 × 8; and a coding unit of 8 × 8 may be divided into partitions of 8 × 8, 8 × 4, 4 × 8, or 4 × 4.
With respect to the video data 310, it is set that the resolution is 1920 × 1080, the maximum size of a coding unit is 64, and the maximum depth is 2, with respect to the video data 320, it is set that the resolution is 1920 × 1080, the maximum size of a coding unit is 64, and the maximum depth is 3, with respect to the video data 330, it is set that the resolution is 352 × 288, the maximum size of a coding unit is 16, and the maximum depth is 1, the maximum depth shown in fig. 3 represents the total number of divisions from the maximum coding unit to the minimum decoding unit.
If the resolution is high or the data amount is large, the maximum size of the coding unit may be large in order to not only improve the coding efficiency but also accurately reflect the characteristics of the image. Accordingly, the maximum size of the coding units of the video data 310 and the video data 320 having a higher resolution than the video data 330 may be 64.
Since the maximum depth of the video data 310 is 2, the coding units 315 of the video data 310 may include a maximum coding unit having a long-axis size of 64 and coding units having long-axis sizes of 32 and 16, since depths are deepened to two layers by dividing the maximum coding unit twice. Meanwhile, since the maximum depth of the video data 330 is 1, the coding units 335 of the video data 330 may include a maximum coding unit having a long-axis size of 16 and coding units having a long-axis size of 8, since depths are deepened to one layer by dividing the maximum coding unit once.
Since the maximum depth of the video data 320 is 3, the coding units 325 of the video data 320 may include a maximum coding unit having a long-axis size of 64 and coding units having long-axis sizes of 32, 16, and 8, since depths are deepened to three layers by dividing the maximum coding unit three times. As the depth deepens, detailed information can be represented more precisely.
Fig. 4 is a block diagram of a video encoder 400 based on coding units having a hierarchical structure according to an embodiment of the present invention.
The intra predictor 410 performs intra prediction on a coding unit in an intra mode for the current frame 405, and the motion estimator 420 and the motion compensator 425 perform inter estimation and motion compensation on a coding unit in an inter mode by using the current frame 405 and the reference frame 495.
The data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 are output as quantized transform coefficients through the transformer 430 and the quantizer 440. The quantized transform coefficients are restored into data in a spatial domain through the inverse quantizer 460 and the inverse transformer 470, and the restored data in the spatial domain is output as a reference frame 495 after being post-processed through the deblocking unit 480 and the loop filtering unit 490. The quantized transform coefficients may be output as a bitstream 455 by an entropy encoder 450.
The entropy encoder 450 arithmetically encodes the following syntax elements of a transform unit: a significance map indicating the positions of non-zero transform coefficients in the transform unit, a first critical value flag (coeff_abs_level_greater1_flag) indicating whether a transform coefficient has a value greater than 1, a second critical value flag (coeff_abs_level_greater2_flag) indicating whether a transform coefficient has a value greater than 2, and size information of a transform coefficient (coeff_abs_level_remaining) corresponding to the difference between a base level (baseLevel) determined based on the first and second critical value flags and the actual transform coefficient (absCoeff).
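To make the relationship between these syntax elements concrete, the sketch below decomposes one absolute transform coefficient level; the convention baseLevel = significant + greater1_flag + greater2_flag follows HEVC and is assumed here rather than quoted from the patent.

```cpp
// Decomposition of a single absolute transform coefficient level (absCoeff)
// into the syntax elements listed above.
struct CoeffLevelSyntax {
    bool significant;    // significance map entry: absCoeff != 0
    bool greater1Flag;   // coeff_abs_level_greater1_flag
    bool greater2Flag;   // coeff_abs_level_greater2_flag
    unsigned remaining;  // coeff_abs_level_remaining = absCoeff - baseLevel
};

CoeffLevelSyntax decomposeLevel(unsigned absCoeff) {
    CoeffLevelSyntax s{};
    s.significant  = absCoeff > 0;
    s.greater1Flag = absCoeff > 1;
    s.greater2Flag = absCoeff > 2;
    const unsigned baseLevel = (s.significant ? 1u : 0u) +
                               (s.greater1Flag ? 1u : 0u) +
                               (s.greater2Flag ? 1u : 0u);
    s.remaining = absCoeff > baseLevel ? absCoeff - baseLevel : 0u;
    return s;
}
```

For example, a coefficient with absolute value 5 gives significant = 1, greater1 = 1, greater2 = 1, baseLevel = 3, and remaining = 2; it is this remaining value that is binarized with the gradually updated parameter.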
In order for the video encoder 400 to be applied to the video encoding apparatus 100, all elements of the video encoder 400 (i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490) must perform operations based on each of a plurality of coding units having a tree structure while considering a maximum depth of each maximum coding unit.
Specifically, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine a partition and a prediction mode of each of the plurality of coding units having a tree structure while considering a maximum size and a maximum depth of a current maximum coding unit, and the transformer 430 determines a size of a transform unit in each of the plurality of coding units having a tree structure.
Fig. 5 is a block diagram of a coding unit-based video decoder 500 according to an embodiment of the present invention.
The parser 510 parses, from the bitstream 505, encoded image data to be decoded and information about encoding required for decoding. The encoded image data is output as inverse-quantized data through the entropy decoder 520 and the inverse quantizer 530. The entropy decoder 520 obtains the following transform-unit-related syntax elements from the bitstream and arithmetically decodes them to restore the syntax elements: a significance map indicating the positions of transform coefficients other than 0, a first critical value flag (coeff_abs_level_greater1_flag) indicating whether the transform coefficient has a value greater than 1, a second critical value flag (coeff_abs_level_greater2_flag) indicating whether the transform coefficient has a value greater than 2, and size information (coeff_abs_level_remaining) of a transform coefficient, which corresponds to the difference between a base level (baseLevel), determined based on the first critical value flag and the second critical value flag, and the actual transform coefficient (absCoeff).
The inverse transformer 540 restores the inverse quantized data to image data in the spatial domain. The intra predictor 550 performs intra prediction on a coding unit in an intra mode with respect to image data in a spatial domain, and the motion compensator 560 performs motion compensation on a coding unit in an inter mode by using the reference frame 585.
The image data in the spatial domain, which has passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through the deblocking unit 570 and the loop filtering unit 580. In addition, the image data post-processed by the deblocking unit 570 and the loop filtering unit 580 may be output as a reference frame 585.
In order for the image decoder 500 to be applied to the video decoding apparatus 200, all elements of the image decoder 500 (i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580) perform operations based on coding units having a tree structure of each maximum coding unit.
The intra predictor 550 and the motion compensator 560 determine partitions and prediction modes for each coding unit having a tree structure, and the inverse transformer 540 must determine the size of a transform unit for each coding unit.
Fig. 6 is a diagram illustrating a deeper coding unit and a partition according to depth according to an embodiment of the present invention.
The video encoding apparatus 100 and the video decoding apparatus 200 use layered coding units to consider the characteristics of images. The maximum height, the maximum width, and the maximum depth of the coding unit may be adaptively determined according to characteristics of the image, or may be differently set by a user. The size of the deeper coding unit according to the depth may be determined according to a predetermined maximum size of the coding unit.
In the hierarchical structure 600 of coding units according to an embodiment of the present invention, the maximum height and the maximum width of a coding unit are both 64, and the maximum depth is 4. Since the depth deepens along the longitudinal axis of the hierarchical structure 600, both the height and the width of the deeper coding units are divided. In addition, prediction units and partitions, which are the basis for predictive coding of each deeper coding unit, are shown along the horizontal axis of the hierarchical structure 600.
In other words, the coding unit 610 is the maximum coding unit in the hierarchical structure 600, where the depth is 0 and the size (i.e., height multiplied by width) is 64 × 64. The depth deepens along the vertical axis, and there are a coding unit 620 having a size of 32 × 32 and a depth of 1, a coding unit 630 having a size of 16 × 16 and a depth of 2, a coding unit 640 having a size of 8 × 8 and a depth of 3, and a coding unit 650 having a size of 4 × 4 and a depth of 4. The coding unit 650 having a size of 4 × 4 and a depth of 4 is the minimum coding unit.
In other words, if a coding unit 610 having a size of 64 × 64 and a depth of 0 is a prediction unit, the prediction unit may be divided into partitions included in the coding unit 610 (i.e., a partition 610 having a size of 64 × 64, a partition 612 having a size of 64 × 32, a partition 614 having a size of 32 × 64, or a partition 616 having a size of 32 × 32).
Similarly, a prediction unit of the coding unit 620 having the size of 32 × 32 and the depth of 1 may be divided into partitions included in the coding unit 620 (i.e., a partition 620 having a size of 32 × 32, a partition 622 having a size of 32 × 16, a partition 624 having a size of 16 × 32, and a partition 626 having a size of 16 × 16).
Similarly, the prediction unit of the coding unit 630 having the size 16 × 16 and the depth 2 may be divided into partitions included in the coding unit 630 (i.e., a partition having a size 16 × 16, a partition 632 having a size 16 × 8, a partition 634 having a size 8 × 16, and a partition 636 having a size 8 × 8 included in the coding unit 630).
Similarly, the prediction unit of the coding unit 640 having the size 8 × 8 and the depth 3 may be divided into partitions included in the coding unit 640 (i.e., a partition having a size 8 × 8, a partition 642 having a size 8 × 4, a partition 644 having a size 4 × 8, and a partition 646 having a size 4 × 4 included in the coding unit 640).
The coding unit 650 having a size of 4 × 4 and a depth of 4 is the minimum coding unit and a coding unit of the lowermost depth. The prediction unit of the coding unit 650 is assigned only to a partition having a size of 4 × 4.
In order to determine at least one coded depth of the coding units constituting the maximum coding unit 610, the layered encoder 110 of the video encoding apparatus 100 performs encoding on the coding unit corresponding to each depth included in the maximum coding unit 610.
As the depth deepens, the number of deeper coding units according to the depth, which include data of the same range and the same size, increases. For example, four coding units corresponding to depth 2 are required to cover data included in one coding unit corresponding to depth 1. Therefore, in order to compare a plurality of encoding results of the same data according to depths, a coding unit corresponding to depth 1 and four coding units corresponding to depth 2 are encoded.
In order to perform encoding for a current depth of the plurality of depths, a minimum encoding error is selected for the current depth by performing encoding for each prediction unit of a plurality of coding units corresponding to the current depth along a horizontal axis of the hierarchical structure 600. Alternatively, as the depth deepens along the vertical axis of the hierarchical structure 600, the minimum coding error may be searched for by comparing the minimum coding error according to the depth by performing encoding for each depth. The depth and partition having the smallest coding error in the maximum coding unit 610 may be selected as the coded depth and partition type of the maximum coding unit 610.
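The depth selection described above can be pictured as a cost comparison across depths. The following C++ fragment is only an illustrative sketch of that comparison; the function and variable names are hypothetical, and the measurement of the coding error (the codingErrorAt callback) is left abstract rather than taken from the embodiment.

#include <functional>
#include <utility>

// Hypothetical cost callback: returns the coding error measured when the data
// covered by the maximum coding unit is encoded entirely with coding units of
// the given depth (0 = the maximum coding unit itself).
using ErrorAtDepth = std::function<double(int depth)>;

// Sketch of the depth selection: encode the same data at every depth from 0 to
// maxDepth and keep the depth whose coding error is smallest (the coded depth).
std::pair<int, double> selectCodedDepth(int maxDepth, const ErrorAtDepth& codingErrorAt) {
    int bestDepth = 0;
    double bestError = codingErrorAt(0);
    for (int depth = 1; depth <= maxDepth; ++depth) {
        double error = codingErrorAt(depth);  // deeper depth: four times more, smaller coding units
        if (error < bestError) {
            bestError = error;
            bestDepth = depth;                // candidate coded depth for this maximum coding unit
        }
    }
    return {bestDepth, bestError};
}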
Fig. 7 is a diagram for describing a relationship between an encoding unit 710 and a transformation unit 720 according to an embodiment of the present invention.
The video encoding apparatus 100 or the video decoding apparatus 200 encodes or decodes an image according to a coding unit having a size smaller than or equal to the maximum coding unit for each maximum coding unit. The size of a transform unit used for transformation during encoding may be selected based on a data unit that is not larger than a corresponding coding unit.
For example, in the video encoding apparatus 100 or the video decoding apparatus 200, if the size of the coding unit 710 is 64 × 64, the transform may be performed by using the transform unit 720 of size 32 × 32.
In addition, data of the coding unit 710 of size 64 × 64 may be encoded by performing a transform on each of the transform units of size 32 × 32, 16 × 16, 8 × 8, and 4 × 4 that are smaller than 64 × 64, and then a transform unit having the smallest coding error may be selected.
Fig. 8 is a diagram for describing coding information of a coding unit corresponding to a coded depth according to an embodiment of the present invention.
The output unit 130 of the video encoding apparatus 100 may encode the information 800 on the partition type, the information 810 on the prediction mode, and the information 820 on the size of the transform unit of each coding unit corresponding to the coded depth, and transmit the information 800, the information 810, and the information 820 as information on the coding mode.
The information 800 indicates information on the shape of a partition obtained by dividing the prediction unit of a current coding unit, wherein the partition is a data unit for prediction-encoding the current coding unit. For example, the current coding unit CU_0 of size 2N × 2N may be divided into any one of a partition 802 of size 2N × 2N, a partition 804 of size 2N × N, a partition 806 of size N × 2N, and a partition 808 of size N × N. Here, the information 800 on the partition type is set to indicate one of the partition 802 of size 2N × 2N, the partition 804 of size 2N × N, the partition 806 of size N × 2N, and the partition 808 of size N × N.
The information 810 indicates a prediction mode of each partition. For example, information 810 may indicate a mode (i.e., intra mode 812, inter mode 814, or skip mode 816) of predictive coding performed on the partition indicated by information 800.
The information 820 indicates a transform unit on which a transform is based when performing a transform on the current coding unit. For example, the transform unit may be a first intra transform unit 822, a second intra transform unit 824, a first inter transform unit 826, or a second inter transform unit 828.
The image data and coding information extractor 210 of the video decoding apparatus 200 may extract and use the information 800 regarding the partition type, the information 810 regarding the prediction mode, and the information 820 regarding the size of the transform unit, according to each deeper coding unit, for decoding.
Fig. 9 is a diagram of a deeper coding unit according to depth according to an embodiment of the present invention.
The partitioning information may be used to indicate a change in depth. The partition information indicates whether the coding unit of the current depth is partitioned into coding units of lower depths.
The prediction unit 910 for prediction-encoding the coding unit 900 having a depth of 0 and a size of 2N_0 × 2N_0 may include partitions of a partition type 912 having a size of 2N_0 × 2N_0, a partition type 914 having a size of 2N_0 × N_0, a partition type 916 having a size of N_0 × 2N_0, and a partition type 918 having a size of N_0 × N_0. Fig. 9 illustrates only the partition types 912 through 918 obtained by symmetrically dividing the prediction unit 910, but the partition types are not limited thereto, and the partitions of the prediction unit 910 may include asymmetric partitions, partitions having a predetermined shape, and partitions having a geometric shape.
Prediction encoding is repeatedly performed on one partition of size 2N_0 × 2N_0, two partitions of size 2N_0 × N_0, two partitions of size N_0 × 2N_0, and four partitions of size N_0 × N_0 according to each partition type. Prediction encoding in the intra mode and the inter mode may be performed on the partitions of sizes 2N_0 × 2N_0, N_0 × 2N_0, 2N_0 × N_0, and N_0 × N_0, whereas prediction encoding in the skip mode is performed only on the partition of size 2N_0 × 2N_0.
The prediction unit 910 may not be divided into lower depths if the coding error is smallest in one of the partition types 912 through 916 of sizes 2N _0 × 2N _0, 2N _0 × N _0, and N _0 × 2N _ 0.
If the coding error is minimum in the partition type 918 of size N_0 × N_0, the depth is changed from 0 to 1 to divide the partition type 918 in operation 920, and encoding is repeatedly performed on coding units having a depth of 2 and a size of N_0 × N_0 to search for the minimum coding error.
The prediction unit 940 for prediction-encoding the coding unit 930 having a depth of 1 and a size of 2N_1 × 2N_1 (= N_0 × N_0) may include partitions of a partition type 942 of size 2N_1 × 2N_1, a partition type 944 of size 2N_1 × N_1, a partition type 946 of size N_1 × 2N_1, and a partition type 948 of size N_1 × N_1.
If the coding error is minimum in the partition type 948 of size N_1 × N_1, the depth is changed from 1 to 2 to divide the partition type 948 in operation 950, and encoding is repeatedly performed on the coding unit 960 of depth 2 and size N_2 × N_2 to search for the minimum coding error.
In other words, when encoding is performed until the depth is d-1 after the coding unit corresponding to the depth d-2 is divided in operation 970, the prediction unit 990 for prediction-encoding the coding unit 980 having a depth of d-1 and a size of 2N_(d-1) × 2N_(d-1) may include partitions of a partition type 992 having a size of 2N_(d-1) × 2N_(d-1), a partition type 994 having a size of 2N_(d-1) × N_(d-1), a partition type 996 having a size of N_(d-1) × 2N_(d-1), and a partition type 998 having a size of N_(d-1) × N_(d-1).
Predictive coding may be repeatedly performed on one partition of the partition types 992 through 998 of the size 2N _ (d-1) × 2N _ (d-1), two partitions of the size 2N _ (d-1) × N _ (d-1), two partitions of the size N _ (d-1) × 2N _ (d-1), and four partitions of the size N _ (d-1) × N _ (d-1) to search for a partition type having a minimum coding error.
Even when the partition type 998 of size N_(d-1) × N_(d-1) has the minimum coding error, since the maximum depth is d, the coding unit CU_(d-1) having a depth of d-1 is not divided into lower depths, the coded depth of the coding units constituting the current maximum coding unit 900 is determined to be d-1, and the partition type of the current maximum coding unit 900 may be determined to be N_(d-1) × N_(d-1). Additionally, since the maximum depth is d, the division information of the minimum coding unit 980 is not set.
The data unit 999 may be the "minimum unit" of the current maximum coding unit. The minimum unit according to an embodiment of the present invention may be a rectangular data unit obtained by dividing the minimum coding unit 980 into 4. By repeatedly performing encoding, the video encoding apparatus 100 may determine a coded depth by selecting a depth having a minimum coding error by comparing coding errors according to a plurality of depths of the coding unit 900, and set a corresponding partition type and a prediction mode as a coding mode of the coded depth.
In this way, the minimum coding error according to depths is compared in all depths 1 to d, and the depth having the minimum coding error may be determined as a coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as information on the encoding mode. In addition, since the coding unit is divided from depth 0 to coded depth, only the division information of the coded depth is set to 0, and the division information of depths other than the coded depth is set to 1.
The entropy decoder 220 of the video decoding apparatus 200 may extract and use information about the coded depth and the prediction unit of the coding unit 900 to decode the coding unit 912. The video decoding apparatus 200 may determine a depth, for which the partition information is 0, as a coded depth by using the partition information according to the depth, and use information regarding a coding mode of the corresponding depth for decoding.
Fig. 10 to 12 are diagrams for describing a relationship between an encoding unit 1010, a prediction unit 1060, and a transform unit 1070 according to an embodiment of the present invention.
The coding unit 1010 is a coding unit having a tree structure corresponding to the coded depth determined by the video encoding apparatus 100 among the maximum coding units. The prediction unit 1060 is a partition of the prediction unit of each coding unit 1010, and the transform unit 1070 is a transform unit of each coding unit 1010.
When the depth of the maximum coding unit is 0 in the coding unit 1010, the depths of the coding units 1012 and 1054 are 1, the depths of the coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, the depths of the coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and the depths of the coding units 1040, 1042, 1044, and 1046 are 4.
In the prediction unit 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are obtained by dividing the coding units. In other words, the partition type in the coding units 1014, 1022, 1050, and 1054 has a size of 2N × N, the partition type in the coding units 1016, 1048, and 1052 has a size of N × 2N, and the partition type of the coding unit 1032 has a size of N × N.
The image data of the coding unit 1052 in the transform unit 1070 is transformed or inverse-transformed in a data unit smaller than the coding unit 1052. In addition, the coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 in the transform unit 1070 are different in size and shape from the coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 in the prediction unit 1060. In other words, the video encoding apparatus 100 and the video decoding apparatus 200 may separately perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation on data units in the same coding unit.
Accordingly, encoding is recursively performed on each coding unit having a hierarchical structure in each region of the maximum coding unit to determine the optimal coding unit, and thus coding units having a recursive tree structure can be obtained. The encoding information may include partition information on the coding unit, information on a partition type, information on a prediction mode, and information on a size of the transform unit.
Table 1 shows encoding information that can be set by the video encoding apparatus 100 and the video decoding apparatus 200.
TABLE 1
[Table 1 figure omitted: it lists the encoding information that can be set per coded depth, including split information, partition type, prediction mode, and transform unit size, as described below.]
The entropy encoder 120 of the video encoding apparatus 100 may output encoding information regarding coding units having a tree structure, and the entropy decoder 220 of the video decoding apparatus 200 may extract the encoding information regarding the coding units having the tree structure from the received bitstream.
The partition information indicates whether the current coding unit is partitioned into coding units of lower depths. If the partition information of the current depth d is 0, the depth at which the current coding unit is no longer partitioned into lower depths is a coded depth, and thus information on a partition type, a prediction mode, and a size of a transform unit may be defined for the coded depth. If the current coding unit is further divided according to the division information, encoding is independently performed on four divided coding units of lower depths.
The intra mode and the inter mode may be defined in all partition types, and the skip mode may be defined only in a partition type of size 2N × 2N.
The information on the partition type may indicate symmetric partition types having sizes of 2N × 2N, 2N × N, N × 2N, and N × N, obtained by symmetrically dividing the height or width of the prediction unit, and asymmetric partition types having sizes of 2N × nU, 2N × nD, nL × 2N, and nR × 2N, obtained by asymmetrically dividing the height or width of the prediction unit. The asymmetric partition types having sizes of 2N × nU and 2N × nD may be obtained by dividing the height of the prediction unit in 1:n and n:1, respectively, and the asymmetric partition types having sizes of nL × 2N and nR × 2N (where n is an integer greater than 1) may be obtained by dividing the width of the prediction unit in 1:n and n:1, respectively.
In other words, if the partition information of the transform unit is 0, the size of the transform unit may be 2N × 2N, which is the size of the current coding unit, and if the partition information of the transform unit is 1, the transform unit may be obtained by dividing the current coding unit.
The encoding information on the coding units having the tree structure may be assigned to at least one of a coding unit corresponding to the coded depth, a prediction unit, and a minimum unit. The coding unit corresponding to the coded depth may include at least one of a prediction unit and a minimum unit containing the same encoding information.
Accordingly, whether the neighboring data units are included in the same coding unit corresponding to the coded depth is determined by comparing the coding information of the neighboring data units. In addition, by determining a corresponding coding unit corresponding to a coded depth using coding information of a data unit, a distribution of coded depths in a maximum coding unit can be determined.
Accordingly, if the current coding unit is predicted based on the encoding information of the neighboring data units, the encoding information of the data units in the deeper coding units neighboring the current coding unit may be directly referred to and used.
Alternatively, if the current coding unit is predicted based on the encoding information of the neighboring data units, the data units neighboring the current coding unit are searched using the encoding information of the data units, and the searched neighboring coding units may be referred to predict the current coding unit.
Fig. 13 is a diagram for describing a relationship among a coding unit, a prediction unit, and a transform unit according to the coding mode information of table 1.
The maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of a plurality of coded depths. Here, since the coding unit 1318 is a coding unit of a coded depth, the partition information may be set to 0. Information on the partition type of the coding unit 1318 of size 2N × 2N may be set to one of a partition type 1322 of size 2N × 2N, a partition type 1324 of size 2N × N, a partition type 1326 of size N × 2N, a partition type 1328 of size N × N, a partition type 1332 of size 2N × nU, a partition type 1334 of size 2N × nD, a partition type 1336 of size nL × 2N, and a partition type 1338 of size nR × 2N.
When the partition type is set to be symmetrical, i.e., partition type 1322, 1324, 1326, or 1328, if the partition information (TU size flag) of the transform unit is 0, a transform unit 1342 of size 2N × 2N is set, and if the TU size flag is 1, a transform unit 1344 of size N × N is set.
When the partition type is set to be asymmetric (i.e., partition type 1332, 1334, 1336, or 1338), a transform unit 1352 of size 2N × 2N is set if the TU size flag is 0, and a transform unit 1354 of size N/2 × N/2 is set if the TU size flag is 1.
The TU size flag may be a type of transform index; the size of the transform unit corresponding to the transform index may be modified according to a prediction unit type of the coding unit or a partition type of the coding unit.
When the partition type is set to be symmetrical (i.e., partition type 1322, 1324, 1326, or 1328), a transform unit 1342 of size 2N × 2N is set if the TU size flag of the transform unit is 0, and a transform unit 1344 of size N × N is set if the TU size flag is 1.
When the partition type is set to be asymmetric, i.e., partition type 1332 (2N × nU), 1334 (2N × nD), 1336 (nL × 2N), or 1338 (nR × 2N), a transform unit 1352 of size 2N × 2N is set if the TU size flag is 0, and a transform unit 1354 of size N/2 × N/2 is set if the TU size flag is 1.
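As an illustration of the mapping just described, the following sketch (hypothetical names, not part of the embodiment) returns the transform unit size selected by the TU size flag for symmetric and asymmetric partition types.

enum class PartitionType { Symmetric, Asymmetric };  // e.g. 2Nx2N/2NxN/Nx2N/NxN vs 2NxnU/2NxnD/nLx2N/nRx2N

// With a symmetric partition type, a TU size flag of 1 selects an NxN transform
// unit; with an asymmetric partition type, it selects an (N/2)x(N/2) transform
// unit. A flag of 0 keeps the 2Nx2N size of the coding unit.
int transformUnitSize(int cuSize2N, PartitionType type, int tuSizeFlag) {
    if (tuSizeFlag == 0) return cuSize2N;                      // 2Nx2N
    return (type == PartitionType::Symmetric) ? cuSize2N / 2   // NxN
                                               : cuSize2N / 4; // (N/2)x(N/2)
}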
Referring to fig. 9, the TU size flag described above is a flag having a value of 0 or 1, but the TU size flag is not limited to 1 bit, and the transform unit may be hierarchically divided while the TU size flag is increased from 0. Transform unit partition information (TU size flag) may be used as an example of the transform index.
In this case, when the TU size flag according to the embodiment is used together with the maximum size and the minimum size of the transform unit, the size of the transform unit that has been actually used may be represented. The video encoding apparatus 100 may encode maximum transform unit size information, minimum transform unit size information, and maximum transform unit partition information. The encoded maximum transform unit size information, minimum transform unit size information, and maximum transform unit partition information may be inserted into a Sequence Parameter Set (SPS). The video decoding apparatus 200 may use the maximum transform unit size information, the minimum transform unit size information, and the maximum transform unit partition information for decoding the video.
For example, (a) if the size of the current coding unit is 64 × 64 and the maximum transform unit is 32 × 32, (a-1) if the TU size flag is 0, the size of the transform unit is 32 × 32, (a-2) if the TU size flag is 1, the size of the transform unit is 16 × 16, and (a-3) if the TU size flag is 2, the size of the transform unit is 8 × 8.
Alternatively, (b) if the size of the current coding unit is 32 × 32 and the minimum transform unit is 32 × 32, (b-1) if the TU size flag is 0, the size of the transform unit is 32 × 32, and the TU size flag is not set since the size of the transform unit cannot be smaller than 32 × 32.
Alternatively, (c) if the size of the current coding unit is 64 × 64 and the maximum TU size flag is 1, the TU size flag may be 0 or 1 and other TU size flags cannot be set.
Accordingly, when the maximum TU size flag is defined as "MaxTransformSizeIndex", the minimum TU size is defined as "MinTransformSize", and the size of the transform unit when the TU size flag is 0 (i.e., the basic transform unit RootTu) is defined as "RootTuSize", the size "CurrMinTuSize" of the minimum transform unit available in the current coding unit can be defined by the following equation (1).
CurrMinTuSize = max(MinTransformSize, RootTuSize/(2^MaxTransformSizeIndex))……(1)
Compared to the size "CurrMinTuSize" of the smallest transform unit available in the current coding unit, the basic transform unit size "RootTuSize", which is the size of the transform unit when the TU size flag is 0, may indicate the maximum transform unit size that can be selected in the system. That is, in equation (1), "RootTuSize/(2^MaxTransformSizeIndex)" is the size of the transform unit obtained by dividing "RootTuSize", the size of the transform unit when the transform unit division information is 0, the number of times corresponding to the maximum transform unit division information, and "MinTransformSize" is the size of the minimum transform unit. Thus, the larger of "RootTuSize/(2^MaxTransformSizeIndex)" and "MinTransformSize" is "CurrMinTuSize", the size of the minimum transform unit available in the current coding unit.
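Equation (1) can be sketched in code as follows; the names are hypothetical, and the right shift stands for division by 2^MaxTransformSizeIndex.

#include <algorithm>

// Sketch of Equation (1): the smallest transform unit size available in the
// current coding unit. rootTuSize is the transform unit size when the TU size
// flag is 0, maxTransformSizeIndex is the maximum TU split count, and
// minTransformSize is the system-wide minimum transform unit size.
int currMinTuSize(int rootTuSize, int maxTransformSizeIndex, int minTransformSize) {
    return std::max(minTransformSize, rootTuSize >> maxTransformSizeIndex);
}

For instance, with rootTuSize = 32, maxTransformSizeIndex = 2, and an assumed system minimum minTransformSize = 4, the result is max(4, 32 >> 2) = 8, consistent with example (a) above, where a TU size flag of 2 yields an 8 × 8 transform unit.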
The size "RootTuSize" of the basic transformation unit according to the embodiment of the present invention may vary according to the prediction mode.
For example, if the current prediction mode is an inter mode, RootTuSize may be determined according to equation (2) below. In equation (2), "MaxTransformSize" represents the maximum transform unit size, and "PUSize" indicates the current prediction unit size.
RootTuSize=min(MaxTransformSize,PUSize)……(2)
In other words, if the current prediction mode is the inter mode, the size "RootTuSize" of the basic transform unit, which is the transform unit when the TU size flag is 0, may be set to the smaller value of the maximum transform unit size and the current prediction unit size.
If the prediction mode of the current partition unit is the intra mode, "RootTuSize" may be determined by using equation (3) below. "PartitionSize" indicates the size of the current partition unit.
RootTuSize=min(MaxTransformSize,PartitionSize)……(3)
In other words, if the current prediction mode is the intra mode, the basic transform unit size "RootTuSize" may be set to the smaller value of the maximum transform unit size and the current partition unit size.
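The mode-dependent choice in equations (2) and (3) can be sketched as follows; the enum and parameter names are hypothetical and serve only as an illustration.

#include <algorithm>

enum class PredMode { Inter, Intra };

// Sketch of Equations (2) and (3): the basic transform unit size (the transform
// unit size when the TU size flag is 0) is capped by the maximum transform unit
// size and by the current prediction unit or partition unit size.
int rootTuSize(PredMode mode, int maxTransformSize, int puSize, int partitionSize) {
    if (mode == PredMode::Inter) {
        return std::min(maxTransformSize, puSize);        // Equation (2)
    }
    return std::min(maxTransformSize, partitionSize);     // Equation (3)
}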
However, it should be noted that the basic transform unit size "RootTuSize", which is the current maximum transform unit size according to an embodiment of the present invention and varies according to the prediction mode of the partition unit, is merely an example, and the factor for determining the current maximum transform unit size is not limited thereto.
Hereinafter, an entropy encoding operation of a syntax element performed in the entropy encoder 120 of the video encoding apparatus 100 of fig. 1 and an entropy decoding operation of a syntax element performed in the entropy decoder 220 of the video decoding apparatus 200 of fig. 2 will be described in detail.
As described above, the video encoding apparatus 100 and the video decoding apparatus 200 perform encoding and decoding by dividing a maximum coding unit into coding units smaller than or equal to the maximum coding unit. The prediction unit and the transform unit used in the prediction and the transform may be determined based on the cost independently of other data units. Since the optimal coding unit can be determined by recursively encoding each coding unit having a hierarchical tree structure included in the maximum coding unit, the data unit having a tree structure can be configured. In other words, for each maximum coding unit, a coding unit having a tree structure, and a prediction unit and a transform unit each having a tree structure may be configured. In order to perform decoding, it is necessary to transmit hierarchical information, which is information indicating structure information of a data unit having a hierarchical structure, and non-hierarchical information for decoding other than the hierarchical information.
The information related to the hierarchical structure is information required to determine a coding unit having a tree structure, a prediction unit having a tree structure, and a transform unit having a tree structure as described above with reference to fig. 10 to 12, and includes the size of the maximum coding unit, the coded depth, partition information of the prediction unit, a division flag indicating whether the coding unit is divided, size information of the transform unit, and a transform unit division flag (TU size flag) indicating whether the transform unit is divided. Examples of the encoding information other than the hierarchical structure information include prediction mode information applied to intra/inter prediction of each prediction unit, motion vector information, prediction direction information, color component information applied to each data unit in the case where a plurality of color components are used, and transform coefficient information. Hereinafter, the hierarchical information and the non-hierarchical information may be referred to as syntax elements to be entropy-encoded or entropy-decoded.
In particular, according to an embodiment of the present invention, there is provided a method of determining a context model for efficiently entropy-encoding and entropy-decoding the level of a transform coefficient, i.e., a syntax element indicating its size information. Hereinafter, a method of determining a context model for entropy-encoding and entropy-decoding the level of a transform coefficient will be described in detail.
Fig. 14 is a flowchart illustrating an operation of entropy-encoding and entropy-decoding transform coefficient information included in a transform unit according to an embodiment of the present invention.
Referring to fig. 14, a coded _ block _ flag indicating whether a transform coefficient other than 0 (hereinafter, referred to as a "significant coefficient") exists among transform coefficients included in a current transform unit is first entropy-encoded or entropy-decoded in operation 1410.
If the coded _ block _ flag is 0, only the transform coefficient of 0 exists in the current transform unit, and thus, only the value of 0 is entropy-encoded or entropy-decoded into the coded _ block _ flag, and the transform coefficient level information is not entropy-encoded or entropy-decoded.
In operation 1420, if there is a significant coefficient in the current transform unit, a significant map SigMap indicating the position of the significant coefficient is entropy-encoded or entropy-decoded.
The significance map SigMap may be formed of significance bits and predetermined information indicating the position of the last significant coefficient. The significant bit indicates whether the transform coefficient according to each scan index is a significant coefficient or 0, and may be represented by significant _ coeff _ flag [ i ]. As will be described later, the significance map is set in units of subsets having a predetermined size obtained by dividing the transform unit. Accordingly, significant _ coeff _ flag [ i ] indicates whether or not a transform coefficient of the ith scan index among transform coefficients included in the subset in the transform unit is 0.
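As a small illustration of the significance map just described, the sketch below derives significant_coeff_flag values for coefficients already arranged in scan order. The helper name is hypothetical, and the separate signalling of the position of the last significant coefficient is not shown.

#include <cstddef>
#include <vector>

// For every transform coefficient of a subset, significant_coeff_flag[i] is 1
// when the coefficient at scan index i is not 0, and 0 otherwise.
std::vector<int> buildSignificanceMap(const std::vector<int>& coeffInScanOrder) {
    std::vector<int> sigMap(coeffInScanOrder.size());
    for (std::size_t i = 0; i < coeffInScanOrder.size(); ++i) {
        sigMap[i] = (coeffInScanOrder[i] != 0) ? 1 : 0;  // significant_coeff_flag[i]
    }
    return sigMap;
}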
According to conventional H.264, a flag (End-Of-Block) indicating whether each significant coefficient is the last significant coefficient is additionally entropy-encoded or entropy-decoded. However, according to an embodiment of the present invention, the position information of the last significant coefficient itself is entropy-encoded or entropy-decoded. As described above with reference to fig. 1 to 13, the size of a transform unit according to an embodiment of the present invention is not limited to 4 × 4 and may also be a larger size, such as 8 × 8, 16 × 16, or 32 × 32. Since the number of End-Of-Block flags increases with the size of the transform unit, additionally entropy-encoding or entropy-decoding a flag (End-Of-Block) indicating whether each significant coefficient is the last significant coefficient is not efficient.
In operation 1430, transform coefficient levels indicating the sizes of the transform coefficients are entropy-encoded or entropy-decoded. According to conventional H.264/AVC, the level information of a transform coefficient is represented by coeff_abs_level_minus1 as a syntax element. According to an embodiment of the present invention, the level information of a transform coefficient may be represented by coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, and coeff_abs_level_remaining, where coeff_abs_level_greater1_flag is a syntax element regarding whether the absolute value of the transform coefficient is greater than 1, coeff_abs_level_greater2_flag is a syntax element regarding whether the absolute value of the transform coefficient is greater than 2, and coeff_abs_level_remaining indicates the size information of the remaining transform coefficient.
The syntax element coeff_abs_level_remaining, indicating the size information of the remaining transform coefficient, is the difference between the size of the transform coefficient (absCoeff) and a base level value baseLevel, which is determined by using coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag. The base level value is determined according to the equation baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag, and coeff_abs_level_remaining is determined according to the equation coeff_abs_level_remaining = absCoeff - baseLevel. Since coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag have values of 0 or 1, the base level value baseLevel may have a value from 1 to 3. Accordingly, coeff_abs_level_remaining may range from (absCoeff-1) to (absCoeff-3). As described above, (absCoeff-baseLevel), the difference between the size of the original transform coefficient absCoeff and the base level value baseLevel, is transmitted as the size information of the transform coefficient in order to reduce the amount of transmitted data.
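The level-syntax relations above can be illustrated with the following sketch (hypothetical names). Note that it ignores the constraints under which the flags are actually signalled, for example coeff_abs_level_greater2_flag being coded only for coefficients whose coeff_abs_level_greater1_flag is 1, and only reproduces the baseLevel arithmetic.

// Derivation of the level syntax elements for one significant transform
// coefficient with absolute value absCoeff (absCoeff >= 1 assumed).
struct LevelSyntax {
    int greater1Flag;   // coeff_abs_level_greater1_flag
    int greater2Flag;   // coeff_abs_level_greater2_flag
    int remaining;      // coeff_abs_level_remaining = absCoeff - baseLevel
};

LevelSyntax deriveLevelSyntax(int absCoeff) {
    LevelSyntax s;
    s.greater1Flag = (absCoeff > 1) ? 1 : 0;
    s.greater2Flag = (absCoeff > 2) ? 1 : 0;
    int baseLevel = 1 + s.greater1Flag + s.greater2Flag;   // ranges from 1 to 3
    s.remaining = absCoeff - baseLevel;                    // size information actually transmitted
    return s;
}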
Fig. 22 is a block diagram illustrating the structure of an entropy encoding apparatus 2200 according to an embodiment of the present invention. The entropy encoding apparatus 2200 of fig. 22 corresponds to the entropy encoder 120 of the video encoding apparatus 100 of fig. 1.
Referring to fig. 22, the entropy encoding apparatus 2200 includes a binarizer 2210, a context modeler 2220, and a binary arithmetic encoder 2230. In addition, the binary arithmetic encoder 2230 includes a regular coding engine 2232 and a bypass coding engine 2234.
When a syntax element input to the entropy encoding apparatus 2200 is not a binary value, the binarizer 2210 binarizes the syntax element to output a binary bit (Bin) string composed of binary values of 0 or 1. A binary bit represents each bit of a stream composed of 0s and 1s, and is encoded by Context Adaptive Binary Arithmetic Coding (CABAC). If the syntax element is data in which 0 and 1 occur with the same frequency, the syntax element is output to the bypass coding engine 2234, which does not use a probability model, to be encoded.
Specifically, the binarizer 2210 binarizes coeff_abs_level_remaining, which is a syntax element indicating the size information of a transform coefficient, into a prefix bit string and a suffix bit string by using a parameter (cRiceParam). The operation of binarizing coeff_abs_level_remaining by using the binarizer 2210 will be described later.
The context modeler 2220 provides the probability model for encoding the bit string corresponding to the syntax element to the regular coding engine 2232. In detail, the context modeler 2220 outputs, to the binary arithmetic encoder 2230, the occurrence probability of a binary value for encoding each binary value of the bit string of the current syntax element.
The context model is a probability model of binary bits and includes information on which one of 0 and 1 corresponds to a Most Probable Symbol (MPS) and a Least Probable Symbol (LPS) and information on a probability of the MPS or a probability of the LPS.
The regular coding engine 2232 performs binary arithmetic encoding on the bit string corresponding to the syntax element based on the information on the MPS and the LPS and the probability information of the MPS or the LPS, which are provided by the context modeler 2220.
The context model used when encoding coeff_abs_level_remaining, which is a syntax element indicating the size information of a transform coefficient, may be set in advance according to the bin index of the transform coefficient.
Fig. 23 is a block diagram showing the structure of the binarizing apparatus 2300 according to an embodiment of the present invention. The binarizing apparatus 2300 of fig. 23 corresponds to the binarizer 2210 of fig. 22.
Referring to fig. 23, the binarization apparatus 2300 includes a parameter determination unit 2310 and a bit string generation unit 2320.
The parameter determination unit 2310 compares the size of a previous transform coefficient encoded prior to the current transform coefficient with a predetermined critical value obtained based on the previous parameter used in the binarization of the previous transform coefficient level syntax element indicating the size of the previous transform coefficient, thereby determining whether to update (renew) the previous parameter. Further, the parameter determination unit 2310 obtains a parameter to be used in binarization for a transform coefficient level syntax element indicating the size of the current transform coefficient by updating or holding the previous parameter according to the result of the determination.
In detail, when the size of the previous transform coefficient is cLastAbsLevel and the previous parameter is cLastRiceParam, the parameter determination unit 2310 determines, based on the following algorithm, the parameter cRiceParam to be used in the binarization of the transform coefficient level syntax element coeff_abs_level_remaining indicating the size of the current transform coefficient.
cRiceParam=Min(cLastRiceParam+(cLastAbsLevel>(3*(1<<cLastRiceParam))?1:0),4)
The algorithm may be implemented by pseudo code that applies this conditional update for each transform coefficient.
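The following C++ sketch restates the update rule of the algorithm above; the function name is hypothetical and the sketch is illustrative rather than the exact pseudo code of the embodiment.

#include <algorithm>

// The parameter is increased by 1, up to a maximum of 4, whenever the size of
// the previously coded coefficient exceeds 3 * (1 << cLastRiceParam).
int updateRiceParam(int cLastRiceParam, int cLastAbsLevel) {
    int increase = (cLastAbsLevel > (3 * (1 << cLastRiceParam))) ? 1 : 0;
    return std::min(cLastRiceParam + increase, 4);   // cRiceParam for the current coefficient
}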
As described in the above algorithm, the parameter determination unit 2310 compares the size of the previous transform coefficient (cLastAbsLevel) with the critical value th obtained based on the following equation: th = 3*(1<<cLastRiceParam). The parameter determination unit 2310 updates the previous parameter (cLastRiceParam) by increasing it by 1 when cLastAbsLevel is greater than the critical value th, and maintains the previous parameter without updating when cLastAbsLevel is not greater than the critical value th.
The initial parameter is set to 0. When the parameter (cRiceParam) is updated, it is increased by +1 relative to the previous parameter (cLastRiceParam). Further, the critical value th used in the condition for updating the parameter is determined according to the parameter (cRiceParam), and thus, as the parameter (cRiceParam) is updated, the critical value th also increases gradually. That is, the critical value th is set to grow with the previous parameter (cLastRiceParam), and when the previous parameter is updated, the new parameter (cRiceParam) has a value increased by +1 compared to the previous parameter (cLastRiceParam). The critical value th increases through 3, 6, 12, and 24 as the parameter (cRiceParam) is updated in the range of 0 to 4.
The bit string generation unit 2320 binarizes the transform coefficient level syntax element (coeff_abs_level_remaining) of the current transform coefficient by using the parameter, and outputs a bit string corresponding to the transform coefficient level syntax element (coeff_abs_level_remaining).
In detail, the bit string generation unit 2320 obtains the parameter cTrMax from the obtained parameter (cRiceParam) according to the following equation: cTrMax = 4 << cRiceParam. The parameter cTrMax is used as the criterion for dividing the transform coefficient level syntax element (coeff_abs_level_remaining) into a prefix and a suffix.
The bit string generation unit 2320 divides the value of the transform coefficient level syntax element (coeff_abs_level_remaining) based on the parameter cTrMax so as to obtain a prefix having a value not exceeding the parameter cTrMax and a suffix indicating the portion exceeding the parameter cTrMax. The bit string generation unit 2320 determines the prefix within a range not exceeding cTrMax according to the following equation: prefix = Min(cTrMax, coeff_abs_level_remaining). The suffix exists only when the transform coefficient level syntax element (coeff_abs_level_remaining) has a value greater than cTrMax, and is a value corresponding to (coeff_abs_level_remaining - cTrMax). When the transform coefficient level syntax element (coeff_abs_level_remaining) does not exceed cTrMax, only the prefix exists. For example, when the transform coefficient level syntax element (coeff_abs_level_remaining) is 10 and the parameter cTrMax is 7, the transform coefficient level syntax element is divided into a prefix having a value of 7 and a suffix having a value of 3. Alternatively, when the transform coefficient level syntax element (coeff_abs_level_remaining) is 6 and the parameter cTrMax is 7, the transform coefficient level syntax element is divided into only a prefix having a value of 6, and there is no suffix.
When the prefix and the suffix are determined by dividing the value of the transform coefficient level syntax element (coeff_abs_level_remaining) based on the parameter cTrMax, the bit string generation unit 2320 binarizes the prefix and the suffix by using predetermined binarization methods set in advance, and outputs bit strings corresponding to the prefix and the suffix. For example, the bit string generation unit 2320 may output a bit string by binarizing the prefix, which has a value corresponding to Min(cTrMax, coeff_abs_level_remaining), using a truncated unary binarization method, and may output a bit string by binarizing the suffix, which has a value corresponding to (coeff_abs_level_remaining - cTrMax), using an Exponential Golomb method of order k. The value k may be determined by using the parameter (cRiceParam) determined by the parameter determination unit 2310. For example, k may have the value (cRiceParam + 1).
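A sketch of this prefix/suffix binarization is given below, using cTrMax = 4 << cRiceParam as stated above. The helper names, the one-then-zero unary convention, and the particular Exponential Golomb bit layout are assumptions for illustration and are not taken from the embodiment; only the splitting rule and the choice k = cRiceParam + 1 follow the description.

#include <algorithm>
#include <string>

// One common formulation of k-th order Exponential Golomb coding; the exact bit
// convention is an assumption here.
static void appendExpGolomb(std::string& out, int value, int k) {
    while (value >= (1 << k)) {       // escalate the order while emitting prefix bits
        out += '1';
        value -= (1 << k);
        ++k;
    }
    out += '0';                       // terminator
    for (int bit = k - 1; bit >= 0; --bit) {
        out += ((value >> bit) & 1) ? '1' : '0';
    }
}

// Truncated unary binarization of 'value' with maximum 'cMax': 'value' ones
// followed by a terminating zero, which is omitted when value equals cMax.
static void appendTruncatedUnary(std::string& out, int value, int cMax) {
    for (int i = 0; i < value; ++i) out += '1';
    if (value < cMax) out += '0';
}

// Prefix/suffix split of coeff_abs_level_remaining as described above.
std::string binarizeRemaining(int remaining, int cRiceParam) {
    const int cTrMax = 4 << cRiceParam;                 // prefix/suffix boundary
    const int prefix = std::min(cTrMax, remaining);     // truncated unary part
    std::string bits;
    appendTruncatedUnary(bits, prefix, cTrMax);
    if (remaining > cTrMax) {                           // suffix exists only beyond cTrMax
        appendExpGolomb(bits, remaining - cTrMax, cRiceParam + 1);  // k = cRiceParam + 1
    }
    return bits;
}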
According to the truncated unary binarization method, as shown in table 2 below, a prefix having a value Min (cTrMax, coeff _ abs _ level _ remaining) can be binarized.
[Table 2 figure omitted: truncated unary bit strings for the prefix value Min(cTrMax, coeff_abs_level_remaining).]
The bit string generation unit 2320 may also generate the bit strings corresponding to the prefix and the suffix according to the parameter (cRiceParam) by referring to a preset table. According to this lookup table method, the preset table may be set such that, as the value of the parameter (cRiceParam) increases, shorter bit strings are assigned to relatively large values.
An operation of entropy-encoding a syntax element related to a transform unit according to an embodiment of the present invention will be described in detail with reference to fig. 15 to 21.
Fig. 15 illustrates a transform unit 1500 that is entropy-encoded according to an embodiment of the present invention. Although a transform unit 1500 of size 16 × 16 is illustrated in fig. 15, the size of the transform unit 1500 is not limited to 16 × 16 and may be any of various sizes ranging from 4 × 4 to 32 × 32.
Referring to fig. 15, in order to entropy-encode and entropy-decode the transform coefficients included in the transform unit 1500, the transform unit 1500 may be divided into smaller transform units. Hereinafter, an operation of entropy-encoding syntax elements related to a 4 × 4 transform unit 1510 included in the transform unit 1500 will be described; this operation may also be applied to transform units of different sizes.
The transform coefficients included in the 4 × 4 transform unit 1510 each have a size (absCoeff) as shown in fig. 15. The transform coefficients included in the 4 × 4 transform unit 1510 may be serialized and sequentially processed according to a predetermined scan order as shown in fig. 15.
As described above, examples of the syntax elements related to the 4 × 4 transform unit 1510 are significant_coeff_flag, coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, and coeff_abs_level_remaining, where significant_coeff_flag is a syntax element indicating whether each transform coefficient included in the transform unit is a significant transform coefficient having a value other than 0, coeff_abs_level_greater1_flag is a syntax element indicating whether the absolute value of the transform coefficient is greater than 1, coeff_abs_level_greater2_flag is a syntax element indicating whether the absolute value is greater than 2, and coeff_abs_level_remaining is a syntax element indicating the size information of the remaining transform coefficient.
Fig. 16 shows an effective map SigMap1600 corresponding to the transformation unit of fig. 15, according to an embodiment of the present invention.
Referring to fig. 15 and 16, a significance map SigMap 1600 is set, wherein the significance map 1600 has a value of 1 for each significant transform coefficient having a value other than 0 among the transform coefficients included in the 4 × 4 transform unit 1510 of fig. 15. The significance map SigMap 1600 is entropy-encoded or entropy-decoded by using a previously set context model.
Fig. 17 illustrates coeff_abs_level_greater1_flag 1700 corresponding to the 4 × 4 transform unit 1510 of fig. 15.
Referring to fig. 15 to 17, coeff_abs_level_greater1_flag 1700 is provided, wherein coeff_abs_level_greater1_flag 1700 is a flag indicating, with respect to the significance map SigMap 1600, whether the corresponding significant transform coefficient has a value greater than 1. When coeff_abs_level_greater1_flag 1700 is 1, the corresponding transform coefficient is a transform coefficient having a value greater than 1, and when coeff_abs_level_greater1_flag 1700 is 0, the corresponding transform coefficient is a transform coefficient having a value of 1. In fig. 17, when coeff_abs_level_greater1_flag 1710 is at the position of a transform coefficient having a value of 1, the value of coeff_abs_level_greater1_flag 1710 is 0.
Fig. 18 illustrates coeff_abs_level_greater2_flag 1800 corresponding to the 4 × 4 transform unit 1510 of fig. 15.
Referring to fig. 15 to 18, coeff_abs_level_greater2_flag 1800 is set, wherein coeff_abs_level_greater2_flag 1800 indicates, for the transform coefficients for which coeff_abs_level_greater1_flag 1700 is set to 1, whether the corresponding transform coefficient has a value greater than 2. When coeff_abs_level_greater2_flag 1800 is 1, the corresponding transform coefficient is a transform coefficient having a value greater than 2, and when coeff_abs_level_greater2_flag 1800 is 0, the corresponding transform coefficient is a transform coefficient having a value of 2. In fig. 18, when coeff_abs_level_greater2_flag 1810 is at the position of a transform coefficient having a value of 2, the value of coeff_abs_level_greater2_flag 1810 is 0.
Fig. 19 illustrates coeff_abs_level_remaining 1900 corresponding to the 4 × 4 transform unit 1510 of fig. 15.
Referring to fig. 15 to 19, coeff_abs_level_remaining 1900 may be obtained by calculating (absCoeff - baseLevel) for each transform coefficient, where coeff_abs_level_remaining 1900 is a syntax element indicating the size information of the remaining transform coefficient.
As described above, coeff_abs_level_remaining 1900, which is a syntax element indicating the size information of the remaining transform coefficient, is the difference between the size of the transform coefficient (absCoeff) and the base level value baseLevel determined by using coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag. The base level value baseLevel is determined according to the equation baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag, and coeff_abs_level_remaining is determined according to the equation coeff_abs_level_remaining = absCoeff - baseLevel.
The parameter determination unit 2310 reads coeff_abs_level_remaining 1900 according to the illustrated scan order to obtain the sizes of the transform coefficients, such as "0312333445588".
The parameter determination unit 2310 sequentially determines the parameter (cRiceParam) used in the binarization of the size information of each transform coefficient according to the scan order. First, the initial parameter (cRiceParam) is set to 0. According to the above algorithm, the parameter is increased only when the condition cLastAbsLevel > 3*(1<<cLastRiceParam) is satisfied. The initially set parameter (cRiceParam) is 0 and keeps this value until the size of a previous transform coefficient (cLastAbsLevel) exceeds 3*(1<<0) = 3. Referring to fig. 19, the transform coefficient size "12" (1920) is greater than 3, and thus, when the sizes of the transform coefficients after the transform coefficient "12" (1920) are binarized, the parameter (cRiceParam) updated from 0 to 1 is used. After the parameter (cRiceParam) is updated to 1, it is updated again only when the condition cLastAbsLevel > 3*(1<<1) (i.e., cLastAbsLevel > 6) is satisfied. Referring to fig. 19, "8" (1930), the size of the second-to-last transform coefficient, is greater than 6, and thus the parameter (cRiceParam) is updated from 1 to 2.
Fig. 20 illustrates a table showing the syntax elements related to the transform units 1510, 1600, 1700, 1800, and 1900 illustrated in fig. 15 to 19. In fig. 20, GTR1 denotes coeff_abs_level_greater1_flag, GTR2 denotes coeff_abs_level_greater2_flag, and Remaining denotes coeff_abs_level_remaining. Referring to fig. 20, the syntax element coeff_abs_level_remaining indicating the transform coefficient level is not a binary value, and is thus binarized by using the parameter.
Fig. 21 shows another example of coeff_abs_level_remaining binarized according to an embodiment of the present invention.
As described above, the initial parameter (cRiceParam) is set to 0, and the parameter is increased by +1 only when the condition cLastAbsLevel > 3*(1<<cLastRiceParam) is satisfied. The updated parameter has the value Min(cLastRiceParam+1, 4), so it never exceeds 4. The critical value 3*(1<<cLastRiceParam) used in determining whether to update the parameter has a value of 3*(1<<0), 3*(1<<1), 3*(1<<2), or 3*(1<<3) according to the previous parameter (cLastRiceParam) used in binarizing the size of the previous transform coefficient. Thus, after a transform coefficient having a value greater than 3 is processed, the parameter (cRiceParam) is increased by +1; then, after a transform coefficient having a value greater than 6 is processed, the parameter (cRiceParam) is increased by +1; then, after a transform coefficient having a value greater than 12 is processed, the parameter (cRiceParam) is increased by +1; and finally, after a transform coefficient having a value greater than 24 is processed, the parameter (cRiceParam) is increased by +1. That is, even when the transform coefficient values change abruptly, the parameter (cRiceParam) increases only gradually, by +1 at a time.
Referring to fig. 21, after the transform coefficient 2110, which has a value of 12 and is the first coefficient greater than 3, is processed, the initially set parameter (cRiceParam) having a value of 0 is increased by +1. After the transform coefficient 2110 of value 12, the updated parameter (cRiceParam) is maintained until a transform coefficient greater than 6, the next critical value, is processed. After the transform coefficient 2120 of value 8 (greater than the next critical value 6) is processed, the parameter (cRiceParam) is increased by +1 to have a value of 2. After the transform coefficient 2120 of value 8, the updated parameter (cRiceParam) is maintained until a transform coefficient greater than 12, the next critical value, is processed. After the transform coefficient 2130 of value 13 (greater than the next critical value 12) is processed, the parameter (cRiceParam) is increased by +1 to have a value of 3. After the transform coefficient 2130 of value 13, the updated parameter (cRiceParam) is maintained until a transform coefficient greater than 24, the next critical value, is processed. After the transform coefficient 2140 of value 25 (greater than the next critical value 24) is processed, the parameter (cRiceParam) is increased by +1 to have a value of 4. In the binarization of the transform coefficients after the transform coefficient 2140 of value 25, since the parameter (cRiceParam) has reached the maximum value of 4, the parameter value 4 is used and the update operation is no longer performed.
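The progression described for fig. 21 can be traced with the following sketch. The coefficient sizes other than 12, 8, 13, and 25 are illustrative placeholders, and the function name is hypothetical.

#include <algorithm>
#include <cstdio>
#include <vector>

// Same update rule as above: cRiceParam = Min(cLastRiceParam + (cLastAbsLevel > 3*(1<<cLastRiceParam) ? 1 : 0), 4).
int nextRiceParam(int cLastRiceParam, int cLastAbsLevel) {
    int inc = (cLastAbsLevel > (3 * (1 << cLastRiceParam))) ? 1 : 0;
    return std::min(cLastRiceParam + inc, 4);
}

int main() {
    // Coefficient sizes chosen to include the values 12, 8, 13, and 25 discussed
    // for FIG. 21; the remaining values are illustrative only.
    std::vector<int> absLevels = {2, 12, 3, 8, 5, 13, 6, 25, 30};
    int cRiceParam = 0;                                        // initial parameter
    for (int absLevel : absLevels) {
        std::printf("absLevel=%2d binarized with cRiceParam=%d\n", absLevel, cRiceParam);
        cRiceParam = nextRiceParam(cRiceParam, absLevel);      // update after processing
    }
    // The parameter steps 0 -> 1 after 12 (>3), 1 -> 2 after 8 (>6),
    // 2 -> 3 after 13 (>12), and 3 -> 4 after 25 (>24).
    return 0;
}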
As described above, when the parameter determination unit 2310 has determined the parameter (cRiceParam) used to binarize the transform coefficient level syntax element coeff_abs_level_remaining, which indicates the size of the current transform coefficient, the bit string generation unit 2320 classifies coeff_abs_level_remaining into a prefix and a suffix based on the parameter (cTrMax) determined by using the parameter (cRiceParam), binarizes the prefix and the suffix by applying a preset binarization method to each, and thereby outputs a bit string corresponding to coeff_abs_level_remaining.
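A rough sketch of this prefix/suffix binarization is given below. It is an assumption-laden illustration, not the patent's reference code: the derivation of cTrMax from cRiceParam is described elsewhere in the document and is simply passed in here, the prefix is a truncated unary code of Min(cTrMax, coeff_abs_level_remaining), and the suffix (present only when the prefix reaches cTrMax) is a k-th order exponential Golomb code of the excess with k = cRiceParam + 1, as described for the decoder side further below:

    #include <algorithm>
    #include <string>

    // Truncated unary code: 'value' ones followed by a terminating zero,
    // except that the zero is dropped when the maximum cMax is reached.
    std::string truncatedUnary(int value, int cMax) {
        std::string bits(value, '1');
        if (value < cMax) bits += '0';
        return bits;
    }

    // k-th order exponential Golomb code (informative pseudo-code form).
    std::string expGolombOrderK(int value, int k) {
        std::string bits;
        while (value >= (1 << k)) {       // unary part
            bits += '1';
            value -= (1 << k);
            ++k;
        }
        bits += '0';
        for (int i = k - 1; i >= 0; --i)  // k fixed bits
            bits += ((value >> i) & 1) ? '1' : '0';
        return bits;
    }

    // Illustrative split of coeff_abs_level_remaining into prefix and suffix.
    std::string binarizeRemaining(int remaining, int cRiceParam, int cTrMax) {
        std::string bits = truncatedUnary(std::min(remaining, cTrMax), cTrMax);
        if (remaining >= cTrMax)
            bits += expGolombOrderK(remaining - cTrMax, cRiceParam + 1);
        return bits;
    }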
Fig. 24 is a flowchart illustrating an entropy encoding method of a syntax element indicating a transform coefficient level according to an embodiment of the present invention.
Referring to fig. 24, in operation 2410, the parameter determination unit 2310 obtains the transform coefficient level syntax element (coeff_abs_level_remaining) indicating the size of a transform coefficient included in the transform unit according to a predetermined scan order.
In operation 2420, the parameter determination unit 2310 compares the size of the previous transform coefficient (cLastAbsCoeff) encoded before the current transform coefficient with a predetermined critical value obtained based on the previous parameter (cLastRiceParam), which was used in the binarization of the previous transform coefficient level syntax element indicating the size of the previous transform coefficient, thereby determining whether to update the previous parameter (cLastRiceParam).
In operation 2430, the parameter determination unit 2310 updates or maintains the previous parameter based on the result of the determination of operation 2420, thereby obtaining the parameter used in the binarization of the transform coefficient level syntax element indicating the size of the current transform coefficient. As described above, the parameter determination unit 2310 compares the critical value th, obtained as th = 3 × (1 << cLastRiceParam), with the size of the previous transform coefficient cLastAbsCoeff. When cLastAbsCoeff is greater than th, the parameter determination unit 2310 updates the previous parameter by increasing it by 1; when cLastAbsCoeff is not greater than th, the parameter determination unit 2310 does not update but maintains the previous parameter. When the previous parameter is updated, it is increased gradually, by +1 at a time.
In operation 2440, the bit string generating unit 2320 binarizes the transform coefficient level syntax element (coeff_abs_level_remaining) by using the obtained parameter (cRiceParam), thereby outputting a bit string corresponding to the transform coefficient level syntax element (coeff_abs_level_remaining) of the current transform coefficient.
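Operations 2410 through 2440 can be combined into a single encoder-side loop. The sketch below reuses the hypothetical updateRiceParam and binarizeRemaining helpers from the earlier sketches and omits the subsequent arithmetic-coding stage; the CoeffInfo structure is an assumption introduced only to carry both the full magnitude (used for the update decision) and the remaining level (the value that is binarized):

    #include <string>
    #include <vector>

    struct CoeffInfo {
        int absCoeff;    // full magnitude of the coefficient, compared against the critical value
        int remaining;   // coeff_abs_level_remaining for this coefficient
    };

    // Illustrative only: binarize the remaining levels of one transform unit in scan order.
    std::vector<std::string> encodeRemainingLevels(const std::vector<CoeffInfo>& coeffs, int cTrMax) {
        std::vector<std::string> binStrings;
        int cRiceParam = 0;                                                       // initial parameter
        for (const CoeffInfo& c : coeffs) {
            binStrings.push_back(binarizeRemaining(c.remaining, cRiceParam, cTrMax)); // operation 2440
            cRiceParam = updateRiceParam(cRiceParam, c.absCoeff);                     // operations 2420-2430
        }
        return binStrings;
    }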
According to the above-described operation of entropy-encoding the transform coefficient level syntax element according to an embodiment of the present invention, even if a transform coefficient whose value increases abruptly appears among the transform coefficients processed according to a predetermined scan order, the parameter does not have to be changed abruptly but can be increased gradually by +1.
Meanwhile, the above-described parameter updating operation for entropy-encoding the transform coefficient level syntax element according to an embodiment of the present invention may also be applied to the binarization of syntax elements other than the transform coefficient level syntax element.
The operation of updating parameters according to an embodiment of the present invention can be applied to parameters used when binarizing other syntax elements by using a Golomb-Rice code. In addition, the method of updating parameters according to an embodiment of the present invention is applicable to updating parameters used when binarizing syntax elements by applying a binarization method such as a concatenated code binarization method. When a concatenated code is used, a syntax element is classified into a prefix and a suffix, and the method of updating parameters according to an embodiment of the present invention is applicable to updating a predetermined parameter used to determine the prefix and the suffix. Similarly, the method of updating parameters according to an embodiment of the present invention may be applied to updating parameters used when encoding syntax elements by using fixed-length codes and variable-length code (VLC) tables, as in a Low Complexity Entropy Coding (LCEC) method.
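For reference, a plain Golomb-Rice binarization with parameter k, of the kind mentioned above, could be sketched as follows (again an illustration, not code from the patent): the quotient value >> k is coded in unary and the k least significant bits of the value are appended.

    #include <string>

    // Illustrative Golomb-Rice code with parameter k: unary quotient ('1's
    // terminated by a '0') followed by the k least significant bits of value.
    std::string golombRice(int value, int k) {
        std::string bits(value >> k, '1');
        bits += '0';
        for (int i = k - 1; i >= 0; --i)
            bits += ((value >> i) & 1) ? '1' : '0';
        return bits;
    }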
Fig. 25 is a block diagram illustrating an entropy-decoding apparatus 2500 according to an embodiment of the present invention. The entropy decoding apparatus 2500 corresponds to the entropy decoder 220 of the video decoding apparatus 200 of fig. 2. The entropy decoding apparatus 2500 performs the inverse operation of the entropy encoding operation performed by the entropy encoding apparatus 2000 described above.
Referring to fig. 25, the entropy decoding apparatus 2500 includes a context modeler 2510, a normal decoding engine 2520, a bypass decoding engine 2530, and an inverse binarizer 2540.
Syntax elements encoded by using bypass coding are output to the bypass decoding engine 2530 to be decoded, and syntax elements encoded by using normal coding are decoded by the normal decoding engine 2520. The normal decoding engine 2520 arithmetically decodes the binary values of the current syntax element based on a context model provided by the context modeler 2510, thereby outputting a bit string. A context model used when arithmetically decoding the syntax element coeff_abs_level_remaining, which indicates size information of a transform coefficient, may be set in advance according to the binary bit index of the transform coefficient.
The inverse binarizer 2540 restores the bit string arithmetically decoded by the normal decoding engine 2520 or the bypass decoding engine 2530 back to a syntax element.
In addition to arithmetically decoding coeff_abs_level_remaining, the entropy decoding apparatus 2500 arithmetically decodes and outputs syntax elements related to a transform unit, such as SigMap, coeff_abs_level_greater1_flag, and coeff_abs_level_greater2_flag. When the syntax elements related to the transform unit have been restored, the data included in the transform unit is decoded through inverse quantization, inverse transformation, and predictive decoding based on the restored syntax elements.
Fig. 26 is a block diagram illustrating the structure of an inverse binarization apparatus 2600 according to an embodiment of the present invention. The inverse binarization apparatus 2600 of fig. 26 corresponds to the inverse binarizer 2540 of fig. 25.
Referring to fig. 26, the inverse binarization apparatus 2600 includes a parameter determination unit 2610 and a syntax element restoration unit 2620.
The parameter determination unit 2610 compares the size of the previous transform coefficient decoded before the current transform coefficient with a predetermined critical value obtained based on the previous parameter, which was used in the inverse binarization of the previous transform coefficient level syntax element indicating the size of the previous transform coefficient, thereby determining whether to update the previous parameter. The parameter determination unit 2610 updates or maintains the previous parameter based on the result of the determination, thereby obtaining the parameter used when inverse-binarizing the transform coefficient level syntax element indicating the size of the current transform coefficient. In the same manner as the parameter determination unit 2310 of fig. 23 described above, the parameter determination unit 2610 compares the critical value th, obtained as th = 3 × (1 << cLastRiceParam), with the size of the previous transform coefficient cLastAbsCoeff. When cLastAbsCoeff is greater than th, the parameter determination unit 2610 updates the previous parameter (cLastRiceParam) by increasing it by 1; when cLastAbsCoeff is not greater than th, the parameter determination unit 2610 does not update but maintains the previous parameter (cLastRiceParam).
The syntax element restoration unit 2620 performs inverse binarization on the bit string corresponding to the current transform coefficient level syntax element by using the obtained parameter, thereby restoring the syntax element (coeff_abs_level_remaining) indicating the size of the current transform coefficient. In detail, the syntax element restoration unit 2620 classifies the bit string into a prefix bit string and a suffix bit string, where the prefix bit string corresponds to a bit string obtained by binarizing the value Min(cTrMax, coeff_abs_level_remaining) by using a truncated unary binarization method, and the suffix bit string corresponds to a bit string obtained by binarizing the value (coeff_abs_level_remaining - cTrMax) by using a k-th order exponential Golomb method, where k = cRiceParam + 1. The syntax element restoration unit 2620 restores coeff_abs_level_remaining by inverse-binarizing the prefix bit string according to the truncated unary binarization method and inverse-binarizing the suffix bit string according to the k-th order exponential Golomb method.
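The corresponding decoder-side step can be sketched as the mirror image of the binarizeRemaining sketch given earlier; this is again an illustration under the same assumptions (cTrMax is passed in, and 'bins' is a hypothetical string of already arithmetic-decoded binary values with 'pos' marking the read position):

    #include <cstddef>
    #include <string>

    // Illustrative inverse binarization of coeff_abs_level_remaining:
    // read a truncated-unary prefix capped at cTrMax; if the cap is reached,
    // read a (cRiceParam + 1)-th order exponential Golomb suffix.
    int inverseBinarizeRemaining(const std::string& bins, std::size_t& pos,
                                 int cRiceParam, int cTrMax) {
        int prefix = 0;
        while (prefix < cTrMax && bins[pos] == '1') { ++prefix; ++pos; }
        if (prefix < cTrMax) { ++pos; return prefix; }   // consume the terminating '0'

        int k = cRiceParam + 1;                          // decode the suffix
        int value = 0;
        while (bins[pos] == '1') { value += (1 << k); ++k; ++pos; }
        ++pos;                                           // '0' ending the unary part
        int fixedBits = 0;
        for (int i = 0; i < k; ++i)
            fixedBits = (fixedBits << 1) | (bins[pos++] - '0');
        return cTrMax + value + fixedBits;
    }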
Fig. 27 is a flowchart illustrating a method of entropy-decoding transform coefficient levels according to an embodiment of the present invention.
Referring to fig. 27, in operation 2710, a transform coefficient level syntax element indicating the size of a transform coefficient included in a transform unit is parsed from a bitstream. The parsed transform coefficient level syntax elements are bit strings composed of 0s and 1s.
In operation 2720, the parameter determination unit 2610 compares the size of the previous transform coefficient (cLastAbsCoeff) restored before the current transform coefficient with a predetermined critical value obtained based on the previous parameter (cLastRiceParam), which was used in the inverse binarization of the previous transform coefficient level syntax element indicating the size of the previous transform coefficient, thereby determining whether to update the previous parameter (cLastRiceParam).
In operation 2730, the parameter determination unit 2610 updates or maintains the previous parameter (cLastRiceParam) based on the result of the determination, thereby obtaining the parameter (cRiceParam) used in the inverse binarization of the transform coefficient level syntax element (coeff_abs_level_remaining) indicating the size of the current transform coefficient. As described above, the parameter determination unit 2610 compares the critical value th, obtained as th = 3 × (1 << cLastRiceParam), with the size of the previous transform coefficient cLastAbsCoeff. When cLastAbsCoeff is greater than th, the parameter determination unit 2610 updates the previous parameter (cLastRiceParam) by increasing it by 1; when cLastAbsCoeff is not greater than th, the parameter determination unit 2610 does not update but maintains the previous parameter. When the parameter is updated, it is increased gradually, by +1 at a time.
In operation 2740, the syntax element restoration unit 2620 inverse-binarizes the current transform coefficient level syntax element by using the obtained parameter, thereby obtaining the size information of the current transform coefficient. As described above, since coeff_abs_level_remaining = absCoeff - baseLevel, and coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag are restored in addition to coeff_abs_level_remaining, the base level value baseLevel is determined according to the equation baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag, and the size of the current transform coefficient is then determined according to the equation absCoeff = coeff_abs_level_remaining + baseLevel.
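The magnitude reconstruction of operation 2740 then reduces to the two equations above; a minimal sketch, assuming the two flags have already been restored:

    // absCoeff = coeff_abs_level_remaining + baseLevel, with
    // baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag.
    int reconstructAbsCoeff(int coeffAbsLevelRemaining,
                            int coeffAbsLevelGreater1Flag,
                            int coeffAbsLevelGreater2Flag) {
        int baseLevel = 1 + coeffAbsLevelGreater1Flag + coeffAbsLevelGreater2Flag;
        return coeffAbsLevelRemaining + baseLevel;
    }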
The present invention can also be embodied as computer readable codes on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and the like. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.

Claims (10)

1. A method of encoding video, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter;
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value;
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value.
2. An apparatus for encoding video, the apparatus comprising:
a parameter determination unit that determines a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in the transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
a binary bit string generating unit that obtains the binary bit string by binarizing a binary value of the current transform coefficient level information using the current binarization parameter;
an entropy encoding unit obtaining a bitstream by entropy encoding the binary bit string,
wherein the parameter determination unit determines that the current binarization parameter is held as the previous binarization parameter when the size of the previous transform coefficient is equal to or smaller than the predetermined value;
when the magnitude of the previous transform coefficient is larger than the predetermined value, the parameter determination unit determines the current binarization parameter as the updated value.
3. A method of encoding video, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string and a suffix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter;
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value;
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the prefix binary bit string is obtained according to a first binarization method, and the suffix binary bit string is obtained according to a second binarization method different from the first binarization method.
4. An apparatus for encoding video, the apparatus comprising:
a parameter determination unit that determines a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string and a suffix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in the transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
a binary bit string generating unit that obtains the binary bit string by binarizing a binary value of the current transform coefficient level information using the current binarization parameter;
an entropy encoding unit obtaining a bitstream by entropy encoding the binary bit string,
wherein the parameter determination unit determines that the current binarization parameter is held as the previous binarization parameter when the size of the previous transform coefficient is equal to or smaller than the predetermined value;
a parameter determining unit determines a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the prefix binary bit string is obtained according to a first binarization method, and the suffix binary bit string is obtained according to a second binarization method different from the first binarization method.
5. A method of encoding video, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter;
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value;
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the predetermined value has a value of 3 × (1 << cLastRiceParam), wherein cLastRiceParam indicates the previous binarization parameter.
6. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processor, performs a method of encoding an image, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter; and
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value; and
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value.
7. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processor, performs a method of encoding an image, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string and a suffix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter; and
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value; and
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the prefix binary bit string is obtained according to a first binarization method, and the suffix binary bit string is obtained according to a second binarization method different from the first binarization method.
8. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processor, performs a method of encoding an image, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter; and
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value;
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the current binarization parameter has a value equal to or less than 4,
wherein the predetermined value has a value of 3 × (1 << cLastRiceParam), wherein cLastRiceParam indicates the previous binarization parameter.
9. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processor, performs a method of encoding an image, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter; and
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value;
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the current binarization parameter has a value equal to or less than 4.
10. A computer-readable medium storing a computer program, wherein the computer program, when executed by a processor, performs a method of encoding an image, the method comprising:
determining a current binarization parameter as one of a previous binarization parameter and an updated value of the previous binarization parameter based on a comparison between a predetermined value and a value of the previous transform coefficient, wherein the current binarization parameter is used to determine a prefix bin string from a bin string corresponding to current transform coefficient level information, wherein the current transform coefficient level information indicates a size of a transform coefficient included in a transform unit, the updated value of the previous binarization parameter is obtained by adding n to the previous binarization parameter, wherein n is an integer and n is equal to or greater than 1;
obtaining the binary bit string by binarizing a binary value of current transform coefficient level information using a current binarization parameter; and
obtaining a bitstream by entropy encoding the binary bit string,
wherein the step of determining the current binarization parameter further comprises:
determining that the current binarization parameter is maintained as the previous binarization parameter when the size of the previous transform coefficient is equal to or less than the predetermined value; and
determining a current binarization parameter as the updated value when the magnitude of the previous transform coefficient is greater than the predetermined value,
wherein the predetermined value has a value of 3 × (1 << cLastRiceParam), wherein cLastRiceParam indicates the previous binarization parameter.
CN201710854232.7A 2012-04-15 2013-04-15 Method and apparatus for encoding video, and computer-readable storage medium Active CN107465930B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261624358P 2012-04-15 2012-04-15
US61/624,358 2012-04-15
CN201380031658.2A CN104365099B (en) 2012-04-15 2013-04-15 For the entropy code of transform coefficient levels and the parameter updating method of entropy decoding and the entropy code device and entropy decoding device of the transform coefficient levels for using this method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201380031658.2A Division CN104365099B (en) 2012-04-15 2013-04-15 For the entropy code of transform coefficient levels and the parameter updating method of entropy decoding and the entropy code device and entropy decoding device of the transform coefficient levels for using this method

Publications (2)

Publication Number Publication Date
CN107465930A CN107465930A (en) 2017-12-12
CN107465930B true CN107465930B (en) 2020-06-23

Family

ID=49383689

Family Applications (7)

Application Number Title Priority Date Filing Date
CN201510198403.6A Active CN104869423B (en) 2012-04-15 2013-04-15 The entropy code device and entropy decoding device of transform coefficient levels
CN201510236753.7A Active CN105007496B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded
CN201510202243.8A Active CN104869424B (en) 2012-04-15 2013-04-15 A kind of method decoded to video
CN201510236469.XA Active CN105049869B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded
CN201380031658.2A Active CN104365099B (en) 2012-04-15 2013-04-15 For the entropy code of transform coefficient levels and the parameter updating method of entropy decoding and the entropy code device and entropy decoding device of the transform coefficient levels for using this method
CN201710854232.7A Active CN107465930B (en) 2012-04-15 2013-04-15 Method and apparatus for encoding video, and computer-readable storage medium
CN201510236415.3A Active CN105049868B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded

Family Applications Before (5)

Application Number Title Priority Date Filing Date
CN201510198403.6A Active CN104869423B (en) 2012-04-15 2013-04-15 The entropy code device and entropy decoding device of transform coefficient levels
CN201510236753.7A Active CN105007496B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded
CN201510202243.8A Active CN104869424B (en) 2012-04-15 2013-04-15 A kind of method decoded to video
CN201510236469.XA Active CN105049869B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded
CN201380031658.2A Active CN104365099B (en) 2012-04-15 2013-04-15 For the entropy code of transform coefficient levels and the parameter updating method of entropy decoding and the entropy code device and entropy decoding device of the transform coefficient levels for using this method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201510236415.3A Active CN105049868B (en) 2012-04-15 2013-04-15 The method and apparatus that video is decoded

Country Status (25)

Country Link
US (7) US9277242B2 (en)
EP (5) EP3926832B1 (en)
KR (7) KR101477621B1 (en)
CN (7) CN104869423B (en)
AU (4) AU2013250108B2 (en)
BR (2) BR122018010479B1 (en)
CA (2) CA2870531C (en)
CY (1) CY1120738T1 (en)
DK (2) DK2840789T3 (en)
ES (3) ES2687522T3 (en)
HR (1) HRP20181467T1 (en)
HU (3) HUE049811T2 (en)
LT (1) LT2840789T (en)
MX (4) MX364043B (en)
MY (3) MY185273A (en)
PH (5) PH12014502262B1 (en)
PL (4) PL3926832T3 (en)
PT (1) PT2840789T (en)
RS (1) RS57654B1 (en)
RU (4) RU2589382C2 (en)
SG (6) SG10201710903VA (en)
SI (1) SI2840789T1 (en)
TW (3) TWI640191B (en)
WO (1) WO2013157794A1 (en)
ZA (4) ZA201600978B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10091529B2 (en) * 2010-07-09 2018-10-02 Samsung Electronics Co., Ltd. Method and apparatus for entropy encoding/decoding a transform coefficient
SG10201710903VA (en) * 2012-04-15 2018-02-27 Samsung Electronics Co Ltd Parameter update method for entropy coding and decoding of conversion coefficient level, and entropy coding device and entropy decoding device of conversion coefficient level using same
US10432928B2 (en) 2014-03-21 2019-10-01 Qualcomm Incorporated Using a current picture as a reference for video coding
CN107710759B (en) * 2015-06-23 2020-11-03 联发科技(新加坡)私人有限公司 Method and device for coding and decoding conversion coefficient
US10123045B2 (en) * 2015-07-24 2018-11-06 Qualcomm Incorporated Modification to block size for transform mode in display stream compression
KR20240034898A (en) * 2015-10-13 2024-03-14 삼성전자주식회사 Method and device for encoding or decoding image
WO2017188739A1 (en) * 2016-04-29 2017-11-02 세종대학교 산학협력단 Method and device for encoding and decoding image signal
CN110572649B (en) * 2016-04-29 2023-04-07 世宗大学校产学协力团 Method and apparatus for encoding and decoding image signal
EP4213096A1 (en) * 2018-01-18 2023-07-19 BlackBerry Limited Methods and devices for entropy coding point clouds
US10491914B2 (en) * 2018-03-29 2019-11-26 Tencent America LLC Transform information prediction
US11451840B2 (en) * 2018-06-18 2022-09-20 Qualcomm Incorporated Trellis coded quantization coefficient coding
CN116723334A (en) 2018-09-20 2023-09-08 Lg电子株式会社 Image decoding apparatus, image encoding apparatus, and bit stream transmitting apparatus
WO2020071879A1 (en) 2018-10-05 2020-04-09 엘지전자 주식회사 Transform coefficient coding method and device therefor
KR20230165360A (en) * 2018-10-05 2023-12-05 엘지전자 주식회사 Method for coding transform coefficient and device therefor
CN112997505B (en) * 2018-11-12 2023-03-24 三星电子株式会社 Method and apparatus for entropy coding coefficient levels and method and apparatus for entropy decoding coefficient levels
US11477486B2 (en) 2019-01-02 2022-10-18 Qualcomm Incorporated Escape coding for coefficient levels
MX2021009649A (en) * 2019-03-12 2021-12-10 Lg Electronics Inc Transform-based image coding method and device therefor.
WO2021040319A1 (en) * 2019-08-23 2021-03-04 엘지전자 주식회사 Method and apparatus for deriving rice parameter in video/image coding system
US11303914B2 (en) 2020-01-08 2022-04-12 Tencent America LLC Method and apparatus for video coding
CN116671101A (en) 2020-06-22 2023-08-29 抖音视界有限公司 Signaling of quantization information in a codec video


Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2289086B (en) * 1994-05-03 1997-11-19 Interbold Delivery access device
TW515192B (en) * 2000-06-06 2002-12-21 Noa Kk Off Compression method of motion picture image data and system there for
KR20030083703A (en) * 2001-01-30 2003-10-30 가부시키가이샤 오피스 노아 Moving picture information compressing method and its system
US6735254B2 (en) 2001-06-29 2004-05-11 Qualcomm, Inc. DCT compression using Golomb-Rice coding
US7877273B2 (en) * 2002-01-08 2011-01-25 Fredric David Abramson System and method for evaluating and providing nutrigenomic data, information and advice
JP4240283B2 (en) * 2002-10-10 2009-03-18 ソニー株式会社 Decoding device and decoding method
CA2547891C (en) * 2003-12-01 2014-08-12 Samsung Electronics Co., Ltd. Method and apparatus for scalable video encoding and decoding
US7599435B2 (en) * 2004-01-30 2009-10-06 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Video frame encoding and decoding
KR100624432B1 (en) * 2004-08-05 2006-09-19 삼성전자주식회사 Context adaptive binary arithmetic decoder method and apparatus
US20060083298A1 (en) * 2004-10-14 2006-04-20 Nokia Corporation Reference picture management in video coding
US7580585B2 (en) * 2004-10-29 2009-08-25 Microsoft Corporation Lossless adaptive Golomb/Rice encoding and decoding of integer data using backward-adaptive rules
TWI249290B (en) * 2004-12-17 2006-02-11 Univ Nat Cheng Kung Lossless and near lossless image compression and coding methods
KR100718134B1 (en) * 2005-07-21 2007-05-14 삼성전자주식회사 Method and apparatus of encoding/decoding video data using bitrate adaptive binary arithmetic coding
KR100647835B1 (en) * 2005-07-21 2006-11-23 삼성전기주식회사 Stator core for bldc motor
US8189962B2 (en) * 2006-12-19 2012-05-29 Hitachi Kokusai Electric Inc. Image processing apparatus
KR101356733B1 (en) * 2007-03-07 2014-02-05 삼성전자주식회사 Method and apparatus for Context Adaptive Binary Arithmetic Coding and decoding
JP4513841B2 (en) * 2007-08-28 2010-07-28 ソニー株式会社 Encoding apparatus, encoding method, encoding method program, and recording medium recording the encoding method program
US8180396B2 (en) 2007-10-18 2012-05-15 Yahoo! Inc. User augmented reality for camera-enabled mobile devices
CN103037220B (en) * 2008-01-04 2016-01-13 华为技术有限公司 Video coding, coding/decoding method and device and processing system for video
US9008171B2 (en) * 2008-01-08 2015-04-14 Qualcomm Incorporated Two pass quantization for CABAC coders
JP5302336B2 (en) 2008-01-21 2013-10-02 テレフオンアクチーボラゲット エル エム エリクソン(パブル) Method and system for compressing blocks of pixels
US8509555B2 (en) * 2008-03-12 2013-08-13 The Boeing Company Error-resilient entropy coding for partial embedding and fine grain scalability
CN101779467B (en) * 2008-06-27 2012-06-27 索尼公司 Image processing device and image processing method
US8464129B2 (en) * 2008-08-15 2013-06-11 Lsi Corporation ROM list-decoding of near codewords
KR101504887B1 (en) 2009-10-23 2015-03-24 삼성전자 주식회사 Method and apparatus for video decoding by individual parsing or decoding in data unit level, and method and apparatus for video encoding for individual parsing or decoding in data unit level
SG10201502226SA (en) * 2010-04-09 2015-05-28 Mitsubishi Electric Corp Moving image encoding device and moving image decoding device
JP5676744B2 (en) * 2010-04-13 2015-02-25 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Entropy coding
KR20120009618A (en) * 2010-07-19 2012-02-02 에스케이 텔레콤주식회사 Method and Apparatus for Partitioned-Coding of Frequency Transform Unit and Method and Apparatus for Encoding/Decoding of Video Data Thereof
US9378185B2 (en) * 2010-09-30 2016-06-28 Texas Instruments Incorporated Transform and quantization architecture for video coding and decoding
US9042440B2 (en) * 2010-12-03 2015-05-26 Qualcomm Incorporated Coding the position of a last significant coefficient within a video block based on a scanning order for the block in video coding
US8976861B2 (en) * 2010-12-03 2015-03-10 Qualcomm Incorporated Separately coding the position of a last significant coefficient of a video block in video coding
CA2822929C (en) * 2011-01-04 2016-07-12 Research In Motion Limited Coding of residual data in predictive compression
CA2770799A1 (en) * 2011-03-11 2012-09-11 Research In Motion Limited Method and system using prediction and error correction for the compact representation of quantization matrices in video compression
KR101089725B1 (en) 2011-03-18 2011-12-07 동국대학교 산학협력단 Method of designing threshold filter for lossless image compression, apparatus and method for lossless image compression using the filter
US8446301B2 (en) * 2011-04-15 2013-05-21 Research In Motion Limited Methods and devices for coding and decoding the position of the last significant coefficient
US9112526B2 (en) * 2011-06-15 2015-08-18 Sony Corporation Binarization of DQP using separate absolute value and sign (SAVS) in CABAC
EP3402206B1 (en) * 2011-06-28 2020-05-20 Samsung Electronics Co., Ltd. Video encoding and decoding method using arithmetic coding with a two-dimensional signaling of the last significant coefficient
WO2013052073A1 (en) * 2011-10-04 2013-04-11 Bird-B-Gone, Inc. Electrified bird deterrent device with treads
WO2013050612A1 (en) * 2011-10-06 2013-04-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Entropy coding buffer arrangement
CN108235015A (en) * 2011-11-08 2018-06-29 三星电子株式会社 For the equipment being decoded to video
US9584802B2 (en) * 2012-04-13 2017-02-28 Texas Instruments Incorporated Reducing context coded and bypass coded bins to improve context adaptive binary arithmetic coding (CABAC) throughput
SG10201710903VA (en) * 2012-04-15 2018-02-27 Samsung Electronics Co Ltd Parameter update method for entropy coding and decoding of conversion coefficient level, and entropy coding device and entropy decoding device of conversion coefficient level using same

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1299510C (en) * 2001-01-30 2007-02-07 株式会社欧菲士诺亚 Moving-picture information compressing method and system
CN1874509A (en) * 2001-09-14 2006-12-06 诺基亚有限公司 Method and system for context-based adaptive binary arithmetic coding
CN1794815A (en) * 2004-12-22 2006-06-28 汤姆森许可贸易公司 Optimisation of a quantisation matrix for image and video coding
CN101039422A (en) * 2006-03-17 2007-09-19 佳能株式会社 Image encoding apparatus, image decoding apparatus and control method therefor
WO2010133763A1 (en) * 2009-05-19 2010-11-25 Nokia Corporation Method for variable length coding and apparatus
WO2011127403A1 (en) * 2010-04-09 2011-10-13 Ntt Docomo, Inc. Adaptive binarization for arithmetic coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Detlev Marpe et al., "Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard", IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, July 2003 *
Tung Nguyen et al., "Reduced-Complexity Entropy Coding of Transform Coefficient Levels Using Truncated Golomb-Rice Codes in Video Compression", 2011 18th IEEE International Conference on Image Processing, September 2011 *

Also Published As

Publication number Publication date
PL3637621T3 (en) 2023-07-24
US9942567B2 (en) 2018-04-10
AU2016200562A1 (en) 2016-02-18
CA2870531A1 (en) 2013-10-24
CA2870531C (en) 2018-03-20
SG10201803573PA (en) 2018-06-28
MX2014012448A (en) 2015-01-14
ES2776899T3 (en) 2020-08-03
CN104365099B (en) 2017-10-27
RU2684587C1 (en) 2019-04-09
US10306230B2 (en) 2019-05-28
KR101945446B1 (en) 2019-02-07
CN107465930A (en) 2017-12-12
AU2017201237A1 (en) 2017-03-16
RU2589382C2 (en) 2016-07-10
HUE062795T2 (en) 2023-12-28
AU2013250108B2 (en) 2015-11-05
EP3637621B1 (en) 2023-06-07
PH12014502262A1 (en) 2014-12-10
MY185457A (en) 2021-05-19
US20180192054A1 (en) 2018-07-05
US9277233B1 (en) 2016-03-01
KR101547499B1 (en) 2015-08-26
PH12017500774A1 (en) 2018-10-15
TW201742459A (en) 2017-12-01
BR122018010479B1 (en) 2020-05-12
SG10201703653YA (en) 2017-06-29
RS57654B1 (en) 2018-11-30
MX339998B (en) 2016-06-21
AU2018203874A1 (en) 2018-06-21
SG10201510259UA (en) 2016-01-28
US20150189325A1 (en) 2015-07-02
HRP20181467T1 (en) 2018-11-02
PL2840789T3 (en) 2018-11-30
SG10201710903VA (en) 2018-02-27
LT2840789T (en) 2018-10-10
ZA201600979B (en) 2016-11-30
EP2840789B1 (en) 2018-09-12
US9277242B2 (en) 2016-03-01
CN105049868A (en) 2015-11-11
US20170105026A1 (en) 2017-04-13
EP3416292A1 (en) 2018-12-19
EP2840789A1 (en) 2015-02-25
PL3926832T3 (en) 2024-04-15
ES2949651T3 (en) 2023-10-02
SG10201707023VA (en) 2017-10-30
KR102028689B1 (en) 2019-10-04
KR20190012247A (en) 2019-02-08
EP3926832B1 (en) 2023-11-22
PH12017500771A1 (en) 2018-10-15
ZA201600980B (en) 2016-11-30
PH12017500773B1 (en) 2018-10-15
KR20150037782A (en) 2015-04-08
CA2993866A1 (en) 2013-10-24
DK3416292T3 (en) 2020-03-09
US20150189324A1 (en) 2015-07-02
TW201639362A (en) 2016-11-01
MY167815A (en) 2018-09-26
KR20150037779A (en) 2015-04-08
AU2013250108A1 (en) 2014-11-20
PL3416292T3 (en) 2020-06-01
CN105049868B (en) 2019-05-10
EP3926832A1 (en) 2021-12-22
SI2840789T1 (en) 2018-10-30
EP3416292B1 (en) 2020-02-26
CN104869424A (en) 2015-08-26
BR112014025686A8 (en) 2018-06-26
PH12017500771B1 (en) 2018-10-15
TWI601412B (en) 2017-10-01
US20150030081A1 (en) 2015-01-29
PH12017500772A1 (en) 2018-10-15
CN105049869A (en) 2015-11-11
KR101477621B1 (en) 2015-01-02
CN104869423B (en) 2017-09-08
PH12017500774B1 (en) 2018-10-15
KR20130118246A (en) 2013-10-29
KR20150037781A (en) 2015-04-08
ZA201600978B (en) 2016-11-30
US9386323B2 (en) 2016-07-05
PH12014502262B1 (en) 2014-12-10
CA2993866C (en) 2020-07-14
MX364043B (en) 2019-04-11
CN105049869B (en) 2019-05-10
EP2840789A4 (en) 2016-03-09
KR20140110809A (en) 2014-09-17
AU2017201237B2 (en) 2018-03-08
BR112014025686A2 (en) 2017-09-19
KR101573337B1 (en) 2015-12-01
US20150189326A1 (en) 2015-07-02
CN104869423A (en) 2015-08-26
RU2660639C1 (en) 2018-07-06
BR112014025686B1 (en) 2019-08-20
CN104365099A (en) 2015-02-18
TWI549487B (en) 2016-09-11
KR101573338B1 (en) 2015-12-01
AU2016200562B2 (en) 2016-12-08
PT2840789T (en) 2018-10-24
EP4203326A1 (en) 2023-06-28
CN104869424B (en) 2017-10-27
EP3926832C0 (en) 2023-11-22
RU2632409C1 (en) 2017-10-04
MX351023B (en) 2017-09-28
EP3637621C0 (en) 2023-06-07
PH12017500772B1 (en) 2018-10-15
ES2687522T3 (en) 2018-10-25
KR101573336B1 (en) 2015-12-01
PH12017500773A1 (en) 2018-10-15
ZA201600977B (en) 2016-11-30
EP3637621A1 (en) 2020-04-15
WO2013157794A1 (en) 2013-10-24
TWI640191B (en) 2018-11-01
CN105007496A (en) 2015-10-28
HUE041710T2 (en) 2019-05-28
RU2014145826A (en) 2016-06-10
AU2018203874B2 (en) 2019-06-13
US9426492B2 (en) 2016-08-23
US9554155B2 (en) 2017-01-24
MY185273A (en) 2021-04-30
CN105007496B (en) 2019-05-10
SG11201406560UA (en) 2014-11-27
KR20150037780A (en) 2015-04-08
DK2840789T3 (en) 2018-10-08
TW201404162A (en) 2014-01-16
HUE049811T2 (en) 2020-10-28
CY1120738T1 (en) 2019-12-11

Similar Documents

Publication Publication Date Title
CN107465930B (en) Method and apparatus for encoding video, and computer-readable storage medium
CN107911699B (en) Video encoding method and apparatus, and non-transitory computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant