EP3066833A1 - Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data

Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data

Info

Publication number
EP3066833A1
EP3066833A1 (application EP14809545.8A)
Authority
EP
European Patent Office
Prior art keywords
value
offset
weighted prediction
syntax element
variable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP14809545.8A
Other languages
German (de)
French (fr)
Inventor
Yue Yu
Limin Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commscope UK Ltd
Original Assignee
Arris Enterprises LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arris Enterprises LLC filed Critical Arris Enterprises LLC
Priority claimed from PCT/US2014/064073 (WO2015069729A1)
Publication of EP3066833A1
Legal status: Pending

Classifications

    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION)
    • H04N 19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/184 Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N 19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N 19/70 Coding characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the High Efficiency Video Coding ("HEVC") coding standard (also called H.265) is a coding standard promulgated jointly by the ISO/IEC MPEG and ITU-T VCEG standardization organizations.
  • HEVC supports resolutions higher than "high definition", which means pixels may be represented by a larger number of bits than in high definition pictures. For example, 4K resolutions may include images that are 4,000 pixels wide compared to high definition images that are 1920 pixels wide.
  • Temporal motion prediction is an effective method to increase the coding efficiency and provides high compression.
  • HEVC uses a translational model for temporal motion prediction.
  • a prediction signal for a given current unit in a current picture is generated from a corresponding reference unit in a reference picture.
  • the coordinates of the reference unit are given by a motion vector that describes the translational motion along horizontal (x) and vertical (y) directions that would be added/subtracted to/from the coordinates of the current unit.
  • a decoder needs the motion vector to decode the compressed video.
  • HEVC may use single prediction using one reference picture or bi-prediction using two reference pictures.
  • the pixels in reference units of the reference pictures are used as the prediction.
  • pixels of one of the reference units in bi-prediction may not yield the most accurate prediction.
  • HEVC may use weighted prediction when performing the motion estimation process. Weighted prediction may weight the pixels in one or both of the reference units used as the prediction differently.
  • Embodiments of the present invention remove a condition check in the semantics for checking a high-precision data flag. This simplifies the semantics used in the encoding and decoding process. Even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, when BitDepth does not indicate high-precision data, such as 8 bits, the range for the weighted prediction syntax element is still the same as the fixed range.
  • the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.
  • a method of the present invention includes the following steps: (1) setting a first value for a variable associated with a bit depth based on a number of bits associated with pixels of a reference picture; (2) determining a weighting factor for performing weighted prediction for a current unit of a current picture; (3) using the weighting factor to weight pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit; and (4) setting a second value for a weighted prediction syntax element associated with the weighting factor whether an extended precision flag is enabled or not enabled to indicate whether the bit depth is above a threshold, wherein the second value for the weighted prediction syntax element is within a range set by the first value and is an offset for a prediction value used in performing motion compensation.
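The four steps above can be sketched in plain Python. This is an illustrative sketch only, not the patent's method: the function and variable names are hypothetical, and the sample formula loosely follows an HEVC-style weighted prediction with rounding.

```python
def clip3(lo, hi, v):
    """Clip v to the inclusive range [lo, hi]."""
    return max(lo, min(hi, v))

def weighted_pred_sample(ref_sample, weight, offset, log2_denom, bit_depth):
    """One sample of single-list weighted prediction (illustrative sketch)."""
    # Step 4: the offset must lie in the BitDepth-derived range,
    # regardless of any extended-precision flag.
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    assert lo <= offset <= hi
    # Steps 2-3: weight the reference sample; the division by the common
    # denominator becomes a right shift with rounding.
    rounding = (1 << (log2_denom - 1)) if log2_denom > 0 else 0
    pred = ((ref_sample * weight + rounding) >> log2_denom) + offset
    # Clamp the prediction to the valid sample range for this bit depth.
    return clip3(0, (1 << bit_depth) - 1, pred)

print(weighted_pred_sample(100, 64, 10, 6, 8))  # 110
```

With weight 64 and log2_denom 6 the normalized weight is 1.0, so the prediction is simply the reference sample plus the offset.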
  • FIG. 1 depicts a simplified system for encoding and decoding video according to one embodiment.
  • FIG. 2 depicts an example of the motion estimation and compensation process according to one embodiment.
  • FIG. 3 depicts a table of syntax elements that are used for weighted prediction according to one embodiment.
  • FIG. 4 depicts a simplified flowchart of a method for using the variable BitDepth in the encoding process according to one embodiment.
  • FIG. 5 depicts a simplified flowchart for decoding an encoded bitstream using the variable BitDepth according to one embodiment.
  • Particular embodiments provide a variable, BitDepth, that may be set at a value based on a number of bits used to represent pixels in pictures of a video.
  • the variable may be used in syntax elements in HEVC, such as the HEVC range extension, but other coding standards may be used.
  • By using the variable, different resolutions for the video may be accommodated during the encoding and decoding process.
  • the pixels in the pictures may be represented by 8 bits, 10 bits, 12 bits, or another number of bits depending on the resolution.
  • Using the BitDepth variable in the syntax provides flexibility in the motion estimation and motion compensation process. For example, syntax elements used in the weighted prediction process may take into account different numbers of bits used to represent the pictures.
  • a condition check is performed to determine if high-precision data is being processed; that is, if the number of bits used to represent pixels in pictures of the video is above a number or threshold, such as 8 bits.
  • This condition check may be used to determine whether syntax elements should be within a range set by the BitDepth variable.
  • if the high-precision data flag is not set, then the value of the weighted prediction syntax element is set by a fixed range that represents the standard number of bits used to represent pixels in the pictures, such as 8 bits.
  • Particular embodiments may remove the condition check. This simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable.
  • the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.
  • FIG. 1 depicts a simplified system 100 for encoding and decoding video according to one embodiment.
  • System 100 includes an encoder 102 and a decoder 104.
  • Encoder 102 and decoder 104 may use a video coding standard to encode and decode video, such as HEVC.
  • encoder 102 and decoder 104 may use syntax elements from the HEVC range extension. Also, other elements of encoder 102 and decoder 104 may be appreciated.
  • Encoder 102 and decoder 104 perform temporal prediction through motion estimation and motion compensation.
  • Motion estimation is a process of determining a motion vector (MV) for a current unit of video.
  • the motion estimation process searches for a best match prediction for a current unit of video (e.g., a prediction unit (PU)) over reference pictures.
  • the best match prediction is described by the motion vector and associated reference picture ID.
  • a current unit in a B picture may have up to two motion vectors that point to a previous reference unit in a previous picture and a subsequent reference unit in a subsequent reference picture in the picture order.
  • Motion compensation is then performed by subtracting a reference unit pointed to by the motion vector from the current unit of video to determine a residual error that can be encoded.
  • the two motion vectors point to two reference units, which can be combined to form a combined bidirectional reference unit.
  • the combined bi-directional reference unit can be subtracted from the current unit to determine the residual error.
  • encoder 102 and decoder 104 include motion estimation and compensation blocks 104-1 and 104-2, respectively.
  • B and P pictures may exploit the temporal redundancy by using weighted prediction.
  • Weighted prediction applies weighting factors to one or both of the reference pictures.
  • the reference unit may be weighted before being used as a prediction for the current unit.
  • the reference units are weighted before combining the reference pictures into the combined bi-directional reference unit.
  • an average of the reference units from which the B picture is predicted may be used.
  • a weighted average (or other weighted calculation) of the reference units may be used to predict the current unit.
  • the pixels of reference units may be weighted by weighting factors before combining.
  • the following may be discussed with respect to B pictures, but the discussion will also apply to P pictures except that the combined bi-directional reference unit is replaced by one weighted reference unit.
  • the use of weighted prediction may be useful when certain conditions happen in the video, such as when one scene fades into another or where there is a gradual variation in luminance, such as when fading to or from black or in cross-fades.
  • the picture associated with the fading may not be as accurate a prediction as a picture that does not include the fading.
  • Weighting the picture that does not include the fading higher may create a better prediction for the current unit. That is, the pixels of the reference unit that does not include the fading effect may have a higher correlation to the current unit.
  • the weighting may reduce the residual error and also reduce the bitrate.
  • the weighting factors may be provided for the luma and/or chroma components of reference units. Also, the weighting factors may be different based on the reference list used. That is, the bi-prediction may select a reference picture from a list0 and a list1. Reference pictures in list0 may be weighted with a weighting factor w0 and reference pictures in list1 may be weighted with a weighting factor w1. Also, only one of the reference pictures may be weighted.
  • the weighting factors may be information that adjusts the pixel values differently in the reference units. In one embodiment, the weighting factors may be percentages.
  • the weighted prediction process may use syntax elements that define parameters that encoder 102 and decoder 104 use to perform the weighted prediction. By setting the values of these parameters, a weighted prediction manager 106-1 in encoder 102 and a weighted prediction manager 106-2 in decoder 104 can perform the weighted prediction process. In a simple example, the weighting factors may weight pixels in different reference pictures differently. Then, weighted prediction managers 106-1 and 106-2 take the weighted average of the reference units to use as a combined bi-directional reference unit. Motion estimation and compensation blocks 104-1 and 104-2 then use this combined bi-directional reference unit in the motion compensation process for the current unit.
  • the syntax elements will be described in more detail below after describing the weighted prediction process in more detail.
  • FIG. 2 depicts an example of the motion estimation and compensation process according to one embodiment.
  • the video includes a number of pictures 200-1 to 200-5.
  • a current picture is shown at 200-3 and includes a current unit of video 202-1.
  • Current unit 202-1 may be bi-predicted using reference units from reference pictures in other pictures 200, such as a previous picture 200-1 in the picture order and a subsequent picture 200-5 in the picture order.
  • Picture 200-1 includes a reference unit 202-2 and picture 200-5 includes a reference unit 202-3, both of which can be used to predict current unit 202-1.
  • the pixels of reference units 202-2 and 202-3 may be weighted differently.
  • the pixels of reference units may be weighted by the weighting factors.
  • the weighting factors may be percentages, such as the pixels of reference unit 202-2 may be weighted with a weighting factor w0 of 0.25 and the pixels of reference unit 202-3 may be weighted with a weighting factor w1 of 0.75. These weighting factors may then be used to calculate the pixel values used as the combined bi-directional reference unit for current unit 202-1.
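The weighted combination in this example can be sketched as a per-pixel weighted average. The function name and the pixel values are made up for illustration; only the weights 0.25 and 0.75 come from the example above.

```python
def combine_bidirectional(ref0, ref1, w0=0.25, w1=0.75):
    """Combined bi-directional reference unit as a per-pixel weighted
    average; w0 weights the first reference unit, w1 the second."""
    return [w0 * a + w1 * b for a, b in zip(ref0, ref1)]

# Two tiny "reference units" of two pixels each (illustrative values):
print(combine_bidirectional([120, 80], [200, 160]))  # [180.0, 140.0]
```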
  • motion estimation and compensation block 104-1 can determine motion vectors that represent the location of reference units 202-2 and 202-3 with respect to current unit 202-1. Then, motion estimation and compensation block 104-1 calculates a difference between the combined bi-directional reference unit and the current unit 202-1 as a residual error.
  • encoder 102 encodes the residual error, the motion vectors, and also the weighting factors used to determine the combined bidirectional reference unit.
  • Encoder 102 includes the encoded residual error, the motion vectors, and the weighting factors in an encoded bitstream that is sent to decoder 104.
  • the term weighting factors is used for discussion purposes to represent information that is encoded that allows the weighted prediction process to be performed and the weighting factors to be determined. The syntax elements used to determine which information for the weighting factors that are encoded in the encoded bitstream are described in more detail below.
  • Decoder 104 receives the encoded bitstream and can reconstruct the pictures of the video. Decoder 104 may reconstruct reference units 202-2 and 202-3 from the encoded bitstream prior to decoding current unit 202-1. Also, decoder 104 decodes the residual error for current unit 202-1, the motion vectors for current unit 202-1, and the weighting factors. Then, in decoder 104, motion estimation and compensation block 104-2 may then use the residual error to reconstruct the current unit 202-1. For example, motion estimation and compensation block 104-2 may use the motion vectors to locate reconstructed reference units 202-2 and 202-3.
  • weighted prediction manager 106-2 applies the weighting factors to the reconstructed units 202-2 and 202-3 to form the reconstructed combined bi-directional reference unit.
  • the reconstructed residual error is then added to the reconstructed combined bi-directional predicted unit to form a reconstructed current unit.
  • syntax elements for weighted prediction may use a variable BitDepth in the weighted prediction process.
  • the variable BitDepth may represent a number of bits used to represent a pixel in a picture. The following will describe syntax elements that use the bit depth variable.
  • FIG. 3 depicts a table 300 of syntax elements that are used for weighted prediction according to one embodiment.
  • the combined bidirectional reference unit may be determined by applying the weighting factors in an averaging operation.
  • the weighting factor wo is multiplied by the previous reference unit 202-2 (e.g., the luma and chroma components of the pixels) and the weighting factor wi is multiplied by the subsequent reference unit 202-3 (e.g., the luma and chroma components of the pixels).
  • the two values are added together and divided by the added weighting factors (e.g., normalization).
  • the above example may not be exactly how encoder 102 and decoder 104 perform the calculation to determine the combined bi-directional reference unit.
  • Various methods may be used, but performing a division operation may be an expensive computational operation.
  • One method of deriving the weighting factors is to use bit shifting.
  • the weighting factors can be derived with a common denominator, and the division is represented by a right shift of the combined weighted prediction by a number of bits equal to the base 2 logarithm of the denominator. That is, the combined weighted prediction may be right-shifted by the base 2 logarithm of the denominator rather than divided.
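The shift-based normalization can be sketched as follows. This is an illustrative sketch loosely following an HEVC-style bi-prediction formula; the function name is hypothetical and the per-list offsets are omitted for brevity.

```python
def weighted_bipred_sample(p0, p1, w0, w1, log2_denom):
    """Weighted bi-prediction where dividing by the common denominator
    (and by 2 for the two lists) is replaced by a right shift."""
    rounding = 1 << log2_denom  # rounding term for the (log2_denom + 1)-bit shift
    return (p0 * w0 + p1 * w1 + rounding) >> (log2_denom + 1)

# With equal default weights (w0 == w1 == 1 << log2_denom) this is a
# rounded average of the two reference samples:
print(weighted_bipred_sample(100, 104, 64, 64, 6))  # 102
```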
  • the following syntax elements encode parameter values that can be used to perform the weighted prediction process.
  • the syntax elements in Table 300 of luma_offset_l0[i] at 302 and delta_chroma_offset_l0[i][j] at 304 may use a BitDepth variable for the number of bits that represent a pixel in the pictures. These syntax elements are for list0.
  • Corresponding variables for list1 are also provided for luma_offset_l1[i] at 306 and delta_chroma_offset_l1[i][j] at 308.
  • the list0 elements may be described in detail, but any discussion with respect to list0 elements also applies to list1 elements.
  • using the BitDepth variable with these syntax elements allows these syntax elements to be used for video that may represent pixels with a different number of bits, such as 8, 10, 12, 14, or 16 bits.
  • the syntax elements can thus handle high precision data, which may be when pixels are represented by more than 8 bits.
  • using the BitDepth variable allows a condition check to be avoided in encoder 102 and decoder 104.
  • Other syntax elements in Table 300 will also be described below to provide reference with respect to the above syntax elements.
  • the syntax element luma_log2_weight_denom is the base 2 logarithm of the denominator for all luma weighting factors.
  • the value of luma_log2_weight_denom shall be in the range of 0 to 7, inclusive, or 0 to BitDepthY, inclusive, where BitDepthY is the bit depth of the luma component of the reference picture. By including the range to be dependent on the variable BitDepthY, the value can represent different numbers of bits for the pixels.
  • the syntax element luma_log2_weight_denom is used to calculate the luma weighting factor (e.g., the delta luma weighting factor of the syntax elements delta_luma_weight_l0[ i ] and delta_luma_weight_l1[ i ] described below).
  • the syntax element delta_chroma_log2_weight_denom is the difference of the base 2 logarithm of the denominator for all chroma weighting factors.
  • the variable ChromaLog2WeightDenom, which is the log2 denominator of the chroma weighting factor, is derived to be equal to luma_log2_weight_denom + delta_chroma_log2_weight_denom, and the value shall be in the range of 0 to 7, inclusive, or 0 to BitDepthC, inclusive, where the variable BitDepthC is the bit depth of the chroma component of the reference picture.
  • the chroma weighting factor value can represent different numbers of bits for the pixels.
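The ChromaLog2WeightDenom derivation and its range constraint can be sketched as below; the function name is hypothetical, and the range check reflects the 0 to BitDepthC constraint described above.

```python
def chroma_log2_weight_denom(luma_log2_weight_denom,
                             delta_chroma_log2_weight_denom,
                             bit_depth_c):
    """ChromaLog2WeightDenom = luma_log2_weight_denom +
    delta_chroma_log2_weight_denom, constrained to 0..BitDepthC."""
    value = luma_log2_weight_denom + delta_chroma_log2_weight_denom
    if not (0 <= value <= bit_depth_c):
        raise ValueError("ChromaLog2WeightDenom out of range")
    return value

print(chroma_log2_weight_denom(6, -1, 8))  # 5
```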
  • the syntax element luma_weight_l0_flag[ i ], when equal to 1, specifies that weighting factors for the luma component of list 0 prediction using RefPicList0[ i ] are present.
  • the syntax element luma_weight_l0_flag[ i ], when equal to 0, specifies that these weighting factors are not present.
  • the syntax element chroma_weight_l0_flag[ i ], when equal to 1, specifies that weighting factors for the chroma prediction values of list 0 prediction using RefPicList0[ i ] are present.
  • the syntax element chroma_weight_l0_flag[ i ], when equal to 0, specifies that the chroma weighting factors are not present.
  • when chroma_weight_l0_flag[ i ] is not present, it is inferred to be equal to 0.
  • the syntax element delta_luma_weight_l0[ i ] is the difference of the weighting factor applied to the luma prediction value for list 0 prediction using RefPicList0[ i ].
  • the variable LumaWeightL0[ i ] may be the luma weighting factor for list0 and is derived to be equal to ( 1 << luma_log2_weight_denom ) + delta_luma_weight_l0[ i ]. That is, the luma weighting factor is the value 1 left-shifted by luma_log2_weight_denom, plus the value of delta_luma_weight_l0[ i ].
  • the value of delta_luma_weight_l0[ i ] shall be in the range of -128 to 127, inclusive, or -(1 << (BitDepthY - 1)) to (1 << (BitDepthY - 1)) - 1, inclusive, where BitDepthY is the bit depth for the luma component of the reference picture. This sets the range of the weighting factor to be based on the variable BitDepthY.
  • when luma_weight_l0_flag[ i ] is equal to 0, LumaWeightL0[ i ] is inferred to be equal to 2^luma_log2_weight_denom.
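The LumaWeightL0 derivation can be sketched directly from the formula above; the function name is illustrative, not from the standard text.

```python
def luma_weight_l0(luma_log2_weight_denom, delta_luma_weight_l0):
    """LumaWeightL0[i] = (1 << luma_log2_weight_denom) + delta_luma_weight_l0[i]."""
    return (1 << luma_log2_weight_denom) + delta_luma_weight_l0

# A delta of 0 gives the default weight 2**luma_log2_weight_denom, i.e. a
# normalized weight of 1.0, matching the inferred value described above.
print(luma_weight_l0(6, 0))   # 64
print(luma_weight_l0(6, -8))  # 56
```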
  • the syntax element delta_chroma_weight_l0[ i ][ j ] is the difference of the weighting factor applied to the chroma prediction values for list 0 prediction using RefPicList0[ i ] with j equal to 0 for Cb and j equal to 1 for Cr.
  • the variable ChromaWeightL0[ i ][ j ] may be the chroma weighting factor for list0 and is derived to be equal to ( 1 << ChromaLog2WeightDenom ) + delta_chroma_weight_l0[ i ][ j ].
  • that is, the chroma weighting factor is the value 1 left-shifted by ChromaLog2WeightDenom, plus the value of delta_chroma_weight_l0[ i ][ j ].
  • the value of delta_chroma_weight_l0[ i ][ j ] shall be in the range of -128 to 127, inclusive, or -(1 << (BitDepthC - 1)) to (1 << (BitDepthC - 1)) - 1, inclusive, where BitDepthC is the bit depth for the chroma component of the reference picture. This sets the range of the weighting factor to be based on the variable BitDepthC.
  • when chroma_weight_l0_flag[ i ] is equal to 0, ChromaWeightL0[ i ][ j ] is inferred to be equal to 2^ChromaLog2WeightDenom.
  • the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] may use the BitDepth variable whether an extended precision flag is enabled or not enabled to indicate whether the bit depth is above a threshold, which alleviates the need to perform a condition check to determine if high precision data is being used.
  • a control flag extended_precision_processing_flag may be used to indicate when the bit depth of pixels of a picture is above a threshold or number, such as 8 bits.
  • the semantics used a condition check of the flag extended_precision_processing_flag to determine when the range should be fixed or be based on the BitDepth variable.
  • the syntax element luma_offset_l0[i] is the additive offset applied to the luma prediction value for list 0 prediction using RefPicList0[ i ]. This offset value will be added to a base prediction value to determine the real weighted prediction value.
  • encoder 102 and decoder 104 need to perform a condition check to determine if the flag extended_precision_processing_flag is equal to 0. When equal to 0, this means that the number of bits used to represent pixels in the pictures of the video is equal to a base value, such as 8 bits (e.g., high precision data is not being processed).
  • the value of the syntax element luma_offset_l0[i] is within the range of -128 to 127, inclusive. This is the range when the number of bits to represent the pixels is 8 bits. If the flag extended_precision_processing_flag is equal to 1, then the value of the syntax element luma_offset_l0[i] is within a range set by the BitDepth variable as described above. For example, the following semantics are conventionally used:
  • if extended_precision_processing_flag is equal to 0, the value of luma_offset_l0[ i ] shall be in the range of -128 to 127, inclusive. If extended_precision_processing_flag is equal to 1, the value of luma_offset_l0[ i ] shall be in the range of -(1 << (BitDepthY - 1)) to (1 << (BitDepthY - 1)) - 1, inclusive. When luma_weight_l0_flag[ i ] is equal to 0, luma_offset_l0[ i ] is inferred to be equal to 0.
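The conventional conditional semantics above, and the simplified variant that drops the flag check, can be contrasted in a short sketch; the helper names are hypothetical.

```python
def luma_offset_range_conventional(extended_precision_processing_flag, bit_depth_y):
    """Conventional semantics: a condition check selects either a fixed
    range or a BitDepthY-derived range."""
    if extended_precision_processing_flag == 0:
        return (-128, 127)
    return (-(1 << (bit_depth_y - 1)), (1 << (bit_depth_y - 1)) - 1)

def luma_offset_range_simplified(bit_depth_y):
    """Simplified semantics: always derived from BitDepthY, no flag check."""
    return (-(1 << (bit_depth_y - 1)), (1 << (bit_depth_y - 1)) - 1)

# At 8 bits the two derivations coincide, which is why the check can be
# dropped without changing 8-bit behavior:
print(luma_offset_range_conventional(0, 8) == luma_offset_range_simplified(8))  # True
```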
  • the syntax element delta_chroma_offset_l0[i][j] is the difference of the additive offset applied to the chroma prediction values for list 0 prediction using RefPicList0[ i ] with j equal to 0 for Cb and j equal to 1 for Cr. This is the delta value for the chroma prediction values used for the motion compensation process. This element is used to calculate the additive offset value for chroma prediction.
  • encoder 102 and decoder 104 need to perform a condition check to determine if the flag extended_precision_processing_flag is equal to 0.
  • when the flag is equal to 0, this means that the number of bits used to represent pixels in the pictures of the video is equal to a base value, such as 8 bits.
  • encoder 102 or decoder 104 determines the value for a variable ChromaOffsetL0[i][j] to be equal to: ChromaOffsetL0[i][j] = Clip3( -128, 127, ( delta_chroma_offset_l0[i][j] - ( ( 128 * ChromaWeightL0[i][j] ) >> ChromaLog2WeightDenom ) + 128 ) ).
  • the Clip3 operation includes the values -128 and 127, which represent the range of -128 to 127, inclusive. This means the value of delta_chroma_offset_l0[ i ][ j ] shall be in the range of -512 to 511, inclusive, which is when the bit depth is 8 bits.
  • when chroma_weight_l0_flag[ i ] is equal to 0, ChromaOffsetL0[ i ][ j ] is inferred to be equal to 0.
  • if the flag extended_precision_processing_flag is equal to 1, the variable is instead determined as: ChromaOffsetL0[i][j] = Clip3( -(1 << (BitDepthC - 1)), (1 << (BitDepthC - 1)) - 1, ( delta_chroma_offset_l0[i][j] - ( ( (1 << (BitDepthC - 1)) * ChromaWeightL0[i][j] ) >> ChromaLog2WeightDenom ) + (1 << (BitDepthC - 1)) ) ).
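The fixed 8-bit branch of this conventional derivation can be sketched as follows; the function name is hypothetical, and Clip3 follows the usual HEVC-style clipping notation.

```python
def clip3(lo, hi, v):
    """Clip v to the inclusive range [lo, hi]."""
    return max(lo, min(hi, v))

def chroma_offset_l0_fixed(delta_chroma_offset_l0, chroma_weight_l0,
                           chroma_log2_weight_denom):
    """8-bit branch: Clip3(-128, 127, delta - ((128 * weight) >> denom) + 128)."""
    return clip3(-128, 127,
                 delta_chroma_offset_l0
                 - ((128 * chroma_weight_l0) >> chroma_log2_weight_denom)
                 + 128)

# With the default weight (1 << denom) the correction term cancels out,
# leaving the transmitted delta itself:
print(chroma_offset_l0_fixed(10, 64, 6))  # 10
```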
  • Particular embodiments may remove the condition check in the semantics such that the values of the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] are always dependent upon the BitDepth variable whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold. This removes the condition check and simplifies the semantics. However, even if the flag extended_precision_processing_flag is equal to 0, indicating that high-precision data is not being processed, the values for the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] are still dependent upon the BitDepth variable. The calculation to determine the value of the syntax element luma_offset_l0[i] even when high precision data is not being processed may be less computationally intensive than performing a condition check.
  • the syntax element luma_offset_l0[ i ] is the additive offset applied to the luma prediction value for list0 prediction using RefPicList0[ i ].
  • the value of luma_offset_l0[ i ] shall be in the range of -(1 << (BitDepthY - 1)) to (1 << (BitDepthY - 1)) - 1, inclusive.
  • when luma_weight_l0_flag[ i ] is equal to 0, luma_offset_l0[ i ] is inferred to be equal to 0.
  • for the syntax element luma_offset_l0[i], the fixed range of -128 to 127 is removed and replaced with -(1 << (BitDepthY - 1)) to (1 << (BitDepthY - 1)) - 1, inclusive. This makes the value of the syntax element luma_offset_l0[i] based on the BitDepth value whether the flag extended_precision_processing_flag is 0 or 1 (e.g., enabled or disabled).
  • the syntax element delta_chroma_offset_l0[ i ][ j ] is the difference of the additive offset applied to the chroma prediction values for list0 prediction using RefPicList0[ i ] with j equal to 0 for Cb and j equal to 1 for Cr.
  • the variable ChromaOffsetL0[i][j] may always be based on the BitDepth variable whether the flag extended_precision_processing_flag is 0 or 1.
  • ChromaOffsetL0[i][j] = Clip3( -( 1 << ( BitDepthC - 1 ) ), ( 1 << ( BitDepthC - 1 ) ) - 1, ( delta_chroma_offset_l0[i][j] - ( ( ( 1 << ( BitDepthC - 1 ) ) * ChromaWeightL0[i][j] ) >> ChromaLog2WeightDenom ) + ( 1 << ( BitDepthC - 1 ) ) ) ).
  • delta_chroma_offset_l0[ i ][ j ] shall be in the range of -( 1 << ( BitDepthC + 1 ) ) to ( 1 << ( BitDepthC + 1 ) ) - 1, inclusive.
  • ChromaOffsetL0[ i ][ j ] is inferred to be equal to 0.
  • the syntax elements luma_weight_l1_flag[ i ], chroma_weight_l1_flag[ i ], delta_luma_weight_l1[ i ], luma_offset_l1[ i ], delta_chroma_weight_l1[ i ][ j ], and delta_chroma_offset_l1[ i ][ j ] have the same semantics as luma_weight_l0_flag[ i ], chroma_weight_l0_flag[ i ], delta_luma_weight_l0[ i ], luma_offset_l0[ i ], delta_chroma_weight_l0[ i ][ j ], and delta_chroma_offset_l0[ i ][ j ], respectively, with l0, L0, list 0, and List0 replaced by l1, L1, list 1, and List1, respectively.
  • encoder 102 may change the number of bits that is used for weighted prediction based on the number of bits that represent pixels in the pictures of the video.
  • encoder 102 knows the BitDepth for the pictures of a video and can set the variable BitDepth to the value of the number of bits that represent the pixels of pictures. The semantics are also simplified by removing a condition check.
  • FIG. 4 depicts a simplified flowchart 400 of a method for using the variable BitDepth in the encoding process according to one embodiment.
  • encoder 102 determines a value for the variable BitDepth for the luma and chroma components. The value may be based on the precision of the video, such as a number of bits that represent the pixels in pictures of a video. Encoder 102 may determine the value based on analyzing characteristics of the video. Also, the number of bits that represent the pixels may be input or included in metadata associated with the video.
  • motion estimation and compensation block 104-1 determines when weighted prediction is enabled. For example, the weighted prediction flags luma_weight_l0_flag[ i ] and chroma_weight_l0_flag[ i ] may be set to indicate that weighted prediction is enabled for the luma and chroma components.
  • weighted prediction manager 106-1 determines motion vectors for reference units 202-2 and 202-3 and weighting factors for the luma and chroma components for reference units 202-2 and 202-3 (e.g., list 0 and list 1). The weighting factors may be assigned to the pictures (e.g., list 0 and list 1) that are being used as reference pictures for a current picture. [0048] At 408, weighted prediction manager 106-1 then calculates the combined bi-directional reference unit to use in motion compensation based on the weighting factors and pixels of reference units 202-2 and 202-3. As described above, the weighting factors weight pixels differently in reference units 202-2 and 202-3. At 410, motion estimation and compensation block 104-1 determines the residual error using current unit 202-1 and the combined bi-directional reference unit.
  • encoder 102 calculates the values for the applicable syntax elements used in the weighted prediction process. For example, the values for the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] are calculated using the variable BitDepth. This includes performing the calculation without a condition check for the flag extended_precision_processing_flag.
  • encoder 102 encodes the parameters for the syntax elements for the weighted prediction process in an encoded bitstream along with the motion vectors and residual error for current unit 202-1. For example, encoder 102 encodes the values calculated for the syntax elements shown in Table 300 in FIG. 3. At 416, encoder 102 sends the encoded bitstream to decoder 104.
  • FIG. 5 depicts a simplified flowchart 500 for decoding an encoded bitstream using the variable BitDepth according to one embodiment.
  • decoder 104 receives the encoded bitstream.
  • decoder 104 determines a current unit 202-1 in a current picture to decode.
  • decoder 104 decodes the encoded motion vectors from the encoded bitstream for current unit 202-1 and uses these motion vectors to locate the reconstructed reference units 202-2 and 202-3. Also, at 508, decoder 104 determines the residual error for current unit 202-1 from the encoded bitstream.
  • weighted prediction manager 106-2 may determine the values for the syntax elements in Table 300 for weighted prediction. This may allow weighted prediction manager 106-2 to determine the weighting factors. For example, the values for the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] are calculated using the variable BitDepth. This includes performing the calculation without a condition check for the flag extended_precision_processing_flag.
  • weighted prediction manager 106-2 may then apply the weighting factors to reference units 202-2 and 202-3 to determine the combined bi-directional reference unit.
  • motion estimation and compensation block 104-2 can then add the residual error for current unit 202-1 to the combined weighted prediction unit to form a reconstructed current unit.
  • particular embodiments use a variable BitDepth for syntax elements in the weighted prediction process.
  • the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above, whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.
  • Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine.
  • the computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments.
  • the computer system may include one or more computing devices.
  • the instructions, when executed by one or more computer processors, may be configured to perform that which is described in particular embodiments.
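
The always-BitDepth-dependent offset derivation described in the items above can be sketched as follows. This is an illustrative Python sketch of the semantics, not the normative HEVC decoder code; the function and parameter names are chosen for readability and are not taken from the specification.

```python
def clip3(lo, hi, x):
    # Clip3 operation as used in the HEVC specification: clamp x to [lo, hi].
    return max(lo, min(hi, x))

def chroma_offset_l0(delta_chroma_offset, chroma_weight,
                     chroma_log2_weight_denom, bit_depth_c):
    # Derive ChromaOffsetL0[i][j] from the coded delta. The range and the
    # rounding term always use BitDepthC, with no condition check on
    # extended_precision_processing_flag.
    half = 1 << (bit_depth_c - 1)  # 128 when BitDepthC is 8
    return clip3(-half, half - 1,
                 delta_chroma_offset
                 - ((half * chroma_weight) >> chroma_log2_weight_denom)
                 + half)
```

With an 8-bit chroma component and the default weight (chroma_weight equal to 1 << chroma_log2_weight_denom), the derivation reduces to the coded delta clipped to -128..127.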

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Particular embodiments may remove a condition check in the semantics for checking a high-precision data flag. This simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, even if the BitDepth is not considered high-precision data, such as 8 bits, the range for the weighted prediction syntax element is still the same as the fixed value. For example, the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above, whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.

Description

SIMPLIFIED PROCESSING OF WEIGHTED PREDICTION SYNTAX AND SEMANTICS USING A BIT DEPTH VARIABLE FOR
HIGH PRECISION DATA
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] The present disclosure claims priority to U.S. Provisional App. No. 61/900,337, entitled "Simplification of Weighted Prediction Syntax and Semantics for HEVC Range Extension", filed November 5, 2013, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
[0002] The High Efficiency Video Coding ("HEVC") coding standard (also called H.265) is a coding standard promulgated by the ISO/IEC MPEG standardization organizations. HEVC supports resolutions higher than "high definition," which means pixels may be represented by a larger number of bits than the high definition pictures. For example, 4K resolutions may include images that are 4,000 pixels wide compared to high definition images that are 1920 pixels wide.
[0003] Temporal motion prediction is an effective method to increase the coding efficiency and provides high compression. HEVC uses a translational model for temporal motion prediction. According to the translational model, a prediction signal for a given current unit in a current picture is generated from a corresponding reference unit in a reference picture. The coordinates of the reference unit are given by a motion vector that describes the translational motion along horizontal (x) and vertical (y) directions that would be added/subtracted to/from the coordinates of the current unit. A decoder needs the motion vector to decode the compressed video.
[0004] HEVC may use uni-prediction using one reference picture or bi-prediction using two reference pictures. The pixels in reference units of the reference pictures are used as the prediction. In some conditions, such as when fading occurs, pixels of one of the reference units in bi-prediction may not yield the most accurate prediction. To compensate for this, HEVC may use weighted prediction when performing the motion estimation process. Weighted prediction may weight the pixels in one or both of the reference units used as the prediction differently.
SUMMARY
[0005] Embodiments of the present invention remove a condition check in the semantics for checking a high-precision data flag. This simplifies the semantics used in the encoding and decoding process. Even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, even if the BitDepth is not considered high-precision data, such as 8 bits, the range for the weighted prediction syntax element is still the same as the fixed value. For example, the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above, whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.
[0006] More particularly, a method of the present invention includes the following steps: (1) setting a first value for a variable associated with a bit depth based on a number of bits associated with pixels of a reference picture; (2) determining a weighting factor for performing weighted prediction for a current unit of a current picture; (3) using the weighting factor to weight pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit; and (4) setting a second value for a weighted prediction syntax element associated with the weighting factor whether an extended precision flag is enabled and not enabled to indicate whether the bit depth is above a threshold, wherein the second value for the weighted prediction syntax element is within a range set by the first value and is an offset for a prediction value used in performing motion compensation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 depicts a simplified system for encoding and decoding video according to one embodiment.
[0008] FIG. 2 depicts an example of the motion estimation and compensation process according to one embodiment.
[0009] FIG. 3 depicts a table of syntax elements that are used for weighted prediction according to one embodiment.
[0010] FIG. 4 depicts a simplified flowchart of a method for using the variable BitDepth in the encoding process according to one embodiment.
[0011] FIG. 5 depicts a simplified flowchart for decoding an encoded bitstream using the variable BitDepth according to one embodiment.
DETAILED DESCRIPTION
[0012] Described herein are techniques for performing weighted prediction. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of particular embodiments. Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
[0013] Particular embodiments provide a variable, BitDepth, that may be set at a value based on a number of bits used to represent pixels in pictures of a video. The variable may be used in syntax elements in HEVC, such as the HEVC range extension, but other coding standards may be used. By using the variable, different resolutions for the video may be accommodated during the encoding and decoding process. For example, the pixels in the pictures may be represented by 8 bits, 10 bits, 12 bits, or another number of bits depending on the resolution. Using the BitDepth variable in the syntax provides flexibility in the motion estimation and motion compensation process. For example, syntax elements used in the weighted prediction process may take into account different numbers of bits used to represent the pictures.
[0014] Conventionally, a condition check is performed to determine if high-precision data is being processed; that is, if the number of bits used to represent pixels in pictures of the video is above a number or threshold, such as 8 bits. This condition check may be used to determine whether syntax elements should be within a range set by the BitDepth variable. When the high-precision data flag is not set, then the value of the weighted prediction syntax element is set by a fixed range that corresponds to the standard number of bits used to represent pixels in the pictures, such as 8 bits. Particular embodiments may remove the condition check. This simplifies the semantics used in the encoding and decoding process. In this case, even if the high-precision data flag is not set, the value of the weighted prediction syntax element is set by the BitDepth variable. However, even if the BitDepth is not considered high-precision data, such as 8 bits, the range for the weighted prediction syntax element is still the same as the fixed value.
For example, the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above, whether the flag extended_precision_processing_flag is enabled or not enabled to indicate whether the bit depth is above a threshold.
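
This simplification can be sketched as follows. The illustrative Python sketch below (assuming 8 bits as the base bit depth) contrasts the conventional condition-checked range with the simplified, always BitDepth-based range; at 8 bits the two agree, which is why the check can be removed.

```python
def luma_offset_range_conventional(extended_precision_processing_flag, bit_depth_y):
    # Conventional semantics: a condition check selects either the fixed
    # 8-bit range or a BitDepthY-dependent range.
    if extended_precision_processing_flag == 0:
        return (-128, 127)
    return (-(1 << (bit_depth_y - 1)), (1 << (bit_depth_y - 1)) - 1)

def luma_offset_range_simplified(bit_depth_y):
    # Simplified semantics: always BitDepthY-dependent. When BitDepthY is 8,
    # this still yields -128..127, matching the fixed range.
    return (-(1 << (bit_depth_y - 1)), (1 << (bit_depth_y - 1)) - 1)
```

The simplified form trades a branch for a pair of shifts, which may be less computationally intensive than the condition check.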
[0015] FIG. 1 depicts a simplified system 100 for encoding and decoding video according to one embodiment. System 100 includes an encoder 102 and a decoder 104. Encoder 102 and decoder 104 may use a video coding standard to encode and decode video, such as HEVC. Specifically, encoder 102 and decoder 104 may use syntax elements from the HEVC range extension. Also, other elements of encoder 102 and decoder 104 may be appreciated.
[0016] Encoder 102 and decoder 104 perform temporal prediction through motion estimation and motion compensation. Motion estimation is a process of determining a motion vector (MV) for a current unit of video. For example, the motion estimation process searches for a best match prediction for a current unit of video (e.g., a prediction unit (PU)) over reference pictures. The best match prediction is described by the motion vector and associated reference picture ID. Also, a current unit in a B picture may have up to two motion vectors that point to a previous reference unit in a previous picture and a subsequent reference unit in a subsequent reference picture in the picture order. Motion compensation is then performed by subtracting a reference unit pointed to by the motion vector from the current unit of video to determine a residual error that can be encoded. In the case of bi-prediction, the two motion vectors point to two reference units, which can be combined to form a combined bi-directional reference unit. The combined bi-directional reference unit can be subtracted from the current unit to determine the residual error.
[0017] To perform motion estimation and compensation, encoder 102 and decoder 104 include motion estimation and compensation blocks 104-1 and 104-2, respectively. In motion compensation, B and P pictures may exploit temporal redundancy by using weighted prediction. Weighted prediction applies weighting factors to one or both of the reference pictures. In P pictures, the reference unit may be weighted before being used as a prediction for the current unit. In B pictures, the reference units are weighted before combining the reference pictures into the combined bi-directional reference unit. Conventionally, an average of the reference units from which the B picture is predicted may be used. However, using weighted prediction, a weighted average (or other weighted calculation) of the reference units may be used to predict the current unit. That is, the pixels of reference units may be weighted by weighting factors before combining. The following may be discussed with respect to B pictures, but the discussion also applies to P pictures, except that the combined bi-directional reference unit is replaced by one weighted reference unit. The use of weighted prediction may be useful when certain conditions occur in the video, such as when one scene fades into another or where there is a gradual variation in luminance, such as when fading to or from black or in cross-fades. For example, when fading occurs, the picture associated with the fading may not be as accurate a prediction as a picture that does not include the fading. Weighting the picture that does not include the fading more heavily may create a better prediction for the current unit. That is, the pixels of the reference unit that does not include the fading effect may have a higher correlation to the current unit. The weighting may reduce the residual error and also reduce the bitrate.
[0018] The weighting factors may be provided for the luma and/or chroma components of reference units. Also, the weighting factors may be different based on the reference list used. That is, the bi-prediction may select a reference picture from a list 0 and a list 1. Reference pictures in list 0 may be weighted with a weighting factor w0 and reference pictures in list 1 may be weighted with a weighting factor w1. Also, only one of the reference pictures may be weighted. The weighting factors may be information that adjusts the pixel values differently in the reference units. In one embodiment, the weighting factors may be percentages.
[0019] The weighted prediction process may use syntax elements that define parameters that encoder 102 and decoder 104 use to perform the weighted prediction. By setting the values of these parameters, a weighted prediction manager 106-1 in encoder 102 and a weighted prediction manager 106-2 in decoder 104 can perform the weighted prediction process. In a simple example, the weighting factors may weight pixels in different reference pictures differently. Then, weighted prediction managers 106-1 and 106-2 take the weighted average of the reference units to use as a combined bi-directional reference unit. Motion estimation and compensation blocks 104-1 and 104-2 then use this combined bi-directional reference unit in the motion compensation process for the current unit. The syntax elements will be described in more detail below after describing the weighted prediction process in more detail.
[0020] FIG. 2 depicts an example of the motion estimation and compensation process according to one embodiment. The video includes a number of pictures 200-1 through 200-5. A current picture is shown at 200-3 and includes a current unit of video 202-1. Current unit 202-1 may be bi-predicted using reference units from reference pictures in other pictures 200, such as a previous picture 200-1 in the picture order and a subsequent picture 200-5 in the picture order. Picture 200-1 includes a reference unit 202-2 and picture 200-5 includes a reference unit 202-3, both of which can be used to predict current unit 202-1.
[0021] In weighted prediction, the pixels of reference units 202-2 and 202-3 may be weighted differently. For example, the pixels of the reference units may be weighted by the weighting factors. In a simple example, the weighting factors may be percentages, such that the pixels of reference unit 202-2 may be weighted with a weighting factor w0 of 0.25 and the pixels of reference unit 202-3 may be weighted with a weighting factor w1 of 0.75. These weighting factors may then be used to calculate the pixel values used as the combined bi-directional reference unit for current unit 202-1.
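Using the illustrative percentage weights above, the combined bi-directional pixel value can be sketched as follows. This is a hypothetical helper for illustration only; actual codecs use the integer weighted-prediction formulas described later.

```python
def combined_bidir_pixel(p0, p1, w0=0.25, w1=0.75):
    # Combine co-located pixels from reference unit 202-2 (weight w0) and
    # reference unit 202-3 (weight w1), normalized by the sum of the weights.
    return (w0 * p0 + w1 * p1) / (w0 + w1)
```

For example, pixels of 100 in reference unit 202-2 and 200 in reference unit 202-3 combine to 175, closer to the more heavily weighted reference.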
[0022] Once the reference units are determined, motion estimation and compensation block 104-1 can determine motion vectors that represent the location of reference units 202-2 and 202-3 with respect to current unit 202-1. Then, motion estimation and compensation block 104-1 calculates the difference between the combined bi-directional reference unit and current unit 202-1 as a residual error.
[0023] After determining the residual error, encoder 102 encodes the residual error, the motion vectors, and also the weighting factors used to determine the combined bi-directional reference unit. Encoder 102 includes the encoded residual error, the motion vectors, and the weighting factors in an encoded bitstream that is sent to decoder 104. The term weighting factors is used for discussion purposes to represent information that is encoded that allows the weighted prediction process to be performed and the weighting factors to be determined. The syntax elements that determine which information for the weighting factors is encoded in the encoded bitstream are described in more detail below.
[0024] Decoder 104 receives the encoded bitstream and can reconstruct the pictures of the video. Decoder 104 may reconstruct reference units 202-2 and 202-3 from the encoded bitstream prior to decoding current unit 202-1. Also, decoder 104 decodes the residual error for current unit 202-1, the motion vectors for current unit 202-1, and the weighting factors. Then, in decoder 104, motion estimation and compensation block 104-2 may use the residual error to reconstruct current unit 202-1. For example, motion estimation and compensation block 104-2 may use the motion vectors to locate reconstructed reference units 202-2 and 202-3. Then, weighted prediction manager 106-2 applies the weighting factors to the reconstructed units 202-2 and 202-3 to form the reconstructed combined bi-directional reference unit. The reconstructed residual error is then added to the reconstructed combined bi-directional predicted unit to form a reconstructed current unit.
[0025] As mentioned above, syntax elements for weighted prediction may use a variable BitDepth in the weighted prediction process. The variable BitDepth may represent a number of bits used to represent a pixel in a picture. The following will describe syntax elements that use the bit depth variable.
[0026] FIG. 3 depicts a table 300 of syntax elements that are used for weighted prediction according to one embodiment. There are syntax elements for the luma and chroma components of the video, and also for list 0 and list 1. The combined bi-directional reference unit may be determined by applying the weighting factors in an averaging operation. In an example to illustrate the calculation, the weighting factor w0 is multiplied by the previous reference unit 202-2 (e.g., the luma and chroma components of the pixels) and the weighting factor w1 is multiplied by the subsequent reference unit 202-3 (e.g., the luma and chroma components of the pixels). Then, the two values are added together and divided by the sum of the weighting factors (e.g., normalization). The above example may not be exactly how encoder 102 and decoder 104 perform the calculation to determine the combined bi-directional reference unit. Various methods may be used, but performing a division operation may be an expensive computational operation. One method of deriving the weighting factors is to use bit shifting. The weighting factors can be derived with a common denominator, and the division is represented by a right shift of the combined weighted prediction by a number of bits based on a base 2 logarithm of the denominator. That is, the combined weighted prediction may be bit shifted by the base 2 logarithm of the denominator.
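The shift-based normalization described above can be sketched as follows. This is an illustrative integer sketch, not the normative HEVC weighted-prediction formula; the rounding offset and parameter names are assumptions, and per-list additive offsets are omitted.

```python
def weighted_bipred(p0, p1, w0, w1, log2_denom):
    # w0 and w1 are integer weights sharing the common denominator
    # 2**log2_denom. Dividing the combined prediction by the total
    # denominator (2**(log2_denom + 1) for two lists) becomes a right
    # shift; the added term rounds to nearest instead of truncating.
    total = w0 * p0 + w1 * p1
    return (total + (1 << log2_denom)) >> (log2_denom + 1)
```

With default weights (w0 = w1 = 1 << log2_denom), the result reduces to the rounded average of the two reference pixels, with no division performed.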
[0027] The following syntax elements encode parameter values that can be used to perform the weighted prediction process. The syntax elements in Table 300 of luma_offset_l0[i] at 302 and delta_chroma_offset_l0[i][j] at 304 may use a BitDepth variable for the number of bits that represent a pixel in the pictures. These syntax elements are for list 0. Corresponding variables for list 1 are also provided as luma_offset_l1[i] at 306 and delta_chroma_offset_l1[i][j] at 308. The list 0 elements may be described in detail, but any discussion with respect to list 0 elements also applies to list 1 elements. Using the BitDepth variable with these syntax elements allows them to be used for video that may represent pixels with a different number of bits, such as 8, 10, 12, 14, 16, etc., bits. The syntax elements can thus handle high-precision data, which may be when pixels are represented by more than 8 bits. Further, as will be described below, using the BitDepth variable allows a condition check to be avoided in encoder 102 and decoder 104. Other syntax elements in Table 300 will also be described below to provide reference with respect to the above syntax elements.
[0028] The syntax element luma_log2_weight_denom is the base 2 logarithm of the denominator for all luma weighting factors. The value of luma_log2_weight_denom shall be in the range of 0 to 7, inclusive, or 0 to BitDepthY, inclusive, where BitDepthY is the bit depth of the luma component of the reference picture. By making the range dependent on the variable BitDepthY, the value can represent different numbers of bits for the pixels. The syntax element luma_log2_weight_denom is used to calculate the luma weighting factor (e.g., the delta luma weighting factors of the syntax elements delta_luma_weight_l0[ i ] and delta_luma_weight_l1[ i ] described below).
[0029] The syntax element delta_chroma_log2_weight_denom is the difference of the base 2 logarithm of the denominator for all chroma weighting factors. The variable ChromaLog2WeightDenom, which is the log 2 denominator of the chroma weighting factor, is derived to be equal to luma_log2_weight_denom + delta_chroma_log2_weight_denom, and the value shall be in the range of 0 to 7, inclusive, or 0 to BitDepthC, inclusive, where the variable BitDepthC is the bit depth of the chroma component of the reference picture. By making the range dependent on the variable BitDepthC, the chroma weighting factor value can represent different numbers of bits for the pixels.
[0030] The syntax element luma_weight_l0_flag[ i ], when equal to 1, specifies that weighting factors for the luma component of list 0 prediction using RefPicList0[ i ] are present. The syntax element luma_weight_l0_flag[ i ], when equal to 0, specifies that these weighting factors are not present.
[0031] The syntax element chroma_weight_l0_flag[ i ], when equal to 1, specifies that weighting factors for the chroma prediction values of list 0 prediction using RefPicList0[ i ] are present. The syntax element chroma_weight_l0_flag[ i ], when equal to 0, specifies that the chroma weighting factors are not present. When chroma_weight_l0_flag[ i ] is not present, it is inferred to be equal to 0.
[0032] The syntax element delta_luma_weight_l0[ i ] is the difference of the weighting factor applied to the luma prediction value for list 0 prediction using RefPicList0[ i ]. The variable LumaWeightL0[ i ] may be the luma weighting factor for list 0 and is derived to be equal to ( 1 << luma_log2_weight_denom ) + delta_luma_weight_l0[ i ]. That is, the luma weighting factor is 1 left-shifted by luma_log2_weight_denom, plus the value of delta_luma_weight_l0[ i ]. When luma_weight_l0_flag[ i ] is equal to 1, the value of delta_luma_weight_l0[ i ] shall be in the range of -128 to 127, inclusive, or -( 1 << ( BitDepthY - 1 ) ) to ( 1 << ( BitDepthY - 1 ) ) - 1, inclusive, where BitDepthY is the bit depth for the luma component of the reference picture. This sets the range of the weighting factor to be based on the variable BitDepthY. When luma_weight_l0_flag[ i ] is equal to 0, LumaWeightL0[ i ] is inferred to be equal to 2^luma_log2_weight_denom.
[0033] The syntax element delta_chroma_weight_l0[ i ][ j ] is the difference of the weighting factor applied to the chroma prediction values for list 0 prediction using RefPicList0[ i ], with j equal to 0 for Cb and j equal to 1 for Cr. The variable ChromaWeightL0[ i ][ j ] may be the chroma weighting factor for list 0 and is derived to be equal to ( 1 << ChromaLog2WeightDenom ) + delta_chroma_weight_l0[ i ][ j ]. That is, the chroma weighting factor is 1 left-shifted by ChromaLog2WeightDenom, plus the value of delta_chroma_weight_l0[ i ][ j ]. When chroma_weight_l0_flag[ i ] is equal to 1, the value of delta_chroma_weight_l0[ i ][ j ] shall be in the range of -128 to 127, inclusive, or -( 1 << ( BitDepthC - 1 ) ) to ( 1 << ( BitDepthC - 1 ) ) - 1, inclusive, where BitDepthC is the bit depth for the chroma component of the reference picture. This sets the range of the weighting factor to be based on the variable BitDepthC. When chroma_weight_l0_flag[ i ] is equal to 0, ChromaWeightL0[ i ][ j ] is inferred to be equal to 2^ChromaLog2WeightDenom.
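The weight derivations in the two paragraphs above add a coded delta to the default weight 1 << denom. A minimal Python sketch follows; the naming is illustrative, and the same logic applies to the luma and chroma weights.

```python
def derive_weight(delta_weight, log2_weight_denom, weight_flag):
    # When the per-reference weight flag is 0, the weight is inferred as the
    # default 2**log2_weight_denom; otherwise the coded delta is added to it.
    if weight_flag == 0:
        return 1 << log2_weight_denom
    return (1 << log2_weight_denom) + delta_weight
```

For example, with luma_log2_weight_denom equal to 6 and delta_luma_weight_l0[ i ] equal to -3, LumaWeightL0[ i ] evaluates to 61 out of a default of 64.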
[0034] As mentioned above, the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] may use the BitDepth variable whether an extended precision flag is enabled or not enabled to indicate whether the bit depth is above a threshold, which alleviates the need to perform a condition check to determine if high-precision data is being used. In the syntax, a control flag extended_precision_processing_flag may be used to indicate when the bit depth of pixels of a picture is above a threshold or number, such as 8 bits. Conventionally, for the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j], the semantics used a condition check of the flag extended_precision_processing_flag to determine whether the range should be fixed or be based on the BitDepth variable.
[0035] The syntax element luma_offset_l0[i] is the additive offset applied to the luma prediction value for list 0 prediction using RefPicList0[ i ]. This offset value will be added to a base prediction value to determine the real weighted prediction value. Conventionally, for the syntax element luma_offset_l0[i], encoder 102 and decoder 104 need to perform a condition check to determine if the flag extended_precision_processing_flag is equal to 0. When equal to 0, this means that the number of bits in the pictures of the video is equal to a base value, such as 8 bits (e.g., high-precision data is not being processed). In this case, the value of the syntax element luma_offset_l0[i] is within the range of -128 to 127, inclusive. This is the range when the number of bits to represent the pixels is 8 bits. If the flag extended_precision_processing_flag is equal to 1, then the value of the syntax element luma_offset_l0[i] is within a range set by the BitDepth variable as described above. For example, the following semantics are conventionally used:
If extended_precision_processing_flag is equal to 0, the value of luma_offset_l0[ i ] shall be in the range of -128 to 127, inclusive. If extended_precision_processing_flag is equal to 1, the value of luma_offset_l0[ i ] shall be in the range of -( 1 << ( BitDepthY - 1 ) ) to ( 1 << ( BitDepthY - 1 ) ) - 1, inclusive. When luma_weight_l0_flag[ i ] is equal to 0, luma_offset_l0[ i ] is inferred to be equal to 0.
[0036] The syntax element delta_chroma_offset_l0[i][j] is the difference of the additive offset applied to the chroma prediction values for list 0 prediction using RefPicList0[ i ], with j equal to 0 for Cb and j equal to 1 for Cr. This delta value is used to calculate the additive offset for the chroma prediction values used in the motion compensation process. Conventionally, for the syntax element delta_chroma_offset_l0[i][j], encoder 102 and decoder 104 need to perform a condition check to determine if the flag extended_precision_processing_flag is equal to 0. When equal to 0, the number of bits in the pictures of the video is equal to a base value, such as 8 bits. When equal to 0, encoder 102 or decoder 104 then determines the value for a variable ChromaOffsetL0[i][j] to be equal to:
ChromaOffsetL0[ i ][ j ] = Clip3( -128, 127, ( delta_chroma_offset_l0[ i ][ j ] - ( ( 128 * ChromaWeightL0[ i ][ j ] ) » ChromaLog2WeightDenom ) + 128 ) )
As can be seen, the derivation of the variable ChromaOffsetL0[i][j] includes the values "-128, 127", which represent the clipping range of -128 to 127, inclusive. This means the value of delta_chroma_offset_l0[ i ][ j ] shall be in the range of -512 to 511, inclusive, which applies when the bit depth is 8 bits. When chroma_weight_l0_flag[ i ] is equal to 0, ChromaOffsetL0[ i ][ j ] is inferred to be equal to 0.
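The conventional 8-bit derivation can be sketched in Python as follows (an illustration, not standard text; Clip3( x, y, z ) is the spec's clamp of z to the inclusive range [x, y], and the other names here are our own):

```python
def clip3(lo, hi, v):
    # Clip3( x, y, z ): clamp z to the inclusive range [x, y]
    return lo if v < lo else hi if v > hi else v

def chroma_offset_l0_fixed(delta, weight, log2_denom):
    """Conventional derivation of ChromaOffsetL0[ i ][ j ] for 8-bit data,
    using the fixed clip range -128..127 and the mid-point constant 128."""
    return clip3(-128, 127,
                 delta - ((128 * weight) >> log2_denom) + 128)

# With a unity weight (weight == 1 << log2_denom) the offset reduces to
# the signalled delta, clipped to the 8-bit range:
assert chroma_offset_l0_fixed(10, 1 << 6, 6) == 10
assert chroma_offset_l0_fixed(511, 1 << 6, 6) == 127
```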
[0037] If the flag extended_precision_processing_flag is equal to 1, then the value of the variable ChromaOffsetL0[i][j] is equal to:

ChromaOffsetL0[i][j] = Clip3( -( 1 « ( BitDepthC - 1 ) ), ( 1 « ( BitDepthC - 1 ) ) - 1, ( delta_chroma_offset_l0[i][j] - ( ( ( 1 « ( BitDepthC - 1 ) ) * ChromaWeightL0[i][j] ) » ChromaLog2WeightDenom ) + ( 1 « ( BitDepthC - 1 ) ) ) )
As can be seen in this equation, when the flag extended_precision_processing_flag is equal to 1, the values "-128, 127" are replaced with a range based on the BitDepth variable, namely -( 1 « ( BitDepthC - 1 ) ) to ( 1 « ( BitDepthC - 1 ) ) - 1. In this case, the variable ChromaOffsetL0[i][j] is further based on the BitDepth variable for the chroma component of the video. This means that the value of the syntax element delta_chroma_offset_l0[i][j] shall be in the range of -( 1 « ( BitDepthC + 1 ) ) to ( 1 « ( BitDepthC + 1 ) ) - 1, inclusive. In contrast, when the flag extended_precision_processing_flag is equal to 0, the value of delta_chroma_offset_l0[i][j] is in a fixed range of -512 to 511, inclusive. Thus, a fixed range is used when the flag extended_precision_processing_flag is equal to 0.
[0038] Particular embodiments may remove the condition check from the semantics such that the values of the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] are always dependent upon the BitDepth variable, whether or not the flag extended_precision_processing_flag is enabled to indicate whether the bit depth is above a threshold. This removes the condition check and simplifies the semantics. Even when the flag extended_precision_processing_flag is equal to 0, indicating that high-precision data is not being processed, the values for the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] are still dependent upon the BitDepth variable. The calculation to determine the value of the syntax element luma_offset_l0[i] even when high precision data is not being processed may be less computationally intensive than performing a condition check.
[0039] The following illustrates the syntax elements luma_offset_l0[i] and delta_chroma_offset_l0[i][j] with definitions that allow for the removal of the condition check.
[0040] The syntax element luma_offset_l0[ i ] is the additive offset applied to the luma prediction value for list 0 prediction using RefPicList0[ i ]. The value of luma_offset_l0[ i ] shall be in the range of -( 1 « ( BitDepthY - 1 ) ) to ( 1 « ( BitDepthY - 1 ) ) - 1, inclusive. When luma_weight_l0_flag[ i ] is equal to 0, luma_offset_l0[ i ] is inferred to be equal to 0. For the syntax element luma_offset_l0[i], the fixed range of -128 to 127 is removed and replaced with -( 1 « ( BitDepthY - 1 ) ) to ( 1 « ( BitDepthY - 1 ) ) - 1, inclusive. This makes the value of the syntax element luma_offset_l0[i] based on the BitDepth variable whether the flag extended_precision_processing_flag is 0 or 1 (e.g., enabled or disabled).
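The simplified, branch-free luma range derivation can be sketched as follows (illustrative Python; the function name is our own):

```python
def luma_offset_l0_range_simplified(bit_depth_y):
    """Simplified semantics: the range of luma_offset_l0[ i ] is always
    derived from BitDepthY, with no condition check of
    extended_precision_processing_flag."""
    half = 1 << (bit_depth_y - 1)
    return (-half, half - 1)

# At 8 bits this reproduces the old fixed range, so behaviour for
# non-high-precision data is unchanged:
assert luma_offset_l0_range_simplified(8) == (-128, 127)
# At higher bit depths the range scales automatically:
assert luma_offset_l0_range_simplified(12) == (-2048, 2047)
```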
[0041] The syntax element delta_chroma_offset_l0[ i ][ j ] is the difference of the additive offset applied to the chroma prediction values for list 0 prediction using RefPicList0[ i ], with j equal to 0 for Cb and j equal to 1 for Cr. For the syntax element delta_chroma_offset_l0[i][j], the variable ChromaOffsetL0[i][j] may always be based on the BitDepth variable whether the flag extended_precision_processing_flag is 0 or 1. That is, the range "-128 to 127" is replaced by -( 1 « ( BitDepthC - 1 ) ) to ( 1 « ( BitDepthC - 1 ) ) - 1. For example, the following is used to calculate the variable ChromaOffsetL0[i][j]:
ChromaOffsetL0[i][j] = Clip3( -( 1 « ( BitDepthC - 1 ) ), ( 1 « ( BitDepthC - 1 ) ) - 1, ( delta_chroma_offset_l0[i][j] - ( ( ( 1 « ( BitDepthC - 1 ) ) * ChromaWeightL0[i][j] ) » ChromaLog2WeightDenom ) + ( 1 « ( BitDepthC - 1 ) ) ) )
The value of delta_chroma_offset_l0[ i ][ j ] shall be in the range of -( 1 « ( BitDepthC + 1 ) ) to ( 1 « ( BitDepthC + 1 ) ) - 1, inclusive. When chroma_weight_l0_flag[ i ] is equal to 0, ChromaOffsetL0[ i ][ j ] is inferred to be equal to 0.
[0042] This means that the value of the syntax element delta_chroma_offset_l0[i][j] shall always be within the range of -( 1 « ( BitDepthC + 1 ) ) to ( 1 « ( BitDepthC + 1 ) ) - 1, inclusive, whether the flag extended_precision_processing_flag is 0 or 1. Also, the calculation to determine the value of the syntax element delta_chroma_offset_l0[i][j] even when high precision data is not being processed may be less computationally intensive than performing a condition check.
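The simplified chroma offset derivation can be sketched in Python as follows (illustrative only; Clip3 is the spec's clamp operation, other names are our own). The key point is that the clip bounds and the mid-point constant are all ( 1 « ( BitDepthC - 1 ) ), so no branch on the flag is needed:

```python
def clip3(lo, hi, v):
    # Clip3( x, y, z ): clamp z to the inclusive range [x, y]
    return lo if v < lo else hi if v > hi else v

def chroma_offset_l0(delta, weight, log2_denom, bit_depth_c):
    """Simplified derivation of ChromaOffsetL0[ i ][ j ]: the clip range
    and mid-point constant are always derived from BitDepthC, regardless
    of extended_precision_processing_flag."""
    half = 1 << (bit_depth_c - 1)
    return clip3(-half, half - 1,
                 delta - ((half * weight) >> log2_denom) + half)

# For 8-bit chroma, half == 128, so this matches the conventional
# fixed-range formula exactly; for 10-bit data it scales automatically:
assert chroma_offset_l0(10, 1 << 6, 6, 8) == 10
assert chroma_offset_l0(10, 1 << 6, 6, 10) == 10
```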
[0043] The syntax elements luma_weight_l1_flag[ i ], chroma_weight_l1_flag[ i ], delta_luma_weight_l1[ i ], luma_offset_l1[ i ], delta_chroma_weight_l1[ i ][ j ], and delta_chroma_offset_l1[ i ][ j ] have the same semantics as luma_weight_l0_flag[ i ], chroma_weight_l0_flag[ i ], delta_luma_weight_l0[ i ], luma_offset_l0[ i ], delta_chroma_weight_l0[ i ][ j ], and delta_chroma_offset_l0[ i ][ j ], respectively, with l0, L0, list 0, and List0 replaced by l1, L1, list 1, and List1, respectively. The variable sumWeightL0Flags is derived to be equal to the sum of luma_weight_l0_flag[ i ] + 2 * chroma_weight_l0_flag[ i ], for i = 0..num_ref_idx_l0_active_minus1. When slice_type is equal to B, the variable sumWeightL1Flags is derived to be equal to the sum of luma_weight_l1_flag[ i ] + 2 * chroma_weight_l1_flag[ i ], for i = 0..num_ref_idx_l1_active_minus1. It is a requirement of bitstream conformance that, when slice_type is equal to P, sumWeightL0Flags shall be less than or equal to 24, and when slice_type is equal to B, the sum of sumWeightL0Flags and sumWeightL1Flags shall be less than or equal to 24.
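The sumWeightL0Flags derivation and its conformance bound can be sketched as follows (illustrative Python; names follow the variables above):

```python
def sum_weight_l0_flags(luma_flags, chroma_flags):
    """sumWeightL0Flags: sum of luma_weight_l0_flag[ i ] plus
    2 * chroma_weight_l0_flag[ i ] over the active list-0 indices."""
    return sum(l + 2 * c for l, c in zip(luma_flags, chroma_flags))

# A P slice with four active references, some weighted:
luma_flags = [1, 1, 0, 1]
chroma_flags = [1, 0, 0, 1]
total = sum_weight_l0_flags(luma_flags, chroma_flags)
assert total == 7
# Bitstream conformance for a P slice: sumWeightL0Flags <= 24.
assert total <= 24
```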
[0044] Accordingly, by using the variable BitDepth for the luma and chroma components, encoder 102 may change the number of bits that is used for weighted prediction based on the number of bits that represent pixels in the pictures of the video. In one embodiment, encoder 102 knows the BitDepth for the pictures of a video and can set the variable BitDepth to the value of the number of bits that represent the pixels of pictures. The semantics are also simplified by removing a condition check.
[0045] FIG. 4 depicts a simplified flowchart 400 of a method for using the variable BitDepth in the encoding process according to one embodiment. At 402, encoder 102 determines a value for the variable BitDepth for the luma and chroma components. The value may be based on the precision of the video, such as a number of bits that represent the pixels in pictures of a video. Encoder 102 may determine the value based on analyzing characteristics of the video. Also, the number of bits that represent the pixels may be input or included in metadata associated with the video.
[0046] At 404, motion estimation and compensation block 104-1 determines when weighted prediction is enabled. For example, weighted prediction flags luma_weight_l0_flag[ i ] and chroma_weight_l0_flag[ i ] may be set to indicate that weighted prediction is enabled for the luma and chroma components.
[0047] At 406, weighted prediction manager 106-1 determines motion vectors for reference units 202-2 and 202-3 and weighting factors for the luma and chroma components for reference units 202-2 and 202-3 (e.g., list 0 and list 1). The weighting factors may be assigned to the pictures (e.g., list 0 and list 1) that are being used as reference pictures for a current picture.

[0048] At 408, weighted prediction manager 106-1 then calculates the combined bi-directional reference unit to use in motion compensation based on the weighting factors and pixels of reference units 202-2 and 202-3. As described above, the weighting factors weight pixels differently in reference units 202-2 and 202-3. At 410, motion estimation and compensation block 104-1 determines the residual error using current unit 202-1 and the combined bi-directional reference unit.
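Combining the two weighted references at step 408 can be sketched per pixel as follows. This is a deliberately simplified illustration, not the exact HEVC weighted bi-prediction formula (which includes interpolation precision and rounding details not shown); all names here are our own.

```python
def weighted_bipred_pixel(p0, p1, w0, w1, o0, o1, log2_denom, bit_depth):
    """Simplified bi-directional weighted prediction: scale each reference
    pixel by its list weight, renormalise by the weight denominator, add
    the averaged offsets, and clip to the sample range."""
    val = ((p0 * w0 + p1 * w1) >> (log2_denom + 1)) + ((o0 + o1 + 1) >> 1)
    return max(0, min((1 << bit_depth) - 1, val))

# Equal unity weights (1 << log2_denom) and zero offsets reduce to a
# plain average of the two reference pixels:
assert weighted_bipred_pixel(100, 120, 1 << 6, 1 << 6, 0, 0, 6, 8) == 110
```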
[0049] At 412, encoder 102 calculates the values for the applicable syntax elements used in the weighted prediction process. For example, the values for the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] are calculated using the variable BitDepth. This includes performing the calculation without a condition check for the flag extended_precision_processing_flag.
[0050] At 414, encoder 102 encodes the parameters for the syntax elements for the weighted prediction process in an encoded bitstream along with the motion vectors and residual error for current unit 202-1. For example, encoder 102 encodes the values calculated for the syntax elements shown in Table 300 in FIG. 3. At 416, encoder 102 sends the encoded bitstream to decoder 104.
[0051] FIG. 5 depicts a simplified flowchart 500 for decoding an encoded bitstream using the variable BitDepth according to one embodiment. At 502, decoder 104 receives the encoded bitstream. At 504, decoder 104 then determines a current unit 202-1 in a current picture to decode. At 506, decoder 104 decodes the encoded motion vectors from the encoded bitstream for current unit 202-1 and uses these motion vectors to locate the reconstructed reference units 202-2 and 202-3. Also, at 508, decoder 104 determines the residual error for current unit 202-1 from the encoded bitstream.
[0052] At 510, weighted prediction manager 106-2 may determine the values for the syntax elements in Table 300 for weighted prediction. This may allow weighted prediction manager 106-2 to determine the weighting factors. For example, the values for the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] are calculated using the variable BitDepth. This includes performing the calculation without a condition check for the flag extended_precision_processing_flag.
[0053] At 512, weighted prediction manager 106-2 may then apply the weighting factors to reference units 202-2 and 202-3 to determine the combined bi-directional reference unit. At 514, motion estimation and compensation block 104-2 can then add the residual error for current unit 202-1 to the combined weighted prediction unit to form a reconstructed current unit.
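The reconstruction at step 514 can be sketched as follows (an illustrative Python fragment; names are our own):

```python
def reconstruct(pred, residual, bit_depth):
    """Decoder-side reconstruction: add the decoded residual error to the
    combined weighted prediction samples and clip to the sample range
    implied by the bit depth."""
    hi = (1 << bit_depth) - 1
    return [max(0, min(hi, p + r)) for p, r in zip(pred, residual)]

# Samples saturate at the top of the 8-bit range:
assert reconstruct([110, 250], [5, 10], 8) == [115, 255]
```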
[0054] Accordingly, particular embodiments use a variable BitDepth for syntax elements in the weighted prediction process. For example, the syntax elements luma_offset_l0[ i ], luma_offset_l1[ i ], delta_chroma_offset_l0[i][j], and delta_chroma_offset_l1[i][j] use the variable BitDepth as described above whether or not the flag extended_precision_processing_flag is enabled to indicate whether the bit depth is above a threshold.
[0055] Particular embodiments may be implemented in a non-transitory computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or machine. The computer-readable storage medium contains instructions for controlling a computer system to perform a method described by particular embodiments. The computer system may include one or more computing devices. The instructions, when executed by one or more computer processors, may be configured to perform that which is described in particular embodiments.
[0056] As used in the description herein and throughout the claims that follow, "a", "an", and "the" includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0057] The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope hereof as defined by the claims.

Claims

What is claimed is:
1. A method comprising:
setting a first value for a variable associated with a bit depth based on a number of bits associated with pixels of a reference picture;
determining a weighting factor for performing weighted prediction for a current unit of a current picture;
using the weighting factor to weight pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit; and setting a second value for a weighted prediction syntax element associated with the weighting factor whether an extended precision flag is enabled and not enabled to indicate whether the bit depth is above a threshold, wherein the second value for the weighted prediction syntax element is within a range set by the first value and is an offset for a prediction value used in performing motion compensation.
2. The method of claim 1, wherein a condition check of a status for the extended precision flag is not performed in conjunction with setting the second value.
3. The method of claim 1, wherein the weighted prediction syntax element is for applying the offset for a first component of the prediction value.
4. The method of claim 3, wherein the first component comprises a luma component of the video.
5. The method of claim 3, wherein the range comprises -( 1 « ( BitDepthY - 1 ) ) to ( 1 « ( BitDepthY - 1 ) ) - 1, wherein BitDepthY is the variable for a luma component.
6. The method of claim 1, wherein the weighted prediction syntax element is for applying a difference of the offset for a first component of the prediction value.
7. The method of claim 6, wherein the first component comprises a chroma component of the video.
8. The method of claim 6, wherein the range comprises -( 1 « ( BitDepthC - 1 ) ) to ( 1 « ( BitDepthC - 1 ) ) - 1, wherein BitDepthC is the variable for a chroma component.
9. The method of claim 8, wherein the offset for the first component is derived using:
Clip3( -( 1 « ( BitDepthC - 1 ) ), ( 1 « ( BitDepthC - 1 ) ) - 1, ( delta_chroma_offset_l0[i][j] - ( ( ( 1 « ( BitDepthC - 1 ) ) * ChromaWeightL0[i][j] ) » ChromaLog2WeightDenom ) + ( 1 « ( BitDepthC - 1 ) ) ) ), where BitDepthC is the variable for the first component.
10. An apparatus comprising:
one or more computer processors; and a non-transitory computer-readable storage medium comprising instructions that, when executed, control the one or more computer processors to be configured for: setting a first value for a variable associated with a bit depth based on a number of bits associated with pixels of a reference picture;
determining a weighting factor for performing weighted prediction for a current unit of a current picture;
using the weighting factor to weight pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit;
setting a second value for a weighted prediction syntax element associated with the weighting factor whether an extended precision flag is enabled and not enabled to indicate whether the bit depth is above a threshold, wherein the second value for the weighted prediction syntax element is within a range set by the first value and is an offset for a prediction value used in performing motion compensation;
encoding the current unit using the first reference unit; and signaling the encoded current unit with the second value for the weighted prediction syntax element in an encoded bitstream.
11. The apparatus of claim 10, wherein a condition check of a status for the extended precision flag is not performed in conjunction with setting the second value.
12. The apparatus of claim 10, wherein the weighted prediction syntax element is for applying the offset for a first component of the prediction value.
13. The apparatus of claim 12, wherein the range comprises -( 1 « ( BitDepthY - 1 ) ) to ( 1 « ( BitDepthY - 1 ) ) - 1, wherein BitDepthY is the variable for a luma component.
14. The apparatus of claim 10, wherein the weighted prediction syntax element is for applying a difference of the offset for a first component of the prediction value.
15. The apparatus of claim 14, wherein the range comprises -( 1 « ( BitDepthC - 1 ) ) to ( 1 « ( BitDepthC - 1 ) ) - 1, wherein BitDepthC is the variable for a chroma component used as the first component.
16. An apparatus comprising:
one or more computer processors; and
a non-transitory computer-readable storage medium comprising instructions that, when executed, control the one or more computer processors to be configured for:
receiving an encoded bitstream including an encoded current unit of a current picture, a first value for a variable associated with a bit depth based on a number of bits associated with pixels of pictures in a video, and a second value for a weighted prediction syntax element associated with a weighting factor, wherein the second value is calculated whether an extended precision flag is enabled and not enabled to indicate whether the bit depth is above a threshold, and wherein the second value for the weighted prediction syntax element is within a range set by the first value and is an offset for a prediction value used in performing motion compensation;
determining a weighting factor for performing weighted prediction for the current unit;
using the weighting factor to weight pixels of a first reference unit of a first reference picture when performing motion compensation for the current unit using the second value, wherein the second value for the weighted prediction syntax element is based on the first value; and
decoding the current unit using the first reference unit.
17. The apparatus of claim 16, wherein a condition check of a status for the extended precision flag is not performed in conjunction with setting the second value.
18. The apparatus of claim 16, wherein the weighted prediction syntax element is for applying the offset for a first component of the prediction value.
19. The apparatus of claim 18, wherein the range comprises -( 1 « ( BitDepthY - 1 ) ) to ( 1 « ( BitDepthY - 1 ) ) - 1, wherein BitDepthY is the variable for a luma component.
20. The apparatus of claim 16, wherein the weighted prediction syntax element is for applying a difference of the offset for a first component of the prediction value.
EP14809545.8A 2013-11-05 2014-11-05 Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data Pending EP3066833A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361900337P 2013-11-05 2013-11-05
PCT/US2014/064073 WO2015069729A1 (en) 2013-11-05 2014-11-05 Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data

Publications (1)

Publication Number Publication Date
EP3066833A1 true EP3066833A1 (en) 2016-09-14

Family

ID=56341673

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14809545.8A Pending EP3066833A1 (en) 2013-11-05 2014-11-05 Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data

Country Status (2)

Country Link
EP (1) EP3066833A1 (en)
CN (1) CN105765976B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108366242B (en) * 2018-03-07 2019-12-31 绍兴文理学院 Video compression method for adaptively adjusting chroma distortion weight factor according to video content

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2008219100A (en) * 2007-02-28 2008-09-18 Oki Electric Ind Co Ltd Predictive image generating device, method and program, and image encoding device, method and program
CN101335902B (en) * 2007-06-25 2010-06-02 华为技术有限公司 Weighting predication method and device in video frequency decoding and encoding

Non-Patent Citations (2)

Title
None *
See also references of WO2015069729A1 *

Also Published As

Publication number Publication date
CN105765976B (en) 2019-10-25
CN105765976A (en) 2016-07-13

Similar Documents

Publication Publication Date Title
US11470351B2 (en) Bit depth variable for high precision data in weighted prediction syntax and semantics
US9756336B2 (en) Method of background residual prediction for video coding
EP3518544B1 (en) Video decoding with improved motion vector diversity
KR20210027351A (en) Block size limit for DMVR
US20120230405A1 (en) Video coding methods and video encoders and decoders with localized weighted prediction
EP3202151B1 (en) Method and apparatus of video coding with prediction offset
EP2084906A1 (en) Deblocking filtering apparatus and method
US20240146904A1 (en) Simplified processing of weighted prediction syntax and semantics using a bit depth variable for high precision data
AU2012385919B2 (en) Video quality assessment at a bitstream level
CN105765976B (en) Simplify processing weight estimation syntax and semantics using bit-depth variable for high accuracy data
US20160134887A1 (en) Video encoding apparatus, video encoding method, video decoding apparatus, and video decoding method
US10187640B2 (en) Method of hard-limited packet size for video encoding

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160331

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170511

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ARRIS ENTERPRISES LLC

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

APBK Appeal reference recorded

Free format text: ORIGINAL CODE: EPIDOSNREFNE

APBN Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2E

APBR Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3E

APAF Appeal reference modified

Free format text: ORIGINAL CODE: EPIDOSCREFNE

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ARRIS INTERNATIONAL IP LTD

APBT Appeal procedure closed

Free format text: ORIGINAL CODE: EPIDOSNNOA9E

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: COMMSCOPE UK LIMITED