WO2015081888A1 - Method and apparatus for motion boundary processing - Google Patents

Method and apparatus for motion boundary processing

Info

Publication number
WO2015081888A1
WO2015081888A1 (PCT/CN2014/093148)
Authority
WO
WIPO (PCT)
Prior art keywords
current
boundary
pixel
prediction
generating
Prior art date
Application number
PCT/CN2014/093148
Other languages
French (fr)
Inventor
Chih-Wei Hsu
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Priority to US 15/036,900 (US11303900B2)
Priority to CN 201480066225.5 (CN105794210B)
Priority to EP 14867530.9 (EP3078196B1)
Publication of WO2015081888A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/124 Quantisation
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/15 Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/583 Motion compensation with overlapping blocks
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
    • H04N19/96 Tree coding, e.g. quad-tree coding

Definitions

  • the present invention relates to video coding.
  • the present invention relates to method and apparatus for motion boundary processing to reduce discontinuity at coding unit boundaries.
  • Motion estimation is an effective inter-frame coding technique to exploit temporal redundancy in video sequences.
  • Motion-compensated inter-frame coding has been widely used in various international video coding standards
  • the motion estimation adopted in various coding standards is often a block-based technique, where motion information such as coding mode and motion vector is determined for each macroblock or similar block configuration.
  • intra-coding is also adaptively applied, where the picture is processed without reference to any other picture.
  • the inter-predicted or intra-predicted residues are usually further processed by transformation, quantization, and entropy coding to generate compressed video bitstream.
  • coding artifacts are introduced, particularly in the quantization process.
  • additional processing has been applied to reconstructed video to enhance picture quality in newer coding systems.
  • the additional processing is often configured in an in-loop operation so that the encoder and decoder may derive the same reference pictures to achieve improved system performance.
  • Fig. 1A illustrates an exemplary system block diagram for a video encoder based on High Efficiency Video Coding (HEVC) using adaptive Inter/Intra prediction. For Inter prediction, Motion Estimation (ME) and Motion Compensation (MC) are used to provide prediction data based on video data from one or more other pictures.
  • Switch 114 selects Intra Prediction 110 or Inter-prediction data and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
  • the prediction error is then processed by Transformation (T) 118 followed by Quantization (Q) 120.
  • the transformed and quantized residues are then coded by Entropy Encoder 122 to form a video bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area.
  • the side information may also be subject to entropy coding to reduce required bandwidth. Accordingly, the data associated with the side information are provided to Entropy Encoder 122 as shown in Fig. 1A.
  • When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
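As a toy illustration of why the quantization step introduces coding artifacts, the reconstruction loop can be sketched as follows. This is a simplification, not the patent's implementation: the transform (T/IT) is omitted and a plain scalar quantizer with an assumed step size is used.

```python
# Toy reconstruction loop: residue -> quantize -> dequantize -> add back
# prediction (Adder 116 / Q 120 / IQ 124 / REC 128, with T/IT omitted).
def reconstruct(orig, pred, qstep=8):
    residue = orig - pred
    level = round(residue / qstep)   # quantization loses information
    return pred + level * qstep      # reconstructed value

print(reconstruct(130, 100))  # 132 -- off by +2 due to quantization
```

The reconstructed value generally differs from the original by up to half a quantization step, which is exactly the impairment the in-loop filters described below are meant to mitigate.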
  • incoming video data undergoes a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to various impairments due to this series of processing. Accordingly, various in-loop processing is applied to the reconstructed video data before they are stored in the Reference Picture Buffer 134 in order to improve video quality.
  • in the High Efficiency Video Coding (HEVC) system, the in-loop processing includes Deblocking Filter (DF) 130 and Sample Adaptive Offset (SAO) 131.
  • the in-loop filter information may have to be incorporated in the bitstream so that a decoder can properly recover the required information.
  • in-loop filter information from SAO is provided to Entropy Encoder 122 for incorporation into the bitstream.
  • DF 130 is applied to the reconstructed video first; and SAO 131 is then applied to DF-processed video.
  • the processing order among DF and SAO can be re-arranged.
  • A corresponding decoder for the encoder of Fig. 1A is shown in Fig. 1B.
  • the video bitstream is decoded by Video Decoder 142 to recover the transformed and quantized residues, DF/SAO information and other system information.
  • At the decoder side, only Motion Compensation (MC) 113 is performed instead of ME/MC.
  • the decoding process is similar to the reconstruction loop at the encoder side.
  • the recovered transformed and quantized residues, DF/SAO information and other system information are used to reconstruct the video data.
  • the reconstructed video is further processed by DF 130 and SAO 131 to produce the final enhanced decoded video.
  • In the High Efficiency Video Coding (HEVC) system, the fixed-size macroblock of H.264/AVC is replaced by a flexible block, named coding unit (CU). Pixels in the CU share the same coding parameters to improve coding efficiency.
  • a CU may begin with a largest CU (LCU, also referred to as a CTU, coding tree unit, in HEVC).
  • each CU may be further partitioned into one or more prediction units (PUs), where a partition size is selected to partition the CU. A 2Nx2N CU may be partitioned into 2Nx2N, 2NxN, or Nx2N PUs when Inter mode is selected. When Intra mode is selected, the CU may be partitioned into either one 2Nx2N or four NxN PUs.
  • Overlapped Block Motion Compensation (OBMC) has been used in video coding to reduce discontinuity at partition boundaries. One OBMC proposal during the HEVC standard development is disclosed in JCTVC-C251 (Chen, et al., "Overlapped block motion compensation in TMuC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting: Guangzhou, CN, 7-15 October, 2010, Document: JCTVC-C251), where OBMC is applied to geometry partitions.
  • Let the two regions created by a geometry partition be denoted as region 1 and region 2.
  • the zig-zag line segments (210) indicate the partition line for region 1 and region 2 in Fig. 2.
  • a pixel from region 1 (respectively region 2) is defined to be a boundary pixel if any of its four connected neighbors (left, top, right, and bottom) belongs to region 2 (respectively region 1).
  • Fig. 2 illustrates an example, where pixels corresponding to the boundary of region 1 are indicated by pattern 220 and pixels corresponding to the boundary of region 2 are indicated by pattern 230.
  • the motion compensation is performed using a weighted sum of the motion predictions from the two motion vectors.
  • the weights are 3/4 for the prediction using the motion vector of the region containing the boundary pixel and 1/4 for the prediction using the motion vector of the other region.
  • the pixel at the boundary is derived from the weighted sum of two predictors corresponding to two different motion vectors. The overlapping boundaries improve the visual quality of the reconstructed video while providing BD-rate gain.
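The 3/4-1/4 blend described above can be sketched for a single boundary pixel as follows. The weights come from the source; the +2 rounding offset is an assumption (typical of integer-arithmetic codecs), not taken from the source.

```python
# Geometry-partition OBMC blend for one boundary pixel:
# weight 3/4 for the predictor from the pixel's own region's MV,
# weight 1/4 for the predictor from the other region's MV.
def blend_boundary_pixel(pred_own, pred_other):
    return (3 * pred_own + pred_other + 2) >> 2  # +2 rounds to nearest

print(blend_boundary_pixel(100, 80))  # 95
```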
  • Another OBMC proposal during the HEVC standard development is disclosed in JCTVC-F299 (Guo, et al., "CE2: Overlapped Block Motion Compensation for 2NxN and Nx2N Motion Partitions", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, 14-22 July, 2011, Document: JCTVC-F299), where OBMC is applied to symmetrical motion partitions.
  • OBMC is applied to the horizontal boundary of the two 2NxN prediction blocks, and to the vertical boundary of the two Nx2N prediction blocks. Since those partitions may have different motion vectors, the pixels at the partition boundary (i.e., PU boundaries) may have large discontinuities, which may generate visual artifacts and reduce the coding efficiency. In JCTVC-F299, OBMC is introduced to smooth the boundaries of motion partitions.
  • Fig. 3 illustrates exemplary OBMC for 2NxN (Fig. 3A) and Nx2N blocks (Fig. 3B) .
  • the pixels in the shaded area belong to Partition 0 and the pixels in the clear area belong to Partition 1.
  • the overlapped region in the luma component is defined as 2 rows (or columns) of pixels on each side of the horizontal (or vertical) PU boundary. For pixels in the row (or column) immediately adjacent to the partition boundary (i.e., pixels labeled as A in Fig. 3), the OBMC weighting factors are (3/4, 1/4). For pixels that are 2 rows (or columns) away from the partition boundary (i.e., pixels labeled as B in Fig. 3), the OBMC weighting factors are (7/8, 1/8).
  • for the chroma components, the overlapped region is defined as 1 row (or column) of pixels on each side of the horizontal (or vertical) PU boundary, and the weighting factors are (3/4, 1/4).
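The luma weighting above can be sketched per distance from the PU boundary. The weights (3/4, 1/4) and (7/8, 1/8) come from the source; the integer rounding offsets are assumed.

```python
# OBMC blend for a luma pixel at `dist` rows (or columns) from the PU
# boundary: (3/4, 1/4) on the nearest line, (7/8, 1/8) one line further.
def obmc_blend(pred_self, pred_neigh, dist):
    if dist == 0:
        return (3 * pred_self + pred_neigh + 2) >> 2
    return (7 * pred_self + pred_neigh + 4) >> 3

print(obmc_blend(100, 80, 0), obmc_blend(100, 80, 1))  # 95 98
```

Note how the pixel farther from the boundary keeps more of its own prediction, so the blend tapers off with distance.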
  • Embodiments of the present invention determine a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV.
  • a first predictor for a current boundary pixel in a boundary region of the current CU is generated by applying motion compensation based on the current MV and a first reference picture pointed by the current MV.
  • One or more second predictors for the current boundary pixel are generated by applying the motion compensation based on the neighboring MVs and reference pictures pointed by the neighboring MVs.
  • a current boundary pixel predictor for the current boundary pixel is then generated using a weighted sum of the first predictor and said one or more second predictors according to weighting factors.
  • the boundary region of the current CU may correspond to a number of pixel lines, pixel columns, or both at CU boundaries of the current CU.
  • the number of pixel lines, pixel columns, or both at the CU boundaries of the current CU can be pre-defined or adaptively determined based on CU size or PU size.
  • the weighting factors can be pre-defined or adaptively determined based on the distance between the current boundary pixel and a left or above CU boundary.
  • two second predictors can be generated for the current boundary pixel based on the above MV and the left MV respectively to form the current boundary pixel predictor.
  • one second predictor can be generated for the current boundary pixel based on one neighboring MV selected from the above MV and the left MV and used to form a first-stage current boundary pixel predictor.
  • Another second predictor can be generated for the current boundary pixel based on the other neighboring MV selected from the above MV and the left MV and used to form a final current boundary pixel predictor.
  • the first-stage current boundary pixel predictor can be formed based on the above MV or the left MV.
  • the MBE prediction process can always be performed for the current CU, or it can be turned On/Off explicitly
  • the MBE prediction process can be applied jointly or independently with the overlapped block motion compensation (OBMC) process applied to PU boundary pixels in the current CU when the current CU is partitioned into two or more current prediction units (PUs).
  • the current CU comprises a luma component and at least one chroma component and the weighting factors for the luma component and said at least one chroma component can be different.
  • the boundary regions for the luma component and the chroma component can also be different.
  • the above CU or the left CU may correspond to a smallest CU (SCU) .
  • Fig. 1A illustrates an exemplary adaptive inter/intra video encoder associated with an HEVC coding system.
  • Fig. 1B illustrates an exemplary adaptive inter/intra video decoder associated with an HEVC coding system.
  • Fig. 2 illustrates an example of Overlapped Block Motion Compensation (OBMC) for geometry partitions.
  • Fig. 3A illustrates exemplary Overlapped Block Motion Compensation (OBMC) for 2NxN prediction units (PUs) .
  • Fig. 3B illustrates exemplary Overlapped Block Motion Compensation (OBMC) for Nx2N prediction units (PUs) .
  • Fig. 4A illustrates an example of Motion Boundary Enhancement (MBE) according to an embodiment of the present invention, where an above motion vector and a left motion vector are used with the current motion vector to form weighted prediction for boundary pixels.
  • Fig. 4B illustrates an example of weighting factors for Motion Boundary Enhancement (MBE) according to an embodiment of the present invention.
  • Fig. 5 illustrates an example of fine-grained Motion Boundary Enhancement (fg-MBE) according to an embodiment of the present invention, where the above motion vector and the left motion vector are determined based on smallest coding unit (SCU) .
  • Fig. 6 illustrates an exemplary flow chart for a video coding system incorporating Motion Boundary Enhancement according to an embodiment of the present invention.
  • each coding unit may be partitioned into one or more prediction units.
  • the OBMC is only applied to PU boundaries as described in the previous section.
  • motion discontinuity may also exist at the CU boundaries.
  • the present invention discloses a boundary pixel processing technique named motion boundary enhancement (MBE) to improve the motion compensated prediction at the CU boundaries.
  • Fig. 4A illustrates an example according to an embodiment of the present invention, where the current CU boundaries are indicated by thick lines (410).
  • the pixels at the CU boundaries will use the motion vector(s) from the upper side (MV_U_1), the left side (MV_L_1), or both the upper side and the left side, in addition to their own motion vector (MV_X), to form a weighted sum of motion predictions when performing motion compensation.
  • MV_U_1 is the first available motion vector derived from the upper CUs
  • MV_L_1 is the first available motion vector derived from the left CUs. It is well known in HEVC that a CU may be partitioned into multiple PUs and each PU may have its own motion vector. Therefore, the motion vector (i.e., MV_X) for a pixel in the CU boundary depends on which PU the pixel is located in.
  • Fig. 4B illustrates an example of MBE in details according to an embodiment of the present invention.
  • Pixels A through D in Fig. 4B correspond to the overlapped vertical and horizontal boundaries.
  • Both motion vectors MV_U_1 and MV_L_1 will be used for these pixels in addition to MV_X.
  • the weighting factors are (2/8, 2/8, 4/8) for MV_U_1, MV_L_1 and MV_X, respectively for pixel A.
  • pixel A according to MBE is calculated as a weighted sum of three predictors associated with three motion vectors (i.e., MV_U_1, MV_L_1 and MV_X) .
  • Each predictor is derived using motion compensation based on the respective motion vector.
  • pixel A is generated based on the three predictors using the weighting factor (2/8, 2/8, 4/8) .
  • for pixel B, the corresponding weighting factors are (2/8, 1/8, 5/8).
  • for pixel C, the corresponding weighting factors are (1/8, 2/8, 5/8).
  • for pixel D, the corresponding weighting factors are (1/8, 1/8, 6/8).
  • for pixels labeled as E and F, only MV_U_1 will be used with MV_X.
  • the weighting factors are (2/8, 6/8) for MV_U_1 and MV_X for pixel E.
  • for pixel F, the weighting factors are (1/8, 7/8).
  • for pixels labeled as G and H, only MV_L_1 will be used with MV_X.
  • the weighting factors are (2/8, 6/8) for MV_L_1 and MV_X for pixel G.
  • for pixel H, the weighting factors are (1/8, 7/8).
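The per-class weights above can be collected into a small table. The weights themselves come from the source (in eighths, ordered as MV_U_1, MV_L_1, MV_X); the assignment of the two corner classes B and C follows the listing order, and the integer rounding offset is an assumption.

```python
# MBE weights (in eighths) for (MV_U_1, MV_L_1, MV_X), per pixel class.
# B/C assignment follows the listing order; rounding offset is assumed.
MBE_WEIGHTS = {
    'A': (2, 2, 4), 'B': (2, 1, 5), 'C': (1, 2, 5), 'D': (1, 1, 6),
    'E': (2, 0, 6), 'F': (1, 0, 7), 'G': (0, 2, 6), 'H': (0, 1, 7),
}

def mbe_predict(label, pred_up, pred_left, pred_own):
    """Weighted sum of the up/left/own predictors for one boundary pixel."""
    w_up, w_left, w_own = MBE_WEIGHTS[label]
    return (w_up * pred_up + w_left * pred_left + w_own * pred_own + 4) >> 3

print(mbe_predict('A', 80, 80, 120))  # 100
```

Each weight triple sums to 8, so the result stays in the pixel range and the division is a cheap shift.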
  • the weighting factors disclosed above are intended to illustrate examples of MBE. These exemplary weighting factors shall not be construed as limitations to the present invention. A person skilled in the art may use other weighting factors to practice the present invention.
  • the weighting factors can be pre-defined or adaptively determined based on a distance between the current boundary pixel and a left or above CU boundary. For example, a larger weighting factor may be used for a boundary pixel at a shorter distance from the CU boundary. While the example in Fig. 4B includes two pixel lines and two pixel columns in the boundary region, a different number of pixel lines/columns may also be used to practice the present invention.
  • the size of the boundary region can be pre-defined or adaptively determined based on CU size or PU size. For example, more pixel lines or columns may be used for larger CU or PU sizes.
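As one possible size-adaptive rule (a hypothetical example; the source does not specify the thresholds), the boundary region could simply widen with the CU size:

```python
# Hypothetical rule: wider boundary regions for larger CUs.
# Thresholds are illustrative assumptions, not from the source.
def boundary_lines(cu_size):
    if cu_size <= 8:
        return 1
    return 2 if cu_size <= 32 else 4
```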
  • the MBE processing can always be enabled and applied for video data being coded. However, the MBE process may also be turned On/Off explicitly. For example, a flag may be used to indicate whether the MBE process is On or Off for the underlying video data.
  • the underlying data may correspond to a CU, a CTU (coding tree unit) , a CTB (coding tree block) , a slice, a picture or a sequence.
  • the MBE may also be applied to different color components of the video data. Different MBE processes may be applied to different color components. For example, the MBE process may be applied to the luma component, but not the chroma components. Alternatively, the MBE process may be applied to both luma and chroma components, with different weighting factors for different color components. Furthermore, different boundary regions may be selected for different color components. For example, fewer pixel lines/columns can be used for the chroma components.
  • MBE can be applied independently from OBMC. It may also be applied before or after the OBMC process so that not only PU boundaries but also CU boundaries can be improved with multiple motion vectors. Furthermore, it may also be applied jointly with the OBMC process to share data accessed during processing. Therefore, the joint processing may reduce memory access bandwidth or reduce buffer requirement.
  • Fig. 5 illustrates an example of fine-grained MBE.
  • a CU may be partitioned into smaller CUs using a quadtree. The partition process may stop when the CU reaches the smallest size, i.e., the smallest CU (SCU).
  • the SCU according to HEVC can be 4x4. While a current CU size of 8x8 is illustrated in the example of Fig. 5, the current CU may correspond to other sizes (e.g., 16x16 or 32x32).
  • since the motion vectors for each SCU may belong to different PUs or even different CUs, the motion vectors may be different from each other.
  • MV_L_1 and MV_L_2 in Fig. 5 may be different.
  • MV_U_1 and MV_U_2 may be different.
  • the motion information derived accordingly will be more accurate, which generates more accurate motion compensated predictors.
  • the motion vector may not be available for an SCU, for example, when the SCU is Intra coded or the SCU is a boundary block without a valid MV.
  • in this case, a motion compensated predictor can be generated by data padding or using a weighted sum of the existing predictors.
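A minimal sketch of the weighted-sum fallback follows. Equal weights are assumed here for simplicity; the source only says "weighted sum from the existing predictors" and also permits data padding instead.

```python
# Fallback predictor for an SCU without a valid MV: average of the
# predictors that are available (equal weights assumed).
def fallback_predictor(existing_preds):
    if not existing_preds:
        raise ValueError("need at least one available predictor")
    return sum(existing_preds) // len(existing_preds)

print(fallback_predictor([100, 80]))  # 90
```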
  • Fig. 6 illustrates an exemplary flowchart for a video coding system incorporating Motion Boundary Enhancement according to an embodiment of the present invention.
  • input data associated with a current coding unit (CU) is received, wherein the current CU is partitioned into one or more current prediction units (PUs), as shown in step 610.
  • the input data associated with the current coding unit may be accessed from a media such as a RAM or DRAM in a system.
  • the input data associated with the current coding unit may be received directly from a processor (such as a central processing unit, a controller or a digital signal processor) .
  • the input data corresponds to the pixel data to be processed according to motion compensation.
  • the input data corresponds to motion compensated residue and the decoding process will reconstruct the current CU using motion compensated prediction and motion compensated residue.
  • a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV are determined in step 620.
  • the above MV is associated with an above CU adjoining the current CU from above
  • the left MV is associated with a left CU adjoining the current CU from left
  • the current MV is associated with the current CU.
  • a first predictor for a current boundary pixel in a boundary region of the current CU is generated by applying motion compensation based on the current MV and a first reference picture pointed by the current MV in step 630.
  • One or more second predictors for the current boundary pixel are generated by applying the motion compensation based on said one or more neighboring MVs and one or more second reference pictures pointed by said one or more neighboring MVs in step 640.
  • a current boundary pixel predictor for the current boundary pixel is generated using a weighted sum of the first predictor and said one or more second predictors according to weighting factors in step 650.
  • Encoding process (for an encoder) or decoding process (for a decoder) is then applied to the current CU using prediction data including the current boundary pixel predictor as shown in step 660.
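Steps 620 through 650 above can be sketched as a single routine. This is a minimal sketch, not the patented implementation: `mc` is a hypothetical stand-in for the motion compensation of steps 630/640, and `weights` are the per-pixel factors of step 650, assumed here to be integers.

```python
# Sketch of steps 620-650 (hypothetical helper `mc`).
def mbe_boundary_predictor(mc, cur_mv, neigh_mvs, pos, weights):
    """mc(mv, pos) returns the motion compensated predictor for the
    boundary pixel at `pos` using motion vector `mv`."""
    # Step 630: first predictor from the current MV.
    # Step 640: second predictor(s) from the neighboring MV(s).
    preds = [mc(cur_mv, pos)] + [mc(mv, pos) for mv in neigh_mvs]
    # Step 650: weighted sum according to the weighting factors.
    total = sum(w * p for w, p in zip(weights, preds))
    return total // sum(weights)

# Toy usage with a fake motion-compensation function.
mc = lambda mv, pos: 100 + mv
print(mbe_boundary_predictor(mc, 0, [8, 8], 0, [4, 2, 2]))  # 104
```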
  • The exemplary flowchart shown in Fig. 6 is for illustration purposes. A person skilled in the art may re-arrange or combine steps, or split a step, to practice the present invention without departing from the spirit of the present invention.
  • Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • DSP Digital Signal Processor
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA) .
  • These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware code may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.

Abstract

A method and apparatus for deriving motion compensated prediction for boundary pixels in a video coding system are disclosed. Embodiments of the present invention determine a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV. A first predictor for a current boundary pixel in a boundary region of a current coding unit (CU) is generated by applying motion compensation based on the current MV and a first reference picture pointed by the current MV. One or more second predictors for the current boundary pixel are generated by applying the motion compensation based on the neighboring MVs and reference pictures pointed by the neighboring MVs. A current boundary pixel predictor for the current boundary pixel is then generated using a weighted sum of the first predictor and the second predictors according to weighting factors.

Description

METHOD AND APPARATUS FOR MOTION BOUNDARY PROCESSING
CROSS REFERENCE TO RELATED APPLICATIONS
The present invention claims priority to U.S. Provisional Patent Application, Serial No. 61/912,686, filed on December 6, 2013, entitled “Motion Boundary Enhancement”. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
FIELD OF INVENTION
The present invention relates to video coding. In particular, the present invention relates to method and apparatus for motion boundary processing to reduce discontinuity at coding unit boundaries.
BACKGROUND OF THE INVENTION
Motion estimation is an effective inter-frame coding technique to exploit temporal redundancy in video sequences. Motion-compensated inter-frame coding has been widely used in various international video coding standards. The motion estimation adopted in various coding standards is often a block-based technique, where motion information such as coding mode and motion vector is determined for each macroblock or similar block configuration. In addition, intra-coding is also adaptively applied, where the picture is processed without reference to any other picture. The inter-predicted or intra-predicted residues are usually further processed by transformation, quantization, and entropy coding to generate a compressed video bitstream. During the encoding process, coding artifacts are introduced, particularly in the quantization process. In order to alleviate the coding artifacts, additional processing has been applied to reconstructed video to enhance picture quality in newer coding systems. The additional processing is often configured in an in-loop operation so that the encoder and decoder may derive the same reference pictures to achieve improved system performance.
Fig. 1A illustrates an exemplary system block diagram for a video encoder based on High Efficiency Video Coding (HEVC) using adaptive Inter/Intra prediction. For Inter-prediction, Motion Estimation (ME)/Motion Compensation (MC) 112 is used to provide prediction data based on video data from one or more other pictures. Switch 114 selects Intra Prediction 110 or Inter-prediction data, and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction errors are then processed by Transformation (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to form a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area. The side information may also be subject to entropy coding to reduce the required bandwidth. Accordingly, the data associated with the side information are provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct the video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
As shown in Fig. 1A, incoming video data undergo a series of processing steps in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to this series of processing. Accordingly, various in-loop processing is applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. In the High Efficiency Video Coding (HEVC) standard being developed, Deblocking Filter (DF) 130 and Sample Adaptive Offset (SAO) 131 have been developed to enhance picture quality. The in-loop filter information may have to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, in-loop filter information from SAO is provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, DF 130 is applied to the reconstructed video first, and SAO 131 is then applied to the DF-processed video. However, the processing order between DF and SAO can be re-arranged.
A corresponding decoder for the encoder of Fig. 1A is shown in Fig. 1B. The video bitstream is decoded by Video Decoder 142 to recover the transformed and quantized residues, DF/SAO information and other system information. At the decoder side, only Motion Compensation (MC) 113 is performed instead of ME/MC. The decoding process is similar to the reconstruction loop at the encoder side. The recovered transformed and quantized residues, DF/SAO information and other system information are used to reconstruct the video data. The reconstructed video is further processed by DF 130 and SAO to produce the final enhanced decoded video.
In the High Efficiency Video Coding (HEVC) system, the fixed-size macroblock of H.264/AVC is replaced by a flexible block, named coding unit (CU). Pixels in the CU share the same coding parameters to improve coding efficiency. CU partitioning may begin with a largest CU (LCU, also referred to as a CTU, or coding tree unit, in HEVC). In addition to the concept of coding unit, the concept of prediction unit (PU) is also introduced in HEVC. Once the splitting of the CU hierarchical tree is done, each leaf CU is further split into prediction units (PUs) according to prediction type and PU partition. The Inter/Intra prediction process in HEVC is applied on a PU basis. For each 2Nx2N leaf CU, a partition size is selected to partition the CU. A 2Nx2N CU may be partitioned into 2Nx2N, 2NxN, or Nx2N PUs when Inter mode is selected. When a 2Nx2N CU is Intra coded, it may be partitioned into either one 2Nx2N PU or four NxN PUs.
While non-overlapped motion prediction blocks are most commonly used in HEVC practice, there were also proposals for overlapped motion compensation presented during HEVC standard development. Overlapped Block Motion Compensation (OBMC) is a technique proposed during the HEVC standard development. OBMC utilizes the Linear Minimum Mean Squared Error (LMMSE) technique to estimate a pixel intensity value based on motion-compensated signals derived from neighboring block motion vectors (MVs). From an estimation-theoretic perspective, these MVs are regarded as different plausible hypotheses for the true motion, and to maximize coding efficiency, their weights should minimize the mean squared prediction error subject to the unit-gain constraint.
An OBMC proposal during HEVC development is disclosed in JCTVC-C251 (Chen, et al, “Overlapped block motion compensation in TMuC”, in Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting: Guangzhou, CN, 7-15 October, 2010, Document: JCTVC-C251), where OBMC is applied to geometry partitions. In geometry partitioning, it is very likely that a transform block contains pixels belonging to different partitions, since two different motion vectors are used for motion compensation. Therefore, the pixels at the partition boundary may have large discontinuities that can produce visual artifacts similar to blockiness. This in turn decreases the coding efficiency, since the signal energy in the transform domain will spread wider toward high frequencies. Let the two regions created by a geometry partition be denoted as region 1 and region 2. The zig-zag line segments (210) indicate the partition line between region 1 and region 2 in Fig. 2. A pixel from region 1 (2) is defined to be a boundary pixel if any of its four connected neighbors (left, top, right, and bottom) belongs to region 2 (1). Fig. 2 illustrates an example, where pixels corresponding to the boundary of region 1 are indicated by pattern 220 and pixels corresponding to the boundary of region 2 are indicated by pattern 230. If a pixel is a boundary pixel (220 or 230), the motion compensation is performed using a weighted sum of the motion predictions from the two motion vectors. The weights are 3/4 for the prediction using the motion vector of the region containing the boundary pixel and 1/4 for the prediction using the motion vector of the other region. In other words, the pixel at the boundary is derived from the weighted sum of two predictors corresponding to two different motion vectors. The overlapping boundaries improve the visual quality of the reconstructed video while providing a BD-rate gain.
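The 3/4, 1/4 blend described above can be sketched as follows. This is an illustrative sketch, not the proposal's normative code; the function name and the `+ 2` rounding offset (typical of integer-arithmetic codecs) are assumptions.

```python
def blend_boundary_pixel(own_pred, other_pred):
    """Blend two motion-compensated predictions for a geometry-partition
    boundary pixel: 3/4 weight for the prediction from the pixel's own
    region MV, 1/4 for the other region's MV, with rounded integer math."""
    return (3 * own_pred + other_pred + 2) >> 2

# Own-MV prediction 100, other-MV prediction 80:
# (3*100 + 80 + 2) >> 2 = 382 >> 2 = 95.
print(blend_boundary_pixel(100, 80))
```

A boundary pixel is thus pulled a quarter of the way toward the neighboring region's prediction, which smooths the discontinuity across the partition line.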
Another OBMC proposal during the HEVC standard development is disclosed in JCTVC-F299 (Guo, et al, “CE2: Overlapped Block Motion Compensation for 2NxN and Nx2N Motion Partitions”, in Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting: Torino, 14-22 July, 2011, Document: JCTVC-F299), where OBMC is applied to symmetrical motion partitions. If a coding unit (CU) is partitioned into two 2NxN or Nx2N prediction units (PUs), OBMC is applied to the horizontal boundary between the two 2NxN prediction blocks, and to the vertical boundary between the two Nx2N prediction blocks. Since those partitions may have different motion vectors, the pixels at the partition boundary (i.e., PU boundaries) may have large discontinuities, which may generate visual artifacts and also reduce the coding efficiency. In JCTVC-F299, OBMC is introduced to smooth the boundaries of motion partitions.
Fig. 3 illustrates exemplary OBMC for 2NxN (Fig. 3A) and Nx2N (Fig. 3B) blocks. The pixels in the shaded area belong to Partition 0 and the pixels in the clear area belong to Partition 1. The overlapped region in the luma component is defined as 2 rows (or columns) of pixels on each side of the horizontal (or vertical) PU boundary. For pixels that are 1 row (or column) from the partition boundary, i.e., pixels labeled as A in Fig. 3, the OBMC weighting factors are (3/4, 1/4). For pixels that are 2 rows (or columns) away from the partition boundary, i.e., pixels labeled as B in Fig. 3, the OBMC weighting factors are (7/8, 1/8). For chroma components, the overlapped region is defined as 1 row (or column) of pixels on each side of the horizontal (or vertical) PU boundary, and the weighting factors are (3/4, 1/4).
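The distance-dependent luma weights above (pixel classes A and B) might be expressed as in the following sketch. Expressing the weights in eighths and the rounding offset are assumptions for illustration, not details taken from JCTVC-F299.

```python
def obmc_weights_luma(rows_from_boundary):
    """OBMC luma weights (own, other) in eighths: row/column 1 from the
    PU boundary (pixels A) -> (6, 2), i.e. (3/4, 1/4);
    row/column 2 (pixels B) -> (7, 1), i.e. (7/8, 1/8)."""
    return (6, 2) if rows_from_boundary == 1 else (7, 1)

def obmc_blend(own_pred, other_pred, rows_from_boundary):
    # Weighted sum over 8 with a +4 rounding offset before the shift.
    w_own, w_other = obmc_weights_luma(rows_from_boundary)
    return (w_own * own_pred + w_other * other_pred + 4) >> 3
```

Pixels nearer the boundary take a larger contribution from the neighboring partition's MV, tapering off one row (or column) further in.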
SUMMARY OF THE INVENTION
A method and apparatus for deriving motion compensated prediction for boundary pixels in a video coding system are disclosed. Embodiments of the present invention determine a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV. A first predictor for a current boundary pixel in a boundary region of a current coding unit (CU) is generated by applying motion compensation based on the current MV and a first reference picture pointed by the current MV. One or more second predictors for the current boundary pixel are generated by applying the motion compensation based on the neighboring MVs and reference pictures pointed by the neighboring MVs. A current boundary pixel predictor for the current boundary pixel is then generated using a weighted sum of the first predictor and said one or more second predictors according to weighting factors.
The boundary region of the current CU may correspond to a number of pixel lines, pixel columns, or both at CU boundaries of the current CU. The number of pixel lines, pixel columns, or both at the CU boundaries of the current CU can be pre-defined or adaptively determined based on CU size or PU size. The weighting factors can be pre-defined or adaptively determined based on the distance between the current boundary pixel and a left or above CU boundary.
When both the above MV and the left MV are used, two second predictors can be generated for the current boundary pixel based on the above MV and the left MV respectively to form the current boundary pixel predictor. Alternatively, one second predictor can be generated for the current boundary pixel based on one neighboring MV selected from the above MV and the left MV and used to form a first-stage current boundary pixel predictor. Another second predictor can be generated for the current boundary pixel based on the other neighboring MV selected from the above MV and the left MV and used to form a final current boundary pixel predictor. The first-stage current boundary pixel predictor can be formed based on the above MV or the left MV.
The MBE prediction process can always be performed for the current CU, or can be turned On/Off explicitly. The MBE prediction process can be applied jointly or independently with the overlapped boundary motion compensation (OBMC) process applied to PU boundary pixels in the current CU when the current CU is partitioned into two or more current prediction units (PUs). The current CU comprises a luma component and at least one chroma component, and the weighting factors for the luma component and said at least one chroma component can be different. In this case, the boundary regions for the luma component and the chroma component can also be different. The above CU or the left CU may correspond to a smallest CU (SCU).
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A illustrates an exemplary adaptive inter/intra video encoder associated with an HEVC coding system.
Fig. 1B illustrates an exemplary adaptive inter/intra video decoder associated with an HEVC coding system.
Fig. 2 illustrates an example of Overlapped Block Motion Compensation (OBMC) for geometry partitions.
Fig. 3A illustrates exemplary Overlapped Block Motion Compensation (OBMC) for 2NxN prediction units (PUs) .
Fig. 3B illustrates exemplary Overlapped Block Motion Compensation (OBMC) for Nx2N prediction units (PUs) .
Fig. 4A illustrates an example of Motion Boundary Enhancement (MBE) according to an embodiment of the present invention, where an above motion vector and a left motion vector are used with the current motion vector to form weighted prediction for boundary pixels.
Fig. 4B illustrates an example of weighting factors for Motion Boundary Enhancement (MBE) according to an embodiment of the present invention.
Fig. 5 illustrates an example of fine-grained Motion Boundary Enhancement (fg-MBE) according to an embodiment of the present invention, where the above motion vector and the left motion vector are determined based on smallest coding unit (SCU) .
Fig. 6 illustrates an exemplary flow chart for a video coding system incorporating Motion Boundary Enhancement according to an embodiment of the present invention.
DETAILED DESCRIPTION
In HEVC, each coding unit (CU) may be partitioned into one or more prediction units (PUs). OBMC is only applied to PU boundaries as described in the previous section. However, motion discontinuity may exist at the CU boundaries as well. Accordingly, the present invention discloses a boundary pixel processing technique named motion boundary enhancement (MBE) to improve the motion compensated prediction at the CU boundaries. Fig. 4A illustrates an example according to an embodiment of the present invention, where the current CU boundaries are indicated by thick lines (410). The pixels at the CU boundaries will use the motion vector (s) from the upper side (MV_U_1), the left side (MV_L_1), or both the upper side and the left side, in addition to their own motion vector (MV_X), to form a weighted sum of motion predictions when performing motion compensation. Note that MV_U_1 is the first available motion vector derived from the upper CUs and MV_L_1 is the first available motion vector derived from the left CUs. It is well known in HEVC that a CU may be partitioned into multiple PUs and each PU may have its own motion vector. Therefore, the motion vector (i.e., MV_X) for a pixel in the CU boundary region depends on which PU the pixel is located in.
Fig. 4B illustrates an example of MBE in detail according to an embodiment of the present invention. Pixels A through D in Fig. 4B correspond to the overlapped vertical and horizontal boundaries. Both motion vectors MV_U_1 and MV_L_1 will be used for these pixels in addition to MV_X. For pixel A, the weighting factors are (2/8, 2/8, 4/8) for MV_U_1, MV_L_1 and MV_X, respectively. In other words, pixel A according to MBE is calculated as a weighted sum of three predictors associated with three motion vectors (i.e., MV_U_1, MV_L_1 and MV_X). Each predictor is derived using motion compensation based on the respective motion vector. After the three predictors are derived, pixel A is generated from the three predictors using the weighting factors (2/8, 2/8, 4/8). For pixel B, the corresponding weighting factors are (2/8, 1/8, 5/8). For pixel C, the corresponding weighting factors are (1/8, 2/8, 5/8). For pixel D, the corresponding weighting factors are (1/8, 1/8, 6/8). For pixels labeled as E and F, only MV_U_1 will be used with MV_X. For pixel E, the weighting factors are (2/8, 6/8) for MV_U_1 and MV_X. For pixel F, the weighting factors are (1/8, 7/8). For pixels labeled as G and H, only MV_L_1 will be used with MV_X. For pixel G, the weighting factors are (2/8, 6/8) for MV_L_1 and MV_X. For pixel H, the weighting factors are (1/8, 7/8).
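The per-pixel weights listed above can be collected into a small table, as in this sketch. The table values come from the text; the integer blend with a +4 rounding offset and all names are illustrative assumptions.

```python
# Weights in eighths for (MV_U_1, MV_L_1, MV_X), per pixel label in Fig. 4B.
MBE_WEIGHTS = {
    'A': (2, 2, 4), 'B': (2, 1, 5), 'C': (1, 2, 5), 'D': (1, 1, 6),
    'E': (2, 0, 6), 'F': (1, 0, 7), 'G': (0, 2, 6), 'H': (0, 1, 7),
}

def mbe_blend(label, pred_u, pred_l, pred_x):
    """Weighted sum of up to three motion-compensated predictors for one
    boundary pixel; a zero weight means that neighbor MV is not used."""
    w_u, w_l, w_x = MBE_WEIGHTS[label]
    return (w_u * pred_u + w_l * pred_l + w_x * pred_x + 4) >> 3

# Every label's weights sum to 8/8, satisfying the unit-gain constraint.
assert all(sum(w) == 8 for w in MBE_WEIGHTS.values())
```

Note how the own-MV weight grows (4/8 up to 7/8) as the pixel moves away from the corner, so the neighbor MVs influence only the pixels closest to the CU boundary.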
The weighting factors disclosed above are intended to illustrate examples of MBE. These exemplary weighting factors shall not be construed as limitations to the present invention. A person skilled in the art may use other weighting factors to practice the present invention. The weighting factors can be pre-defined or adaptively determined based on the distance between the current boundary pixel and a left or above CU boundary. For example, a larger weighting factor may be used for a boundary pixel at a shorter distance from the CU boundary. While the example in Fig. 4B includes two pixel lines and two pixel columns in the boundary region, a different number of pixel lines/columns may also be used to practice the present invention. The size of the boundary region can be pre-defined or adaptively determined based on CU size or PU size. For example, more pixel lines or columns may be used for larger CU or PU sizes.
The MBE processing can always be enabled and applied to the video data being coded. However, the MBE process may also be turned On/Off explicitly. For example, a flag may be used to indicate whether the MBE process is On or Off for the underlying video data. The underlying data may correspond to a CU, a CTU (coding tree unit), a CTB (coding tree block), a slice, a picture or a sequence. The MBE may also be applied to different color components of the video data. Different MBE processes may be applied to different color components. For example, the MBE process may be applied to the luma component, but not the chroma components. Alternatively, the MBE process may be applied to both the luma and chroma components. However, the weighting factors may differ between color components. Furthermore, different boundary regions may be selected for different color components. For example, fewer pixel lines/columns can be used for the chroma components.
MBE can be applied independently of OBMC. It may also be applied before or after the OBMC process so that not only PU boundaries but also CU boundaries can be improved with multiple motion vectors. Furthermore, it may also be applied jointly with the OBMC process to share data accessed during processing. Therefore, the joint processing may reduce memory access bandwidth or buffer requirements.
To further improve the coding performance, fine-grained MBE (fg-MBE) can be used. Fig. 5 illustrates an example of fine-grained MBE. In Fig. 5, for a current CU with size 8x8, the neighboring motion vectors from the left side and the upper side are derived based on the 4x4 smallest coding unit (SCU). As is known in HEVC, a CU may be partitioned into smaller CUs using a quadtree. The partition process stops when the CU reaches the smallest size, i.e., the smallest CU (SCU). The SCU according to HEVC can be 4x4. While a current CU size of 8x8 is illustrated in the example of Fig. 5, the current CU may correspond to other sizes (e.g., 16x16 or 32x32). Since the motion vectors for each SCU may belong to different PUs or even different CUs, the motion vectors may differ from each other. For example, MV_L_1 and MV_L_2 in Fig. 5 may be different. Also, MV_U_1 and MV_U_2 may be different. The motion information derived accordingly will be more accurate and generate more accurate motion compensated predictors. In some cases, a motion vector may not be available for an SCU, for example, when the SCU is Intra coded or the SCU is a boundary block without a valid MV. In this case, a motion compensated predictor can be generated by data padding or by using a weighted sum of the existing predictors.
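One way to realize the per-SCU neighbor MVs with the padding fallback mentioned above is sketched below. The nearest-available padding rule, the list representation, and the function name are illustrative assumptions, not details specified by the document.

```python
def pick_scu_mv(boundary_mvs, idx):
    """Return the MV of the 4x4 SCU at position idx along a CU boundary.
    boundary_mvs holds one MV (or None) per boundary SCU. When the SCU
    at idx has no valid MV (e.g. it is Intra coded), pad from the
    nearest boundary SCU that does have one."""
    if boundary_mvs[idx] is not None:
        return boundary_mvs[idx]
    for offset in range(1, len(boundary_mvs)):
        for j in (idx - offset, idx + offset):
            if 0 <= j < len(boundary_mvs) and boundary_mvs[j] is not None:
                return boundary_mvs[j]  # nearest available neighbor MV
    return None  # no valid MV anywhere along this boundary
```

With per-SCU MVs, each 4x4 stretch of the CU boundary can blend against the motion that actually abuts it, rather than a single MV for the whole edge.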
Fig. 6 illustrates an exemplary flowchart for a video coding system incorporating Motion Boundary Enhancement according to an embodiment of the present invention. Input data associated with a current coding unit (CU) is received in step 610, wherein the current CU is partitioned into one or more current prediction units (PUs). The input data associated with the current coding unit may be accessed from a medium such as a RAM or DRAM in a system. The input data associated with the current coding unit may also be received directly from a processor (such as a central processing unit, a controller or a digital signal processor). At the encoder side, the input data corresponds to the pixel data to be processed according to motion compensation. At the decoder side, the input data corresponds to motion compensated residues, and the decoding process will reconstruct the current CU using motion compensated prediction and motion compensated residues. A current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV are determined in step 620. The above MV is associated with an above CU adjoining the current CU from above, the left MV is associated with a left CU adjoining the current CU from the left, and the current MV is associated with the current CU. A first predictor for a current boundary pixel in a boundary region of the current CU is generated by applying motion compensation based on the current MV and a first reference picture pointed by the current MV in step 630. One or more second predictors for the current boundary pixel are generated by applying the motion compensation based on said one or more neighboring MVs and one or more second reference pictures pointed by said one or more neighboring MVs in step 640.
A current boundary pixel predictor for the current boundary pixel is generated using a weighted sum of the first predictor and said one or more second predictors according to weighting factors in step 650. Encoding process (for an encoder) or decoding process (for a decoder) is then applied to the current CU using prediction data including the current boundary pixel predictor as shown in step 660.
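Steps 630 through 650 above can be summarized by the following sketch, where `mc` stands in for the motion-compensation call into the reference picture pointed by each MV. All names, the toy `mc`, and the integer normalization are illustrative assumptions.

```python
def mbe_boundary_predictor(mc, cur_mv, neighbor_mvs, pos, weights):
    """Steps 630-650: one predictor per MV via motion compensation
    (current MV first), then a rounded weighted sum over all predictors."""
    preds = [mc(cur_mv, pos)] + [mc(mv, pos) for mv in neighbor_mvs]
    total = sum(w * p for w, p in zip(weights, preds))
    return (total + sum(weights) // 2) // sum(weights)

# Toy motion compensation: shift the sample value by the MV magnitude.
mc = lambda mv, pos: pos + mv
print(mbe_boundary_predictor(mc, 2, [4], 10, (6, 2)))
```

An encoder would then feed the resulting boundary pixel predictors, together with the interior motion-compensated samples, into the residue computation of step 660.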
The exemplary flowchart shown in Fig. 6 is for illustration purposes. A person skilled in the art may re-arrange or combine steps, or split a step, to practice the present invention without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
Embodiment of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA) . These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of  configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

  1. A method of motion compensated prediction for boundary pixels in a video coding system, the method comprising:
    receiving input data associated with a current coding unit (CU) , wherein the current CU is partitioned into one or more current prediction units (PUs) ;
    determining a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV, wherein the above MV is associated with an above CU adjoining the current CU from above, the left MV is associated with a left CU adjoining the current CU from left, and the current MV is associated with the current CU;
    generating Motion Boundary Enhancement (MBE) prediction for the current CU, wherein said generating MBE prediction comprising:
    generating a first predictor for a current boundary pixel in a boundary region of the current CU by applying motion compensation based on the current MV and a first reference picture pointed by the current MV;
    generating one or more second predictors for the current boundary pixel by applying the motion compensation based on said one or more neighboring MVs and one or more second reference pictures pointed by said one or more neighboring MVs;
    generating a current boundary pixel predictor for the current boundary pixel using a weighted sum of the first predictor and said one or more second predictors according to weighting factors; and
    applying encoding or decoding to the current CU using prediction data including the current boundary pixel predictor.
  2. The method of Claim 1, wherein the boundary region of the current CU corresponds to a number of pixel lines, pixel columns, or both at CU boundaries of the current CU.
  3. The method of Claim 2, wherein the number of pixel lines, pixel columns, or both at the CU boundaries of the current CU is pre-defined or adaptively determined based on CU size or PU size.
  4. The method of Claim 1, wherein the weighting factors are pre-defined or adaptively determined based on a distance between the current boundary pixel and a left or above CU boundary.
  5. The method of Claim 1, wherein when both the above MV and the left MV are determined, said generating one or more second predictors for the current boundary pixel  corresponds to generating two second predictors for the current boundary pixel based on the above MV and the left MV respectively.
  6. The method of Claim 1, wherein when both the above MV and the left MV are determined, said generating MBE prediction is performed twice; said generating one or more second predictors for the current boundary pixel corresponds to generating one second predictor for the current boundary pixel based on one neighboring MV from the above MV and the left MV during first said generating MBE prediction; and said generating one or more second predictors for the current boundary pixel corresponds to generating one second predictor for the current boundary pixel based on another neighboring MV from the above MV and the left MV during second said generating MBE prediction.
  7. The method of Claim 6, wherein said one neighboring MV corresponds to the above MV and said another neighboring MV corresponds to the left MV.
  8. The method of Claim 6, wherein said one neighboring MV corresponds to the left MV and said another neighboring MV corresponds to the above MV.
  9. The method of Claim 1, wherein said generating MBE prediction is always performed for the current CU, or is turned On/Off explicitly.
  10. The method of Claim 1, further comprising applying overlapped boundary motion compensation (OBMC) to PU boundary pixels in the current CU when the current CU is partitioned into two or more current prediction units (PUs) .
  11. The method of Claim 1, wherein the current CU comprises a luma component and at least one chroma component, and the weighting factors for the luma component and said at least one chroma component are different.
  12. The method of Claim 1, wherein the current CU comprises a luma component and a chroma component, and the boundary regions for the luma component and the chroma component are different.
  13. The method of Claim 1, wherein the above MV is derived based on the above CU and the above CU corresponds to a smallest CU (SCU).
  14. The method of Claim 1, wherein the left MV is derived based on the left CU and the left CU corresponds to a smallest CU (SCU).
  15. An apparatus for deriving motion compensated prediction for boundary pixels in a video coding system, the apparatus comprising one or more electronic circuits configured to:
    receive input data associated with a current coding unit (CU), wherein the current CU is partitioned into one or more current prediction units (PUs);
    determine a current motion vector (MV) and one or more neighboring MVs corresponding to an above MV, a left MV, or both the above MV and the left MV, wherein the above MV is associated with an above CU adjoining the current CU from above, the left MV is associated with a left CU adjoining the current CU from the left, and the current MV is associated with the current CU;
    generate a first predictor for a current boundary pixel in a boundary region of the current CU by applying motion compensation based on the current MV and a first reference picture pointed to by the current MV;
    generate one or more second predictors for the current boundary pixel by applying the motion compensation based on said one or more neighboring MVs and one or more second reference pictures pointed to by said one or more neighboring MVs;
    generate a current boundary pixel predictor for the current boundary pixel using a weighted sum of the first predictor and said one or more second predictors according to weighting factors; and
    apply encoding or decoding to the current CU using prediction data including the current boundary pixel predictor.
  16. The apparatus of Claim 15, wherein the boundary region of the current CU corresponds to a number of pixel lines, pixel columns, or both at CU boundaries of the current CU.
  17. The apparatus of Claim 15, wherein the weighting factors are pre-defined or adaptively determined based on a distance between the current boundary pixel and a left or above CU boundary.
  18. The apparatus of Claim 15, wherein the current CU comprises a luma component and at least one chroma component, and the weighting factors for the luma component and said at least one chroma component are different.
  19. The apparatus of Claim 15, wherein the current CU comprises a luma component and a chroma component; and numbers of pixel lines or pixel columns corresponding to the boundary pixels at the CU boundaries of the current CU are different for the luma component and the chroma component.
  20. The apparatus of Claim 15, wherein the above CU or the left CU corresponds to a smallest CU (SCU).
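The procedure recited in claims 15-17 can be sketched in a few lines: a first predictor is fetched from a reference picture using the current CU's MV, one or more second predictors are fetched using the above/left neighboring MVs, and the boundary pixel predictor is a weighted sum whose neighbor weight decays with distance from the CU boundary. The sketch below is illustrative only, not the claimed implementation: it assumes integer-pel motion with edge clipping, a single reference picture per MV, and hypothetical distance-based weights (1/4, 1/8, 1/16, ... on the neighbor side).

```python
import numpy as np

def mc_predict(ref, mv, y, x):
    """Fetch one motion-compensated sample (integer-pel MV, edge-clipped)."""
    h, w = ref.shape
    yy = min(max(y + mv[0], 0), h - 1)
    xx = min(max(x + mv[1], 0), w - 1)
    return float(ref[yy, xx])

def blend_boundary_pixel(ref_cur, ref_nb, cur_mv, nb_mvs, y, x, dist):
    """Weighted sum of the first predictor (current MV) and the second
    predictor(s) (neighboring MVs) for a boundary pixel at distance
    `dist` from the CU boundary.  The weights here are hypothetical:
    the neighbor contribution halves with each pixel away from the
    boundary, mirroring the distance-adaptive weighting of claim 17."""
    p_cur = mc_predict(ref_cur, cur_mv, y, x)
    # Average the second predictors when both above and left MVs exist.
    p_nb = sum(mc_predict(ref_nb, mv, y, x) for mv in nb_mvs) / len(nb_mvs)
    w_nb = 1.0 / (2 ** (dist + 2))  # dist 0 -> 1/4, dist 1 -> 1/8, ...
    return (1.0 - w_nb) * p_cur + w_nb * p_nb
```

For example, with a current-MV predictor of 100 and a neighbor-MV predictor of 200, a pixel on the boundary row (`dist=0`) blends to 0.75*100 + 0.25*200 = 125, while one row in (`dist=1`) blends to 112.5, so the correction fades smoothly into the CU interior.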
PCT/CN2014/093148 2013-12-06 2014-12-05 Method and apparatus for motion boundary processing WO2015081888A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/036,900 US11303900B2 (en) 2013-12-06 2014-12-05 Method and apparatus for motion boundary processing
CN201480066225.5A CN105794210B (en) 2013-12-06 2014-12-05 Motion-compensated prediction method and apparatus for boundary pixels in a video coding system
EP14867530.9A EP3078196B1 (en) 2013-12-06 2014-12-05 Method and apparatus for motion boundary processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361912686P 2013-12-06 2013-12-06
US61/912,686 2013-12-06

Publications (1)

Publication Number Publication Date
WO2015081888A1 true WO2015081888A1 (en) 2015-06-11

Family

ID=53272911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/093148 WO2015081888A1 (en) 2013-12-06 2014-12-05 Method and apparatus for motion boundary processing

Country Status (4)

Country Link
US (1) US11303900B2 (en)
EP (1) EP3078196B1 (en)
CN (1) CN105794210B (en)
WO (1) WO2015081888A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017213699A1 (en) * 2016-06-06 2017-12-14 Google Llc Adaptive overlapped block prediction in variable block size video coding
WO2017213700A1 (en) * 2016-06-06 2017-12-14 Google Llc Adaptive overlapped block prediction in variable block size video coding
WO2019089864A1 (en) * 2017-11-01 2019-05-09 Vid Scale, Inc. Overlapped block motion compensation
CN109792535A (en) * 2016-05-13 2019-05-21 夏普株式会社 Forecast image generating means, moving image decoding apparatus and dynamic image encoding device
WO2020089822A1 (en) * 2018-10-31 2020-05-07 Beijing Bytedance Network Technology Co., Ltd. Overlapped block motion compensation with derived motion information from neighbors
WO2020094077A1 (en) * 2018-11-06 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Weights derivation for geometric partitioning
CN113170136A (en) * 2018-10-09 2021-07-23 威尔乌集团 Motion smoothing of reprojected frames

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
US9813730B2 (en) * 2013-12-06 2017-11-07 Mediatek Inc. Method and apparatus for fine-grained motion boundary processing
US20180131943A1 (en) * 2015-04-27 2018-05-10 Lg Electronics Inc. Method for processing video signal and device for same
CN117528108A (en) * 2016-11-28 2024-02-06 英迪股份有限公司 Image encoding method, image decoding method, and method for transmitting bit stream
US20190014324A1 (en) * 2017-07-05 2019-01-10 Industrial Technology Research Institute Method and system for intra prediction in image encoding
CN115118994B (en) * 2017-08-22 2024-02-06 松下电器(美国)知识产权公司 Image encoder, image decoder, and bit stream generating apparatus
KR20200037130A (en) * 2017-08-28 2020-04-08 삼성전자주식회사 Video encoding method and apparatus, Video decoding method and apparatus
WO2019065329A1 (en) * 2017-09-27 2019-04-04 Panasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
CN111213379B (en) * 2017-10-16 2023-11-21 数字洞察力有限公司 Method, apparatus and recording medium storing bit stream for encoding/decoding image
CN110720221A (en) * 2018-02-14 2020-01-21 北京大学 Method, device and computer system for motion compensation
US11563970B2 (en) * 2018-02-26 2023-01-24 Interdigital Vc Holdings, Inc. Method and apparatus for generalized OBMC
WO2019201203A1 (en) * 2018-04-16 2019-10-24 Mediatek Inc. Methods and apparatuses of video processing with overlapped block motion compensation in video coding systems
US20190387251A1 (en) * 2018-06-19 2019-12-19 Mediatek Inc. Methods and Apparatuses of Video Processing with Overlapped Block Motion Compensation in Video Coding Systems
WO2020008325A1 (en) * 2018-07-01 2020-01-09 Beijing Bytedance Network Technology Co., Ltd. Improvement of interweaved prediction
WO2020233513A1 (en) * 2019-05-17 2020-11-26 Beijing Bytedance Network Technology Co., Ltd. Motion information determination and storage for video processing
CN115361550B (en) * 2021-02-22 2024-07-05 北京达佳互联信息技术有限公司 Improved overlapped block motion compensation for inter prediction
CN113596474A (en) * 2021-06-23 2021-11-02 浙江大华技术股份有限公司 Image/video encoding method, apparatus, system, and computer-readable storage medium
EP4179728A4 (en) * 2021-06-23 2023-12-27 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image prediction

Citations (4)

Publication number Priority date Publication date Assignee Title
EP1175920A1 (en) * 2000-07-27 2002-01-30 Board Of Regents, The University Of Texas System Therapeutic ultrasound apparatus for enhancing tissue perfusion
CN101309405A (en) * 2007-05-14 2008-11-19 华为技术有限公司 Reference data loading method and device
US20110110427A1 (en) * 2005-10-18 2011-05-12 Chia-Yuan Teng Selective deblock filtering techniques for video coding
US20130051470A1 (en) * 2011-08-29 2013-02-28 JVC Kenwood Corporation Motion compensated frame generating apparatus and method

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP3466032B2 (en) 1996-10-24 2003-11-10 富士通株式会社 Video encoding device and decoding device
US8107535B2 (en) * 2003-06-10 2012-01-31 Rensselaer Polytechnic Institute (Rpi) Method and apparatus for scalable motion vector coding
KR101044934B1 (en) * 2003-12-18 2011-06-28 삼성전자주식회사 Motion vector estimation method and encoding mode determining method
US8731054B2 (en) * 2004-05-04 2014-05-20 Qualcomm Incorporated Method and apparatus for weighted prediction in predictive frames
US9883203B2 (en) * 2011-11-18 2018-01-30 Qualcomm Incorporated Adaptive overlapped block motion compensation
US10230980B2 (en) * 2015-01-26 2019-03-12 Qualcomm Incorporated Overlapped motion compensation for video coding
US10939118B2 (en) * 2018-10-26 2021-03-02 Mediatek Inc. Luma-based chroma intra-prediction method that utilizes down-sampled luma samples derived from weighting and associated luma-based chroma intra-prediction apparatus


Non-Patent Citations (4)

Title
B.-D. Choi et al., IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, IEEE Service Center, 1 April 2007, pp. 407-416 (discloses a motion-compensation (MC) interpolation algorithm to enhance the temporal resolution of video sequences using adaptive overlapped block MC).
G. Jing et al., 2013 International Conference on Signal-Image Technology & Internet-Based Systems, IEEE, 2 December 2013, pp. 187-194 (discloses a video super-resolution algorithm to reconstruct high quality pictures in a low resolution video sequence from existing high resolution key frames).
Guo et al., "CE2: Overlapped Block Motion Compensation for 2NxN and Nx2N Motion Partitions," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting, 14 July 2011.
See also references of EP3078196A4

Cited By (18)

Publication number Priority date Publication date Assignee Title
CN109792535A (en) * 2016-05-13 2019-05-21 夏普株式会社 Forecast image generating means, moving image decoding apparatus and dynamic image encoding device
US10567793B2 (en) 2016-06-06 2020-02-18 Google Llc Adaptive overlapped block prediction in variable block size video coding
WO2017213700A1 (en) * 2016-06-06 2017-12-14 Google Llc Adaptive overlapped block prediction in variable block size video coding
WO2017213699A1 (en) * 2016-06-06 2017-12-14 Google Llc Adaptive overlapped block prediction in variable block size video coding
US10390033B2 (en) 2016-06-06 2019-08-20 Google Llc Adaptive overlapped block prediction in variable block size video coding
US11425418B2 (en) 2017-11-01 2022-08-23 Vid Scale, Inc. Overlapped block motion compensation
WO2019089864A1 (en) * 2017-11-01 2019-05-09 Vid Scale, Inc. Overlapped block motion compensation
CN113170136A (en) * 2018-10-09 2021-07-23 威尔乌集团 Motion smoothing of reprojected frames
CN113170136B (en) * 2018-10-09 2024-04-23 威尔乌集团 Motion smoothing of reprojected frames
WO2020089822A1 (en) * 2018-10-31 2020-05-07 Beijing Bytedance Network Technology Co., Ltd. Overlapped block motion compensation with derived motion information from neighbors
CN111131830A (en) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 Overlapped block motion compensation improvement
CN111131830B (en) * 2018-10-31 2024-04-12 北京字节跳动网络技术有限公司 Improvement of overlapped block motion compensation
WO2020094077A1 (en) * 2018-11-06 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Weights derivation for geometric partitioning
CN111418208A (en) * 2018-11-06 2020-07-14 北京字节跳动网络技术有限公司 Weight derivation for geometric segmentation
US11265541B2 (en) 2018-11-06 2022-03-01 Beijing Bytedance Network Technology Co., Ltd. Position dependent storage of motion information
US11431973B2 (en) 2018-11-06 2022-08-30 Beijing Bytedance Network Technology Co., Ltd. Motion candidates for inter prediction
US11665344B2 (en) 2018-11-06 2023-05-30 Beijing Bytedance Network Technology Co., Ltd. Multiple merge lists and orders for inter prediction with geometric partitioning
CN111418208B (en) * 2018-11-06 2023-12-05 北京字节跳动网络技术有限公司 Weight derivation for geometric segmentation

Also Published As

Publication number Publication date
CN105794210B (en) 2019-05-10
EP3078196A1 (en) 2016-10-12
CN105794210A (en) 2016-07-20
EP3078196B1 (en) 2023-04-05
EP3078196A4 (en) 2017-03-29
US11303900B2 (en) 2022-04-12
US20160295215A1 (en) 2016-10-06

Similar Documents

Publication Publication Date Title
EP3078196B1 (en) Method and apparatus for motion boundary processing
US10009612B2 (en) Method and apparatus for block partition of chroma subsampling formats
US9813730B2 (en) Method and apparatus for fine-grained motion boundary processing
WO2018036447A1 (en) Video encoding method and apparatus with in-loop filtering process not applied to reconstructed blocks located at image content discontinuity edge and associated video decoding method and apparatus
CA2876017C (en) Method and apparatus for intra transform skip mode
US9967563B2 (en) Method and apparatus for loop filtering cross tile or slice boundaries
US20170272758A1 (en) Video encoding method and apparatus using independent partition coding and associated video decoding method and apparatus
US9860530B2 (en) Method and apparatus for loop filtering
WO2017084577A1 (en) Method and apparatus for intra prediction mode using intra prediction filter in video and image compression
US11870991B2 (en) Method and apparatus of encoding or decoding video blocks with constraints during block partitioning
US20150326886A1 (en) Method and apparatus for loop filtering
Chiu et al. Decoder-side motion estimation and wiener filter for HEVC
EP4047928A1 (en) Improved overlapped block motion compensation for inter prediction
EP3903483A1 (en) Motion compensation boundary filtering
CN110771166B (en) Intra-frame prediction device and method, encoding device, decoding device, and storage medium
KR102686450B1 (en) Methods and devices for prediction-dependent residual scaling for video coding
WO2023207511A1 (en) Method and apparatus of adaptive weighting for overlapped block motion compensation in video coding system
WO2024016955A1 (en) Out-of-boundary check in video coding
EP3751850A1 (en) Motion compensation boundary filtering
WO2024006231A1 (en) Methods and apparatus on chroma motion compensation using adaptive cross-component filtering

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14867530

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15036900

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2014867530

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014867530

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE