WO2013032073A1 - Method for generating a prediction block in AMVP mode
- Publication number
- WO2013032073A1 (PCT/KR2012/000522)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction unit
- block
- candidate
- amvp
- motion vector
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/51—Motion estimation or motion compensation (predictive coding involving temporal prediction)
- H04N19/103—Selection of coding mode or of prediction mode (adaptive coding)
- H04N19/124—Quantisation (adaptive coding)
- H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block such as a macroblock
- H04N19/198—Adaptation specially adapted for the computation of encoding parameters, including smoothing of a sequence of encoding parameters, e.g. by averaging or by choice of the maximum, minimum or median value
- H04N19/52—Processing of motion vectors by predictive encoding
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/61—Transform coding in combination with predictive coding
Definitions
- the present invention relates to a method of generating a prediction block of an image encoded in an AMVP mode, and more particularly, to a method of decoding motion information encoded in an AMVP mode and generating a prediction block based on the motion information.
- inter prediction coding, which extracts a block similar to the current block from a previous picture and encodes the difference value, is one of the most effective methods for compressing an image.
- a block most similar to the current block is searched in a predetermined search range of a reference picture using a predetermined evaluation function.
- the compression rate of the data is increased by transmitting only the residue between the current block and similar blocks in the reference picture.
- the motion vector of the current block is predicted using neighboring blocks, and only the difference value between the predicted motion vector and the original motion vector is encoded and transmitted.
- however, the median of neighboring motion vectors does not always predict the motion vector of the current block effectively.
- the amount of data in the residual block decreases, while the amount of motion information (motion vector and reference picture index) to be transmitted steadily increases.
- the present invention relates to a method of effectively generating motion compensation information by restoring motion information encoded in an AMVP mode.
- a method for generating a prediction block in an AMVP mode includes: reconstructing a reference picture index and a differential motion vector of a current prediction unit; searching for valid spatial and temporal AMVP candidates of the current prediction unit; constructing an AMVP candidate list using the valid spatial and temporal AMVP candidates; if the number of valid AMVP candidates is less than a predetermined number, adding a motion vector having a predetermined value to the list as a candidate; determining the motion vector corresponding to the AMVP index of the current prediction unit among the motion vectors in the AMVP candidate list as the predictive motion vector of the current prediction unit; reconstructing the motion vector of the current prediction unit using the differential motion vector and the predictive motion vector; and generating a prediction block corresponding to the position indicated by the reconstructed motion vector in the reference picture indicated by the reference picture index.
- the method of generating a prediction block in an AMVP mode restores the reference picture index and differential motion vector of the current prediction unit and constructs an AMVP candidate list using the valid spatial and temporal AMVP candidates of the current prediction unit. If the number of valid AMVP candidates is smaller than a predetermined number, a motion vector having a predetermined value is added to the list as a candidate, and the motion vector corresponding to the AMVP index of the current prediction unit among the motion vectors in the AMVP candidate list is determined as the predicted motion vector of the current prediction unit.
- the current prediction unit reconstructs the motion vector using the differential motion vector and the predictive motion vector, and generates a prediction block corresponding to the position indicated by the reconstructed motion vector in the reference picture indicated by the reference picture index.
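The AMVP decoding steps above can be sketched as follows. This is a minimal illustration, assuming a fixed candidate-list size of two, duplicate removal, and a zero motion vector as the predetermined padding value; the function names and the tuple representation of motion vectors are not from the patent.

```python
def build_amvp_candidate_list(spatial_cands, temporal_cands, list_size=2,
                              default_mv=(0, 0)):
    """Collect valid spatial then temporal candidates, drop duplicates,
    and pad with a default motion vector until the list is full."""
    candidates = []
    for mv in spatial_cands + temporal_cands:
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == list_size:
            return candidates
    while len(candidates) < list_size:
        candidates.append(default_mv)
    return candidates

def reconstruct_motion_vector(candidates, amvp_index, diff_mv):
    """Select the predictor by AMVP index and add the differential MV."""
    pred_mv = candidates[amvp_index]
    return (pred_mv[0] + diff_mv[0], pred_mv[1] + diff_mv[1])
```

For example, with one valid spatial candidate that duplicates the temporal one, the list is padded with the zero vector, and the decoder adds the transmitted differential to the predictor selected by the AMVP index.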
- the motion information of the current prediction unit can be better predicted and the amount of coding can be reduced. Further, there is an effect that motion information encoded in the AMVP mode is decoded very effectively, and a fast and accurate prediction block can be generated.
- FIG. 1 is a block diagram illustrating a moving picture encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating an inter-prediction encoding process according to the present invention.
- FIG. 3 is a block diagram illustrating a merge encoding process according to an embodiment of the present invention.
- FIG. 4 is a view showing a position of a merge candidate according to the first embodiment of the present invention.
- FIG. 5 is a diagram showing a position of a merge candidate according to the second embodiment of the present invention.
- FIG. 6 is a block diagram illustrating an AMVP encoding process according to an embodiment of the present invention.
- FIG. 7 is a block diagram illustrating a video decoding apparatus according to an embodiment of the present invention.
- FIG. 8 is a diagram illustrating an inter-prediction decoding process according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating a merge mode motion vector decoding process according to the first embodiment of the present invention.
- FIG. 10 is a diagram illustrating a merge mode motion vector decoding process according to a second embodiment of the present invention.
- FIG. 11 is a diagram illustrating a process of decoding an AMVP mode motion vector according to the first embodiment of the present invention.
- FIG. 12 is a diagram illustrating a process of decoding an AMVP mode motion vector according to a second embodiment of the present invention.
- FIG. 1 is a block diagram illustrating a moving picture encoding apparatus according to the present invention.
- a moving picture encoding apparatus 100 includes a picture dividing unit 110, a transform unit 120, a quantization unit 130, a scanning unit 131, an entropy coding unit 140, an intra prediction unit 150, an inter prediction unit 160, an inverse quantization unit 135, an inverse transform unit 125, a post-processing unit 170, a picture storage unit 180, a subtraction unit 190, and an addition unit 195.
- the picture division unit 110 analyzes the input video signal, divides each picture into coding units of a predetermined size for each largest coding unit (LCU: Largest Coding Unit), determines the prediction mode, and determines the prediction unit size.
- the picture division unit 110 sends the prediction unit to be encoded to the intra prediction unit 150 or the inter prediction unit 160 according to a prediction mode (or a prediction method). Further, the picture division unit 110 sends the prediction unit to be encoded to the subtraction unit 190.
- the transform unit 120 transforms the residual block, which is the residual signal between the original block of the input prediction unit and the prediction block generated by the intra prediction unit 150 or the inter prediction unit 160.
- the residual block is composed of a coding unit or a prediction unit.
- a residual block composed of a coding unit or a prediction unit is divided into optimal transform units and transformed.
- Different transformation matrices may be determined depending on the prediction mode (intra or inter). Also, since the residual signal of the intra prediction has directionality according to the intra prediction mode, the transformation matrix can be adaptively determined according to the intra prediction mode.
- the transform unit can be transformed using two one-dimensional transform matrices (horizontal and vertical). For example, in the case of inter prediction, one predetermined transform matrix is determined.
- in the case of intra prediction, when the intra prediction mode is horizontal, the residual block is likely to have vertical directionality, so a DCT-based integer matrix is applied in the vertical direction and a DST-based or KLT-based integer matrix is applied in the horizontal direction.
- when the intra prediction mode is vertical, a DST-based or KLT-based integer matrix is applied in the vertical direction and a DCT-based integer matrix is applied in the horizontal direction.
- in DC mode, a DCT-based integer matrix is applied in both directions.
- the transform matrix may be adaptively determined depending on the size of the transform unit.
- the quantization unit 130 determines a quantization step size for quantizing the coefficients of the residual block transformed by the transform matrix.
- the quantization step size is determined for each coding unit of a predetermined size or larger (hereinafter referred to as a quantization unit).
- the predetermined size may be 8x8 or 16x16.
- the quantization unit 130 uses the quantization step size of the quantization unit adjacent to the current quantization unit as the quantization step size predictor of the current quantization unit.
- the quantization unit 130 searches the left quantization unit, the upper quantization unit, and the upper left quantization unit of the current quantization unit in order, and can generate the quantization step size predictor of the current quantization unit using one or two valid quantization step sizes.
- the first valid quantization step size found in the above scan order can be determined as the quantization step size predictor.
- alternatively, the average value of the two valid quantization step sizes found in the above order may be determined as the quantization step size predictor; when only one is valid, that one is determined as the predictor.
- the difference value between the quantization step size of the current encoding unit and the quantization step size predictor is transmitted to the entropy encoding unit 140.
- the left coding unit, the upper coding unit, and the upper left coding unit of the current coding unit may not exist.
- the scan order may be changed, and the upper left quantization unit may be omitted.
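The predictor derivation above (scan neighbours in order, use the first one or two valid step sizes, averaging when two are found) can be sketched as follows; `None` marking an unavailable quantization unit and the function name are illustrative assumptions.

```python
def predict_quant_step(left, above, above_left=None):
    """Quantization step size predictor from neighbouring quantization
    units, scanned in order (left, above, above-left). None marks a
    neighbour that does not exist or is invalid."""
    valid = [q for q in (left, above, above_left) if q is not None][:2]
    if not valid:
        return None           # no valid neighbour available
    if len(valid) == 1:
        return valid[0]       # single valid step size becomes the predictor
    return (valid[0] + valid[1]) // 2  # average of the two valid step sizes
```

Only the difference between the actual quantization step size and this predictor then needs to be transmitted to the entropy coder.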
- the quantized transform block is provided to the inverse quantization unit 135 and the scanning unit 131.
- the scanning unit 131 scans the coefficients of the quantized transform block and converts them into one-dimensional quantized coefficients. Since the coefficient distribution of the transform block after quantization may depend on the intra prediction mode, the scanning scheme is determined according to the intra prediction mode. The coefficient scanning method may also be determined by the size of the transform unit, and the scan pattern may vary according to the directional intra prediction mode. The quantized coefficients are scanned in the reverse direction of the scan order.
- the same scan pattern is applied to the quantization coefficients in each subset.
- the scan pattern between subsets applies a zigzag scan or a diagonal scan.
- the scan pattern preferably proceeds from the main subset containing the DC coefficient to the remaining subsets in the forward direction, but the reverse direction is also possible.
- a scan pattern between subsets can be set in the same manner as a scan pattern of quantized coefficients in a subset. In this case, the scan pattern between the sub-sets is determined according to the intra-prediction mode.
- the encoder transmits to the decoder information indicating the position of the last non-zero quantization coefficient in the transform unit. Information that can indicate the position of the last non-zero quantization coefficient in each subset can also be transmitted to the decoder.
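The subset partitioning and last-nonzero signalling can be illustrated with a small sketch; splitting into 16-coefficient subsets and the helper names are assumptions for illustration, not taken from the patent.

```python
def split_into_subsets(scan_coeffs, subset_size=16):
    """Split coefficients, already ordered by the scan pattern, into
    fixed-size subsets."""
    return [scan_coeffs[i:i + subset_size]
            for i in range(0, len(scan_coeffs), subset_size)]

def last_nonzero_position(scan_coeffs):
    """Position of the last non-zero quantized coefficient in scan order,
    or -1 when the block (or subset) is all zero. This is the position
    the encoder signals to the decoder."""
    for i in range(len(scan_coeffs) - 1, -1, -1):
        if scan_coeffs[i] != 0:
            return i
    return -1
```

The decoder can use the signalled position both per transform unit and per subset to skip entropy decoding of trailing zero runs.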
- the inverse quantization unit 135 inversely quantizes the quantized coefficients.
- the inverse transform unit 125 restores the inversely quantized transform coefficients into residual blocks in the spatial domain.
- the adder combines the residual block reconstructed by the inverse transform unit with the prediction block received from the intra prediction unit 150 or the inter prediction unit 160 to generate a reconstructed block.
- the post-processing unit 170 performs a deblocking filtering process to remove blocking artifacts in the reconstructed picture, an adaptive offset application process to compensate the difference from the original image on a per-pixel basis, and an adaptive loop filtering process to compensate the difference from the original image on a per-coding-unit basis.
- the deblocking filtering process is preferably applied to the boundaries of prediction units and transform units having a size larger than a predetermined size.
- the size may be 8x8.
- the deblocking filtering process may include determining the boundary to be filtered, determining the boundary filtering strength to be applied to the boundary, determining whether to apply the deblocking filter, and, if it is determined to apply the filter, selecting the filter to be applied to the boundary.
- whether the deblocking filter is applied is determined based on i) whether the boundary filtering strength is greater than 0 and ii) whether the variation of the pixel values at the boundary between the two blocks (P block and Q block) adjacent to the boundary to be filtered is smaller than a first reference value determined by the quantization parameter.
- preferably, at least two filters are available. If the absolute value of the difference between the two pixels located at the block boundary is greater than or equal to a second reference value, a filter that performs relatively weak filtering is selected. The second reference value is determined by the quantization parameter and the boundary filtering strength.
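The two-stage decision above (first whether to filter at all, then which filter strength to use) can be sketched as follows; the threshold parameter names and the strong/weak labels are illustrative assumptions.

```python
def should_deblock(boundary_strength, boundary_variation, first_ref):
    """Apply the deblocking filter only when the boundary filtering
    strength is positive and the pixel variation across the boundary is
    below a QP-derived first reference value."""
    return boundary_strength > 0 and boundary_variation < first_ref

def select_filter(p0, q0, second_ref):
    """Choose weak filtering when the step across the boundary is large
    (likely a real image edge to preserve), otherwise strong filtering."""
    return "weak" if abs(p0 - q0) >= second_ref else "strong"
```

A large step between the boundary pixels p0 and q0 suggests a genuine edge rather than a blocking artifact, which is why the weaker filter is chosen in that case.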
- the adaptive offset application process reduces the distortion between the pixels of the deblock-filtered image and the original pixels. Whether to perform the adaptive offset application process may be determined in units of pictures or slices.
- the picture or slice may be divided into a plurality of offset regions, and an offset type may be determined for each offset region.
- the offset type may include a predetermined number (e.g., four) of edge offset types and two band offset types. If the offset type is an edge offset type, the edge type to which each pixel belongs is determined and the corresponding offset is applied.
- the edge type is determined based on the distribution of two pixel values adjacent to the current pixel.
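The edge-type classification from the two pixels adjacent to the current pixel can be sketched as follows; the signed-category numbering is an assumption for illustration, not taken from the patent.

```python
def edge_category(left, cur, right):
    """Classify the current pixel against its two neighbours along the
    chosen edge direction. The sum of the two comparison signs yields
    -2 (local minimum), -1/+1 (concave/convex edge point),
    0 (flat or monotonic), +2 (local maximum)."""
    sign = lambda d: (d > 0) - (d < 0)
    return sign(cur - left) + sign(cur - right)
```

An offset looked up per category is then added to the pixel, pulling local extrema back toward their neighbours.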
- the adaptive loop filtering process can perform filtering based on a value obtained by comparing the original image with the image reconstructed through the deblocking filtering process or the adaptive offset application process.
- adaptive loop filtering can be applied to all pixels included in a 4x4 block or an 8x8 block. Whether the adaptive loop filter is applied can be determined for each coding unit.
- the size and the coefficient of the loop filter to be applied may vary depending on each coding unit.
- Information indicating whether or not the adaptive loop filter is applied to each coding unit may be included in each slice header. In the case of the color difference signal, it is possible to determine whether or not the adaptive loop filter is applied in units of pictures.
- the shape of the loop filter may have a rectangular shape unlike the luminance.
- Adaptive loop filtering can be applied on a slice-by-slice basis. Therefore, information indicating whether or not adaptive loop filtering is applied to the current slice is included in the slice header or the picture header. If the current slice indicates that adaptive loop filtering is applied, the slice header or picture header additionally includes information indicating the horizontal and / or vertical direction filter length of the luminance component used in the adaptive loop filtering process.
- the slice header or picture header may include information indicating the number of filter sets. At this time, if the number of filter sets is two or more, the filter coefficients can be encoded using the prediction method. Accordingly, the slice header or the picture header may include information indicating whether or not the filter coefficients are encoded in the prediction method, and may include predicted filter coefficients when the prediction method is used.
- the slice header or the picture header may include information indicating whether or not each of the color difference components is filtered.
- information indicating whether or not to filter Cr and Cb can be joint-coded (i.e., multiplexed coding).
- since, to reduce complexity, the case where neither Cr nor Cb is filtered is most likely to be the most frequent, the smallest index is allocated to that case for entropy encoding. When both Cr and Cb are filtered, the largest index is allocated for entropy encoding.
- the picture storage unit 180 receives the post-processed image data from the post-processing unit 170, and reconstructs and stores the pictures on a picture-by-picture basis.
- the picture may be a frame-based image or a field-based image.
- the picture storage unit 180 has a buffer (not shown) capable of storing a plurality of pictures.
- the inter-prediction unit 160 performs motion estimation using at least one reference picture stored in the picture storage unit 180, and determines a reference picture index indicating the reference picture and a motion vector. Based on the determined reference picture index and motion vector, a prediction block corresponding to the prediction unit to be coded is extracted and output from the reference picture used for motion estimation among the plurality of reference pictures stored in the picture storage unit 180.
- the intraprediction unit 150 performs intraprediction encoding using the reconstructed pixel values in a picture including the current prediction unit.
- the intra prediction unit 150 receives the current prediction unit to be predictively encoded and selects one of a predetermined number of intra prediction modes according to the size of the current block to perform intra prediction.
- the intra prediction unit adaptively filters the reference pixels to generate an intra prediction block. If reference pixels are not available, reference pixels may be generated using available reference pixels.
- the entropy coding unit 140 entropy-codes the coefficients quantized by the quantization unit 130, the intra prediction information received from the intra prediction unit 150, the motion information received from the inter prediction unit 160, and the like.
- FIG. 2 is a block diagram illustrating an inter-prediction encoding process according to the present invention.
- the inter-prediction coding process includes determining motion information of a current prediction unit, generating a prediction block, generating a residual block, coding a residual block, and coding motion information.
- prediction units will be referred to as blocks.
- the motion information of the current prediction unit includes a reference picture index and a motion vector to be referred to by the current prediction unit.
- one of at least one reconstructed reference picture is determined as the reference picture of the current prediction unit, and motion information indicating the position of the prediction block in the reference picture is determined.
- the reference picture index of the current block may be changed according to the inter prediction mode of the current block.
- when the current block is in a unidirectional prediction mode, the reference picture index indicates one of the reference pictures belonging to list 0 (L0).
- when the current block is in a bidirectional prediction mode, the motion information may include an index indicating a reference picture belonging to list 0 (L0) and an index indicating one of the reference pictures of list 1 (L1).
- when the current block is in a bidirectional prediction mode, it may instead include an index indicating one or two pictures among the reference pictures of the combined list LC generated by combining list 0 and list 1.
- the motion vector indicates the position of the prediction block in the picture indicated by each reference picture index.
- the motion vector may have integer-pixel precision, or it may have a resolution of 1/8 or 1/16 pixel.
- when the motion vector has integer-pixel precision, the prediction block is generated from integer-unit pixels: the block at the position indicated by the motion vector in the picture indicated by the reference picture index is copied to generate the prediction block of the current prediction unit.
- when the motion vector does not have integer-pixel precision, the pixels of the prediction block are generated by interpolating the integer-unit pixels in the picture indicated by the reference picture index.
- for luminance pixels, a prediction pixel can be generated using an 8-tap interpolation filter.
- for chrominance pixels, a prediction pixel can be generated using a 4-tap interpolation filter.
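A sketch of generating a half-pel prediction pixel with an 8-tap filter follows. The specific coefficients (HEVC-style half-pel taps) and the rounding shift are assumptions for illustration, since the patent text does not list them.

```python
def interpolate_half_pel(samples, pos):
    """Half-pel sample between samples[pos] and samples[pos + 1] using an
    8-tap filter over the four integer pixels on each side. Coefficients
    sum to 64, so the result is rounded and normalised with a shift."""
    taps = (-1, 4, -11, 40, 40, -11, 4, -1)
    acc = sum(t * samples[pos - 3 + k] for k, t in enumerate(taps))
    return (acc + 32) >> 6  # round to nearest, divide by 64
```

On a constant signal the filter reproduces the input, and on a linear ramp it lands on the rounded midpoint, which is the expected behaviour of an interpolation filter.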
- a residual block is generated using the difference between the current prediction unit and the prediction block.
- the size of the residual block may differ from the size of the current prediction unit. For example, when the current prediction unit is 2Nx2N, the residual block has the same size. However, when the current prediction unit is 2NxN or Nx2N, the residual block may be 2Nx2N; that is, when the current prediction unit is 2NxN, the residual block may be configured by combining two 2NxN residual blocks.
- the residual block is encoded in units of transform size. That is, transform, quantization, and entropy encoding are performed on a transform-size basis.
- the transform size may be determined according to the size of the residual block in a quadtree manner. The transform uses an integer-based DCT.
- the transformed block is quantized using a quantization matrix.
- the quantized block is entropy encoded with CABAC or CAVLC.
- the motion information of the current prediction unit is encoded using motion information of prediction units adjacent to the current prediction unit.
- the motion information of the current prediction unit is merge-encoded or AMVP-encoded. Therefore, first, it is determined whether the motion information of the current prediction unit is merge-encoded or AMVP-encoded, and the motion information of the current prediction unit is encoded according to the determined method.
- the spatial merge candidates and the temporal merge candidate are derived (S210, S220).
- the spatial merge candidates are derived first, and then the temporal merge candidate is obtained.
- the order of finding the space merge candidate and the time merge candidate is not limited thereto. For example, they may be obtained in the opposite order or in parallel.
- the spatial merge candidate configuration may be any of the following embodiments.
- Space merge candidate configuration information can be sent to the decoder.
- the configuration information may represent any one of the following embodiments, and may be information indicating the number of merge candidates in any one of the following embodiments.
- the plurality of spatial merge candidates are the left prediction unit (block A) of the current prediction unit, the upper prediction unit (block B) of the current prediction unit, the upper right prediction unit (block C) of the current prediction unit, and the lower left prediction unit (block D) of the current prediction unit.
- all of the valid prediction units may be candidates, or two valid candidates may be selected by scanning in the order A, B, C, D.
- when a plurality of prediction units adjoin the left side of the current prediction unit, the valid prediction unit at the uppermost position, or the valid prediction unit having the largest area, can be set as the left prediction unit (block A).
- similarly, the valid prediction unit at the leftmost position, or the valid prediction unit having the largest area, can be set as the upper prediction unit (block B).
- the plurality of spatial merge candidates may be the left prediction unit (block A), the upper prediction unit (block B), the upper right prediction unit (block C), the lower left prediction unit (block D), and the upper left prediction unit (block E) of the current prediction unit, scanned in order to obtain two valid candidates.
- the left prediction unit may be a prediction unit adjacent to the block E but not adjacent to the block D.
- the upper prediction unit may be a prediction unit adjacent to block E but not adjacent to block C.
- a plurality of spatial merge candidates may be obtained by scanning in the order of the left block (block A), the upper block (block B), the upper right block (block C), the lower left block (block D), and the upper left block (block E) of the current block; blocks that are valid in this order become candidates.
- the block E can be used when at least one of the blocks A, B, C, and D is invalid.
- the corner prediction unit is obtained by scanning in the order of the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E) of the current prediction unit, and is the first valid unit found.
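As a rough sketch of the corner-candidate scan described above (the block labels follow the text; the validity model, using `None` for an unavailable block, is an assumption for illustration only):

```python
# Sketch of the spatial candidate scan described above.
# `neighbors` maps block labels ("A".."E") to motion info, or None when a
# block is unavailable (outside the picture, intra-coded, etc.).

def first_valid(neighbors, order):
    """Return the label of the first valid block in scan order, or None."""
    for label in order:
        if neighbors.get(label) is not None:
            return label
    return None

def corner_candidate(neighbors):
    # Scan the upper-right (C), lower-left (D), then upper-left (E)
    # corner blocks; the first valid one becomes the corner candidate.
    return first_valid(neighbors, ["C", "D", "E"])

neighbors = {"A": None, "B": {"mv": (1, 0)}, "C": None,
             "D": {"mv": (0, 2)}, "E": {"mv": (3, 3)}}
print(corner_candidate(neighbors))  # → 'D' (C is invalid, D is the first valid)
```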
- the motion information of merge candidates existing above the current prediction unit among the spatial merge candidates in the above embodiments may be set differently according to the position of the current prediction unit.
- the motion information of the upper prediction unit (block B, C, or E) of the current prediction unit may be its own motion information or the motion information of an adjacent prediction unit.
- the motion information of the upper prediction unit may be determined as one of its own motion information or motion information (reference picture index and motion vector) of the adjacent prediction unit according to the size and position of the current prediction unit.
- the temporal merge candidate's reference picture index and motion vector are obtained through a separate process.
- the reference picture index of the temporal merging candidate can be obtained using any one of the reference picture indexes among the prediction units spatially adjacent to the current prediction unit.
- to obtain the reference picture index of the temporal merge candidate of the current prediction unit, the reference picture indexes of the left prediction unit (block A), the upper prediction unit (block B), the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E) of the current prediction unit may be used.
- the reference picture indexes of the left prediction unit (block A), the upper prediction unit (block B) and the corner prediction unit (any one of the blocks C, D and E) of the current prediction unit can be used.
- the reference picture indexes of a predetermined number (for example, three) of valid prediction units, scanned in the order of the left prediction unit (block A), the upper prediction unit (block B), the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E), may be used.
- the reference picture index of the left prediction unit (hereinafter, the left reference picture index), the reference picture index of the upper prediction unit (hereinafter, the upper reference picture index), and the reference picture index of the corner prediction unit (hereinafter, the corner reference picture index) may be used.
- alternatively, reference picture index 0 may be set as the reference picture index of the temporal merge candidate.
- the reference picture index having the highest frequency among the available reference picture indexes may be set as the reference picture index of the temporal merge candidate.
- a reference picture index having the minimum value among the plurality of reference picture indexes, or the reference picture index of the left block or the upper block, may be set as the reference picture index of the temporal merge candidate.
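The two selection rules above (highest-frequency index, or minimum index, with a fallback to index 0 when none is available) can be sketched as follows; the input list of neighboring reference indexes and the tie-breaking rule (prefer the smaller index) are illustrative assumptions:

```python
from collections import Counter

def ref_idx_by_frequency(indexes):
    """Most frequent available reference picture index (ties -> smaller index)."""
    if not indexes:
        return 0  # fall back to reference picture index 0 when none is available
    counts = Counter(indexes)
    # Rank by (frequency, -index) so a tie is resolved toward the smaller index.
    best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
    return best[0]

def ref_idx_by_minimum(indexes):
    """Smallest available reference picture index."""
    return min(indexes) if indexes else 0

print(ref_idx_by_frequency([1, 0, 1, 2]))  # → 1 (appears most often)
print(ref_idx_by_minimum([1, 0, 1, 2]))    # → 0
```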
- a picture to which the temporal merge candidate block belongs (hereinafter, a temporal merge candidate picture) is determined.
- the temporal merge candidate picture may be set to a picture having a reference picture index of 0.
- when the slice type is P, the first picture of list 0 (i.e., the picture whose index is 0) is set as the temporal merge candidate picture.
- when the slice type is B, the first picture of the reference picture list indicated by the flag signaling the temporal merge candidate list in the slice header is set as the temporal merge candidate picture. For example, the temporal merge candidate picture may be set from list0 when the flag indicates 1, and from list1 when it indicates 0.
- next, a temporal merge candidate block in the temporal merge candidate picture is obtained.
- as the temporal merge candidate block, any one of a plurality of corresponding blocks corresponding to the current prediction unit in the temporal merge candidate picture may be selected.
- a plurality of corresponding blocks may be prioritized, and the first valid corresponding block, according to the priority, may be selected as the temporal merge candidate block.
- the location of the current prediction unit may be a location within a slice or LCU.
- the motion vector of the temporal merge candidate block is set as the temporal merge candidate motion vector.
- the temporal merge candidate may be adaptively turned off according to the size of the current prediction unit. For example, in the case of a 4x4 block, the temporal merge candidate may be turned off to reduce complexity.
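A minimal sketch of the priority-ordered selection and the size-based disabling just described; the 4x4 threshold comes from the example in the text, while the representation of corresponding blocks as an ordered list is an assumption:

```python
def temporal_merge_candidate(corresponding_blocks, pu_width, pu_height):
    """Pick the first valid corresponding block by priority, or None.

    `corresponding_blocks` is assumed already ordered by priority, with
    None marking an invalid block. The temporal candidate is switched
    off for 4x4 PUs, as in the complexity-reduction example above.
    """
    if pu_width * pu_height <= 16:  # e.g. 4x4: skip to reduce complexity
        return None
    for block in corresponding_blocks:
        if block is not None:
            return block  # first valid block in priority order
    return None

print(temporal_merge_candidate([None, {"mv": (1, 1)}], 8, 8))  # → {'mv': (1, 1)}
```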
- the predetermined order is A, B, Col, C, and D in that order.
- Col denotes the temporal merge candidate.
- when the predetermined order is A, B, Col, C, D, block E may be added if at least one of blocks A, B, C, and D is invalid.
- in this case, E may be added at the lowest priority.
- a list may be constructed in the order of one of (A, D), one of (C, B, E), and Col.
- the predetermined order may be A, B, Col, Corner, or A, B, Corner, and Col.
- the number of merge candidates can be determined in slice units or LCU units.
- the merge candidate list is constructed in the order determined in the above-described embodiments.
- next, it is determined whether merge candidate generation is necessary (S240). If the number of merge candidates is set to a fixed value and the number of valid merge candidates is smaller than that fixed value, merge candidates are generated (S250) and added to the merge candidate list. A generated candidate is added after the lowest-ranked candidate in the list; when a plurality of merge candidates are generated, they are added in a predetermined order.
- the generated merge candidate may be a candidate whose motion vector is 0 and whose reference picture index is 0 (the first additional merge candidate). It may also be a candidate generated by combining the motion information of valid merge candidates (a second additional merge candidate). For example, a candidate may be generated by combining the motion information (reference picture index) of the temporal merge candidate with the motion information (motion vector) of a valid spatial candidate. The additional merge candidates may be added in the order of the first additional merge candidate then the second, or vice versa.
- steps S240 and S250 may be omitted.
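The candidate-generation step (S240/S250) can be sketched as follows. This is a simplified illustration: the `(ref_idx, mv)` tuple representation, the exact combination rule, and the ordering of additions are assumptions, with the zero-motion candidate and the temporal-index/spatial-vector combination taken from the text:

```python
def generate_additional_candidates(spatial, temporal, target_count):
    """Sketch of S240/S250: pad the merge candidate list toward a fixed size.

    `spatial` is a list of (ref_idx, mv) tuples; `temporal` is a single
    (ref_idx, mv) or None.
    """
    out = list(spatial)
    if temporal is not None:
        out.append(temporal)
    # First additional merge candidate: zero motion vector, reference index 0.
    if (0, (0, 0)) not in out and len(out) < target_count:
        out.append((0, (0, 0)))
    # Second additional candidates: temporal reference index combined
    # with a valid spatial candidate's motion vector.
    if temporal is not None:
        for _, mv in spatial:
            if len(out) >= target_count:
                break
            combined = (temporal[0], mv)
            if combined not in out:
                out.append(combined)
    return out[:target_count]

print(generate_additional_candidates([(1, (2, 3))], (0, (5, 5)), 4))
```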
- one merge candidate in the constructed merge candidate list is determined as the merge predictor of the current prediction unit (S260).
- then, the index of the merge predictor (i.e., the merge index) is encoded (S270). If there is only one candidate, the merge index is omitted; if there are two or more merge candidates, the merge index is encoded.
- the merge index may be coded with fixed-length coding or CAVLC.
- in the case of CAVLC, the merge index to codeword mapping may be adjusted according to the PU shape and the index of the block (PU index).
- the number of merge candidates can be variable.
- a code word corresponding to the merge index is selected using a table determined according to the number of valid merge candidates.
- the number of merge candidates can be fixed. In this case, a code word corresponding to the merge index is selected using one table corresponding to the number of merge candidates.
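The idea of selecting a codeword table according to the candidate count can be illustrated with a truncated unary code, in which the last index drops its stop bit. Truncated unary is used here purely as an example VLC; the actual tables used by the codec are not specified in this text:

```python
def truncated_unary_table(num_candidates):
    """Build a merge-index -> codeword table for a given candidate count.

    With N candidates, index i < N-1 is coded as i ones followed by a
    zero; the last index (N-1) omits the terminating zero, which is why
    the table depends on the number of valid candidates.
    """
    table = {}
    for idx in range(num_candidates):
        if idx < num_candidates - 1:
            table[idx] = "1" * idx + "0"
        else:
            table[idx] = "1" * idx  # last codeword drops the stop bit
    return table

print(truncated_unary_table(3))  # → {0: '0', 1: '10', 2: '11'}
```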
- the AMVP coding method will be described with reference to FIG.
- a spatial AMVP candidate and a temporal AMVP candidate are derived (S310, S320).
- the spatial AMVP candidates include a left candidate, which is one of the left prediction unit (block A) and the lower-left prediction unit (block D) of the current prediction unit, and an upper candidate, which is one of the upper prediction unit (block B), the upper-right prediction unit (block C), and the upper-left prediction unit (block E) of the current prediction unit.
- the motion vector of the first valid prediction unit found by scanning in a predetermined order is selected as the left or upper candidate.
- the predetermined order may be block A then block D (or the reverse) for the left candidate, and block B, block C, block E or block C, block B, block E for the upper candidate.
- the spatial AMVP candidates include the left prediction unit (block A), the upper prediction unit (block B), the upper-right prediction unit (block C), and the lower-left prediction unit (block D) of the current prediction unit.
- all of the valid prediction units may be candidates, or two valid prediction units may be selected as candidates by scanning in the order A, B, C, D.
- when there are a plurality of left prediction units, the valid prediction unit at the uppermost position, or the valid prediction unit with the largest area, may be set as the left prediction unit.
- when there are a plurality of upper prediction units, the valid prediction unit at the leftmost position, or the valid prediction unit with the largest area, may be set as the upper prediction unit.
- the spatial AMVP candidates include the left prediction unit (block A), the upper prediction unit (block B), the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E) of the current prediction unit.
- the left prediction unit may be a prediction unit adjacent to the block E but not adjacent to the block D.
- the upper prediction unit may be a prediction unit adjacent to block E but not adjacent to block C.
- the spatial AMVP candidates include the left prediction unit (block A), the upper prediction unit (block B), the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E) of the current prediction unit.
- the block E can be used when any one of the blocks A, B, C, and D is invalid.
- the spatial AMVP candidates include the left prediction unit (block A), the upper prediction unit (block B), and a corner prediction unit (one of blocks C, D, and E) of the current prediction unit.
- the corner prediction unit may be the first valid prediction unit found by scanning in the order of the upper-right prediction unit (block C), the lower-left prediction unit (block D), and the upper-left prediction unit (block E) of the current prediction unit.
- the motion information of the AMVP candidates existing on the upper side of the current prediction unit among the spatial AMVP candidates in the above embodiments may be set differently according to the position of the current prediction unit.
- the motion vector of the upper prediction unit (block B, C, or E) of the current prediction unit may be its own motion vector or the motion vector of an adjacent prediction unit.
- the motion vector of the upper prediction unit may be determined as its own motion vector or a motion vector of the adjacent prediction unit according to the size and position of the current prediction unit.
- a picture to which the temporal AMVP candidate block belongs (hereinafter, the temporal AMVP candidate picture) is determined.
- the temporal AMVP candidate picture may be set to a picture whose reference picture index is 0.
- when the slice type is P, the first picture of list 0 (i.e., the picture whose index is 0) is set as the temporal AMVP candidate picture.
- when the slice type is B, the first picture of the list indicated by the flag signaling the temporal AMVP candidate list in the slice header is set as the temporal AMVP candidate picture.
- a temporal AMVP candidate block in the temporal AMVP candidate picture is obtained. This process is the same as that for obtaining the temporal merge candidate block described above, so the description is omitted.
- the temporal AMVP candidate may be adaptively turned off according to the size of the current prediction unit. For example, if the current prediction unit is 4x4, the temporal AMVP candidate may be turned off to reduce complexity.
- the AMVP candidate list is constructed in the predetermined order using the valid AMVP candidates. In this case, if a plurality of AMVP candidates have the same motion vector (the reference pictures need not be the same), the lower-priority AMVP candidates are deleted from the list.
- the predetermined order is: one of A and D (in the order A, D or D, A), then one of B, C, and E (in the order B, C, E or C, B, E), then Col; alternatively, Col first, then one of A and D, then one of B, C, and E.
- Col represents a time AMVP candidate.
- the order may be A, B, Col, C, D, or C, D, Col, A, B.
- the predetermined order may take the first two candidates that are valid in the order A, B, C, D, E.
- when the order is A, B, Col, C, D, block E may be added at the lowest priority if at least one of A, B, C, D, or Col is invalid; alternatively, the first four candidates that are valid in the order A, B, C, D, E may be used.
- the order may be A, B, Col, Corner.
- next, it is determined whether AMVP candidate generation is necessary (S340). If the number of AMVP candidates is set to a fixed value in the AMVP candidate configuration and the number of valid AMVP candidates is smaller than that fixed value, an AMVP candidate is generated (S350). The fixed number may be 2 or 3. The generated AMVP candidate is added after the lowest-ranked AMVP candidate in the list.
- the candidate to be added may be a candidate having a motion vector of 0.
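Steps S330 through S350 above can be sketched as follows. The motion vector tuple representation is an assumption; the pruning rule (identical motion vectors are duplicates regardless of reference picture) and the zero-vector fill come from the text:

```python
def build_amvp_list(candidates, fixed_count):
    """Sketch of S330-S350: prune duplicate motion vectors, then pad.

    `candidates` is an ordered list of motion vectors (or None for an
    invalid candidate). Later candidates with a motion vector already
    in the list are dropped; the list is then padded with the zero
    motion vector up to the fixed candidate count.
    """
    amvp = []
    for mv in candidates:
        if mv is not None and mv not in amvp:  # drop later duplicates
            amvp.append(mv)
    while len(amvp) < fixed_count:
        amvp.append((0, 0))  # generated candidate with motion vector 0
    return amvp[:fixed_count]

print(build_amvp_list([(1, 2), (1, 2), None], 2))  # → [(1, 2), (0, 0)]
```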
- the motion vector predictor of the current prediction unit is determined in the constructed AMVP candidate list (S360). Then, an AMVP index indicating the predictor is generated.
- the reference picture index, differential motion vector, and AMVP index of the current prediction unit are coded (S380). If there is only one AMVP candidate, the AMVP index may be omitted.
- the AMVP index may be coded with fixed-length coding or CAVLC.
- in the case of CAVLC, the AMVP index to codeword mapping may be adjusted according to the PU shape and the index of the block (PU index).
- the number of AMVP candidates can be variable.
- a codeword corresponding to the AMVP index is selected using a table determined according to the number of valid AMVP candidates.
- the merge candidate blocks and the AMVP candidate blocks may be set to be the same, i.e., the AMVP candidate configuration may be identical to the merge candidate configuration described above. This reduces the complexity of the encoder.
- FIG. 7 is a block diagram illustrating a video decoding apparatus according to an embodiment of the present invention.
- the moving picture decoding apparatus includes an entropy decoding unit 210, an inverse quantization/inverse transform unit 220, an adder 270, a deblocking filter unit 250, a picture storage unit 260, an intra prediction unit 230, a motion compensation prediction unit 240, and an intra/inter changeover switch 280.
- the entropy decoding unit 210 decodes the coded bit stream transmitted from the moving picture coding apparatus and divides the coded bit stream into an intra prediction mode index, motion information, and a quantization coefficient sequence.
- the entropy decoding unit 210 supplies the decoded motion information to the motion compensation prediction unit 240.
- the entropy decoding unit 210 supplies the intra prediction mode index to the intraprediction unit 230 and the inverse quantization / inverse transformation unit 220.
- the entropy decoding unit 210 supplies the quantized coefficient sequence to the inverse quantization/inverse transform unit 220.
- the inverse quantization/inverse transform unit 220 converts the quantized coefficient sequence into a two-dimensional array of quantized coefficients.
- for this conversion, one of a plurality of scanning patterns is selected, based on at least one of the prediction mode of the current block (i.e., intra or inter prediction) and the intra prediction mode.
- the intraprediction mode is received from an intraprediction unit or an entropy decoding unit.
- the inverse quantization/inverse transform unit 220 dequantizes the two-dimensional array of quantized coefficients using a quantization matrix selected from a plurality of quantization matrices.
- a different quantization matrix is applied according to the size of the current block to be restored, and for blocks of the same size, a quantization matrix is selected based on at least one of the prediction mode and the intra prediction mode of the current block. The dequantized coefficients are then inversely transformed to reconstruct the residual block.
- the adder 270 reconstructs the image block by adding the residual block reconstructed by the inverse quantization/inverse transform unit 220 to the prediction block generated by the intra prediction unit 230 or the motion compensation prediction unit 240.
- the deblocking filter 250 performs deblocking filtering on the reconstructed image generated by the adder 270. This reduces deblocking artifacts caused by image loss during the quantization process.
- the picture storage unit 260 is a frame memory for holding a local decoded picture in which the deblocking filter process is performed by the deblocking filter 250.
- the intraprediction unit 230 restores the intra prediction mode of the current block based on the intra prediction mode index received from the entropy decoding unit 210.
- a prediction block is generated according to the restored intra prediction mode.
- the motion compensation prediction unit 240 generates a prediction block for the current block from the picture stored in the picture storage unit 260, based on the motion vector information.
- a prediction block is generated by applying a selected interpolation filter.
- the intra/inter changeover switch 280 provides the adder 270 with the prediction block generated by either the intra prediction unit 230 or the motion compensation prediction unit 240, based on the coding mode.
- FIG. 8 illustrates the inter-prediction decoding process according to the present invention.
- in the skip mode, the motion information of the current prediction unit is decoded through the skip mode motion information decoding process (S410).
- the skip mode motion information decoding process is the same as the merge mode motion information decoding process.
- the corresponding block of the reference picture indicated by the derived motion information of the current prediction unit is copied to generate the reconstruction block of the current prediction unit (S415).
- when the motion information of the current prediction unit is coded in the merge mode, it is decoded through the merge mode motion information decoding process (S425).
- a prediction block is generated using the decoded motion information of the current prediction unit (S430).
- the residual block is decoded (S435).
- when the motion information of the current prediction unit is coded in the AMVP mode, it is decoded through the AMVP mode motion information decoding process (S445).
- a prediction block is generated using the decoded motion information of the current prediction unit. Then, the residual block is decoded (S455), and a reconstruction block is generated using the prediction block and the residual block (S460).
- the motion information decoding process depends on the coding pattern of the motion information of the current prediction unit.
- the motion information encoding pattern of the current prediction unit may be one of merge mode and AMVP mode.
- the motion information decoding process is the same as that of the merge mode.
- FIG. 9 shows a motion vector decoding process when the number of merge candidates is variable.
- merge candidate configuration and merge candidate search order are the same as those shown in the detailed description related to FIG.
- when there is no merge codeword, the motion information of the current prediction unit is generated from the motion information of the sole merge candidate (S530). That is, the reference picture index and motion vector of the merge candidate are set as the reference picture index and motion vector of the current prediction unit.
- when a merge codeword exists, valid merge candidates are searched to construct a merge candidate list (S540).
- the configuration of the merge candidate and the method of constructing the merge candidate list are the same as those shown in the detailed description related to FIG.
- a VLC table corresponding to the number of valid merge candidates is selected (S550).
- a merge index corresponding to the merge code word is obtained (S560).
- the merge candidate corresponding to the merge index is selected on the merge candidate list, and the motion information of the merge candidate is set as the motion information of the current prediction unit (S570).
- FIG. 10 shows a merge mode motion vector decoding process when the number of merge candidates is fixed.
- the number of merge candidates may be a fixed value in units of pictures or slices.
- the valid merge candidate is searched (S610).
- the merge candidates are composed of spatial merge candidates and a temporal merge candidate.
- the positions and derivation methods of the spatial merge candidates are the same as those shown in FIG.
- the position and derivation method of the temporal merge candidate are the same as those shown in FIG.
- the temporal merge candidate may not be used if the size of the current prediction unit is smaller than a predetermined size. For example, the temporal merge candidate may be omitted for a 4x4 prediction unit.
- once the valid merge candidates are found, it is determined whether merge candidate generation is necessary (S620). If the number of valid merge candidates is smaller than the predetermined number, merge candidates are generated (S630). The motion information of valid merge candidates may be combined to generate a merge candidate, or a merge candidate with a motion vector of 0 and a reference picture index of 0 may be added. Generated merge candidates are added in a predetermined order.
- a merge list is constructed using the merge candidates (S640). This step may be performed in combination with steps S620 and S630.
- the merge candidate configuration and the merge candidate search order (i.e., the list construction order) are the same as described above.
- the merge index corresponding to the merge codeword of the received bit stream is recovered (S650). Since the number of merge candidates is fixed, the merge index corresponding to the merge codeword can be obtained using a single decoding table corresponding to that number. However, different decoding tables may be used depending on whether the temporal merge candidate is used.
- the candidate corresponding to the merge index is searched for in the merge candidate list (S660).
- the found merge candidate is determined as the merge predictor.
- the motion information of the current prediction unit is generated using the motion information of the merge predictor (S670). Specifically, the motion information of the merge predictor, that is, its reference picture index and motion vector, is determined as the reference picture index and motion vector of the current prediction unit.
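Steps S650 through S670 amount to an index lookup followed by a copy of the candidate's motion information; a minimal sketch (the `(ref_idx, mv)` tuple layout is an assumption):

```python
def decode_merge_motion_info(merge_list, merge_index):
    """Sketch of S660/S670: map the decoded merge index to motion info.

    Each list entry is (ref_idx, mv); the selected candidate's reference
    picture index and motion vector become the current PU's motion info.
    """
    predictor = merge_list[merge_index]  # S660: look up the candidate
    ref_idx, mv = predictor              # S670: copy its motion info
    return ref_idx, mv

merge_list = [(0, (4, -1)), (1, (0, 0))]
print(decode_merge_motion_info(merge_list, 0))  # → (0, (4, -1))
```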
- FIG. 11 shows a motion vector decoding process when the number of AMVP candidates is variable.
- the reference picture index and the differential motion vector of the current prediction unit are parsed (S710).
- AMVP candidate configuration and AMVP candidate search order are the same as those shown in the detailed description related to FIG.
- when there is no AMVP codeword, the motion vector of the AMVP candidate is set as the predicted motion vector of the current prediction unit (S740).
- if there is an AMVP codeword, valid AMVP candidates are searched to construct an AMVP candidate list (S750).
- the AMVP candidate configuration and AMVP candidate list construction method are the same as those shown in the detailed description related to FIG.
- a VLC table corresponding to the number of AMVP candidates is selected (S760).
- in step S790, the predicted motion vector obtained in step S740 or S780 and the differential motion vector obtained in step S710 are added to obtain the final motion vector of the current block.
- FIG. 12 shows a motion vector decoding process when the number of AMVP candidates is fixed.
- the reference picture index and the difference motion vector of the current prediction unit are parsed (S810).
- AMVP candidates are composed of space AMVP candidates and time AMVP candidates.
- the positions and derivation methods of the spatial AMVP candidates and the temporal AMVP candidate are the same as those shown in FIG. 6, so the description is omitted.
- the temporal AMVP candidate may not be used for prediction units smaller than a predetermined size. For example, it may be omitted for a 4x4 prediction unit.
- if the number of valid AMVP candidates is smaller than a predetermined number, an AMVP candidate is generated (S840).
- the predetermined number may be 2 or 3.
- a motion vector of the prediction unit may be added if a valid prediction unit other than the spatial upper AMVP candidate exists.
- the motion vector of the prediction unit may be added if there is a valid prediction unit other than the spatial left AMVP candidate.
- an AMVP candidate with a motion vector of zero may be added if there is no spatial left AMVP candidate and there is a spatial upper AMVP candidate.
- the AMVP candidate list is constructed using the available AMVP candidates and / or the generated AMVP candidates (S850).
- this step may instead be placed after step S820; in that case, it is performed after step S840.
- the candidate list construction method is the same as that shown in Fig.
- the AMVP index may be fixed length encoded.
- the AMVP candidate corresponding to the AMVP index is selected from the AMVP candidate list (S870). Then, the selected candidate is determined as an AMVP predictor.
- the motion vector of the AMVP predictor is determined as a predictive motion vector of the current prediction unit (S880).
- the differential motion vector obtained in step S810 and the predicted motion vector obtained in step S880 are added to obtain the final motion vector of the current prediction unit, and the reference picture index obtained in step S810 is set as the reference picture index of the current prediction unit (S890).
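The final reconstruction step above is a component-wise addition of the predicted motion vector and the parsed differential motion vector; a minimal sketch (the 2-tuple vector representation is an assumption):

```python
def reconstruct_motion_vector(mvp, mvd):
    """Final motion vector: predicted vector plus differential vector."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# e.g. predictor (3, -2) from the AMVP list, parsed difference (1, 1)
print(reconstruct_motion_vector((3, -2), (1, 1)))  # → (4, -1)
```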
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Claims (9)
- A method of generating a prediction block in AMVP mode, comprising: reconstructing a reference picture index and a differential motion vector of a current prediction unit; searching for valid spatial AMVP candidates of the current prediction unit; searching for a valid temporal AMVP candidate of the current prediction unit; constructing an AMVP candidate list using the valid spatial and temporal AMVP candidates; adding, when the number of valid AMVP candidates is smaller than a predetermined number, a motion vector having a predetermined value to the list as a candidate; determining, among the motion vectors in the AMVP candidate list, the motion vector corresponding to the AMVP index of the current prediction unit as the predicted motion vector of the current prediction unit; reconstructing the motion vector of the current prediction unit using the differential motion vector and the predicted motion vector; and generating a prediction block corresponding to the position indicated by the reconstructed motion vector within the reference picture indicated by the reference picture index.
- The method of claim 1, wherein constructing the AMVP candidate list comprises, when two or more AMVP candidates have the same motion vector, deleting the lower-priority AMVP candidate from the list.
- The method of claim 1, wherein searching for the temporal AMVP candidate comprises: determining a temporal AMVP candidate picture; and determining a temporal AMVP candidate block within the temporal AMVP candidate picture, wherein the temporal AMVP candidate picture is determined differently according to the slice type.
- The method of claim 3, wherein the temporal AMVP candidate picture is a picture whose reference picture index is 0.
- The method of claim 3, wherein the temporal AMVP candidate block is a first candidate block or a second candidate block, the first candidate block being the lower-left corner block of the block corresponding to the current prediction unit in the temporal candidate picture, and the second candidate block being the block containing the lower-right pixel of the center position of the block corresponding to the current prediction unit.
- The method of claim 5, wherein the temporal AMVP candidate block is the first valid block found by searching the first candidate block and the second candidate block in that order.
- The method of claim 1, wherein the motion vector of a spatial AMVP candidate of the current prediction unit is set differently according to the size and position of the current prediction unit.
- The method of claim 7, wherein, when the current prediction unit borders the upper boundary of an LCU, the motion vector of a spatial AMVP candidate located above the current prediction unit is the motion vector of the prediction unit corresponding to the AMVP candidate, or the motion vector of the left or right prediction unit of the AMVP candidate.
- The method of claim 1, wherein the predetermined number is 2.
Priority Applications (29)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2017014038A MX365013B (es) | 2011-08-29 | 2012-01-20 | Metodo para generar un bloque de prediccion en modo de prediccion de vector de movimiento avanzada (amvp). |
KR1020147011166A KR101492104B1 (ko) | 2011-08-29 | 2012-01-20 | 동영상 복호화 장치 |
MX2014002346A MX343471B (es) | 2011-08-29 | 2012-01-20 | Metodo para generar un bloque de prediccion en modo de prediccion de vector de movimiento avanzada (amvp). |
KR1020147011164A KR101492105B1 (ko) | 2011-08-29 | 2012-01-20 | Amvp 모드에서 영상 부호화 방법 |
MX2016014512A MX351933B (es) | 2011-08-29 | 2012-01-20 | Método para generar un bloque de predicción en modo de predicción de vector de movimiento avanzada (amvp). |
KR1020127006971A KR20140105039A (ko) | 2012-01-20 | 2012-01-20 | Amvp 모드에서의 예측 블록 생성 방법 |
CN201280042121.1A CN103765886B (zh) | 2011-08-29 | 2012-01-20 | 以amvp模式产生预测区块的方法 |
KR1020127006460A KR101210892B1 (ko) | 2011-08-29 | 2012-01-20 | Amvp 모드에서의 예측 블록 생성 방법 |
MX2019005839A MX2019005839A (es) | 2011-08-29 | 2012-01-20 | Metodo para generar un bloque de prediccion en modo de prediccion de vector de movimiento avanzada (amvp). |
BR112014004914-9A BR112014004914B1 (pt) | 2011-08-29 | 2012-01-20 | Método de codificação de uma imagem em um modo amvp |
KR1020147011165A KR20140057683A (ko) | 2011-08-29 | 2012-01-20 | 머지 모드에서 영상 부호화 방법 |
US13/742,058 US20130128982A1 (en) | 2011-08-29 | 2013-01-15 | Method for generating prediction block in amvp mode |
US14/083,232 US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
US14/586,406 US20150110197A1 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in amvp mode |
US14/586,471 US9800887B2 (en) | 2011-08-29 | 2014-12-30 | Method for generating prediction block in AMVP mode |
US15/335,015 US20170048539A1 (en) | 2011-08-29 | 2016-10-26 | Method for generating prediction block in amvp mode |
US15/708,740 US10123033B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/708,526 US10123032B2 (en) | 2011-08-29 | 2017-09-19 | Method for generating prediction block in AMVP mode |
US15/879,249 US10123035B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,236 US10123034B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US15/879,230 US10148976B2 (en) | 2011-08-29 | 2018-01-24 | Method for generating prediction block in AMVP mode |
US16/165,185 US10798401B2 (en) | 2011-08-29 | 2018-10-19 | Method for generating prediction block in AMVP mode |
US17/061,676 US11350121B2 (en) | 2011-08-29 | 2020-10-02 | Method for generating prediction block in AMVP mode |
US17/526,747 US11689734B2 (en) | 2011-08-29 | 2021-11-15 | Method for generating prediction block in AMVP mode |
US17/672,272 US11778225B2 (en) | 2011-08-29 | 2022-02-15 | Method for generating prediction block in AMVP mode |
US18/314,898 US20230276067A1 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in amvp mode |
US18/314,890 US20230276066A1 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in amvp mode |
US18/314,888 US20230283798A1 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in amvp mode |
US18/314,903 US20230283799A1 (en) | 2011-08-29 | 2023-05-10 | Method for generating prediction block in amvp mode |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20110086518 | 2011-08-29 | ||
KR10-2011-0086518 | 2011-08-29 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13742058 A-371-Of-International | 2012-01-20 | ||
US13/742,058 Continuation US20130128982A1 (en) | 2011-08-29 | 2013-01-15 | Method for generating prediction block in AMVP mode |
US14/083,232 Continuation US9948945B2 (en) | 2011-08-29 | 2013-11-18 | Method for generating prediction block in AMVP mode |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013032073A1 (ko) | 2013-03-07 |
Family
ID=47756521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2012/000522 WO2013032073A1 (ko) | 2011-08-29 | 2012-01-20 | Method for generating prediction block in AMVP mode
Country Status (6)
Country | Link |
---|---|
US (18) | US20130128982A1 (ko) |
KR (4) | KR20140057683A (ko) |
CN (7) | CN107197272B (ko) |
BR (1) | BR112014004914B1 (ko) |
MX (4) | MX2019005839A (ko) |
WO (1) | WO2013032073A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015057037A1 (ko) * | 2013-10-18 | 2015-04-23 | LG Electronics Inc. | Video decoding apparatus and method for decoding multi-view video
Families Citing this family (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5875979B2 (ja) * | 2010-06-03 | 2016-03-02 | Sharp Corporation | Filter device, image decoding device, image encoding device, and data structure of filter parameters
KR20120016991A (ko) * | 2010-08-17 | 2012-02-27 | Oh Soo-mi | Inter prediction method
EP2654302B1 (en) | 2010-12-13 | 2019-09-04 | Electronics and Telecommunications Research Institute | Inter prediction method
CN107071459B (zh) * | 2010-12-14 | 2020-01-03 | M&K Holdings Inc. | Apparatus for encoding a moving picture
KR20120140181A (ko) | 2011-06-20 | 2012-12-28 | Electronics and Telecommunications Research Institute | Encoding/decoding method and apparatus using intra-prediction block boundary filtering
KR20130050403A (ko) * | 2011-11-07 | 2013-05-16 | Oh Soo-mi | Method for generating reconstructed block in inter mode
CN109996082B (zh) | 2011-11-08 | 2022-01-25 | Electronics and Telecommunications Research Institute | Method and apparatus for sharing a candidate list
CN105659602B (zh) | 2013-10-14 | 2019-10-08 | Microsoft Technology Licensing, LLC | Encoder-side options for intra block copy prediction mode for video and image coding
WO2015054812A1 (en) | 2013-10-14 | 2015-04-23 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
WO2015054811A1 (en) | 2013-10-14 | 2015-04-23 | Microsoft Corporation | Features of intra block copy prediction mode for video and image coding and decoding |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
MX360926B (es) | 2014-01-03 | 2018-11-22 | Microsoft Technology Licensing Llc | Block vector prediction in video and image encoding/decoding.
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
EP3253059A1 (en) | 2014-03-04 | 2017-12-06 | Microsoft Technology Licensing, LLC | Block flipping and skip mode in intra block copy prediction |
EP3158734A4 (en) | 2014-06-19 | 2017-04-26 | Microsoft Technology Licensing, LLC | Unified intra block copy and inter prediction modes |
KR102330740B1 (ko) | 2014-09-30 | 2021-11-23 | Microsoft Technology Licensing, LLC | Rules for intra-picture prediction modes when wavefront parallel processing is enabled
WO2016056842A1 (ko) * | 2014-10-07 | 2016-04-14 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding or decoding an image using view merge prediction
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
US10659783B2 (en) | 2015-06-09 | 2020-05-19 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US10148977B2 (en) * | 2015-06-16 | 2018-12-04 | Futurewei Technologies, Inc. | Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions |
US20170085886A1 (en) * | 2015-09-18 | 2017-03-23 | Qualcomm Incorporated | Variable partition size for block prediction mode for display stream compression (dsc) |
KR102169435B1 (ko) * | 2016-03-21 | 2020-10-23 | Huawei Technologies Co., Ltd. | Adaptive quantization of weighted matrix coefficients
CN109417629B (zh) * | 2016-07-12 | 2023-07-14 | Electronics and Telecommunications Research Institute | Image encoding/decoding method and recording medium therefor
WO2018030773A1 (ko) * | 2016-08-11 | 2018-02-15 | Electronics and Telecommunications Research Institute | Image encoding/decoding method and apparatus
US11095892B2 (en) * | 2016-09-20 | 2021-08-17 | Kt Corporation | Method and apparatus for processing video signal |
US10911761B2 (en) | 2016-12-27 | 2021-02-02 | Mediatek Inc. | Method and apparatus of bilateral template MV refinement for video coding |
KR20180111378A (ko) * | 2017-03-31 | 2018-10-11 | Chips&Media, Inc. | Image processing method for processing motion information for parallel processing, and image decoding and encoding methods and apparatus using the same
US10630974B2 (en) * | 2017-05-30 | 2020-04-21 | Google Llc | Coding of intra-prediction modes |
KR102302671B1 (ko) | 2017-07-07 | 2021-09-15 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding a motion vector determined with adaptive motion vector resolution, and apparatus and method for decoding a motion vector
WO2019066514A1 (ko) | 2017-09-28 | 2019-04-04 | Samsung Electronics Co., Ltd. | Encoding method and apparatus therefor, and decoding method and apparatus therefor
CN107613305B (zh) * | 2017-10-12 | 2020-04-07 | Hangzhou Danghong Technology Co., Ltd. | Fast motion estimation method for P and B frames in HEVC
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
JP7104186B2 (ja) | 2018-06-05 | 2022-07-20 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between IBC and ATMVP
TWI746994B (zh) | 2018-06-19 | 2021-11-21 | Beijing Bytedance Network Technology Co., Ltd. | Different precisions for different reference lists
CN110636297B (zh) | 2018-06-21 | 2021-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Component-dependent sub-block dividing
CN110636298B (zh) | 2018-06-21 | 2022-09-13 | Beijing Bytedance Network Technology Co., Ltd. | Unified constraints for merge affine mode and non-merge affine mode
EP3791587A1 (en) | 2018-06-29 | 2021-03-17 | Beijing Bytedance Network Technology Co. Ltd. | Resetting of look up table per slice/tile/lcu row |
JP7328330B2 (ja) | 2018-06-29 | 2023-08-16 | Beijing Bytedance Network Technology Co., Ltd. | Checking order of motion candidates in an LUT
WO2020003281A1 (en) * | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Video bitstream processing using an extended merge mode and signaled motion information of a block |
KR20210025537A (ko) | 2018-06-29 | 2021-03-09 | Beijing Bytedance Network Technology Co., Ltd. | Concept of sequentially storing previously coded motion information using one or more look-up tables and using it to code subsequent blocks
SG11202012293RA (en) | 2018-06-29 | 2021-01-28 | Beijing Bytedance Network Technology Co Ltd | Update of look up table: fifo, constrained fifo |
CN110662037B (zh) * | 2018-06-29 | 2022-06-28 | Beijing Bytedance Network Technology Co., Ltd. | Restrictions on motion information sharing
TWI719523B (zh) | 2018-06-29 | 2021-02-21 | Beijing Bytedance Network Technology Co., Ltd. | Which look-up table needs to be updated and which does not
CN110662053B (zh) | 2018-06-29 | 2022-03-25 | Beijing Bytedance Network Technology Co., Ltd. | Video processing method, apparatus, and storage medium using a look-up table
JP7460617B2 (ja) | 2018-06-29 | 2024-04-02 | Beijing Bytedance Network Technology Co., Ltd. | LUT update conditions
WO2020003284A1 (en) | 2018-06-29 | 2020-01-02 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between lut and amvp |
EP3791585A1 (en) | 2018-06-29 | 2021-03-17 | Beijing Bytedance Network Technology Co. Ltd. | Partial/full pruning when adding a hmvp candidate to merge/amvp |
CN110677669B (zh) | 2018-07-02 | 2021-12-07 | Beijing Bytedance Network Technology Co., Ltd. | LUT with LIC
CN110876058B (zh) * | 2018-08-30 | 2021-09-21 | Huawei Technologies Co., Ltd. | History candidate list update method and apparatus
TW202017377A (zh) | 2018-09-08 | 2020-05-01 | Beijing Bytedance Network Technology Co., Ltd. | Affine mode in video encoding and decoding
WO2020053800A1 (en) | 2018-09-12 | 2020-03-19 | Beijing Bytedance Network Technology Co., Ltd. | How many hmvp candidates to be checked |
TWI815967B (zh) | 2018-09-19 | 2023-09-21 | Beijing Bytedance Network Technology Co., Ltd. | Mode-dependent adaptive motion vector resolution for affine mode coding
WO2020071846A1 (ko) * | 2018-10-06 | 2020-04-09 | LG Electronics Inc. | Method and apparatus for processing a video signal using intra prediction
CN112997480B (zh) * | 2018-11-10 | 2023-08-22 | Beijing Bytedance Network Technology Co., Ltd. | Rounding in pairwise average candidate calculations
WO2020141914A1 (ko) * | 2019-01-01 | 2020-07-09 | LG Electronics Inc. | Method and apparatus for processing a video signal based on history-based motion vector prediction
KR102648159B1 (ko) | 2019-01-10 | 2024-03-18 | Beijing Bytedance Network Technology Co., Ltd. | Invocation of LUT updating
CN113383554B (zh) | 2019-01-13 | 2022-12-16 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between an LUT and a shared merge list
WO2020147773A1 (en) | 2019-01-16 | 2020-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Inserting order of motion candidates in lut |
CN113366839B (zh) * | 2019-01-31 | 2024-01-12 | Beijing Bytedance Network Technology Co., Ltd. | Refined quantization step in video coding
JP7235877B2 (ja) | 2019-01-31 | 2023-03-08 | Beijing Bytedance Network Technology Co., Ltd. | Context for coding affine mode adaptive motion vector resolution
US10742972B1 (en) * | 2019-03-08 | 2020-08-11 | Tencent America LLC | Merge list construction in triangular prediction |
CN111698506B (zh) | 2019-03-11 | 2022-04-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Motion information candidate list construction method, and triangle prediction decoding method and apparatus
CN113615193 (zh) | 2019-03-22 | 2021-11-05 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between merge list construction and other tools
US11297320B2 (en) * | 2020-01-10 | 2022-04-05 | Mediatek Inc. | Signaling quantization related parameters |
US11968356B2 (en) * | 2022-03-16 | 2024-04-23 | Qualcomm Incorporated | Decoder-side motion vector refinement (DMVR) inter prediction using shared interpolation filters and reference pixels |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090038278A (ko) * | 2007-10-15 | 2009-04-20 | Sejong University Industry-Academia Cooperation Foundation | Method and apparatus for encoding and decoding an image
KR20110027480A (ко) * | 2009-09-10 | 2011-03-16 | SK Telecom Co., Ltd. | Motion vector encoding/decoding method and apparatus, and image encoding/decoding method and apparatus using the same
KR20110090841A (ko) * | 2010-02-02 | 2011-08-10 | Humax Co., Ltd. | Image encoding/decoding apparatus and method using weighted prediction
Family Cites Families (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778143A (en) * | 1993-01-13 | 1998-07-07 | Hitachi America, Ltd. | Method and apparatus for the selection of data for use in VTR trick playback operation in a system using progressive picture refresh |
TW536692B (en) * | 1999-04-16 | 2003-06-11 | Dolby Lab Licensing Corp | Using gain-adaptive quantization and non-uniform symbol lengths for improved audio coding |
JP2002112268A (ja) * | 2000-09-29 | 2002-04-12 | Toshiba Corp | Compressed image data decoding device
CN100459708C (zh) * | 2001-01-23 | 2009-02-04 | Koninklijke Philips Electronics N.V. | Method and device for embedding a watermark in an information signal
US7920624B2 (en) * | 2002-04-01 | 2011-04-05 | Broadcom Corporation | Inverse quantizer supporting multiple decoding processes |
JP4130783B2 (ja) * | 2002-04-23 | 2008-08-06 | Matsushita Electric Industrial Co., Ltd. | Motion vector encoding method and motion vector decoding method
JP2004023458A (ja) * | 2002-06-17 | 2004-01-22 | Toshiba Corp | Moving picture encoding/decoding method and apparatus
CN1225127C (zh) * | 2003-09-12 | 2005-10-26 | Institute of Computing Technology, Chinese Academy of Sciences | Encoder-side/decoder-side bidirectional prediction method for video coding
US7561620B2 (en) * | 2004-08-03 | 2009-07-14 | Microsoft Corporation | System and process for compressing and decompressing multiple, layered, video streams employing spatial and temporal encoding |
US7720154B2 (en) * | 2004-11-12 | 2010-05-18 | Industrial Technology Research Institute | System and method for fast variable-size motion estimation |
KR20060105810A (ko) | 2005-04-04 | 2006-10-11 | LG Electronics Inc. | Call forwarding method and system using wireless communication
KR100733966B1 (ko) | 2005-04-13 | 2007-06-29 | Electronics and Telecommunications Research Institute | Motion vector prediction apparatus and method
US8422546B2 (en) * | 2005-05-25 | 2013-04-16 | Microsoft Corporation | Adaptive video encoding using a perceptual model |
KR100727990B1 (ko) * | 2005-10-01 | 2007-06-13 | Samsung Electronics Co., Ltd. | Intra-prediction encoding method for an image and encoding apparatus using the same
CN101379847B (zh) * | 2006-02-07 | 2012-11-07 | NEC Corporation | Mobile communication system, radio base station controller, and relocation method
EP2076049A4 (en) * | 2007-03-28 | 2011-09-14 | Panasonic Corp | DECODING CIRCUIT, DECODING PROCESS, CODING CIRCUIT, AND CODING METHOD |
US8428133B2 (en) * | 2007-06-15 | 2013-04-23 | Qualcomm Incorporated | Adaptive coding of video block prediction mode |
WO2009045683A1 (en) * | 2007-09-28 | 2009-04-09 | Athanasios Leontaris | Video compression and transmission techniques
CN101472174A (zh) * | 2007-12-29 | 2009-07-01 | Zhiduo Microelectronics (Shanghai) Co., Ltd. | Method and apparatus for restoring original image data in a video decoder
CN101610413B (zh) * | 2009-07-29 | 2011-04-27 | Tsinghua University | Video encoding/decoding method and apparatus
US9060176B2 (en) * | 2009-10-01 | 2015-06-16 | Ntt Docomo, Inc. | Motion vector prediction in video coding |
US20110274162A1 (en) * | 2010-05-04 | 2011-11-10 | Minhua Zhou | Coding Unit Quantization Parameters in Video Coding |
JP2011160359A (ja) | 2010-02-03 | 2011-08-18 | Sharp Corp | Block noise amount prediction device, block noise amount prediction method, image processing device, program, and recording medium
WO2011110039A1 (en) * | 2010-03-12 | 2011-09-15 | Mediatek Singapore Pte. Ltd. | Motion prediction methods |
KR101752418B1 (ko) * | 2010-04-09 | 2017-06-29 | LG Electronics Inc. | Video signal processing method and apparatus
KR101484281B1 (ko) * | 2010-07-09 | 2015-01-21 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus using block merging, and video decoding method and apparatus using block merging
US9124898B2 (en) * | 2010-07-12 | 2015-09-01 | Mediatek Inc. | Method and apparatus of temporal motion vector prediction |
KR20120012385A (ko) * | 2010-07-31 | 2012-02-09 | Oh Soo-mi | Intra prediction encoding apparatus
KR101373814B1 (ko) * | 2010-07-31 | 2014-03-18 | M&K Holdings Inc. | Prediction block generating apparatus
KR20120016991A (ko) * | 2010-08-17 | 2012-02-27 | Oh Soo-mi | Inter prediction method
HRP20231669T1 (hr) * | 2010-09-02 | 2024-03-15 | Lg Electronics, Inc. | Inter prediction
CN102006480B (zh) * | 2010-11-29 | 2013-01-30 | Tsinghua University | Encoding and decoding method for binocular stereoscopic video based on inter-view prediction
CN107071459B (zh) * | 2010-12-14 | 2020-01-03 | M&K Holdings Inc. | Apparatus for encoding a moving picture
US9621916B2 (en) * | 2010-12-14 | 2017-04-11 | M&K Holdings Inc. | Apparatus for encoding a moving picture |
US9473789B2 (en) * | 2010-12-14 | 2016-10-18 | M&K Holdings Inc. | Apparatus for decoding a moving picture |
KR101831311B1 (ко) * | 2010-12-31 | 2018-02-23 | Electronics and Telecommunications Research Institute | Image information encoding and decoding methods and apparatus using the same
EP2664146A1 (en) * | 2011-01-14 | 2013-11-20 | Motorola Mobility LLC | Joint spatial and temporal block merge mode for hevc |
JP5358746B2 (ja) * | 2011-03-03 | 2013-12-04 | Panasonic Corporation | Video encoding method, video encoding device, and program
US9288501B2 (en) * | 2011-03-08 | 2016-03-15 | Qualcomm Incorporated | Motion vector predictors (MVPs) for bi-predictive inter mode in video coding |
US9066110B2 (en) * | 2011-03-08 | 2015-06-23 | Texas Instruments Incorporated | Parsing friendly and error resilient merge flag coding in video coding |
US9648334B2 (en) * | 2011-03-21 | 2017-05-09 | Qualcomm Incorporated | Bi-predictive merge mode based on uni-predictive neighbors in video coding |
US9247266B2 (en) * | 2011-04-18 | 2016-01-26 | Texas Instruments Incorporated | Temporal motion data candidate derivation in video coding |
US9247249B2 (en) * | 2011-04-20 | 2016-01-26 | Qualcomm Incorporated | Motion vector prediction in video coding |
US9866859B2 (en) * | 2011-06-14 | 2018-01-09 | Texas Instruments Incorporated | Inter-prediction candidate index coding independent of inter-prediction candidate list construction in video coding |
US9131239B2 (en) * | 2011-06-20 | 2015-09-08 | Qualcomm Incorporated | Unified merge mode and adaptive motion vector prediction mode candidates selection |
US8896284B2 (en) * | 2011-06-28 | 2014-11-25 | Texas Instruments Incorporated | DC-DC converter using internal ripple with the DCM function |
CN107360432A (zh) * | 2011-08-29 | 2017-11-17 | 苗太平洋控股有限公司 | Apparatus for decoding motion information in merge mode
US9083983B2 (en) * | 2011-10-04 | 2015-07-14 | Qualcomm Incorporated | Motion vector predictor candidate clipping removal for video coding |
US9762904B2 (en) * | 2011-12-22 | 2017-09-12 | Qualcomm Incorporated | Performing motion vector prediction for video coding |
BR122020008353B1 (pt) * | 2011-12-28 | 2022-05-10 | JVC Kenwood Corporation | Moving picture encoding device and moving picture encoding method
JP6171627B2 (ja) * | 2013-06-28 | 2017-08-02 | JVC Kenwood Corporation | Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program
US9794870B2 (en) * | 2013-06-28 | 2017-10-17 | Intel Corporation | User equipment and method for user equipment feedback of flow-to-rat mapping preferences |
US9432685B2 (en) * | 2013-12-06 | 2016-08-30 | Qualcomm Incorporated | Scalable implementation for parallel motion estimation regions |
- 2012
- 2012-01-20 CN CN201710357011.9A patent/CN107197272B/zh active Active
- 2012-01-20 CN CN201611152704.6A patent/CN106851311B/zh active Active
- 2012-01-20 KR KR1020147011165A patent/KR20140057683A/ko not_active Application Discontinuation
- 2012-01-20 MX MX2019005839A patent/MX2019005839A/es unknown
- 2012-01-20 WO PCT/KR2012/000522 patent/WO2013032073A1/ko active Application Filing
- 2012-01-20 CN CN201280042121.1A patent/CN103765886B/zh active Active
- 2012-01-20 CN CN201710358282.6A patent/CN107277548B/zh active Active
- 2012-01-20 KR KR1020147011164A patent/KR101492105B1/ko active IP Right Review Request
- 2012-01-20 KR KR1020127006460A patent/KR101210892B1/ko active IP Right Review Request
- 2012-01-20 MX MX2014002346A patent/MX343471B/es active IP Right Grant
- 2012-01-20 CN CN201710357777.7A patent/CN107277547B/zh active Active
- 2012-01-20 KR KR1020147011166A patent/KR101492104B1/ko active IP Right Review Request
- 2012-01-20 CN CN201510012849.5A patent/CN104883576B/zh active Active
- 2012-01-20 MX MX2016014512A patent/MX351933B/es unknown
- 2012-01-20 MX MX2017014038A patent/MX365013B/es unknown
- 2012-01-20 CN CN201710358284.5A patent/CN107257480B/zh active Active
- 2012-01-20 BR BR112014004914-9A patent/BR112014004914B1/pt active IP Right Grant
- 2013
- 2013-01-15 US US13/742,058 patent/US20130128982A1/en not_active Abandoned
- 2013-11-18 US US14/083,232 patent/US9948945B2/en active Active
- 2014
- 2014-12-30 US US14/586,471 patent/US9800887B2/en active Active
- 2014-12-30 US US14/586,406 patent/US20150110197A1/en not_active Abandoned
- 2016
- 2016-10-26 US US15/335,015 patent/US20170048539A1/en not_active Abandoned
- 2017
- 2017-09-19 US US15/708,740 patent/US10123033B2/en active Active
- 2017-09-19 US US15/708,526 patent/US10123032B2/en active Active
- 2018
- 2018-01-24 US US15/879,249 patent/US10123035B2/en active Active
- 2018-01-24 US US15/879,236 patent/US10123034B2/en active Active
- 2018-01-24 US US15/879,230 patent/US10148976B2/en active Active
- 2018-10-19 US US16/165,185 patent/US10798401B2/en active Active
- 2020
- 2020-10-02 US US17/061,676 patent/US11350121B2/en active Active
- 2021
- 2021-11-15 US US17/526,747 patent/US11689734B2/en active Active
- 2022
- 2022-02-15 US US17/672,272 patent/US11778225B2/en active Active
- 2023
- 2023-05-10 US US18/314,898 patent/US20230276067A1/en active Pending
- 2023-05-10 US US18/314,888 patent/US20230283798A1/en active Pending
- 2023-05-10 US US18/314,890 patent/US20230276066A1/en active Pending
- 2023-05-10 US US18/314,903 patent/US20230283799A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090038278A (ko) * | 2007-10-15 | 2009-04-20 | Sejong University Industry-Academia Cooperation Foundation | Method and apparatus for encoding and decoding an image
KR20110027480A (ko) * | 2009-09-10 | 2011-03-16 | SK Telecom Co., Ltd. | Motion vector encoding/decoding method and apparatus, and image encoding/decoding method and apparatus using the same
KR20110090841A (ko) * | 2010-02-02 | 2011-08-10 | Humax Co., Ltd. | Image encoding/decoding apparatus and method using weighted prediction
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015057037A1 (ko) * | 2013-10-18 | 2015-04-23 | LG Electronics Inc. | Video decoding apparatus and method for decoding multi-view video
CN105637874A (zh) * | 2013-10-18 | 2016-06-01 | Lg电子株式会社 | 解码多视图视频的视频解码装置和方法 |
US10063887B2 (en) | 2013-10-18 | 2018-08-28 | Lg Electronics Inc. | Video decoding apparatus and method for decoding multi-view video |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013032073A1 (ko) | Method for generating prediction block in AMVP mode | |
WO2017052081A1 (ko) | Inter prediction method and apparatus in an image coding system | |
WO2018212578A1 (ko) | Video signal processing method and apparatus | |
WO2012081879A1 (ko) | Method for decoding an inter-prediction-encoded moving picture | |
WO2017188566A1 (ко) | Inter prediction method and apparatus in an image coding system | |
WO2016052977A1 (ко) | Video signal processing method and apparatus | |
WO2016200100A1 (ko) | Method and apparatus for encoding or decoding an image using syntax signaling for adaptive weighted prediction | |
WO2019117634A1 (ko) | Image coding method based on secondary transform and apparatus therefor | |
WO2017065357A1 (ko) | Filtering method and apparatus for improving prediction in an image coding system | |
WO2013002557A2 (ko) | Method and apparatus for encoding motion information, and method and apparatus for decoding the same | |
WO2012173415A2 (ko) | Method and apparatus for encoding motion information, and method and apparatus for decoding the same | |
WO2017082443A1 (ko) | Adaptive image prediction method and apparatus using a threshold in an image coding system | |
WO2017052009A1 (ко) | AMVR-based image coding method and apparatus in an image coding system | |
WO2014171713A1 (ko) | Video encoding/decoding method and apparatus using intra prediction | |
WO2012023763A2 (ko) | Inter prediction encoding method | |
WO2019045392A1 (ко) | Video signal processing method and apparatus | |
WO2016159610A1 (ко) | Video signal processing method and apparatus | |
WO2016085231A1 (ко) | Video signal processing method and apparatus | |
WO2015020504A1 (ко) | Method and apparatus for determining merge mode | |
WO2016114583A1 (ко) | Video signal processing method and apparatus | |
WO2020130600A1 (ко) | Video signal processing method and apparatus for signaling a prediction mode | |
WO2016122251A1 (ко) | Video signal processing method and apparatus | |
WO2016122253A1 (ко) | Video signal processing method and apparatus | |
WO2017195917A1 (ко) | Intra prediction method and apparatus in a video coding system | |
WO2019027200A1 (ко) | Method and apparatus for representing positions of non-zero coefficients | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 20127006460; Country of ref document: KR; Kind code of ref document: A |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12829062; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: MX/A/2014/002346; Country of ref document: MX |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27-06-2014) |
| REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112014004914 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12829062; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: IDP00201607817; Country of ref document: ID |
| ENP | Entry into the national phase | Ref document number: 112014004914; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20140228 |