WO2009096721A2 - Method and apparatus for encoding and decoding a video signal using affine transformation-based motion compensation - Google Patents

Method and apparatus for encoding and decoding a video signal using affine transformation-based motion compensation

Info

Publication number
WO2009096721A2
Authority
WO
WIPO (PCT)
Prior art keywords
affine
current block
transformation
block
motion compensation
Prior art date
Application number
PCT/KR2009/000441
Other languages
English (en)
Other versions
WO2009096721A3 (fr)
Inventor
Dong Hyung Kim
Se Yoon Jeong
Jin Soo Choi
Won Sik Cheong
Kyung Ae Moon
Jin Woo Hong
Original Assignee
Electronics And Telecommunications Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics And Telecommunications Research Institute filed Critical Electronics And Telecommunications Research Institute
Priority to US12/865,069 (US8665958B2)
Priority claimed from KR1020090007038A (KR101003105B1)
Publication of WO2009096721A2
Publication of WO2009096721A3

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/537 Motion estimation other than block-based
    • H04N19/54 Motion estimation other than block-based using feature points or meshes
    • H04N19/543 Motion estimation other than block-based using regions

Definitions

  • the present invention relates to a video encoding method and apparatus and a video decoding method and apparatus in which a video signal can be encoded through affine transformation-based motion compensation.
  • The present invention is based on research (Project Management No.: 2007-S-004-01, Project Title: Development of Rich Media Broadcasting Technology through Advancement of AV codec) conducted as part of the Information Technology (IT) Growth Power Technology Development Project launched by the Ministry of Information and Communication and the Institute for Information Technology Advancement (IITA).
  • Inter-frame encoding, such as H.264 video encoding, is similar to various other video encoding methods in that it predicts a current block through block-oriented motion estimation and encodes the predicted current block.
  • However, inter-frame encoding is differentiated from various other video encoding methods in that it uses various macroblock modes and adopts different block sizes for the various macroblock modes to perform motion estimation and motion compensation.
  • Inter-frame encoding generally includes performing motion estimation in each of the various macroblock modes, choosing whichever of the various macroblock modes is determined to be optimal in consideration of rate-distortion performance, and encoding a prediction error in the chosen macroblock mode, i.e., the difference(s) between a current block and a block obtained by performing motion estimation on the current block.
  • motion estimation and motion compensation are performed only in consideration of horizontal and vertical translational motion components. That is, referring to FIG. 1, motion estimation and motion compensation may be performed on a current block only in consideration of horizontal and vertical motions (mv_x and mv_y) with respect to a reference frame.
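  • For illustration only (not part of the patent text; the array layout, block size and function name are assumptions), translational motion compensation of this kind simply copies a reference-frame region displaced by the motion vector:

```python
import numpy as np

def translational_mc(ref_frame: np.ndarray, x: int, y: int,
                     mv_x: int, mv_y: int, block: int = 8) -> np.ndarray:
    """Predict a block at (x, y) by copying the reference-frame region
    displaced by the integer motion vector (mv_x, mv_y)."""
    return ref_frame[y + mv_y: y + mv_y + block,
                     x + mv_x: x + mv_x + block].copy()
```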
  • When only horizontal and vertical translational motions are considered, coding complexity may decrease, but high encoding efficiency may not be achievable, especially when an object in a picture to be encoded undergoes an affine transformation such as rotation, enlargement or reduction.
  • Conversely, if affine transformations are also taken into account during motion estimation, encoding efficiency may increase, but coding complexity, and particularly the complexity of motion estimation, may increase considerably.
  • the present invention provides a video encoding method and apparatus and a video decoding method and apparatus which can achieve high encoding efficiency even when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.
  • a video encoding method including determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  • a video encoding apparatus including a motion estimation unit calculating a motion vector of a current block with reference to a reference block; an affine-transformation object calculation unit determining whether a current block to be subjected to motion compensation includes an affine-transformation object having an affine transformation and outputting an affine-transformation object detection signal corresponding to the results of the determination; and a motion compensation unit generating a prediction block by performing either affine transformation-based motion compensation or motion vector-based motion compensation on the current block in response to the affine-transformation object detection signal.
  • a video decoding method including determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
  • a video decoding apparatus including an affine-transformation object detection unit determining whether an affine-transformation object exists in a reference block and outputting a signal indicating the results of the determination; a motion compensation unit generating a predicted block by performing one of affine transformation-based motion compensation and motion vector-based motion compensation on the reference block in response to the signal output by the affine-transformation object detection unit; and an adding unit which generates a current block by adding the predicted block and a residual signal.
  • a computer-readable recording medium having recorded thereon a program for executing a video encoding method including determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of an affine transformation of the affine-transformation object; and if the current block does not include any affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block.
  • a computer-readable recording medium having recorded thereon a program for executing a video decoding method including determining whether an affine-transformation object exists in a reference block; if an affine-transformation object exists in the reference block, generating a predicted block by performing affine transformation-based motion compensation on the reference block; and if no affine-transformation object exists in the reference block, generating the predicted block by performing motion vector-based motion compensation on the reference block.
  • affine transformation-based motion estimation/compensation may be performed on each block including an affine-transformation object having an affine transformation.
  • According to the present invention, it is possible to establish an affine model based only on the motion in a previously-encoded macroblock.
  • the present invention can be readily applied to an encoding apparatus (such as an H.264 encoding apparatus) performing encoding in units of macroblocks.
  • FIG. 1 illustrates a diagram for explaining conventional motion estimation and compensation methods in which only horizontal and vertical translational motions are considered
  • FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method
  • FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention
  • FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention
  • FIG. 5 illustrates a diagram for explaining how to divide an 8×8 block into eight triangular blocks
  • FIG. 6 illustrates a diagram for explaining motion vectors used to deduce an affine transformation at each of a plurality of triangular blocks in an 8×8 block
  • FIGS. 7 and 8 illustrate diagrams for affine transformations that can be used in the present invention.
  • FIG. 2 illustrates a diagram for explaining a typical inter-frame encoding method.
  • the typical inter-frame encoding method may largely involve four phases: Phases 1 through 4.
  • Phases 1 and 2 may be phases for estimating motion. More specifically, in phase 1, a motion vector for each of an inter 16×16 block, inter 16×8 blocks, and inter 8×16 blocks may be estimated. In phase 2, a motion vector for each of a plurality of sub-blocks of an inter 8×8 block, i.e., a motion vector for each of an inter 8×8 block, inter 8×4 blocks, inter 4×8 blocks, and inter 4×4 blocks, may be estimated.
  • In phase 3, a sub-macroblock mode may be chosen for a sub-macroblock in an inter 8×8 macroblock by using a rate-distortion function.
  • the rate-distortion function may be represented by Equation (1):
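  • Equation (1) is not reproduced in this extract; the Lagrangian rate-distortion cost conventionally used for H.264 mode decision, consistent with the Rate and Distortion terms defined in the next item, is:

$$ J = \mathrm{Distortion} + \lambda \cdot \mathrm{Rate} $$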
  • Rate indicates the bitrate used to encode side information, such as the prediction error (i.e., the differences between the block currently being encoded and a restored block obtained by compensation using the motion vector of the current block) and the motion vector, and Distortion indicates the sum of the squares of the differences between the current block and the restored block.
  • In phase 4, an optimum macroblock mode may be chosen from among all available macroblock modes, including a skip mode and an intra macroblock mode, in consideration of rate-distortion performance.
  • In the present invention, affine transformation-based motion compensation may be applied only in phases 3 and 4, in consideration of coding complexity. That is, in the video encoding and decoding methods according to the present invention, only horizontal and vertical translational motions may be taken into consideration during the estimation of a motion vector, while affine transformation-based motion compensation, in which the rotation, enlargement or reduction of an object is considered, may be performed in the motion compensation phase. Therefore, it is possible to minimize coding complexity while providing high encoding efficiency.
  • affine transformation-based motion compensation may be performed only on blocks that are believed to include affine transformations such as rotation, enlargement and reduction.
  • the video encoding and decoding methods according to the present invention suggest ways to skip an inverse matrix calculation process for deducing an affine model from blocks to be subjected to affine transformation-based motion compensation. Therefore, it is possible to achieve high encoding efficiency with less computation.
  • FIG. 3 illustrates a block diagram of a video encoding apparatus according to an exemplary embodiment of the present invention.
  • the video encoding apparatus may include a motion estimation unit 110, an affine-transformation object calculation unit 120 and a motion compensation unit 130.
  • the motion estimation unit 110 may calculate a motion vector of a current block based on a reference block.
  • the affine-transformation object calculation unit 120 may determine whether the current block includes an affine-transformation object.
  • the motion compensation unit 130 may generate a prediction block by compensating for the current block based on an affine-object-detection signal provided by the affine-transformation object calculation unit 120 or the motion vector provided by the motion estimation unit 110.
  • the video encoding apparatus may also include an encoding unit (not shown) generating a bitstream by encoding a differential signal generated based on the difference(s) between the current block and the prediction block and a signal including side information such as the motion vector of the current block.
  • FIG. 4 illustrates a diagram for explaining a video encoding method according to an exemplary embodiment of the present invention.
  • the video encoding method may be largely divided into two phases: phases 1 and 2 (200 and 220).
  • In phase 1 (200), the affine-transformation object calculation unit 120 may determine whether a current block includes an affine-transformation object.
  • Phase 2 (220) may involve compensating for the current block through affine transformation-based motion compensation using information such as the motion vectors of blocks adjacent to the current block (221) if it is determined in phase 1 that the current block includes an affine-transformation object; and performing typical motion compensation on the current block (223) if it is determined in phase 1 that the current block does not include any affine-transformation object.
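  • As a minimal sketch of this two-phase flow (not from the patent; the function names includes_affine_object, affine_mc and mv_mc are placeholders for the operations described above):

```python
def motion_compensate(current_block, neighbor_info, mv,
                      includes_affine_object, affine_mc, mv_mc):
    """Phase 1 (200): decide whether the current block contains an
    affine-transformation object. Phase 2 (220): apply affine
    transformation-based MC (221) or ordinary MV-based MC (223)."""
    if includes_affine_object(current_block, neighbor_info):   # phase 1 (200)
        return affine_mc(current_block, neighbor_info)          # step 221
    return mv_mc(current_block, mv)                              # step 223
```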
  • More specifically, phase 1 (200) may involve determining whether the current block includes an affine-transformation object having an affine transformation, based on the motion vector of the current block, the motion vectors of the blocks adjacent to the current block, the reference frame, and the macroblock mode information used to encode the current block.
  • It may be determined that the current block includes an affine-transformation object when two conditions are met: first, the maximum of the angles between the motion vector of the current block and the motion vectors of the blocks adjacent to the current block is within a predefined range; and second, the maximum variation obtained by applying affine transformation-based motion compensation is less than a reference value. If the current block satisfies neither the first nor the second condition, the current block may not be subjected to affine transformation-based motion compensation.
  • In addition, the current block may not be subjected to affine transformation-based motion compensation if at least one of the blocks adjacent to the current block is intra-encoded, if the current block is located at the upper left corner of the corresponding frame, or if the current block references a different reference frame from the blocks adjacent to the current block.
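  • A minimal sketch combining the detection conditions above (not the patent's code; the thresholds, data layout and the externally computed max_affine_variation value are assumptions):

```python
import math

def includes_affine_object(cur_mv, neighbor_mvs, neighbor_modes,
                           cur_ref_idx, neighbor_ref_idxs, is_top_left_block,
                           max_affine_variation,
                           angle_range_deg=30.0, variation_threshold=4.0):
    """Return True if affine transformation-based motion compensation
    should be attempted for the current block."""
    # Fallback cases: no usable causal neighborhood.
    if is_top_left_block:
        return False
    if any(mode == "INTRA" for mode in neighbor_modes):
        return False
    if any(ref != cur_ref_idx for ref in neighbor_ref_idxs):
        return False

    def angle_deg(a, b):
        na, nb = math.hypot(*a), math.hypot(*b)
        if na == 0.0 or nb == 0.0:
            return 0.0
        cos = max(-1.0, min(1.0, (a[0] * b[0] + a[1] * b[1]) / (na * nb)))
        return math.degrees(math.acos(cos))

    # Condition 1: the largest angle between the current MV and any
    # neighboring MV must lie within a predefined range.
    if max(angle_deg(cur_mv, mv) for mv in neighbor_mvs) > angle_range_deg:
        return False
    # Condition 2: the maximum variation produced by affine MC must be
    # below a reference value.
    return max_affine_variation < variation_threshold
```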
  • video encoding or decoding may be performed in units of 8×8 blocks. If it is determined in phase 1 that the current block includes an affine-transformation object, affine transformation-based motion compensation may be performed on the current block by using only the motion vectors within a range that establishes causality. Therefore, it is possible to avoid the problems associated with two-pass coding, such as high coding complexity.
  • an 8×8 block may be divided into eight triangular blocks 300 through 307.
  • the triangular blocks 300 through 307 may be motion-compensated using different affine models.
  • FIG. 6 illustrates motion vectors used to deduce an affine model for each of a plurality of triangular blocks in an 8×8 block.
  • the affine model for each of a plurality of triangular blocks (i.e., blocks 0 through 7) in a current (8×8) block may vary according to the macroblock mode of the current block and the macroblock modes of a number of blocks adjacent to the current block. If the current block is located at the lower right corner of a macroblock and the macroblock mode of the current block is the 16×16 mode, the motion vectors of blocks 0 through 7 may all be the same.
  • In this case, the affine models for blocks 0 through 7 may all include translations only and may thus have the same model formula.
  • An affine transformation formula between (x, y) and (x', y') may be represented by Equation (2):
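  • Equation (2) itself is not reproduced in this extract; the standard six-parameter affine mapping consistent with the parameters a, b, c, d, e and f discussed in the next item is:

$$ x' = a\,x + b\,y + c, \qquad y' = d\,x + e\,y + f $$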
  • a total of six equations may be required to determine the values of the parameters a, b, c, d, e and f in Equation (2). For this, at least three displacement values for (x, y) may be required. If there are more than three displacement values, a least-squares solution may be used to determine the values of the parameters a, b, c, d, e and f in Equation (2).
  • an affine model for each of a plurality of triangular blocks in an 8×8 block may be deduced using variations at the apexes of each of the triangular blocks.
  • According to Equations (3), there is no need to calculate the inverse of matrix A, i.e., A⁻¹, at encoding time, because the 6×6 inverse matrix A⁻¹ can easily be obtained from eight inverse matrices respectively corresponding to blocks 0 through 7, all of which are calculated in advance. Thus, it is possible to reduce coding complexity.
  • FIG. 7 illustrates the case in which a current block to be encoded includes an object which is reduced by 1/2 with respect to the vertical axis of a previous frame and is inclined to the right at an angle of 45 degrees.
  • Three points of displacement for obtaining an affine model for block 0 of a current block are (x0, y0) → (x0 + mv_x0, y0 + mv_y0), (x1, y1) → (x1 + mv_x1, y1 + mv_y1), and (x2, y2) → (x2 + mv_x2, y2 + mv_y2).
  • the minimum size of a block that can have its own motion vector is 4×4.
  • motion vectors mv_x0 through mv_x2 may be different from one another.
  • all 4×4 blocks in the current block have the same motion vector if the minimum size of blocks that can have a motion vector is 4×4 in the blocks adjacent to the current block.
  • An affine model for block 0 may be obtained using the three points of displacement, as indicated by Equations (4):
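  • Equations (4) are not reproduced here either; a reconstruction in the matrix form A·p = b implied by the surrounding text (the 6×6 coefficient matrix on the left being A) is:

$$
\begin{pmatrix}
x_0 & y_0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & x_0 & y_0 & 1\\
x_1 & y_1 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & x_1 & y_1 & 1\\
x_2 & y_2 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & x_2 & y_2 & 1
\end{pmatrix}
\begin{pmatrix} a\\ b\\ c\\ d\\ e\\ f \end{pmatrix}
=
\begin{pmatrix}
x_0 + mv_{x0}\\ y_0 + mv_{y0}\\ x_1 + mv_{x1}\\ y_1 + mv_{y1}\\ x_2 + mv_{x2}\\ y_2 + mv_{y2}
\end{pmatrix}
$$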
  • matrix A includes the coordinates of the current block and the coordinates of each of the blocks adjacent to the current block. If the point (x0, y0) is mapped to the origin (0, 0), matrix A can be commonly applied to blocks 0 through 7 regardless of the position of the current block in a corresponding macroblock.
  • Equations (4) may be transformed into Equations (5), and Equations (6) may be obtained by applying (x1,y1) to Equations (5).
  • Equations (5) and Equations (6) are as follows:
  • motion estimation may be performed in units of 1/4 pixels, and thus the distance between a pair of adjacent pixels may be 4. Therefore, if a pixel at the point (4, -12) is determined to have been moved to (4 + mv_x2 + Δx, -12 + mv_y2 + Δy) based on an affine model, the pixel may be determined to have the same displacement (Δx, Δy) at any arbitrary block location. This is very important for the reduction of computation because, according to the present invention, it is possible to easily obtain an affine model simply using the eight inverse matrices respectively corresponding to blocks 0 through 7, without the need to calculate the inverse matrix of matrix A.
  • An affine model for each of blocks 1 through 7 may be obtained using the same method used to obtain the affine model for block 0.
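  • A small numerical sketch of this shortcut (not the patent's code; numpy, the vertex layout of triangle 0 and the parameter ordering [a, b, c, d, e, f] are assumptions): once (x0, y0) is mapped to the origin, the 6×6 matrix A of each triangle is fixed, so its inverse can be precomputed and the affine parameters obtained with a single matrix-vector product instead of an on-the-fly inversion.

```python
import numpy as np

def build_A(vertices):
    """6x6 coefficient matrix A for one triangle, from its three vertices."""
    rows = []
    for (x, y) in vertices:
        rows.append([x, y, 1, 0, 0, 0])
        rows.append([0, 0, 0, x, y, 1])
    return np.array(rows, dtype=float)

# Hypothetical vertex layout for triangle 0 of an 8x8 block, with (x0, y0)
# mapped to the origin; the actual layout is the one defined by FIGS. 5 and 6.
TRI0_VERTICES = [(0, 0), (4, 0), (0, 4)]
A_INV_TRI0 = np.linalg.inv(build_A(TRI0_VERTICES))   # precomputed once

def affine_params(A_inv, vertices, motion_vectors):
    """Affine parameters [a, b, c, d, e, f] from per-vertex motion vectors."""
    b = []
    for (x, y), (mv_x, mv_y) in zip(vertices, motion_vectors):
        b.extend([x + mv_x, y + mv_y])
    return A_inv @ np.array(b, dtype=float)

# Example with three quarter-pel motion vectors at the triangle vertices.
params = affine_params(A_INV_TRI0, TRI0_VERTICES, [(2, -1), (3, -1), (2, 0)])
```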
  • motion compensation may be performed on the current block, as indicated by Equation (7):
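  • Equation (7) is likewise not reproduced here; assuming the six-parameter model above and writing f_ref for the (interpolated) reference frame, the affine transformation-based prediction of a pixel of the current block would take the form:

$$ \hat{p}(x, y) = f_{\mathrm{ref}}\left(a\,x + b\,y + c,\; d\,x + e\,y + f\right) $$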
  • affine transformation-based motion compensation may be performed on the current block, thereby maintaining high encoding efficiency.
  • a video decoding method may be performed by inversely performing the above-mentioned video encoding method. That is, it may be determined whether an affine-transformation object exists in a reference block. Thereafter, if an affine-transformation object exists in the reference block, a predicted block may be generated by performing affine transformation-based motion compensation on the reference block. On the other hand, if no affine-transformation object exists in the reference block, a predicted block may be generated by performing motion vector-based motion compensation on the reference block. Thereafter, a current block may be generated using a predicted block and a residual signal included in a video signal to be decoded. Therefore, a video decoding apparatus according to an exemplary embodiment of the present invention, unlike a typical video decoding apparatus, may include an affine-transformation object calculation unit determining whether an affine-transformation object exists in the reference block.
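  • A minimal decoder-side sketch under the same assumptions (affine_mc, mv_mc and the block representation are placeholders, not the patent's API):

```python
def decode_block(ref_block, residual, mv, affine_object_detected,
                 affine_mc, mv_mc):
    """Reconstruct the current block: pick affine or MV-based motion
    compensation for the reference block, then add the residual."""
    if affine_object_detected:
        predicted = affine_mc(ref_block)      # affine transformation-based MC
    else:
        predicted = mv_mc(ref_block, mv)      # ordinary motion-vector-based MC
    return predicted + residual
```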
  • the present invention can be realized as computer-readable code written on a computer-readable recording medium.
  • the computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet).
  • the computer-readable recording medium can be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.
  • the present invention can be effectively applied to the encoding or decoding of a video signal and can thus achieve high efficiency especially when a block to be encoded includes an affine-transformation object having an affine transformation such as rotation, enlargement or reduction.

Abstract

The invention relates to a video encoding method and apparatus. The video encoding method includes: determining whether a current block includes an affine-transformation object having an affine transformation; if the current block includes an affine-transformation object, generating a prediction block by performing affine transformation-based motion compensation on the current block in consideration of the affine transformation of the affine-transformation object; and if the current block does not include an affine-transformation object, generating a prediction block by performing motion vector-based motion compensation on the current block using a motion vector of the current block. In this way, highly efficient video encoding/decoding can be achieved even when a block to be encoded or decoded includes an affine transformation.
PCT/KR2009/000441 2008-01-29 2009-01-29 Method and apparatus for encoding and decoding a video signal using affine transformation-based motion compensation WO2009096721A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/865,069 US8665958B2 (en) 2008-01-29 2009-01-29 Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2008-0009120 2008-01-29
KR20080009120 2008-01-29
KR10-2009-0007038 2009-01-29
KR1020090007038A KR101003105B1 (ko) 2008-01-29 2009-01-29 Video encoding and decoding method and apparatus using affine transformation-based motion compensation

Publications (2)

Publication Number Publication Date
WO2009096721A2 true WO2009096721A2 (fr) 2009-08-06
WO2009096721A3 WO2009096721A3 (fr) 2009-11-05

Family

ID=40913419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2009/000441 WO2009096721A2 (fr) 2008-01-29 2009-01-29 Procédé et appareil servant à coder et à décoder un signal vidéo à l'aide d'une compensation de déplacement fondée sur une transformation affine

Country Status (1)

Country Link
WO (1) WO2009096721A2 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965869A (zh) * 2015-08-29 2018-12-07 华为技术有限公司 图像预测的方法及设备
CN109792533A (zh) * 2016-10-05 2019-05-21 高通股份有限公司 在视频译码中用于仿射运动模型的运动向量预测
CN110024403A (zh) * 2016-12-29 2019-07-16 高通股份有限公司 用于视频译码的仿射运动模型的运动向量生成
US11979559B2 (en) 2015-08-29 2024-05-07 Huawei Technologies Co., Ltd. Image prediction method and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8665958B2 (en) 2008-01-29 2014-03-04 Electronics And Telecommunications Research Institute Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3679426B2 (ja) * 1993-03-15 2005-08-03 マサチューセッツ・インスティチュート・オブ・テクノロジー 画像データを符号化して夫々がコヒーレントな動きの領域を表わす複数の層とそれら層に付随する動きパラメータとにするシステム
US5784175A (en) * 1995-10-05 1998-07-21 Microsoft Corporation Pixel block correlation process
EP2029018A2 (fr) * 2006-06-01 2009-03-04 Philips Intellectual Property & Standards GmbH Estimation de mouvement hiérarchique

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965869A (zh) * 2015-08-29 2018-12-07 华为技术有限公司 图像预测的方法及设备
CN108965869B (zh) * 2015-08-29 2023-09-12 华为技术有限公司 图像预测的方法及设备
US11979559B2 (en) 2015-08-29 2024-05-07 Huawei Technologies Co., Ltd. Image prediction method and device
CN109792533A (zh) * 2016-10-05 2019-05-21 高通股份有限公司 在视频译码中用于仿射运动模型的运动向量预测
CN109792533B (zh) * 2016-10-05 2023-08-15 高通股份有限公司 一种解码和编码视频数据的方法及装置
CN110024403A (zh) * 2016-12-29 2019-07-16 高通股份有限公司 用于视频译码的仿射运动模型的运动向量生成

Also Published As

Publication number Publication date
WO2009096721A3 (fr) 2009-11-05

Similar Documents

Publication Publication Date Title
WO2010068020A9 (fr) Appareil et procédé de décodage/codage de vidéo multivue
WO2009157669A2 (fr) Procédé de prédiction intra-trame et appareil, procédé de codage/décodage d'image et appareil utilisant ce procédé
WO2013005941A2 (fr) Appareil et procédé de codage et de décodage d'une image
WO2013062191A1 (fr) Procédé et appareil de décodage d'image à mode de prédiction intra
WO2011115356A1 (fr) Système de surveillance
US20100329347A1 (en) Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation
WO2012057528A2 (fr) Procédé de codage et de décodage à prédiction intra adaptative
WO2011145819A2 (fr) Dispositif et procédé de codage/décodage d'image
WO2013002549A2 (fr) Procédés et appareil de codage/décodage d'une image
WO2013070006A1 (fr) Procédé et appareil de codage et de décodage vidéo faisant appel à un mode de saut
WO2013183918A1 (fr) Appareil et procédé de traitement d'image pour une image tridimensionnelle (3d)
WO2010058895A2 (fr) Appareil et procédé de codage/décodage d'un signal vidéo
EP2594075A2 (fr) Procédé et appareil pour coder et décoder une image par intra-prédiction
JP2008178149A (ja) 運動ベクトルフィールドを圧縮するための装置及び方法
WO2011102597A1 (fr) Structure de codage
WO2012081877A2 (fr) Appareil et procédé d'encodage/de décodage vidéo à vues multiples
WO2009096721A2 (fr) Procédé et appareil servant à coder et à décoder un signal vidéo à l'aide d'une compensation de déplacement fondée sur une transformation affine
WO2013133627A1 (fr) Procédé de traitement de signaux vidéo
JP3633204B2 (ja) 信号符号化装置、信号符号化方法、信号記録媒体及び信号伝送方法
WO2018070556A1 (fr) Procédé et appareil pour extraire des données de mode de prédiction intra d'un bloc carré ou rectangulaire
WO2012026734A2 (fr) Appareil de codage/décodage et procédé utilisant le partage de vecteurs de mouvement d'une image couleur et d'une image de profondeur
WO2012033343A2 (fr) Appareil de codage/décodage de prédiction intra et procédé permettant d'omettre la transmission d'informations de mode de prédiction à l'aide de caractéristiques de pixels de référence
WO2014171709A1 (fr) Procédé et appareil de compensation de luminosité adaptative basés sur objet
WO2013133587A1 (fr) Procédé et appareil de traitement de signaux vidéo
EP0937367A1 (fr) Videocommunication mobile en temps reel a terminaux a faible puissance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09705890

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 12865069

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09705890

Country of ref document: EP

Kind code of ref document: A2