WO2013144197A1 - Method and device for predicting an image block and corresponding methods and devices for coding and decoding - Google Patents

Method and device for predicting an image block and corresponding methods and devices for coding and decoding

Info

Publication number
WO2013144197A1
WO2013144197A1 (PCT/EP2013/056505; EP2013056505W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
block
reference image
weighted parameter
current
Prior art date
Application number
PCT/EP2013/056505
Other languages
English (en)
Inventor
Philippe Bordes
Pierre Andrivon
Philippe Salmon
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing
Publication of WO2013144197A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/573: Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/174: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks

Definitions

  • the invention relates to the general domain of image coding.
  • the invention relates to a device and method for predicting an image block.
  • the invention also relates to the methods and devices for coding and decoding implementing the prediction method.
  • the majority of coding/decoding methods for image sequences use prediction between images (inter-image prediction) or prediction in the image (intra-image prediction).
  • a prediction is used to improve the compression of the image sequence. It consists in generating a prediction image for a current image to be coded and in coding the difference, called the residual image, between this current image and the prediction image.
  • the more the prediction image is correlated with the current image, the fewer bits are required to code the current image and therefore the more effective the compression.
  • the prediction loses its efficiency when there is a variation in luminosity between the images of the sequence or inside an image. Such a luminosity variation is for example due to a modification of illumination, to fade effects, to flashes, etc.
  • weighted prediction parameters are explicitly transmitted per image slice and this is the case for each reference image.
  • H.264 enables the transmission of multiplicative parameters and offsets.
  • weighted prediction parameters will be referred to as weighted parameters. Illumination correction using weighted parameters is applied in the same way for all of the blocks of the slice that use a given reference image. This method can be costly in terms of bitrate as the weighted parameters are transmitted in the stream. Such parameters are called explicit parameters.
  • these implicit weighted parameters cannot be used with type P slices, that is to say slices comprising blocks predicted from a single reference image.
  • the implicit weighted parameters are only multiplicative parameters.
  • the method for calculating implicit weighted parameters in H.264 does not enable offsets to be calculated; they are thus always considered to be zero.
  • the purpose of the invention is to overcome at least one of the disadvantages of the prior art.
  • the invention relates to a method for predicting a block of a current image from at least one current reference image Ik comprising steps for:
  • the at least one implicit weighted parameter is determined from at least one weighted parameter of at least one reference image.
  • the method for predicting according to the invention advantageously enables weighted parameters to be determined independently of their nature, that is to say multiplicative or offset parameters.
  • the method according to the invention also enables implicit weighted parameters to be determined for P type blocks.
  • the at least one weighted parameter of at least one reference image is an implicit or explicit weighted parameter.
  • the at least one implicit weighted parameter is determined from at least one first weighted parameter of a first reference image with respect to the current reference image and from at least a second weighted parameter of a second reference image with respect to the current reference image.
  • the at least one implicit weighted parameter is equal to Wnk + (Wmk - Wnk) * T1/T0, where Wnk is the at least one first weighted parameter, Wmk is the at least one second weighted parameter, T1 is the temporal distance between the current image and the first reference image, and T0 is the temporal distance between the first and second reference images.
  • each of the at least one implicit weighted parameter, at least one first weighted parameter and at least one second weighted parameter is a multiplicative or an offset weighted parameter.
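  As a sketch, the temporal interpolation of an implicit weighted parameter from two known parameters can be written as follows; the function name and the division-by-zero guard are illustrative additions, not from the patent:

```python
def implicit_parameter(p_nk: float, p_mk: float, t1: float, t0: float) -> float:
    """Interpolate an implicit weighted parameter (multiplicative weight or
    offset alike) from two known parameters, scaled by temporal distance.

    p_nk -- first weighted parameter (first reference image w.r.t. Ik)
    p_mk -- second weighted parameter (second reference image w.r.t. Ik)
    t1   -- temporal distance between the current image and the first reference
    t0   -- temporal distance between the first and second reference images
    """
    if t0 == 0:
        return p_nk  # degenerate case: both references at the same instant
    return p_nk + (p_mk - p_nk) * t1 / t0

# current image halfway between the two references: the parameter is averaged
w_ck = implicit_parameter(1.0, 2.0, 1, 2)
```

  The same rule applies whether the temporal positions are display times or POCs, since only their differences enter the formula.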
  • the at least one implicit weighted parameter is determined from at least a first weighted parameter of the current image Ik with respect to a third reference image Is and a second weighted parameter of the current image or a current slice to which the current block belongs with respect to the third reference image Is, the third reference image Is preceding the current reference image Ik in the decoding order.
  • the at least one implicit weighted parameter comprises a multiplicative weighted parameter Wck and an offset Ock calculated as follows:
  • the invention also relates to a method for coding an image comprising the prediction of a block of the image according to the prediction method of the invention, to determine a residue from the block and the predictor and to code the residue.
  • the coding method comprises the coding of an item of information at slice level indicating if the weighted parameters are explicit or implicit.
  • the invention also relates to a method for decoding a block of an image comprising the prediction of the block according to the prediction method of the invention, to decode a residue and reconstruct the block from the residue and the predictor.
  • the invention relates to a device for predicting a block of a current image from at least one current reference image Ik comprising:
  • the at least one implicit weighted parameter is determined from at least one weighted parameter of at least one reference image.
  • the invention also relates to a device for coding an image comprising a device for predicting a block according to the invention, means for determining a residue from the block and the predictor and means for coding the residue.
  • the invention also relates to a device for decoding an image comprising a device for predicting a block according to the invention, means for decoding a residue and means for reconstructing the block from the residue and the predictor.
  • figure 1 shows a method for prediction of an image block according to the invention,
  • figures 2 and 3 represent a current image Ic and reference images,
  • figure 4 shows a variant for prediction of an image block according to the invention,
  • figure 7 shows a coding device according to the invention,
  • figure 8 shows a decoding device according to the invention.
  • the invention relates to a method for coding an image belonging to an image sequence in a stream. Likewise the invention relates to a method for decoding such an image. According to the invention, the methods for coding and decoding implement the same method for predicting blocks.
  • the method for predicting a block Be of a current image Ic from a current reference image Ik is described in reference to figure 1.
  • the block Be belongs to a current slice of the image Ic.
  • at least one implicit weighted parameter Wck is determined for the block Be with respect to the image Ik.
  • the weighted parameter Wck is applied on the block of the image Ik to predict the block Be.
  • two weighted parameters are determined: a multiplicative parameter Wck and an offset parameter Ock.
  • the implicit weighted parameter Wck is determined from at least one weighted parameter of a reference image that can be Ik or another reference image.
  • a first embodiment is described in reference to figure 2.
  • there are two reference images In and Im that are respectively associated with the weighted parameters Wnk and Wmk, and possibly with the offsets Onk and Omk.
  • the weighted parameter Wck is thus determined from two parameters Wnk and Wmk also taking into account the temporal position relative to images Ic, In and Im, and more particularly of the temporal distance between these images. Wck is for example determined according to the following equations:
  • Wck = Wnk + (Wmk - Wnk) * (Tc - Tn)/(Tm - Tn)
  • Tc indicates for example the time at which the current image is displayed
  • Tn indicates the time at which the image In is displayed
  • Tm indicates the time at which the image Im is displayed.
  • the display times Tc, Tn and Tm are replaced by the POCs (Picture Order Counts).
  • to determine Wck and Ock, the use of parameters Wnk and Wmk that are explicit or equal to (1, 0) is favoured over the use of implicit parameters.
  • the weighted parameter(s) are determined from a weighted parameter Wks associated with the current reference image Ik with respect to a reference image Is and another weighted parameter Wcs associated with the current image or with part of the current image to which the current block belongs with respect to the image Is.
  • the image Is is a reference image preceding the current reference image Ik in the decoding order.
  • the weighted parameter Wck is thus determined from the two parameters Wks and Wcs, also taking into account the temporal distances between the images Ic, Ik and Is, according to the following equations:
  • a predictor Bp is determined for the current block Be from implicit weighted parameter(s) determined in step 20.
  • the predictor is determined from at least one motion vector MV associated with the current block Be.
  • the block Be is a P type block. It is predicted with respect to the image Ik according to the following formula: Wck*Bk(MV) + Ock, where Bk(MV) is a block of the image Ik determined from the motion vector MV.
  • the block Be is a B type block. It is predicted with respect to two reference images Ik1 and Ik2 according to the following formula: (Wck1*Bk1(MV1) + Wck2*Bk2(MV2) + Ock1 + Ock2)/2, where Bk1(MV1) is a block of the image Ik1 determined from the motion vector MV1 and Bk2(MV2) is a block of the image Ik2 determined from the motion vector MV2. Wck1 is determined with respect to the current reference image Ik1 and Wck2 with respect to the current reference image Ik2.
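  A minimal sketch of both prediction formulas on toy 2D blocks (plain Python lists stand in for pixel arrays; function names are illustrative, not from the patent):

```python
def predict_p(block_k, w_ck, o_ck):
    """P-type prediction: Wck*Bk(MV) + Ock, applied sample by sample."""
    return [[w_ck * s + o_ck for s in row] for row in block_k]

def predict_b(b1, w1, o1, b2, w2, o2):
    """B-type prediction: (Wck1*Bk1(MV1) + Wck2*Bk2(MV2) + Ock1 + Ock2)/2."""
    return [[(w1 * s1 + w2 * s2 + o1 + o2) / 2
             for s1, s2 in zip(r1, r2)] for r1, r2 in zip(b1, b2)]

bk = [[100.0, 100.0], [100.0, 100.0]]   # motion-compensated block Bk(MV)
bp = predict_p(bk, 2.0, 3.0)            # every sample becomes 2*100 + 3
```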
  • the motion vectors are determined by motion estimation.
  • at the decoder, the motion vectors are decoded from the stream.
  • an embodiment variant of the prediction method is described in reference to figure 4.
  • a table is filled that contains the implicit and explicit weighted parameters of reference images stored in memory during the coding of the current slice or image.
  • this table the implicit or explicit nature of weighted parameters is also specified.
  • Such a table is shown in figures 5 and 6.
  • the reference images that are no longer used as reference images are deleted from the table.
  • the weighted parameters of the current image or slice that are explicit are added to the table. It should be noted that the table is filled with a multiplicative parameter equal to 1 and an offset parameter equal to 0 each time that an image refers to itself.
  • the weighted parameters of the current image or current slice that are implicit are determined and added to the table.
  • N(n) = 2, the weighted parameters corresponding to the reference image In being (1, 0) and (Wmn, Omn).
  • the weighted parameters Wen and Ocn of the current image or slice with respect to the reference image In are thus determined according to the following equations:
  • Wcn = 1 + (Wmn - 1) * (Tc - Tn)/(Tm - Tn)
  • Ocn = Omn * (Tc - Tn)/(Tm - Tn).
  • Tc, Tn and Tm represent either the display times of images or the POCs of these same images.
  • the weighted parameters stored in the kth column are reordered so that the implicit parameters come first, followed by the parameters (1, 0) and the explicit parameters.
  • the explicit parameters and the parameters (1, 0) determined with respect to the reference image Ik are thus preferred over the implicit parameters to determine the value of the parameters of the current image or slice.
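  The preference rule can be sketched as a stable reordering of such a table; the entry layout and the exact tie-breaking are assumptions, since the patent only states that explicit and (1, 0) parameters are preferred over implicit ones:

```python
from dataclasses import dataclass

@dataclass
class WeightEntry:
    """One (W, O) pair stored for a reference image, with its nature,
    mirroring the table of figures 5 and 6. Field names are illustrative."""
    ref_image: str
    w: float
    o: float
    explicit: bool

def preferred(e: WeightEntry) -> bool:
    # explicit parameters and the identity pair (1, 0) are preferred
    return e.explicit or (e.w == 1.0 and e.o == 0.0)

def reorder(entries):
    """Place preferred (explicit or (1, 0)) entries ahead of implicit ones
    when selecting parameters for the current image or slice."""
    return sorted(entries, key=lambda e: not preferred(e))

table = [
    WeightEntry("In", 1.2, 4.0, explicit=False),   # implicit
    WeightEntry("Im", 1.0, 0.0, explicit=False),   # identity (1, 0)
    WeightEntry("Is", 0.9, -2.0, explicit=True),   # explicit
]
ordered = reorder(table)   # Im and Is come before In
```

  Python's `sorted` is stable, so entries with equal preference keep their original relative order in the table.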
  • the block Be of the current image Ic is predicted according to steps 20 and 22 of the prediction method.
  • a residue is determined from the block Be and the predictor Bp.
  • the residue is determined by subtracting the predictor Bp pixel by pixel from the block Be.
  • the residue thus determined is coded in the stream F.
  • the coding of the residue generally comprises a transformation (for example by a DCT) of the residue, a quantization of the transformed residue and entropy coding (for example VLC). Such steps are well known to those skilled in the art of video coders and are not further described hereafter.
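  A toy round trip showing the pixel-by-pixel subtraction at the encoder and the matching addition at the decoder (values are illustrative; the transform, quantization and entropy coding steps of the real codec are omitted):

```python
# Residual coding round trip: the encoder subtracts the predictor Bp
# pixel by pixel from the block Be; the decoder adds the residue back
# to Bp to reconstruct the block. Toy 2x2 values.
be = [[52.0, 55.0], [61.0, 59.0]]   # current block Be
bp = [[50.0, 54.0], [60.0, 60.0]]   # predictor Bp

res = [[b - p for b, p in zip(rb, rp)] for rb, rp in zip(be, bp)]
rec = [[p + r for p, r in zip(rp, rr)] for rp, rr in zip(bp, res)]

assert rec == be  # lossless round trip when quantization is skipped
```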
  • the coding method comprises the coding of an item of information at image slice level indicating if the weighted parameters used to code the blocks of the slice are implicit or explicit. This information is for example coded in the slice header.
  • the coding method also comprises the coding of an item of information at image level indicating whether the weighted parameters used to code the blocks of the image are implicit or explicit.
  • This information at slice level and at image level can co-exist.
  • the information at image level can indicate that the image uses implicit weighted parameters while the information at slice level allows the use of explicit parameters for some slices inside the image.
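  One way to read the interaction of the two flags, sketched under the assumption that the slice-level information simply overrides the image-level default (the patent does not fix the exact syntax; flag names are illustrative):

```python
def slice_uses_explicit(image_level_implicit: bool,
                        slice_level_explicit: bool) -> bool:
    """Resolve the weighted-parameter mode for one slice: image-level
    information (e.g. carried in the PPS) sets the default, and the
    slice-level information can override it for that slice."""
    if not image_level_implicit:
        return True                  # image already declares explicit mode
    return slice_level_explicit     # slice may opt back into explicit mode

# image declares implicit parameters; one slice overrides to explicit
mode = slice_uses_explicit(image_level_implicit=True, slice_level_explicit=True)
```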
  • the information at image level is for example coded in a PPS (Picture Parameter Set).
  • the information at slice level thus enables some of these images to be coded using explicit weighted parameters even though the PPS they refer to indicates the use of implicit weighted parameters.
  • Another solution is to duplicate the PPS, one indicating use of explicit weighted parameters and the other indicating use of implicit weighted parameters.
  • such a solution is however more costly in terms of bitrate.
  • the block Be of the current image Ic to be decoded is predicted according to steps 20 and 22 of the prediction method.
  • a residue is decoded from the stream F.
  • the decoding of the residue generally comprises an entropy decoding, an inverse quantization and an inverse transform. Such steps are well known to those skilled in the art of video coders and are not further described hereafter.
  • the block Be is then reconstructed from the residue and the predictor Bp. For example, the block is reconstructed by adding the residue and the predictor Bp pixel by pixel.
  • the decoding method comprises the decoding of an item of information at image slice level indicating if the weighted parameters used to decode the blocks of the slice are implicit or explicit. This information is for example decoded in the slice header.
  • the decoding method also comprises the decoding of an item of information at image level indicating whether the weighted parameters used to decode the blocks of the image are implicit or explicit.
  • This information at slice level and at image level can co-exist.
  • the information at image level can indicate that the image uses implicit weighted parameters and the information at slice level enables the use inside the image of explicit parameters for some slices.
  • the information at image level is for example coded in a PPS (Picture Parameter Set).
  • the invention also relates to a coding device ENC described in reference to figure 7 and a decoding device DECOD described in reference to figure 8.
  • the modules shown are functional units, that may correspond or not to physically distinguishable units. For example, these modules or some of them can be grouped together in a single component or circuit, or constitute functions of the same software. Conversely, some modules may be composed of separate physical entities.
  • Coding and decoding devices compatible with the invention are implemented according to a purely hardware realisation, for example in the form of a dedicated component (for example an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or a VLSI (Very Large Scale Integration) circuit), of several electronic components integrated into a device, or even in the form of a mix of hardware elements and software elements.
  • the coding device ENC receives at input images I belonging to a sequence of images. Each image is divided into blocks of pixels each of which is associated with at least one item of image data, e.g. luminance and/or chrominance data.
  • the coding device ENC notably implements a coding with temporal prediction. Only the modules of the coding device ENC relating to the coding by temporal prediction or INTER coding are shown in figure 7. Other modules not shown and known to those skilled in the art of video coders implement INTRA coding with or without spatial prediction.
  • the coding device ENC notably comprises a calculation module ADD1 capable of subtracting a prediction block Bp pixel by pixel from a current block Be to generate a residue or residual block noted res.
  • the coding device ENC further comprises an entropy coding module COD able to code the quantized data into a binary stream F. It also comprises a module ITQ implementing the inverse operation to the module TQ.
  • the ITQ module carries out an inverse quantization followed by an inverse transform.
  • the module ITQ is connected to a calculation module ADD2 able to add pixel by pixel the block of data from the module ITQ and the prediction block Bp to generate a block of reconstructed image data that is stored in a memory DPB.
  • the coding device ENC also comprises a motion estimation module ME able to estimate at least one motion vector MV between the block Be and a current reference image Ik stored in the memory DPB.
  • the motion estimation is carried out between the current block Be and the source image corresponding to Ik in which case the memory DPB is not connected to the motion estimation module ME.
  • the motion vector(s) in the case of the temporal prediction mode or INTER mode are transmitted to a motion compensation module MC.
  • the motion vector or motion vectors are also transmitted to the entropy coding module COD to be coded in the stream F.
  • the motion compensation module MC determines, from the reference image Ik previously reconstructed and stored in the memory DPB, a reference block Bref using the motion vector MV determined by the motion estimation module ME.
  • the coding device ENC also comprises a processing module WP able to implement step 20 of the prediction method. It also comprises a calculation module MULT able to determine the predictor Bp according to step 22 of the prediction method.
  • the decoding device DECOD is described in reference to figure 8.
  • the decoding device DECOD receives at input a binary stream F representative of an image sequence.
  • the stream F is for example transmitted by a coding device ENC.
  • the decoding device DECOD comprises an entropy decoding module DEC able to generate decoded data, for example residues relating to the content of images.
  • the decoding device DECOD also comprises a motion data reconstruction module.
  • the motion data reconstruction module is the entropy decoding module DEC that decodes a part of the stream F representative of motion vectors.
  • the motion data reconstruction module is a motion estimation module.
  • This solution for reconstructing motion data via the decoding device DECOD is known as "template matching".
  • the decoded data relating to the content of images is then transmitted to a module ITQ able to carry out an inverse quantization followed by an inverse transform.
  • the ITQ module is identical to the ITQ module of the coding device ENC having generated the stream F.
  • the ITQ module is connected to a calculation module ADD3 able to add pixel by pixel the block from the ITQ module and the prediction block Bp to generate a block of reconstructed image data that are stored in a memory DPB.
  • the decoding device DECOD also comprises a motion compensation module MC identical to the motion compensation module MC of the coding device ENC.
  • the motion compensation module MC determines, from a reference image Ik previously reconstructed and stored in the memory DPB, a reference block Bref using the motion vector MV decoded for the current block Be by the entropy decoding module DEC.
  • the decoding device DECOD also comprises a processing module WP able to implement step 20 of the prediction method. It also comprises a calculation module MULT able to determine the predictor Bp according to step 22 of the prediction method.
  • the coding of images of an enhancement layer uses images from a lower layer (for example: the base layer) to construct a prediction signal.
  • the prediction signal can be constructed from images of a lower layer by using the prediction method described in reference to figures 1 and 2.


Abstract

The present invention relates to a method for predicting a block of a current image from at least one current reference image Ik, comprising the steps of: - determining (20), for the block, at least one implicit weighted parameter, and - determining (22) a predictor for the block from the current reference image using the implicit weighted parameter. Said implicit weighted parameter is determined from at least one weighted parameter of the at least one reference image.
PCT/EP2013/056505 2012-03-29 2013-03-27 Method and device for predicting an image block and corresponding methods and devices for coding and decoding WO2013144197A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1252829 2012-03-29
FR1252829 2012-03-29

Publications (1)

Publication Number Publication Date
WO2013144197A1 (fr) 2013-10-03

Family

ID=46754544

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/056505 WO2013144197A1 (fr) 2012-03-29 2013-03-27 Method and device for predicting an image block and corresponding methods and devices for coding and decoding

Country Status (1)

Country Link
WO (1) WO2013144197A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040008783A1 (en) * 2002-07-15 2004-01-15 Boyce Jill Macdonald Adaptive weighting of reference pictures in video decoding
US20090010330A1 (en) * 2006-02-02 2009-01-08 Alexandros Tourapis Method and Apparatus for Adaptive Weight Selection for Motion Compensated Prediction


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BORDES P ET AL: "Improvement of Implicit Weighted Prediction", 9. JCT-VC MEETING; 100. MPEG MEETING; 27-4-2012 - 7-5-2012; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-I0115, 16 April 2012 (2012-04-16), XP030111878 *
BOYCE J M: "Weighted prediction in the H.264/MPEG AVC video coding standard", PROCEEDINGS / 2004 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS : MAY 23 - 26, 2004, SHERATON VANCOUVER WALL CENTRE HOTEL, VANCOUVER, BRITISH COLUMBIA, CANADA, IEEE OPERATIONS CENTER, PISCATAWAY, NJ, 23 May 2004 (2004-05-23), pages III - 789, XP010719383, ISBN: 978-0-7803-8251-0 *
BOYCE: "Changes to Adaptive Reference Picture Weighting", 5. JVT MEETING; 14-10-2002 - 18-10-2002; GENEVA, CH; (JOINT VIDEO TEAMOF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, no. JVT-E060r1, 18 October 2002 (2002-10-18), XP030005477, ISSN: 0000-0432 *
KIKUCHI: "Simpl. Weighted Pred. & Verif. Result", 6. JVT MEETING; 63. MPEG MEETING; 09-12-2002 - 13-12-2002; AWAJI, JP;(JOINT VIDEO TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, no. JVT-F077r1, 13 December 2002 (2002-12-13), XP030005645, ISSN: 0000-0428 *
KOTO S-I ET AL: "Adaptive Bi-predictive video coding using temporal extrapolation", PROCEEDINGS 2003 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (CAT. NO.03CH37429), BARCELONA, SPAIN, 14-17 SEPT. 2003; [INTERNATIONAL CONFERENCE ON IMAGE PROCESSING], IEEE, IEEE PISCATAWAY, NJ, USA, vol. 3, 14 September 2003 (2003-09-14), pages 829 - 832, XP010669962, ISBN: 978-0-7803-7750-9 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13716226; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13716226; Country of ref document: EP; Kind code of ref document: A1)