WO2010076439A1 - Systeme et procede de codage video - Google Patents
Systeme et procede de codage video
- Publication number
- WO2010076439A1 WO2010076439A1 PCT/FR2009/052193 FR2009052193W WO2010076439A1 WO 2010076439 A1 WO2010076439 A1 WO 2010076439A1 FR 2009052193 W FR2009052193 W FR 2009052193W WO 2010076439 A1 WO2010076439 A1 WO 2010076439A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- module
- current image
- macroblocks
- coding
- zone
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2365—Multiplexing of several video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- the present invention relates to a video coding system. It also relates to a video coding method.
- the invention applies to the field of video data broadcasting by a server to a client terminal.
- the server, usually a computer, is connected to the client terminal, for example a video decoder, by a network, for example HDMI ("High Definition Multimedia Interface"), WIFI or Ethernet.
- the computer screen can then be displayed by the client terminal on a television screen according to a "Remote Frame Buffer" type protocol, for example VNC ("Virtual Network Computing").
- the server encodes, that is to say compresses, what it broadcasts before sending it to the client terminal. If the server merely had to display the images it broadcasts on a screen of its own, it would not be necessary to compress them.
- the server captures its own display, encodes it and sends it over the network to the client terminal.
- Each image to be displayed is stored in a so-called "framebuffer" buffer of the server and is generally coded in RGB ("Red Green Blue") format, which is the most direct way of coding the images, the three planes corresponding to the three elementary colors red, green and blue.
- the image is then generally transformed into a YUV (or luminance - chrominance) format.
- the first plane called the luminance plane (Y) represents the luminous intensity of the pixels.
- the next two planes correspond to the chrominance (U, V) and carry the color information.
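The RGB to YUV transformation mentioned above can be sketched as follows. The patent does not specify which conversion matrix the server uses; the BT.601 coefficients below are a common assumption, used purely for illustration:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (components in 0-255) to YUV.

    Uses the BT.601 coefficients as an illustrative assumption; the patent
    only says the framebuffer is RGB and is transformed to a YUV format.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b       # luminance plane (Y)
    u = -0.14713 * r - 0.28886 * g + 0.436 * b  # chrominance plane (U)
    v = 0.615 * r - 0.51499 * g - 0.10001 * b   # chrominance plane (V)
    return y, u, v
```

A pure white pixel, for instance, yields full luminance and near-zero chrominance.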
- the encoding performed by the server is an encoding of the spatiotemporal type such as H264.
- the H264 standard is a video coding standard jointly developed by VCEG (Video Coding Experts Group) and MPEG (Moving Pictures Experts Group). This standard makes it possible to encode video streams at less than half the bit rate required by the MPEG2 standard for the same quality.
- a spatio-temporal encoding fully encodes only part of the images to be transmitted in order to reconstitute a video.
- the H264 standard contains the types of images known and defined in the MPEG2 standard, namely the intra (I), predictive (P) and bi-predictive (B) images.
- the present invention aims at providing a spatio-temporal video coding system that reduces the encoding effort, so that a client-server protocol can be used in real time while leaving enough resources on the server in charge of encoding to run other applications.
- the invention proposes a video coding system for encoding successive images of a video sequence, the coding of at least one current image being performed relative to at least one preceding and/or subsequent image of said video sequence, said coding system comprising:
- an input data reception module for receiving said current image to be coded; means for dividing said current image into macroblocks;
- a motion compensation module receiving motion vectors and providing at least one predicted zone, said coding system being characterized in that said data receiving module further receives a real motion vector of at least one displaced zone of said current image, said coding system comprising: means for allocating said real motion vector to the macroblocks belonging to said displaced zone;
- macroblock denotes a rectangular elementary region of the image having a size between 4x4 and 16x16 pixels (e.g. 8x16, 8x8, etc.).
- Each macroblock itself consists of luminance blocks and chrominance blocks.
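The division of an image into macroblocks can be sketched as follows; the function name and the clipping of edge blocks to the frame boundary are our own illustrative choices, not taken from the patent:

```python
def split_into_macroblocks(width, height, mb_size=16):
    """Return (x, y, w, h) tuples covering a width x height frame with
    mb_size x mb_size macroblocks; edge blocks are clipped to the frame.

    A sketch of the division step only; the standard also allows
    sub-partitions down to 4x4 pixels, which are omitted here.
    """
    blocks = []
    for y in range(0, height, mb_size):
        for x in range(0, width, mb_size):
            blocks.append((x, y,
                           min(mb_size, width - x),    # clip at right edge
                           min(mb_size, height - y)))  # clip at bottom edge
    return blocks
```

For a 48x32 frame this yields a 3x2 grid of full 16x16 macroblocks.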
- Motion estimation in the context of a spatio-temporal coding is an operation that requires considerable computing power.
- the system according to the invention makes it possible to avoid part of this estimation by advantageously using an already existing motion vector. Thanks to the invention, providing the motion vector of a zone (typically a rectangle within a frame) that has been displaced makes it unnecessary to calculate the motion vectors of the macroblocks lying in that moved zone.
- the real motion vector is injected directly at the input of the compensation module.
- the coding system is particularly useful when the movement of the zone is initiated at a client terminal connected to a server via a VNC protocol, the rendering of the displacement being displayed on the screen of the terminal.
- the coding by the system according to the invention is performed at the server, and the real motion vector of the displaced zone is provided by a programming interface of the server's graphical environment.
- the system according to the invention may also have one or more of the following characteristics, considered individually or in any technically possible combination:
- the system comprises means for transmitting only the macroblocks not belonging to said displaced zone to said motion vector estimation module;
- a subtracter for computing the difference between the pixels of the current image and the predicted zone and providing a residual error corresponding to this difference;
- a frequency transform module applying a frequency transform to each macroblock processed by said estimation module as well as to said residual error;
- a module for quantizing the data from said frequency transform module;
- an entropic coder for coding data from said quantization module.
- the present invention also relates to a video coding method for the coding of successive images of a video sequence, the coding of at least one current image being performed relative to at least one previous and/or subsequent image of said video sequence, said method comprising the following steps:
- estimation of motion vectors as a function of the macroblocks of said current image and of said at least one previous and/or subsequent image, said estimation being made only from the macroblocks not belonging to said displaced zone;
- said current image to be encoded being transmitted from a server to a client terminal, the coding being performed at the server, and said real motion vector of at least one displaced zone of said current image being provided by a programming interface of the graphical environment of said server.
- the method according to the invention may also have one or more of the following characteristics, considered individually or in any technically possible combination:
- said video coding is an H264 spatio-temporal coding;
- the screen of said server is displayed by said client terminal on a screen according to an RFB ("Remote Frame Buffer") protocol such as the VNC ("Virtual Network Computing") protocol;
- said real motion vector of said displaced zone is determined in the following cases:
  - horizontal or vertical scrolling of said displaced zone within a browser-type application;
  - moving of a graphical window of the operating system of said server;
  - transition from one slide to another in the case of a slide show;
  - Flash-type animation.
- said client terminal is a video decoder;
- said current image as well as said real motion vector are initially coded in an RGB format and then undergo a transformation into a YUV format.
- said real motion vector is a two- or three-dimensional vector.
- FIG. 1 is a simplified schematic representation of a system of coding according to the invention for the implementation of the coding method according to the invention.
- FIG. 1 represents a coding system 100 according to the invention.
- the coding system 100 comprises: a module 101 for receiving input data,
- a motion estimation module 105 also hereinafter referred to as a motion vector estimation module
- a buffer 111, a reordering module 108,
- the invention applies to the field of video data broadcasting by a server to a client terminal.
- the server, generally a computer, is connected to the client terminal, for example a video decoder, via a network, for example HDMI ("High Definition Multimedia Interface"), WIFI or Ethernet.
- the computer screen can then be displayed by the client terminal on a television screen according to a "Remote Frame Buffer" type protocol, for example VNC ("Virtual Network Computing").
- the server encodes what it broadcasts before sending it to the client terminal.
- the encoding performed by the server is an encoding of the spatio-temporal type such as H264: it is therefore the server that integrates the encoding system 100 according to the invention.
- the reception module 101 receives as input a predictive image Fn.
- Fn is the current image of the entire server screen.
- the invention relates only to the coding of the predictive images, the intra-predictive coding of the I images continuing to be done according to known techniques. Thus, to make the diagram clearer, the means necessary for intra-predictive coding have been deliberately omitted.
- the image Fn is generally in a YUV12 format after undergoing an RGB to YUV transformation.
- the reception module 101 also receives as input information on the zones having undergone a displacement (also called displaced zones in the remainder of the description) in the image Fn.
- the displaced zone is a rectangular zone generally represented by a quadruplet (x, y, l, h): x and y represent respectively the abscissa and the ordinate of the top-left point of the zone, l represents the width of the rectangle and h its height.
- this real vector can be obtained by the server via the programming interfaces (APIs, "Application Programming Interface") of its graphical environment: either the graphical user interface (GUI) of the software application running on the server and used by the client terminal, or the operating system of the server, Windows TM for example.
- This real motion vector is known to the software application since the latter initiates the displacement of the zone following an event (typically generated by a click, mouse movement or keystroke) of the end user via the client terminal.
- the size of the rectangle can also be obtained by functions such as "window.innerHeight" and "window.innerWidth".
- the server can obtain values characterizing the real movement vector of the zone moved by the user via the client terminal.
- the motion vector m(mx, my) coded in RGB format is also transformed into YUV12 format.
- the input data processing module 102 comprises:
- each current image Fn to be encoded is divided by the means 103 into macroblocks corresponding to a rectangular elementary region of the image having a variable size between 4x4 and 16x16 pixels (e.g. 8x16, 8x8, etc.).
- the means 104, knowing the displaced zones of the image Fn as well as their real motion vectors, make it possible to assign the same real motion vector to the macroblocks belonging to a displaced zone. Therefore, the means 119 route only the macroblocks not affected by a displaced zone to the motion estimation module 105, the real motion vectors of the other macroblocks being transmitted directly to the motion compensation module 106 via the means 118.
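The role of the means 104, 118 and 119 — assigning the real motion vector to the macroblocks of the displaced zone and routing only the remaining macroblocks to motion estimation — might be sketched like this; the function name and the full-containment test are hypothetical, not specified by the patent:

```python
def route_macroblocks(blocks, zone, real_mv):
    """Assign the real motion vector to every macroblock fully inside the
    displaced zone (x, y, l, h); the others go to motion estimation.

    Hypothetical helper mirroring means 104/118/119 of the figure.
    blocks: list of (x, y, w, h) macroblocks; zone: (x, y, l, h) rectangle.
    """
    zx, zy, zl, zh = zone
    assigned, to_estimate = {}, []
    for (bx, by, bw, bh) in blocks:
        inside = (zx <= bx and zy <= by
                  and bx + bw <= zx + zl and by + bh <= zy + zh)
        if inside:
            assigned[(bx, by)] = real_mv          # no motion search needed
        else:
            to_estimate.append((bx, by, bw, bh))  # full motion estimation
    return assigned, to_estimate
```

For a 48x32 frame whose left 32x32 region scrolled by (5, -3), the four macroblocks inside the zone inherit the real vector and only the two rightmost ones are estimated.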
- the function of the motion estimation module 105 is to find, for a macroblock of the current image Fn, a matching region in at least one previous image Fn-1 of the server screen in its entirety (it could also be a subsequent image in the case of a B image, and even a plurality of previous and/or subsequent images).
- the result is a motion vector corresponding to the difference between the position of the selected region and that of the macroblock.
- the motion vectors retained by the estimation module (in addition to the real motion vectors transmitted by the means 118) are transmitted to the motion compensation module 106. This yields a prediction error, because the region retained in the past image is not exactly equal to the macroblock analyzed.
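For the macroblocks that do go through the estimation module 105, a minimal full-search block-matching sketch is shown below. The exhaustive search and the SAD (sum of absolute differences) criterion are common assumptions, not details given by the patent; real encoders use much faster hierarchical searches:

```python
def estimate_motion(cur_block, ref_frame, bx, by, search=2):
    """Exhaustive block matching: find the (dx, dy) displacement minimising
    the SAD between a block of the current image at (bx, by) and a region
    of the reference (previous) frame.

    Toy sketch on lists of lists; returns (dx, dy, sad). A sad > 0 is the
    prediction error that becomes the residual after compensation.
    """
    bh, bw = len(cur_block), len(cur_block[0])
    h, w = len(ref_frame), len(ref_frame[0])
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + bw > w or ry + bh > h:
                continue  # candidate region falls outside the reference frame
            sad = sum(abs(cur_block[j][i] - ref_frame[ry + j][rx + i])
                      for j in range(bh) for i in range(bw))
            if sad < best[2]:
                best = (dx, dy, sad)
    return best
```

On a synthetic frame where each pixel value is unique, the search recovers the exact displacement with zero residual.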
- a predicted picture P is obtained.
- Subtractor 109 then calculates a residual error Dn between the pixels of Fn and the predicted picture P.
- a frequency transform (of the discrete cosine transform DCT "Discrete Cosine Transform" or Hadamard type) is applied via the frequency transform module 112 to each macroblock that has undergone motion estimation, as well as to the residual error Dn.
- This transform makes it possible to have a frequency representation of the modified zones.
- the data from the frequency transform module 112 are then quantized (i.e. coded on a limited number of bits) by the quantization module 113 to provide transformed and quantized parameters X.
- the function of the quantization module 113 is to define different quantization steps depending on whether or not certain components are judged visually significant; these quantization steps are defined in a quantization step table.
- the inverse quantization module 114 retrieves the transformed and quantized parameters X, which then pass through the inverse frequency transform module 115 to recover a quantized version D'n of the residual error Dn; this quantized version D'n is then added to the macroblocks of the predicted zone P by the adder 110; the image at the output of the adder 110 is then processed by the deblocking filter to provide a reconstructed image F'n corresponding to a set of reconstructed zones having the same position, width and height as the modified zones.
- F'n is used internally by the coding system 100 to estimate the quality of the encoding.
- the quantized results X from the quantization module 113 are then reordered by the reordering module 108 to group the non-zero coefficients together, so as to allow an efficient representation of the coefficients having a zero value.
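The reordering performed by module 108 is typically a zig-zag scan, which places the low-frequency coefficients first so trailing zeros cluster at the end. A sketch, assuming a square coefficient block (the patent does not name the scan order):

```python
def zigzag(block):
    """Scan an N x N coefficient block in zig-zag order.

    Non-zero low-frequency coefficients come first; runs of zeros cluster
    at the end, which the entropy coder can represent compactly.
    """
    n = len(block)
    # Sort positions by anti-diagonal, alternating the traversal direction.
    order = sorted(((x, y) for y in range(n) for x in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 else p[0]))
    return [block[y][x] for x, y in order]
```

A 4x4 block with three non-zero low-frequency coefficients scans to those three values followed by thirteen zeros.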
- the data then undergoes a final phase of entropy coding compression via the entropy coder 120.
- the function of the entropy encoder is to re-encode the data differently so as to reduce the number of bits necessary for their encoding, approaching as closely as possible the theoretical minimum (which is fixed by the entropy).
- the entropy encoder 120 constructs an output stream ⁇ in a Network Abstraction Layer (NAL) format defined to allow the use of the same video syntax in many network environments.
- the invention is not limited to the embodiment just described.
- the invention has been more particularly described in the context of H264 coding, but it applies to any type of spatio-temporal coding: this is for example the case of MPEG2 coding or VC1 coding (a video compression standard of the SMPTE, "Society of Motion Picture and Television Engineers").
- the motion vector has been described as a two-dimensional vector, but it is also possible to use a three-dimensional motion vector, for example in the case of a graphical interface such as Aero TM, the graphical interface of Windows Vista TM, for displaying 3D effects.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200980156608.0A CN102318344B (zh) | 2008-12-30 | 2009-11-16 | 视频编码系统和方法 |
EP09768199A EP2380350A1 (fr) | 2008-12-30 | 2009-11-16 | Systeme et procede de codage video |
BRPI0923824-7A BRPI0923824A2 (pt) | 2008-12-30 | 2009-11-16 | Sistema e processo de codificação de vídeo. |
US13/142,551 US8731060B2 (en) | 2008-12-30 | 2009-11-16 | Video encoding system and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0859113A FR2940736B1 (fr) | 2008-12-30 | 2008-12-30 | Systeme et procede de codage video |
FR0859113 | 2008-12-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010076439A1 true WO2010076439A1 (fr) | 2010-07-08 |
Family
ID=40688557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FR2009/052193 WO2010076439A1 (fr) | 2008-12-30 | 2009-11-16 | Systeme et procede de codage video |
Country Status (6)
Country | Link |
---|---|
US (1) | US8731060B2 (fr) |
EP (1) | EP2380350A1 (fr) |
CN (1) | CN102318344B (fr) |
BR (1) | BRPI0923824A2 (fr) |
FR (1) | FR2940736B1 (fr) |
WO (1) | WO2010076439A1 (fr) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9060010B1 (en) * | 2012-04-29 | 2015-06-16 | Rockwell Collins, Inc. | Incorporating virtual network computing into a cockpit display system for controlling a non-aircraft system |
US9602822B2 (en) * | 2013-04-17 | 2017-03-21 | Qualcomm Incorporated | Indication of cross-layer picture type alignment in multi-layer video coding |
CN104144349B (zh) * | 2014-07-09 | 2017-08-29 | 中电科华云信息技术有限公司 | 基于h264的spice视频编解码扩展方法及系统 |
WO2016045729A1 (fr) | 2014-09-25 | 2016-03-31 | Huawei Technologies Co.,Ltd. | Serveur destiné à procurer une interface utilisateur graphique à un client et client |
CN104469400B (zh) * | 2014-12-17 | 2018-02-23 | 浪潮软件集团有限公司 | 一种基于rfb协议的图像数据压缩方法 |
TWI681669B (zh) | 2017-04-21 | 2020-01-01 | 美商時美媒體公司 | 用於藉由預測運動向量及/或快取重複運動向量的玩家輸入運動補償的系統及方法 |
CN110012293B (zh) * | 2019-02-19 | 2021-06-04 | 西安万像电子科技有限公司 | 视频数据处理方法及装置 |
CN113365083B (zh) * | 2021-07-08 | 2022-10-11 | 广州市保伦电子有限公司 | 一种基于h.265实现yuv444图像编解码方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1377040A1 (fr) * | 2002-06-19 | 2004-01-02 | STMicroelectronics S.r.l. | Procédé de stabilisation d'une séquence d'images |
WO2004013985A1 (fr) * | 2002-08-06 | 2004-02-12 | Motorola, Inc. | Procede et dispositif permettant d'effectuer une recherche de mouvement predictive rapide de haute qualite |
US7321626B2 (en) * | 2002-03-08 | 2008-01-22 | Sharp Laboratories Of America, Inc. | System and method for predictive motion estimation using a global motion predictor |
EP1991004A2 (fr) * | 2006-11-28 | 2008-11-12 | Samsung Electronics Co., Ltd | Procédé et appareil de codage et de décodage d'images vidéo |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5809201A (en) * | 1994-06-24 | 1998-09-15 | Mitsubishi Denki Kabushiki Kaisha | Specially formatted optical disk and method of playback |
US5864681A (en) * | 1996-08-09 | 1999-01-26 | U.S. Robotics Access Corp. | Video encoder/decoder system |
JP3351705B2 (ja) * | 1997-04-25 | 2002-12-03 | 日本ビクター株式会社 | 動き補償符号化装置、動き補償符号化方法、及び記録媒体への記録方法 |
EP0936813A1 (fr) * | 1998-02-16 | 1999-08-18 | CANAL+ Société Anonyme | Traitement d'image dans un décodeur |
US6563953B2 (en) * | 1998-11-30 | 2003-05-13 | Microsoft Corporation | Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock |
JP4447197B2 (ja) * | 2002-01-07 | 2010-04-07 | 三菱電機株式会社 | 動画像符号化装置および動画像復号装置 |
US20030160814A1 (en) * | 2002-02-27 | 2003-08-28 | Brown David K. | Slide show presentation and method for viewing same |
US7738551B2 (en) * | 2002-03-18 | 2010-06-15 | International Business Machines Corporation | System and method for processing a high definition television (HDTV) image |
EP1574067B1 (fr) * | 2002-12-17 | 2017-06-07 | Zoran (France) | Traitement ou compression de signaux n-dimensionnels a l'aide de paquets d'ondelettes et de bandelettes modifies |
JP4536325B2 (ja) * | 2003-02-04 | 2010-09-01 | ソニー株式会社 | 画像処理装置および方法、記録媒体、並びにプログラム |
EP1627533B1 (fr) * | 2003-05-02 | 2006-12-27 | Koninklijke Philips Electronics N.V. | Interpolation biaisee de vecteurs de deplacement permettant de reduire les artefacts video |
NO318973B1 (no) * | 2003-07-01 | 2005-05-30 | Tandberg Telecom As | Fremgangsmate for stoyreduksjon |
JP2005039340A (ja) * | 2003-07-15 | 2005-02-10 | Hitachi Ltd | 再生装置 |
US7724827B2 (en) * | 2003-09-07 | 2010-05-25 | Microsoft Corporation | Multi-layer run level encoding and decoding |
KR101118982B1 (ko) * | 2004-04-09 | 2012-03-13 | 소니 주식회사 | 화상 처리 장치 및 방법, 기록 매체, 및 프로그램 |
US8503530B2 (en) * | 2004-05-27 | 2013-08-06 | Zhourong Miao | Temporal classified filtering for video compression |
US7881546B2 (en) * | 2004-09-08 | 2011-02-01 | Inlet Technologies, Inc. | Slab-based processing engine for motion video |
US7516255B1 (en) * | 2005-03-30 | 2009-04-07 | Teradici Corporation | Method and apparatus for providing a low-latency connection between a data processor and a remote graphical user interface over a network |
FR2886044B1 (fr) * | 2005-05-23 | 2007-06-22 | Canon Kk | Procede et dispositif d'affichage d'images d'une sequence video |
CN101502099B (zh) * | 2006-05-09 | 2012-02-22 | Nxp股份有限公司 | 具有抖动提取的处理设备和包括这种设备的装备 |
US20070291839A1 (en) * | 2006-06-15 | 2007-12-20 | Faraday Technology Corp. | Method and device for multimedia processing |
WO2008073416A1 (fr) * | 2006-12-11 | 2008-06-19 | Cinnafilm, Inc. | Utilisation d'effets cinématographiques en temps réel sur des vidéo numériques |
US8825739B2 (en) * | 2007-07-04 | 2014-09-02 | International Business Machines Corporation | Method and apparatus for controlling multiple systems in a low bandwidth environment |
EP2175607A1 (fr) * | 2008-10-08 | 2010-04-14 | NEC Corporation | Procédé pour établir une session client léger |
US8219759B2 (en) * | 2009-03-16 | 2012-07-10 | Novell, Inc. | Adaptive display caching |
US20100332613A1 (en) * | 2009-06-30 | 2010-12-30 | Jorg Brakensiek | Method and apparatus for providing content and context analysis of remote device content |
- 2008
  - 2008-12-30 FR FR0859113A patent/FR2940736B1/fr not_active Expired - Fee Related
- 2009
  - 2009-11-16 CN CN200980156608.0A patent/CN102318344B/zh not_active Expired - Fee Related
  - 2009-11-16 US US13/142,551 patent/US8731060B2/en not_active Expired - Fee Related
  - 2009-11-16 BR BRPI0923824-7A patent/BRPI0923824A2/pt not_active Application Discontinuation
  - 2009-11-16 WO PCT/FR2009/052193 patent/WO2010076439A1/fr active Application Filing
  - 2009-11-16 EP EP09768199A patent/EP2380350A1/fr not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7321626B2 (en) * | 2002-03-08 | 2008-01-22 | Sharp Laboratories Of America, Inc. | System and method for predictive motion estimation using a global motion predictor |
EP1377040A1 (fr) * | 2002-06-19 | 2004-01-02 | STMicroelectronics S.r.l. | Procédé de stabilisation d'une séquence d'images |
WO2004013985A1 (fr) * | 2002-08-06 | 2004-02-12 | Motorola, Inc. | Procede et dispositif permettant d'effectuer une recherche de mouvement predictive rapide de haute qualite |
EP1991004A2 (fr) * | 2006-11-28 | 2008-11-12 | Samsung Electronics Co., Ltd | Procédé et appareil de codage et de décodage d'images vidéo |
Also Published As
Publication number | Publication date |
---|---|
CN102318344B (zh) | 2014-10-08 |
US20110268191A1 (en) | 2011-11-03 |
FR2940736A1 (fr) | 2010-07-02 |
US8731060B2 (en) | 2014-05-20 |
FR2940736B1 (fr) | 2011-04-08 |
EP2380350A1 (fr) | 2011-10-26 |
CN102318344A (zh) | 2012-01-11 |
BRPI0923824A2 (pt) | 2015-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010076439A1 (fr) | Systeme et procede de codage video | |
JP7047119B2 (ja) | 変換領域における残差符号予測のための方法および装置 | |
US10013746B2 (en) | High dynamic range video tone mapping | |
US20210377542A1 (en) | Video encoding and decoding method, device, and system, and storage medium | |
TW201711473A (zh) | 針對視訊寫碼處理高動態範圍及廣色域視訊資料 | |
TW201931853A (zh) | 具有聯合像素/變換為基礎之量化之視頻寫碼之量化參數控制 | |
TWI713354B (zh) | 用於顯示器調適之色彩重映射資訊sei信息發信號 | |
WO2021109978A1 (fr) | Procédé de codage vidéo, procédé de décodage vidéo et appareils correspondants | |
WO2014026097A1 (fr) | Procédé et appareil de quantification et de codage en deux étapes | |
US20240171773A1 (en) | Method and apparatus of cross-component linear modeling for intra prediction | |
JP7397878B2 (ja) | イントラ・サブ・パーティション・コーディング・モードのための方法及び装置 | |
EP2380352A2 (fr) | Procede d'encodage par segmentation d'une image | |
EP4300958A1 (fr) | Procédé de codage d'image vidéo, procédé de décodage d'image vidéo et dispositifs associés | |
US20120170663A1 (en) | Video processing | |
WO2021164014A1 (fr) | Procédé et dispositif de codage vidéo | |
JP2023085351A (ja) | Cbfフラグの効率的なシグナリング方法 | |
EP4277274A1 (fr) | Procédés et appareils de codage et décodage en couches | |
RU2786086C1 (ru) | Способ и устройство кросс-компонентного линейного моделирования для внутреннего предсказания | |
EP2015584A2 (fr) | Système et procédé de codage vidéo | |
TWI821013B (zh) | 視頻編解碼方法及裝置 | |
Azadegan et al. | Improving video quality by predicting inter-frame residuals based on an additive 3D-CNN model | |
RU2777967C1 (ru) | Деблокирующий фильтр для границ подразделов, возникающих под действием инструмента кодирования интра-подразделов | |
RU2772313C2 (ru) | Способ и устройство для фильтрации изображений с адаптивными коэффициентами множителя | |
Ginzburg et al. | DCT-Domain Coder for Digital Video Applications | |
Lee et al. | Video Coding Techniques and Standards. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200980156608.0; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09768199; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 13142551; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2009768199; Country of ref document: EP |
| REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: PI0923824 |
| ENP | Entry into the national phase | Ref document number: PI0923824; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20110629 |