TW200935914A - Methods and apparatus for inter-layer residue prediction for scalable video - Google Patents

Methods and apparatus for inter-layer residue prediction for scalable video

Info

Publication number
TW200935914A
TW200935914A (application TW097139609A)
Authority
TW
Taiwan
Prior art keywords
tone mapping
block
prediction
layer
inter
Prior art date
Application number
TW097139609A
Other languages
Chinese (zh)
Other versions
TWI528831B (en)
Inventor
Peng Yin
Jian-Cong Luo
Yong-Ying Gao
Yu-Wen Wu
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of TW200935914A publication Critical patent/TW200935914A/en
Application granted granted Critical
Publication of TWI528831B publication Critical patent/TWI528831B/en


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/50 using predictive coding
              • H04N 19/59 involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
            • H04N 19/10 using adaptive coding
              • H04N 19/102 characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N 19/103 Selection of coding mode or of prediction mode
                  • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
              • H04N 19/169 characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                • H04N 19/184 the unit being bits, e.g. of the compressed video stream
                • H04N 19/186 the unit being a colour or a chrominance component
                • H04N 19/187 the unit being a scalable video layer
            • H04N 19/30 using hierarchical techniques, e.g. scalability
            • H04N 19/46 Embedding additional information in the video signal during the compression process
            • H04N 19/60 using transform coding
              • H04N 19/61 in combination with predictive coding
            • H04N 19/70 characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

There are provided methods and apparatus for inter-layer residue prediction for scalable video. An apparatus is described comprising an encoder (200) for encoding a block of a picture, or a decoder (300) for decoding a block of a picture, by applying inverse tone mapping to an inter-layer residue prediction process for the block, wherein the inverse tone mapping is performed in the pixel domain. Methods for encoding (440, 460) and decoding (540, 560) a block of a picture are also described; they are likewise performed by applying inverse tone mapping to an inter-layer residue prediction process for the block, with the inverse tone mapping performed in the pixel domain.

Description

IX. Description of the Invention:

[Technical Field of the Invention]

The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for inter-layer residue prediction for scalable video.

This application claims the benefit of U.S. Provisional Application Ser. No. 60/979,956, filed October 15, 2007, which is incorporated by reference herein in its entirety.

In addition, this application is related to the non-provisional application, Attorney Docket No. PU_157, entitled "Methods and Apparatus for Inter-Layer Residue Prediction for Scalable Video", which also claims the benefit of U.S. Provisional Application Ser. No. 60/979,956, filed October 15, 2007, and which is commonly assigned, incorporated by reference herein, and filed concurrently herewith.

[Prior Art]

"Bit depth", which is also interchangeably known as "color depth" and/or "pixel depth", refers to the number of bits used to hold a pixel. The bit depth determines the maximum number of colors that can be displayed at one time. In recent years, digital images and/or digital video with a bit depth greater than eight have become desirable in many application fields, including, but not limited to, medical image processing, digital cinema workflows in production and post-production, home theatre related applications, and so forth.

There are several ways to handle the coexistence of, for example, 8-bit video and 10-bit video. In a first prior art solution, only a 10-bit coded bitstream is transmitted, and the 8-bit representation for standard 8-bit display devices is obtained by applying a tone mapping method to the 10-bit presentation. Tone mapping is a well-known technique for converting a higher bit depth to a lower bit depth, often used to approximate the appearance of high-dynamic-range images in media with a more limited dynamic range.

In a second prior art solution, a simulcast bitstream is transmitted that includes both an 8-bit coded representation and a 10-bit coded representation. It is left to the decoder to choose which bit depth to decode. For example, a 10-bit capable decoder can decode and output 10-bit video, while a normal decoder supporting only 8-bit video can output only 8-bit video.

The first solution is inherently incompatible with the 8-bit profiles of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 Recommendation (hereinafter the "MPEG-4 AVC Standard"). The second solution is compliant with all current standards but requires more overhead. However, a good trade-off between bit reduction and backward standard compatibility can be a scalable solution. Scalable video coding (SVC), also known as the scalable extension of the MPEG-4 AVC Standard, considers the support of bit depth scalability.

There are at least three advantages of bit depth scalable coding over post-processing or simulcast. A first advantage is that bit depth scalable coding enables 10-bit video in a backward-compatible manner with the High Profiles of the MPEG-4 AVC Standard. A second advantage is that bit depth scalable coding enables adaptation to different network bandwidths or device capabilities. A third advantage of bit depth scalable coding is that it provides low complexity, high efficiency, and high flexibility.

In the current scalable video coding extension of the MPEG-4 AVC Standard, single-loop decoding is supported in order to reduce decoding complexity. The complete decoding of inter-coded macroblocks, including motion-compensated prediction and deblocking, is required only for the current spatial or coarse grain scalability (CGS) layer. This is realized by constraining inter-layer intra texture prediction to those parts of the lower-layer picture that are coded with intra macroblocks. To extend inter-layer intra texture prediction to bit depth scalability, inverse tone mapping is used. Scalable video coding also supports inter-layer residue prediction.
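The description above, and the embodiments that follow, treat tone mapping T(·) and inverse tone mapping T⁻¹(·) as pixel-domain operators without fixing a particular mapping. As a rough, non-normative illustration only, the following Python sketch pairs a 10-bit-to-8-bit tone map with an approximate inverse using a plain bit shift; the function names and the shift-based mapping are assumptions made for this example and are not taken from the patent or from any standard.

```python
import numpy as np

def tone_map(block_10bit: np.ndarray) -> np.ndarray:
    """T(.): a toy 10-bit -> 8-bit tone map (plain right shift by two bits)."""
    return np.clip(block_10bit >> 2, 0, 255).astype(np.uint8)

def inverse_tone_map(block_8bit: np.ndarray) -> np.ndarray:
    """T^-1(.): the matching 8-bit -> 10-bit expansion (left shift by two bits)."""
    return block_8bit.astype(np.uint16) << 2

# Round-tripping a 10-bit block loses at most the two shifted-out bits,
# which is why the description later treats T^-1(T(P)) != P explicitly.
rng = np.random.default_rng(0)
block = rng.integers(0, 1024, size=(4, 4), dtype=np.uint16)
assert np.all(np.abs(inverse_tone_map(tone_map(block)).astype(int) - block.astype(int)) <= 3)
```

Real deployments may instead signal a lookup table or a piecewise curve, as discussed later in this description.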
Since tone mapping is generally used in the pixel (spatial) domain, it is very difficult to find the corresponding inverse tone mapping in the residue domain. In third and fourth prior art approaches, a bit shift is used for inter-layer residue prediction.

In a fifth prior art approach, referred to as smoothed reference prediction (SRP), which is a technique used to increase inter-layer coding efficiency for single-loop decoding without bit depth scalability, a one-bit syntax element smoothed_reference_flag is transmitted when the syntax elements residual_prediction_flag and base_mode_flag are both set. When smoothed_reference_flag is equal to one, the following steps are taken at the decoder to obtain the reconstructed video block (a code sketch of these steps is given after this list):

1. The prediction block P is obtained using enhancement layer reference frames and upsampled motion vectors from the base layer;
2. The corresponding base layer residue block r_b is upsampled, and U(r_b) is added to P to form P + U(r_b);
3. A smoothing filter with taps [1, 2, 1] is applied, first in the horizontal direction and then in the vertical direction, to obtain S(P + U(r_b)); and
4. The enhancement layer residue block r is added to the result of (3) to obtain the reconstructed block R = S(P + U(r_b)) + r.
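As a rough illustration only, the following Python sketch strings the four smoothed-reference steps above together, assuming the prediction block P, the upsampled base-layer residue U(r_b), and the enhancement-layer residue r are already available as arrays. The [1, 2, 1]/4 normalization and the edge-replication border handling are assumptions of this sketch, not details taken from the SVC specification.

```python
import numpy as np

def smooth_121(x: np.ndarray) -> np.ndarray:
    """Separable [1, 2, 1]/4 smoothing: horizontal pass first, then vertical pass.

    Edge replication at the block borders is an assumption of this sketch.
    """
    k0, k1, k2 = 0.25, 0.5, 0.25
    p = np.pad(x.astype(np.float64), 1, mode="edge")
    horiz = k0 * p[1:-1, :-2] + k1 * p[1:-1, 1:-1] + k2 * p[1:-1, 2:]
    p = np.pad(horiz, 1, mode="edge")
    return k0 * p[:-2, 1:-1] + k1 * p[1:-1, 1:-1] + k2 * p[2:, 1:-1]

def srp_reconstruct(pred: np.ndarray, up_base_residue: np.ndarray,
                    enh_residue: np.ndarray) -> np.ndarray:
    """Steps 1-4 above: R = S(P + U(r_b)) + r."""
    smoothed = smooth_121(pred + up_base_residue)   # steps 2-3
    return smoothed + enh_residue                   # step 4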
Turning to FIG. 1, a portion of a decoder using smoothed reference prediction is indicated generally by the reference numeral 100. The decoder portion 100 includes a motion compensator 112 having an output connected in signal communication with a first non-inverting input of a combiner 132. An output of the combiner 132 is connected in signal communication with an input of a switch 142. A first output of the switch 142 is connected in signal communication with a first non-inverting input of a combiner 162. A second output of the switch 142 is connected in signal communication with an input of a filter 152. An output of the filter 152 is connected in signal communication with the first non-inverting input of the combiner 162. An output of a reference frame buffer 122 is connected in signal communication with a first input of the motion compensator 112.

Further inputs of the motion compensator 112 are available as inputs of the decoder portion 100, for receiving enhancement layer motion vectors and upsampled base layer motion vectors, respectively. A second non-inverting input of the combiner 132 is available as an input of the decoder portion 100, for receiving an upsampled base layer residue. A control input of the switch 142 is available as an input of the decoder portion 100, for receiving the smoothed_reference_flag syntax element. A second non-inverting input of the combiner 162 is available as an input of the decoder portion 100, for receiving an enhancement layer residue. An output of the combiner 162 is available as an output of the decoder portion 100, for outputting a reconstructed block R.

However, the preceding prior art disadvantageously cannot be used directly for bit depth scalability.

[Summary of the Invention]

These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to methods and apparatus for inter-layer residue prediction for scalable video.

According to an aspect of the present principles, there is provided an apparatus. The apparatus includes an encoder for encoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block. The inverse tone mapping is performed in the pixel domain to support bit depth scalability.

According to another aspect of the present principles, there is provided a method. The method includes encoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block. The inverse tone mapping is performed in the pixel domain to support bit depth scalability.

According to yet another aspect of the present principles, there is provided an apparatus. The apparatus includes a decoder for decoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block. The inverse tone mapping is performed in the pixel domain to support bit depth scalability.

According to a further aspect of the present principles, there is provided a method. The method includes decoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block. The inverse tone mapping is performed in the pixel domain to support bit depth scalability.

These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.

[Embodiments]

The present principles are directed to methods and apparatus for inter-layer residue prediction for scalable video.

The present description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present principles and are included within their spirit and scope.

All examples and conditional language recited herein are intended for pedagogical purposes, to aid the reader in understanding the present principles and the concepts contributed by the inventors to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.

Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents and equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read-only memory ("ROM") for storing software, random access memory ("RAM"), and non-volatile storage.

Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.

In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.

Reference in the specification to "one embodiment" or "an embodiment" of the present principles means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment. Moreover, the phrase "in another embodiment" does not exclude the subject matter of the described embodiment from being combined, in whole or in part, with another embodiment.

It is to be appreciated that the use of the terms "and/or" and "at least one of", for example in the cases of "A and/or B" and "at least one of A and B", is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).

As a further example, in the cases of "A, B, and/or C" and "at least one of A, B, and C", such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is readily apparent to one of ordinary skill in this and related arts, for as many items as are listed.

Moreover, it is to be appreciated that while one or more embodiments of the present principles are described herein with respect to the scalable video coding extension of the MPEG-4 AVC Standard, the present principles are not limited to solely this extension and/or this standard, and may thus be utilized with respect to other video coding standards, recommendations, and extensions thereof, while maintaining the spirit of the present principles.

Further, it is to be appreciated that while the following description uses 10-bit video in one or more examples of higher bit depth video, the present principles are applicable to any number of bits greater than eight, including, but not limited to, for example, 12 bits, 14 bits, and so forth.

As used herein, "high level syntax" refers to syntax present in the bitstream that resides hierarchically above the macroblock layer. For example, high level syntax, as used herein, may refer to, but is not limited to, syntax at the slice header level, the supplemental enhancement information (SEI) level, the picture parameter set (PPS) level, the sequence parameter set (SPS) level, and the network abstraction layer (NAL) unit header level.

As noted above, the present principles are directed to methods and apparatus for inter-layer residue prediction for scalable video.
Turning to FIG. 2, an exemplary video encoder to which the present principles may be applied is indicated generally by the reference numeral 200.

The encoder 200 includes a combiner 205 having an output connected in signal communication with an input of a transformer 210. An output of the transformer 210 is connected in signal communication with an input of a quantizer 215. An output of the quantizer 215 is connected in signal communication with a first input of an entropy coder 220 and an input of an inverse quantizer 225. An output of the inverse quantizer 225 is connected in signal communication with an input of an inverse transformer 230. An output of the inverse transformer 230 is connected in signal communication with a first non-inverting input of a combiner 235. An output of the combiner 235 is connected in signal communication with an input of a loop filter 240. An output of the loop filter 240 is connected in signal communication with a first input of a motion estimator and inter-layer prediction determinator 245. An output of the motion estimator and inter-layer prediction determinator 245 is connected in signal communication with a second input of the entropy coder 220 and an input of a motion compensator 255. An output of the motion compensator 255 is connected in signal communication with an input of a tone mapper 260. An output of the tone mapper 260 is connected in signal communication with a first non-inverting input of a combiner 270. An output of the combiner 270 is connected in signal communication with an input of a smoothing filter 275. An output of the smoothing filter 275 is connected in signal communication with an input of an inverse tone mapper 280. An output of the inverse tone mapper 280 is connected in signal communication with a second non-inverting input of the combiner 235 and an inverting input of the combiner 205. An output of an upsampler 250 is connected in signal communication with a second input of the motion estimator and inter-layer prediction determinator 245. An output of an upsampler 265 is connected in signal communication with a second non-inverting input of the combiner 270.

An input of the combiner 205 is available as an input of the encoder 200, for receiving high bit depth pictures. An input of the upsampler 250 is available as an input of the encoder 200, for receiving base layer motion vectors. An input of the upsampler 265 is available as an input of the encoder 200, for receiving low bit depth base layer residues. An output of the entropy coder 220 is available as an output of the encoder 200, for outputting a bitstream.

Turning to FIG. 3, an exemplary decoder to which the present principles may be applied is indicated generally by the reference numeral 300.

The decoder 300 includes an entropy decoder 305 having a first output connected in signal communication with an input of an inverse quantizer 310. An output of the inverse quantizer 310 is connected in signal communication with an input of an inverse transformer 315. An output of the inverse transformer 315 is connected in signal communication with a first non-inverting input of a combiner 320. A second output of the entropy decoder 305 is connected in signal communication with a first input of a motion compensator 325. An output of the motion compensator 325 is connected in signal communication with an input of a tone mapper 330. An output of the tone mapper 330 is connected in signal communication with a first non-inverting input of a combiner 335. An output of the combiner 335 is connected in signal communication with a first input of a smoothing filter 340. An output of the smoothing filter 340 is connected in signal communication with an input of an inverse tone mapper 345. An output of the inverse tone mapper 345 is connected in signal communication with a second non-inverting input of the combiner 320. An output of an upsampler 350 is connected in signal communication with a second non-inverting input of the combiner 335. An output of an upsampler 355 is connected in signal communication with a second input of the motion compensator 325.

An input of the entropy decoder 305 is available as an input of the decoder 300, for receiving an enhancement layer bitstream. A further input of the motion compensator 325 is available as an input of the decoder 300, for receiving enhancement layer reference frames. A second input of the smoothing filter 340 is available as an input of the decoder 300, for receiving a smoothed reference flag. An input of the upsampler 350 is available as an input of the decoder 300, for receiving a low bit depth base layer residue. An input of the upsampler 355 is available as an input of the decoder 300, for receiving base layer motion vectors. An output of the combiner 320 is available as an output of the decoder 300, for outputting pictures.

Bit depth scalability is potentially useful in view of the fact that, at some point, conventional 8-bit-depth and higher-bit-depth digital imaging systems will exist in the marketplace simultaneously.

In accordance with one or more embodiments of the present principles, new techniques are proposed for inter-layer residue prediction for bit depth scalability (BDS).

With bit depth scalability, if single-loop decoding is used, it is difficult to apply inverse tone mapping to inter-layer residue prediction when motion compensation is performed in the enhancement layer (the higher bit depth layer). Hence, in accordance with the present principles, new inter-layer residue prediction techniques are presented that improve the coding efficiency of bit depth scalability. In accordance with one or more embodiments of the present principles, instead of performing inverse tone mapping on the inter-layer residue in the residue domain, the inverse tone mapping problem is converted from the residue domain into the pixel domain (spatial domain) for inter-layer residue prediction.

For illustrative purposes, one or more examples are provided herein that consider only bit depth and the use of a single-loop decoding architecture. However, it is to be appreciated that, given the teachings of the present principles provided herein, such principles as described with respect to the above referenced examples may be readily extended by one of ordinary skill in this and related arts to combined scalability, including, but not limited to, for example, bit depth and spatial scalability, and so forth. Moreover, the present principles may also be readily applied to multiple-loop decoding architectures. Of course, the present principles are not limited to the preceding applications and variations, and other applications and variations, as readily determined by one of ordinary skill in this and related arts, may also be employed with respect to the present principles, while maintaining the spirit of the present principles.

Thus, in one embodiment, if inter-layer residue prediction is used, inverse tone mapping is applied after adding the tone mapped motion compensated prediction and the upsampled residue from the base layer. For bit-depth-only scalability, the spatial upsampling factor is 1.

Thus, an example of an encoding method in accordance with one embodiment is as follows (a code sketch of these steps is given below):

1. Obtain the prediction block P using enhancement layer reference frames, and tone map P into the base layer to obtain T(P);
2. Spatially upsample the corresponding base layer residue block r_b, and add U(r_b) to T(P) to form T(P) + U(r_b);
3. Apply a filter to obtain S(T(P) + U(r_b));
4. Then apply inverse tone mapping to obtain T^-1(S(T(P) + U(r_b))); and
5. Generate the enhancement layer residue block r_e by subtracting (4) from the enhancement layer block O: r_e = O − T^-1(S(T(P) + U(r_b))).
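The following is a minimal sketch of encoding steps 1-5 above, assuming that motion compensation has already produced the enhancement-layer prediction P, that the base-layer residue has already been spatially upsampled (the factor is 1 for bit-depth-only scalability), and that the tone mapping operators are supplied by the caller. The function and parameter names are illustrative only and are not taken from any reference software.

```python
import numpy as np

def encode_enh_residue(original, pred, up_base_residue, tone_map, inv_tone_map, smooth=None):
    """Pixel-domain inter-layer residue prediction at the encoder (steps 1-5 above)."""
    low = tone_map(pred)                    # step 1: T(P)
    summed = low + up_base_residue          # step 2: T(P) + U(r_b)
    if smooth is not None:                  # step 3: optional smoothing S(.)
        summed = smooth(summed)
    high_pred = inv_tone_map(summed)        # step 4: T^-1(S(T(P) + U(r_b)))
    return original - high_pred             # step 5: r_e = O - T^-1(...)

# Toy usage with a linear T / T^-1 pair (illustrative only).
rng = np.random.default_rng(1)
O = rng.uniform(0.0, 1023.0, (8, 8))        # original high-bit-depth block
P = O + rng.normal(0.0, 4.0, (8, 8))        # motion-compensated enhancement-layer prediction
rb = rng.normal(0.0, 2.0, (8, 8))           # spatially upsampled base-layer residue
r_e = encode_enh_residue(O, P, rb, lambda x: x / 4.0, lambda x: x * 4.0)
```

The optional smoothing argument mirrors the smoothed-reference decision made in function blocks 450/455 of the method described next.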
Turning to FIG. 4, an exemplary method for encoding using inter-layer residue prediction for bit depth scalability is indicated generally by the reference numeral 400.

The method 400 includes a start block 405 that passes control to a decision block 410. The decision block 410 determines whether or not inter-layer motion prediction is applied. If so, control is passed to a function block 415. Otherwise, control is passed to a function block 425.

The function block 415 uses a base layer motion vector and passes control to a function block 420. The function block 420 upsamples the base layer motion vector and passes control to a function block 430.

The function block 425 uses an enhancement layer motion vector and passes control to the function block 430.

The function block 430 obtains the motion compensated block P and passes control to a function block 435. The function block 435 performs tone mapping on P to obtain the low bit depth prediction T(P) and passes control to a function block 440. The function block 440 reads the base layer texture residue r_b and passes control to a function block 445. The function block 445 computes P' = T(P) + r_b and passes control to a decision block 450. The decision block 450 determines whether or not to apply a smoothed reference. If so, control is passed to a function block 455. Otherwise, control is passed to a function block 460.

The function block 455 applies a smoothing filter to P' and passes control to the function block 460.

The function block 460 performs inverse tone mapping on P' to obtain the high bit depth prediction T^-1(P') and passes control to a function block 465. The function block 465 subtracts, from the high bit depth prediction T^-1(P'), the error value between the tone mapping and inverse tone mapping operations, and passes control to a function block 470. The function block 470 obtains the enhancement layer residue r_e by subtracting T^-1(P') from the original picture, r_e = O − T^-1(P'), where O denotes the original picture, and passes control to an end block 499.

An example of a decoding method in accordance with one embodiment is as follows (a code sketch of these steps follows this list):

1. Obtain the prediction block P using enhancement layer reference frames, and tone map P into the base layer to obtain T(P);
2. Spatially upsample the corresponding base layer residue block r_b, and add U(r_b) to T(P) to form T(P) + U(r_b);
3. Apply a filter to obtain S(T(P) + U(r_b));
4. Then apply inverse tone mapping to obtain T^-1(S(T(P) + U(r_b))); and
5. Add the enhancement layer residue block r_e to (4) to obtain the reconstructed block R = T^-1(S(T(P) + U(r_b))) + r_e.
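Mirroring the encoder-side sketch given earlier, the following sketch strings the decoder-side steps 1-5 together under the same assumptions: the prediction, the upsampled base-layer residue, and the decoded enhancement-layer residue are already available, and the tone mapping operators are supplied by the caller.

```python
def decode_block(pred, up_base_residue, enh_residue, tone_map, inv_tone_map, smooth=None):
    """Pixel-domain inter-layer residue prediction at the decoder (steps 1-5 above)."""
    low = tone_map(pred)                    # step 1: T(P)
    summed = low + up_base_residue          # step 2: T(P) + U(r_b)
    if smooth is not None:                  # step 3: optional smoothing S(.)
        summed = smooth(summed)
    high_pred = inv_tone_map(summed)        # step 4: T^-1(S(T(P) + U(r_b)))
    return high_pred + enh_residue          # step 5: R = T^-1(...) + r_e
```

With matching T and T^-1 and the same smoothing choice on both sides, R differs from the original block only by whatever error survives quantization of r_e. Whether the prediction comes from enhancement layer motion vectors or from upsampled base layer motion vectors, per the inter-layer motion prediction discussion below, only changes how pred is produced; this sketch is unaffected.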
Turning to FIG. 5, an exemplary method for decoding using inter-layer residue prediction for bit depth scalability is indicated generally by the reference numeral 500.

The method 500 includes a start block 505 that passes control to a decision block 510. The decision block 510 determines whether or not an inter-layer motion prediction flag is set to true. If so, control is passed to a function block 515. Otherwise, control is passed to a function block 525.

The function block 515 reads and entropy decodes a base layer motion vector and passes control to a function block 520. The function block 520 upsamples the base layer motion vector and passes control to a function block 530.

The function block 525 reads and entropy decodes an enhancement layer motion vector and passes control to the function block 530.

The function block 530 obtains the motion compensated block P and passes control to a function block 535. The function block 535 performs tone mapping on P to obtain the low bit depth prediction T(P) and passes control to a function block 540. The function block 540 reads and entropy decodes the base layer texture residue r_b and passes control to a function block 545. The function block 545 computes P' = T(P) + r_b and passes control to a decision block 550. The decision block 550 determines whether or not a smoothed reference flag is set to true. If so, control is passed to a function block 555. Otherwise, control is passed to a function block 560.

The function block 555 applies a smoothing filter to P' and passes control to the function block 560.

The function block 560 performs inverse tone mapping on P' to obtain the high bit depth prediction T^-1(P') and passes control to a function block 565. The function block 565 adds the error value between the tone mapping and inverse tone mapping operations to the high bit depth prediction T^-1(P') and passes control to a function block 567. The function block 567 reads and entropy decodes the enhancement layer residue r_e and passes control to a function block 570. The function block 570 obtains the reconstructed block R, where R = T^-1(P') + r_e, and passes control to an end block 599.

As mentioned above with respect to the prior art, the motion compensated block may be generated either from enhancement layer motion vectors, when inter-layer motion prediction is not used, or from upsampled base layer motion vectors, when inter-layer motion prediction is used. In one embodiment of the present principles, the technique is allowed in both cases. In another embodiment, the technique is combined only with inter-layer motion prediction; if inter-layer motion prediction is not used, a bit shift is applied to the residue prediction, as in the third and fourth prior art approaches referred to above.

Likewise, the order of the filtering and inverse tone mapping operations may be exchanged. That is, the filtering may be performed first, followed by the inverse tone mapping, or the inverse tone mapping may be performed first, followed by the filtering. The filter may be linear or non-linear, one-dimensional or two-dimensional, and so forth. In one example, a 3-tap filter may be applied first vertically and then horizontally. The filter may also be the identity, in which case the filtering step is not needed.

因此,依據本原理之-具體實施例,對色調映射及反色 調映射兩方法進行發信。可使用一演算法計算一查找表 及/或其他方式等等來實行發信。可在序列、圖像、片段 或區塊層級實行發信。因為色調映射及反色調映射並非真 實可逆的,即Γ-|(7&gt;/,所以可考量誤差^Γ_,(Γ),其中】 意指密度。在一具體實施例中,因為Γ-1(Γ(户))矣户,所以可 考量誤差3 =户-:Γ-ΥΤΧΡ))。即在該編碼器中,減去c^在該 解碼|§中’添加d。 現在說明本發明之許多附帶優點/特徵之某些,以上已 乂及該等附帶優點/特徵之某些β例如,一個優點/特徵係 一裝置具有藉由施加反色調映射於用於一圖像之一區塊的 層間殘相預測程序來編碼該區塊之編碼器》在像素域中實 行反色調映射以支持位元深度尺寸可調性。 另一優點/特徵係該裝置具有如以上說明的編碼器,其 中該編碼器藉由下列方式實行層間殘相預測程序:在該圖 像之一增強層中實行運動補償以獲得一增強層預測,對該 增強層預測實行色調映射於該圖像之一基礎層中以獲得用 於該區塊的色調映射運動補償低位元深度預測,添加自該 基礎層的一空間向上取樣殘相至用於區塊的色調映射運動 補償低位元深度預測以獲得一總和,以及對該總和實行反 色調映射於該增強層中以獲得用於該區塊的較高位元深度 135302.doc 200935914 預測。 另一優點/特徵係該裝置具有如以上說明的編碼器,其 中該編碼n藉由在實行反色職射之前施加—平化滤波器 至該總和來進-步實行層間殘相制程序。對經濾波的總 和實行反色調映射。 另優點/特徵係該裝置具有如以上說明的編碼器,其 中一高階語法元素以及一區塊層級語法元素之至少一項係 用以對色調映射及反色調映射之任一者進行發信。 此外,另一優點/特徵係該裝置具有如以上說明的編碼 器,其中該高階語法元素係包含在一片段標頭一序列參 數集 圖像參數集、一檢視參數集、一網路擷取層單元 標頭以及一補充增強資訊訊息之至少一項中。 此外,另一優點/特徵係該裝置具有如以上說明的編碼 器,其中該編碼器藉由從用於該區塊的較高位元深度預測 減去色調映射與反色調映射之間的誤差值來進一步實行層 間殘相預測程序。 根據本文之教示,熟習有關技術者可輕易明白本原理之 此等以及其他特徵與優點。應瞭解,本原理之教示可在各 種形式的硬體、軟體、韌體、專用處理器或其組合中加以 實施。 最佳地,本原理之教示係實施為一硬體與軟體之組合。 此外,該軟體可實施為有形地體現在一程式儲存單元上之 應用程式。該應用程式可上傳至包含任何適當架構之機器 上,並由該機器執行。最佳地,該機器係實施於一電腦平 135302.doc •22· 200935914 0上 該平台具有諸如一或多個中央處理單元 (「CPU」)、一隨機存取記憶體(「ram」)與一輸入/輸出 (「I/O」)介面之硬體。該電腦平台亦可包括一作業系統與 微指令碼。本文中說明的各種程序及功能可以係由一 CPU - 執打的微指令碼之部分或應用程式之部分或其任何組合。 ' 此外,諸如一額外資料儲存單元及一列印單元的各種其他 周邊單元可連接至該電腦平台。 ❹ 應進一步瞭解,因為較佳在軟體中實施附圖中描述的組 成系統組件及方法之某些,所以該等系統組件,或處理功 能區塊之間的實際連接可以根據程式化本原理所採用的方 式而不同。在給定本文中的教示情況下,熟習有關技術者 將能預期本原理之此等及類似實施方案或組態。 儘管已參考附圖說明解說性具體實施例,但應瞭解本原 理並非限於該些精端具體實施例,而且熟習有關技術者可 在其中進行各種變化與修改而不脫離本原理之範鳴或精 φ 神。所有此類變化及修改係預計包括在如隨附的申請專利 範圍所提出的本原理之範疇内。 【圖式簡單說明】 可依據下列範例性圖式而較佳地瞭解本原理,在該等圖 式中: 圖1係依據先前技術使用平化參考預測的一解碼器之一 部分的方塊圖; 圖2係依據本原理之一具體實施例,本原理可應用的一 範例性視訊編碼器之方塊圖; 135302.doc 23· 200935914 圖3係依據本原理之一具體實施例,本原理可應用的一 範例性解碼器之方塊圖; 圓4係依據本原理之一具體實施例,將層間殘相預測用 於位元深度尺寸可調性來編碼的一範例性方法之流程圖; 以及 圖5係依據本原理之一具體實施例,將層間殘相預測用 於位元深度尺寸可調性來解碼的一範例性方法之流程圖。Thus, in accordance with a specific embodiment of the present principles, two methods of tone mapping and inverse tone mapping are signaled. The algorithm can be implemented using an algorithm to calculate a lookup table and/or other means. Sending can be done at the sequence, image, clip or block level. Since tone mapping and inverse tone mapping are not truly reversible, ie Γ-|(7&gt;/, so the error can be considered ^Γ_, (Γ), where] means density. In a specific embodiment, because Γ-1( Γ (household)) Seto, so you can consider the error 3 = household -: Γ - ΥΤΧΡ)). That is, in the encoder, subtract c^ to add d in the decoding|§. Some of the many attendant advantages/features of the present invention are now described, and some of the attendant advantages/features are described above. For example, an advantage/feature is a device having an inverse tone mapping applied to an image. The inter-layer residual phase prediction procedure of one block to encode the encoder of the block implements inverse tone mapping in the pixel domain to support bit depth size adjustability. 
A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is an apparatus having an encoder for encoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block, wherein the inverse tone mapping is performed in the pixel domain to support bit depth scalability.

Another advantage/feature is the apparatus having the encoder as described above, wherein the encoder performs the inter-layer residue prediction process by performing motion compensation in an enhancement layer of the picture to obtain an enhancement layer prediction, tone mapping the enhancement layer prediction into a base layer of the picture to obtain a tone mapped motion compensated low bit depth prediction for the block, adding a spatially upsampled residue from the base layer to the tone mapped motion compensated low bit depth prediction for the block to obtain a sum, and inverse tone mapping the sum into the enhancement layer to obtain a higher bit depth prediction for the block.

Yet another advantage/feature is the apparatus having the encoder as described above, wherein the encoder further performs the inter-layer residue prediction process by applying a smoothing filter to the sum prior to performing the inverse tone mapping, and wherein the inverse tone mapping is performed on the filtered sum.

Still another advantage/feature is the apparatus having the encoder as described above, wherein at least one of a high level syntax element and a block level syntax element is used to signal any of the tone mapping and the inverse tone mapping.

Moreover, another advantage/feature is the apparatus having the encoder as described above, wherein the high level syntax element is comprised in at least one of a slice header, a sequence parameter set, a picture parameter set, a view parameter set, a network abstraction layer unit header, and a supplemental enhancement information message.

Further, another advantage/feature is the apparatus having the encoder as described above, wherein the encoder further performs the inter-layer residue prediction process by subtracting an error value between the tone mapping and the inverse tone mapping from the higher bit depth prediction for the block.
These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.

Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform, such as an additional data storage unit and a printing unit.

It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.

Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.

[Brief Description of the Drawings]

The present principles may be better understood in accordance with the following exemplary figures, in which:

FIG. 1 is a block diagram of a portion of a decoder using smoothed reference prediction, in accordance with the prior art;

FIG. 2 is a block diagram of an exemplary video encoder to which the present principles may be applied, in accordance with an embodiment of the present principles;

FIG. 3 is a block diagram of an exemplary decoder to which the present principles may be applied, in accordance with an embodiment of the present principles;

FIG. 4 is a flow diagram of an exemplary method for encoding using inter-layer residue prediction for bit depth scalability, in accordance with an embodiment of the present principles; and

FIG. 5 is a flow diagram of an exemplary method for decoding using inter-layer residue prediction for bit depth scalability, in accordance with an embodiment of the present principles.

[Description of Main Element Symbols]

100 decoder portion
112 motion compensator
122 reference frame buffer
132 combiner
142 switch
152 filter
162 combiner
200 encoder
205 combiner
210 transformer
215 quantizer
220 entropy coder
225 inverse quantizer
230 inverse transformer
235 combiner
240 loop filter
245 motion estimator and inter-layer prediction determinator
250 upsampler
255 motion compensator
260 tone mapper
265 upsampler
270 combiner
275 smoothing filter
280 inverse tone mapper
300 decoder
305 entropy decoder
310 inverse quantizer
315 inverse transformer
320 combiner
325 motion compensator
330 tone mapper
335 combiner
340 smoothing filter
345 inverse tone mapper
350 upsampler
355 upsampler

Claims (15)

Scope of Patent Application:

1. An apparatus, comprising: an encoder (200) for encoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block, wherein the inverse tone mapping is performed in the pixel domain.

2. The apparatus of claim 1, wherein the encoder (200) performs the inter-layer residue prediction process by: performing motion compensation in an enhancement layer of the picture to obtain an enhancement layer prediction; tone mapping the enhancement layer prediction into a base layer of the picture to obtain a tone-mapped motion-compensated low bit depth prediction for the block; adding a spatially upsampled residue from the base layer to the tone-mapped motion-compensated low bit depth prediction for the block to obtain a sum; and inverse tone mapping the sum into the enhancement layer to obtain a higher bit depth prediction for the block.

3. The apparatus of claim 2, wherein the encoder (200) further performs the inter-layer residue prediction process by applying a smoothing filter to the sum prior to performing the inverse tone mapping, and wherein the inverse tone mapping is performed on the filtered sum.

4. The apparatus of claim 2, wherein at least one of a high-level syntax element and a block-level syntax element is used to signal any of the tone mapping and the inverse tone mapping.

5. The apparatus of claim 4, wherein the high-level syntax element is included in at least one of a slice header, a sequence parameter set, a picture parameter set, a view parameter set, a network abstraction layer unit header, and a supplemental enhancement information message.

6. The apparatus of claim 2, wherein the encoder (200) further performs the inter-layer residue prediction process by subtracting an error value between the tone mapping and the inverse tone mapping from the higher bit depth prediction for the block.

7. The apparatus of claim 1, wherein the inverse tone mapping is performed in the pixel domain to support bit depth scalability.

8. A method, comprising: encoding a block of a picture by applying inverse tone mapping to an inter-layer residue prediction process for the block, wherein the inverse tone mapping is performed in the pixel domain (440, 460).

9. The method of claim 8, wherein the inter-layer residue prediction process comprises: performing motion compensation in an enhancement layer of the picture to obtain an enhancement layer prediction (430); tone mapping the enhancement layer prediction into a base layer of the picture to obtain a tone-mapped motion-compensated low bit depth prediction for the block (435); adding a spatially upsampled residue from the base layer to the tone-mapped motion-compensated low bit depth prediction for the block to obtain a sum (440, 445); and inverse tone mapping the sum into the enhancement layer to obtain a higher bit depth prediction for the block (460).

10. The method of claim 9, wherein the inter-layer residue prediction process further comprises applying a smoothing filter to the sum prior to said performing of the inverse tone mapping, and wherein the inverse tone mapping is performed on the filtered sum (455).

11. The method of claim 9, wherein at least one of a high-level syntax element and a block-level syntax element is used to signal any of the tone mapping and the inverse tone mapping (435, 460).

12. The method of claim 11, wherein the high-level syntax element is included in at least one of a slice header, a sequence parameter set, a picture parameter set, a view parameter set, a network abstraction layer unit header, and a supplemental enhancement information message.

13. The method of claim 9, wherein the inter-layer residue prediction process further comprises subtracting an error value between the tone mapping and the inverse tone mapping from the higher bit depth prediction for the block (465).

14. The method of claim 8, wherein the inverse tone mapping is performed in the pixel domain to support bit depth scalability.

15. A computer-readable storage medium having video data encoded thereon, comprising: a block of a picture encoded by applying inverse tone mapping to an inter-layer residue prediction process for the block, wherein the inverse tone mapping is performed in the pixel domain.
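Read together, claims 2/9 and 3/10 recite a pixel-domain prediction pipeline: motion-compensate in the enhancement layer, tone map that prediction down to the base-layer bit depth, add the spatially upsampled base-layer residue, optionally smooth the sum, then inverse tone map the result back to the higher bit depth. The sketch below is only a minimal illustration of that flow, not the patented implementation: it assumes lookup-table tone mapping, nearest-neighbour spatial upsampling, and a plain box filter as the smoothing stage, and every name in it (inter_layer_residue_prediction, tone_map_lut, inverse_tone_map_lut, box_smooth) is hypothetical rather than taken from the patent or any codec specification.

```python
import numpy as np

def box_smooth(block, k=3):
    # Plain k x k box filter, a stand-in for the smoothing filter of claims 3/10.
    pad = k // 2
    padded = np.pad(block, pad, mode="edge").astype(np.float64)
    h, w = block.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def inter_layer_residue_prediction(el_mc_pred, bl_residue, tone_map_lut,
                                   inverse_tone_map_lut, upsample=2, smooth=True):
    # Illustrative pixel-domain inter-layer residue prediction (cf. claims 2/9, 3/10).
    # 1. Tone map the enhancement-layer motion-compensated prediction down to the
    #    base-layer bit depth (e.g. 10 bit -> 8 bit) via a lookup table.
    tm_pred = tone_map_lut[el_mc_pred]

    # 2. Spatially upsample the base-layer residue to the enhancement-layer
    #    resolution (nearest neighbour, purely for brevity).
    up_res = np.repeat(np.repeat(bl_residue, upsample, axis=0), upsample, axis=1)

    # 3. Add the upsampled residue to the tone-mapped prediction to obtain the sum,
    #    clipped to the valid low-bit-depth range.
    low_max = len(inverse_tone_map_lut) - 1
    s = np.clip(tm_pred.astype(np.int64) + up_res.astype(np.int64), 0, low_max)

    # 4. Optionally smooth the sum before inverse tone mapping (claims 3/10).
    if smooth:
        s = np.clip(np.rint(box_smooth(s)), 0, low_max).astype(np.int64)

    # 5. Inverse tone map the (filtered) sum back into the enhancement layer to get
    #    the higher-bit-depth prediction for the block.
    return inverse_tone_map_lut[s]

# Toy usage with an 8-bit base layer and a 10-bit enhancement layer.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tone_map_lut = (np.arange(1024) >> 2).astype(np.int64)         # 10 bit -> 8 bit
    inverse_tone_map_lut = (np.arange(256) << 2).astype(np.int64)  # 8 bit -> 10 bit
    el_pred = rng.integers(0, 1024, size=(16, 16))                 # MC prediction, 10 bit
    bl_res = rng.integers(-16, 16, size=(8, 8))                    # base-layer residue
    hi_pred = inter_layer_residue_prediction(el_pred, bl_res,
                                             tone_map_lut, inverse_tone_map_lut)
    print(hi_pred.shape, hi_pred.min(), hi_pred.max())
```

A fuller implementation would also subtract the tone-mapping/inverse-tone-mapping round-trip error from the higher-bit-depth prediction, as recited in claims 6 and 13, and would use the codec's normative upsampling and smoothing filters rather than the stand-ins above.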
TW097139609A 2007-10-15 2008-10-15 Methods and apparatus for inter-layer residue prediction for scalable video TWI528831B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US97995607P 2007-10-15 2007-10-15
PCT/US2008/011712 WO2009051694A2 (en) 2007-10-15 2008-10-14 Methods and apparatus for inter-layer residue prediction for scalable video

Publications (2)

Publication Number Publication Date
TW200935914A true TW200935914A (en) 2009-08-16
TWI528831B TWI528831B (en) 2016-04-01

Family

ID=40493616

Family Applications (2)

Application Number Title Priority Date Filing Date
TW097139609A TWI528831B (en) 2007-10-15 2008-10-15 Methods and apparatus for inter-layer residue prediction for scalable video
TW097139607A TWI422231B (en) 2007-10-15 2008-10-15 Methods and apparatus for inter-layer residue prediction for scalable video

Family Applications After (1)

Application Number Title Priority Date Filing Date
TW097139607A TWI422231B (en) 2007-10-15 2008-10-15 Methods and apparatus for inter-layer residue prediction for scalable video

Country Status (8)

Country Link
US (2) US8537894B2 (en)
EP (2) EP2206349B1 (en)
JP (2) JP5534521B2 (en)
KR (2) KR101436671B1 (en)
CN (2) CN101822059B (en)
BR (2) BRPI0818648A2 (en)
TW (2) TWI528831B (en)
WO (2) WO2009051692A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI455588B (en) * 2009-12-08 2014-10-01 Intel Corp Bi-directional, local and global motion estimation based frame rate conversion

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8184693B2 (en) * 2008-04-11 2012-05-22 Intel Corporation Adaptive filtering for bit-depth scalable video codec
PL2835976T3 (en) * 2008-04-16 2017-04-28 Ge Video Compression, Llc Bit-depth scalability
CN102308579B (en) * 2009-02-03 2017-06-06 Thomson Licensing Method and apparatus for motion compensation with smooth reference frame in bit depth scalability
CN102388611B (en) 2009-02-11 2015-08-19 Thomson Licensing Method and apparatus for bit depth scalable video encoding and decoding using tone mapping and inverse tone mapping
WO2010105036A1 (en) * 2009-03-13 2010-09-16 Dolby Laboratories Licensing Corporation Layered compression of high dynamic range, visual dynamic range, and wide color gamut video
JP5237212B2 (en) * 2009-07-09 2013-07-17 Canon Inc Image processing apparatus and image processing method
DK3324622T3 (en) 2011-04-14 2019-10-14 Dolby Laboratories Licensing Corp INDICATOR WITH MULTIPLE REGRESSIONS AND MULTIPLE COLOR CHANNELS
US9066070B2 (en) 2011-04-25 2015-06-23 Dolby Laboratories Licensing Corporation Non-linear VDR residual quantizer
US8891863B2 (en) * 2011-06-13 2014-11-18 Dolby Laboratories Licensing Corporation High dynamic range, backwards-compatible, digital cinema
WO2013081615A1 (en) * 2011-12-01 2013-06-06 Intel Corporation Motion estimation methods for residual prediction
TWI606718B (en) 2012-01-03 2017-11-21 杜比實驗室特許公司 Specifying visual dynamic range coding operations and parameters
US10609394B2 (en) * 2012-04-24 2020-03-31 Telefonaktiebolaget Lm Ericsson (Publ) Encoding and deriving parameters for coded multi-layer video sequences
US9219913B2 (en) 2012-06-13 2015-12-22 Qualcomm Incorporated Inferred base layer block for TEXTURE—BL mode in HEVC based single loop scalable video coding
WO2014000168A1 (en) * 2012-06-27 2014-01-03 Intel Corporation Cross-layer cross-channel residual prediction
US9854259B2 (en) 2012-07-09 2017-12-26 Qualcomm Incorporated Smoothing of difference reference picture
US9420289B2 (en) * 2012-07-09 2016-08-16 Qualcomm Incorporated Most probable mode order extension for difference domain intra prediction
CN110035286B (en) 2012-07-09 2021-11-12 Vid Scale Inc Codec architecture for multi-layer video coding
CA2807404C (en) * 2012-09-04 2017-04-04 Research In Motion Limited Methods and devices for inter-layer prediction in scalable video compression
US20140086319A1 (en) * 2012-09-25 2014-03-27 Sony Corporation Video coding system with adaptive upsampling and method of operation thereof
US10085017B2 (en) 2012-11-29 2018-09-25 Advanced Micro Devices, Inc. Bandwidth saving architecture for scalable video coding spatial mode
EP2934014A4 (en) * 2012-12-13 2016-07-13 Sony Corp Transmission device, transmission method, reception device, and reception method
KR20230080500A (en) 2023-06-07 Ge Video Compression, Llc Efficient scalable coding concept
US20140192880A1 (en) * 2013-01-04 2014-07-10 Zhipin Deng Inter layer motion data inheritance
WO2014161355A1 (en) * 2013-04-05 2014-10-09 Intel Corporation Techniques for inter-layer residual prediction
CN117956141A (en) 2024-04-30 Ge Video Compression, Llc Multi-view decoder
US20160142593A1 (en) * 2013-05-23 2016-05-19 Thomson Licensing Method for tone-mapping a video sequence
US9497473B2 (en) * 2013-10-03 2016-11-15 Qualcomm Incorporated High precision explicit weighted prediction for video coding
KR20150075041A (en) 2013-12-24 2015-07-02 Kt Corp A method and an apparatus for encoding/decoding a multi-layer video signal
KR102336932B1 (en) * 2013-12-27 2021-12-08 Sony Group Corp Image processing device and method
KR20150110295A (en) 2014-03-24 2015-10-02 Kt Corp A method and an apparatus for encoding/decoding a multi-layer video signal
CN107079079B (en) * 2014-10-02 2019-10-15 Dolby Laboratories Licensing Corp Dual-ended metadata for judder visibility control
EP3076669A1 (en) * 2015-04-03 2016-10-05 Thomson Licensing Method and apparatus for generating color mapping parameters for video encoding
WO2023072216A1 (en) * 2021-10-28 2023-05-04 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004045217A1 (en) * 2002-11-13 2004-05-27 Koninklijke Philips Electronics N.V. Transmission system with colour depth scalability
US7295345B2 (en) * 2003-04-29 2007-11-13 Eastman Kodak Company Method for calibration independent defect correction in an imaging system
US8218625B2 (en) * 2004-04-23 2012-07-10 Dolby Laboratories Licensing Corporation Encoding, decoding and representing high dynamic range images
US20050259729A1 (en) * 2004-05-21 2005-11-24 Shijun Sun Video coding with quality scalability
US7483486B2 (en) * 2004-07-02 2009-01-27 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for encoding high dynamic range video
DE102004059993B4 (en) 2004-10-15 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a coded video sequence using interlayer motion data prediction, and computer program and computer readable medium
US7876833B2 (en) 2005-04-11 2011-01-25 Sharp Laboratories Of America, Inc. Method and apparatus for adaptive up-scaling for spatially scalable coding
US8483277B2 (en) 2005-07-15 2013-07-09 Utc Fire & Security Americas Corporation, Inc. Method and apparatus for motion compensated temporal filtering using split update process
US8014445B2 (en) 2006-02-24 2011-09-06 Sharp Laboratories Of America, Inc. Methods and systems for high dynamic range video coding

Also Published As

Publication number Publication date
WO2009051692A3 (en) 2009-06-04
EP2206350A2 (en) 2010-07-14
EP2206349A2 (en) 2010-07-14
JP5534522B2 (en) 2014-07-02
JP2011501564A (en) 2011-01-06
US20100208809A1 (en) 2010-08-19
KR101436671B1 (en) 2014-09-02
US8537894B2 (en) 2013-09-17
CN101822055A (en) 2010-09-01
WO2009051694A2 (en) 2009-04-23
US20100208810A1 (en) 2010-08-19
JP2011501563A (en) 2011-01-06
JP5534521B2 (en) 2014-07-02
CN101822059B (en) 2012-11-28
WO2009051694A3 (en) 2009-06-04
BRPI0817769A2 (en) 2015-03-24
EP2206350B1 (en) 2015-03-04
KR20100081988A (en) 2010-07-15
EP2206349B1 (en) 2015-03-11
US8385412B2 (en) 2013-02-26
CN101822059A (en) 2010-09-01
TWI422231B (en) 2014-01-01
KR20100085106A (en) 2010-07-28
KR101492302B1 (en) 2015-02-23
CN101822055B (en) 2013-03-13
TW200935913A (en) 2009-08-16
WO2009051692A2 (en) 2009-04-23
TWI528831B (en) 2016-04-01
BRPI0818648A2 (en) 2015-04-07

Similar Documents

Publication Publication Date Title
TW200935914A (en) Methods and apparatus for inter-layer residue prediction for scalable video
JP6214511B2 (en) Bit depth scalable video encoding and decoding method and apparatus using tone mapping and inverse tone mapping
JP6416992B2 (en) Method and arrangement for transcoding video bitstreams
JP5409640B2 (en) Method and apparatus for artifact removal for bit depth scalability
JP5232796B2 (en) Method and apparatus for encoding and / or decoding bit depth scalable video data using adaptive enhancement layer prediction
JP5383674B2 (en) Method and apparatus for encoding and / or decoding video data using enhancement layer residual prediction for bit depth scalability
KR101697149B1 (en) Specifying visual dynamic range coding operations and parameters
JP5539592B2 (en) Multi-layer image encoding and decoding apparatus and method
US20110293013A1 (en) Methods and Apparatus for Motion Compensation with Smooth Reference Frame in Bit Depth Scalability
TW200934252A (en) Scalable video coding techniques for scalable bitdepths
KR20160148835A (en) Method and apparatus for decoding a video signal with reference picture filtering