TW202224438A - Adaptive color space transform coding - Google Patents

Adaptive color space transform coding

Info

Publication number
TW202224438A
TW202224438A (Application TW111105943A)
Authority
TW
Taiwan
Prior art keywords
color
data
sample data
residual sample
transform
Prior art date
Application number
TW111105943A
Other languages
Chinese (zh)
Other versions
TWI807644B (en)
Inventor
亞歷山卓 圖拉比斯 (Alexandros Tourapis)
Original Assignee
Apple Inc. (美商蘋果公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/905,889 (published as US9225988B2)
Application filed by Apple Inc. (美商蘋果公司)
Publication of TW202224438A
Application granted
Publication of TWI807644B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122: Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/198: Adaptation for the computation of encoding parameters, including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
    • H04N19/60: Coding using transform coding
    • H04N19/70: Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N9/67: Circuits for processing colour signals for matrixing
    • H04N1/64: Systems for the transmission or the storage of the colour picture signal; details therefor, e.g. coding or decoding means therefor

Abstract

An encoder system may include an analyzer that analyzes a current image area in an input video to select a transform. A selectable residue transformer, controlled by the analyzer, may perform the selectable transform on a residue image generated from the current image area and a predicted current image area, to generate a transformed residue image. An encoder may encode the transformed residue image to generate output data. The analyzer controls the encoder to encode information to identify the selectable transform and to indicate that the selectable transform for the current image area is different from a transform of a previous image area of the input video. A decoder system may include components appropriate for decoding the output data from the encoder system.

Description

Adaptive color space transform coding

Image data, such as the image data contained in video, can carry a large amount of information related to color, pixel position, and time. To handle this volume of information, it may be necessary to compress or encode the image data without losing too much information from the original video, and without increasing the complexity of data compression to the point that image data processing slows down. The encoded image data may later need to be decoded to transform back to, or recover, the original video information.

To encode an image, the pixel color data may first be converted into color data in an appropriate color space coordinate system, and the converted data is then encoded. For example, image data may have raw pixel color data in a red-green-blue (RGB) color space coordinate system. To encode the image data, the raw pixel color data in the RGB color space may be converted into color data in the YCbCr color space coordinate system by separating the luma component from the color components. The color data in the YCbCr color space coordinate system may then be encoded. In this way, redundant information that may exist among the original three colors can be compressed by removing the redundancy during the color space conversion.

Additional redundancy in the image data may be removed during encoding of the converted image data by performing spatial prediction and temporal prediction, followed by additional encoding of any remaining residual data to whatever degree is desirable, and by entropy coding of the data in individual frames at a point in time and/or over the duration of a video sequence. Spatial prediction predicts image data within a single frame to eliminate redundant information between different pixels of the same frame. Temporal prediction predicts image data over the duration of a video sequence to eliminate redundant information between different frames. A residual image may be generated from the difference between the unencoded image data and the predicted image data.

Some color space formats, such as RGB 4:4:4, may be less efficient to code natively, because the different color planes may not have been effectively decorrelated. That is, redundant information may exist between different components and may not be removed during encoding, resulting in reduced coding efficiency relative to alternative color spaces. On the other hand, because of the color conversion that may have to be performed outside the coding loop, and the losses that such color conversion may introduce, encoding this material in an alternative color space such as YUV 4:4:4 or YCoCg and YCoCg-R 4:4:4 may not be ideal in some applications.
Therefore, there is a need for improved ways to efficiently convert and encode image data.
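For illustration, the following sketch performs the kind of RGB-to-YCbCr conversion described above. The BT.601 full-range coefficients used here are an assumption for the example; the text does not prescribe a particular matrix.

    # A minimal sketch of the RGB -> YCbCr conversion step described above,
    # assuming BT.601 full-range coefficients (illustrative values only).
    def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple:
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of R, G, B
        cb = 0.564 * (b - y)                    # blue-difference chroma
        cr = 0.713 * (r - y)                    # red-difference chroma
        return y, cb, cr

    # Example: a saturated red pixel in 8-bit RGB.
    y, cb, cr = rgb_to_ycbcr(255.0, 0.0, 0.0)
    print(round(y, 1), round(cb, 1), round(cr, 1))  # 76.2 -43.0 127.5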

Priority Claim: This application claims priority to U.S. Patent Application Serial No. 13/940,025, filed July 11, 2013, which is a continuation-in-part of U.S. Patent Application Serial No. 13/905,889, filed May 30, 2013. The entirety of that application is incorporated herein by reference.

According to one embodiment, as illustrated in FIG. 1, system 100 may include analyzer 130, selectable residual transformer 160, and encoder 170. The analyzer 130 may analyze the current image area in the input video 110 to select a transform.
The selectable residual transformer 160 may be controlled by the analyzer 130 to perform a selectable transform on a residual image generated from the current image area and the predicted current image area, to generate a transformed residual image. Encoder 170 may encode the transformed residual image to generate output data 190. The analyzer 130 may control the encoder 170 to encode information to identify the selectable transform and to indicate that the selectable transform of the current image area is different from the transform of a previous image area of the input video.

Optionally, the system 100 may include a frame buffer 120 to store information of the input video 110, e.g., previously processed image data. This data in frame buffer 120 may be used by inter-frame prediction 150, controlled by analyzer 130 to perform temporal prediction, i.e., to generate predicted image data for the current image area based on data from previous frames. Alternatively, this data in frame buffer 120 may be used by intra-frame prediction 152, controlled by analyzer 130 to perform spatial prediction, i.e., to generate predicted image data for the current image area based on data from another portion of the current frame. Optionally, analyzer 130 may perform its analysis based on data stored in frame buffer 120. The predicted image area of the current image area generated by inter-frame prediction 150 and/or intra-frame prediction 152 may be combined with (or subtracted from) the current image area of the input video 110 by integrator 140 to produce the residual image.

According to an embodiment, the current image area may be one of a frame, a slice, and a coding tree unit. The selectable transform may include a color space transform. The encoder 170 may include an entropy encoder. The encoded information identifying the selectable transform may specify coefficients of the selectable inverse transform. The encoded information identifying the selectable transform may be contained in one of a sequence parameter set, a picture parameter set, and a slice header, ahead of the encoded residual image data for the current image area. The encoder 170 may include a transformer 172 and/or a quantizer 174, which may be controlled by the analyzer 130 to perform quantization.

The analyzer 130 may select and change the selectable transform of the selectable residual transformer 160 and correspondingly change, for example, the parameters of inter-frame prediction 150, intra-frame prediction 152, and encoder 170, to optimize data encoding, data decoding, encoded data size, error rate, and/or the system resources required for encoding or decoding.

The next-generation High Efficiency Video Coding (HEVC) standard introduces several new video coding tools in an effort to improve video coding efficiency relative to previous video coding standards and technologies, such as MPEG-2, MPEG-4 Part 2, MPEG-4 AVC/H.264, VC1, and VP8. The new standard supports encoding of YUV 4:2:0 8- or 10-bit material using well-defined profiles (e.g., the Main Profile, Main 10 Profile, and Main Still Picture Profile).
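To make the encoder-side residual path concrete, here is a minimal sketch, assuming integer image areas and a selectable 3x3 matrix; the function name and array shapes are hypothetical, not taken from the patent.

    import numpy as np

    # Minimal sketch of the residual path described above (hypothetical
    # helper, not the patent's implementation): subtract the predicted
    # image area from the current image area, then apply a selectable
    # 3x3 color transform to the color components of each residual pixel.
    def transform_residual(current, predicted, m):
        """current, predicted: (H, W, 3) integer arrays; m: 3x3 transform."""
        residual = current.astype(np.int32) - predicted.astype(np.int32)
        return residual @ m.T  # out[h, w, :] = m @ residual[h, w, :]

    identity = np.eye(3)  # the "no color transform" choice
    current = np.random.randint(0, 256, (8, 8, 3))
    predicted = np.random.randint(0, 256, (8, 8, 3))
    transformed = transform_residual(current, predicted, identity)
    assert transformed.shape == (8, 8, 3)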
There is considerable interest in professional applications (such as digital cinema, capture, video editing, archiving, and gaming) and in consumer applications (especially for screen content compression and sharing) in developing support for formats with higher sample precision (bit depths greater than 10 bits) and for different color sampling formats and color spaces, including YUV or RGB 4:4:4.

The coding principles for higher color sampling formats/spaces may be similar to those of formats with lower sampling precision (i.e., 4:2:0 YUV), adapted to properly handle the difference in resolution of the chroma components. One of the color components may be treated as equivalent to the luma component in 4:2:0 YUV coding, while the remaining color components may be handled similarly to chroma components, while accounting for the higher resolution. That is, prediction tools such as intra-frame prediction and motion compensation need to address the increase in resolution, and the transform and quantization processes also need to handle the additional residual data of the color components. Similarly, other processes such as entropy coding, deblocking, and sample adaptive offset (SAO) may need to be extended to handle the increase in video data. Alternatively, all color components may be encoded separately as distinct monochrome images, where each color component plays the role of luma information during the encoding or decoding process.

To improve coding performance, an additional color space transform may be performed on the residual data, which can result in better decorrelation (lower redundancy) among the color components. A selectable color space transform may be applied to the dequantized (inverse quantized) and inverse transformed residual data using an adaptively derived color space transform matrix, such as:
[Equation: an adaptively derived 3x3 color space transform matrix applied to the vector of three residual color components]
The color transform matrix may be derived using previously reconstructed image data, such as the image data to the left of or above the current transform unit, or the image data of transform units in a previous frame. The derivation may involve normalizing the reference samples in each color plane by subtracting the mean of the reference samples in that plane, and computing and normalizing the covariance matrix across all color planes. This can achieve some "localized" coding performance benefit without adding any new signaling overhead to the HEVC specification. However, it can add complexity in both the encoder and the decoder for deriving the transform parameters.

To simplify adaptive color transforms for video encoding and decoding, the color transform is applied only to the residual data. In accordance with the present invention, additional color transforms may be selected and signaled by the encoder, and the decoder may select and perform the corresponding inverse color transform based on the signaling decoded from the encoded data.

In particular, one or more color transforms may be signaled implicitly or explicitly at different levels within a codec such as HEVC. For example, the encoder may implicitly signal known transforms from the RGB color space, such as limited- or full-range YUV Rec.709, Rec.2020, or Rec.601, as well as YCoCg. The encoder may explicitly signal a color transform by signaling or specifying all inverse color transform coefficients with a predefined precision, for example by listing the transform coefficients, or their relationships, in parts of the encoded data. Color transforms, including their types, parameters, and coefficients, may be signaled or specified in sequence parameter set (SPS) NALUs, picture parameter sets (PPS), and/or slice headers. Signaling within a coding tree unit (CTU) may also be possible, although depending on the bit rate it may cost an extra bit, which may not be desirable.

If this transform information is specified at different levels of the video sequence (i.e., sequences of CTUs, frames, and pixel blocks), transforms can be predicted within the hierarchy of these elements. That is, transforms in a PPS can be predicted from transforms defined in the SPS, and transforms in a slice header can be predicted from transforms in the PPS and/or SPS. New syntax elements and units may be defined and used to allow this prediction of transforms between different levels of the video sequence hierarchy, including whether a transform is predicted from a specified or higher-level transform, as well as the precision used for the transform coefficients and whether the coefficients themselves are predicted. The derivation of explicitly defined color transforms may be based on available data, such as sample data from an entire sequence, picture, slice, or CTU. The encoder may opt to use data corresponding to the current pixel samples, if available, or to use data from already-encoded past frames or units. Principal component analysis methods (e.g., the covariance method, iterative methods, nonlinear iterative partial least squares, etc.) may be used to derive the transform coefficients.

The system may indicate that only a single transform applies to the entire sequence, thus disallowing, via signaling or semantics (i.e., enforced by the codec or by a profile/level), any change of the color transform within any subcomponent of the sequence (i.e., within a picture, slice, CTU, or transform unit (TU)). Similar constraints may be enforced at a lower level, i.e., within a picture, slice, or CTU.
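As a rough illustration of the covariance-based derivation described above, the sketch below mean-centers reference samples, computes the covariance across color planes, and uses its eigenvectors as the decorrelating basis, in the spirit of principal component analysis. This is an assumption-level example, not the normative derivation.

    import numpy as np

    # Rough sketch of deriving a decorrelating color transform from
    # neighboring reconstructed samples via the covariance/PCA approach
    # mentioned above (illustrative, not the normative algorithm).
    def derive_color_transform(reference):
        """reference: (N, 3) reconstructed samples taken from the left/above
        neighborhood of the current transform unit (or a previous frame).
        Returns a 3x3 matrix whose rows are the principal color axes."""
        centered = reference - reference.mean(axis=0)   # subtract per-plane mean
        cov = centered.T @ centered / max(len(reference) - 1, 1)
        eigvals, eigvecs = np.linalg.eigh(cov)          # cov is symmetric
        order = np.argsort(eigvals)[::-1]               # strongest axis first
        return eigvecs[:, order].T

    # The derived matrix would then be applied to the residual samples,
    # e.g., decorrelated = residual_samples @ m.T.
    m = derive_color_transform(np.random.rand(64, 3) * 255.0)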
However, it may also be possible for the system to allow switching of color transforms within a sequence, picture, slice, or even CTU. Switching of the color transform for each picture and slice may be done by signaling new color transform parameters for each new block of data, with the new parameters overriding the higher-level or previous block's transform parameters. Additional transform parameters may be signaled at lower levels, effectively allowing switching of the color transform for an entire CTU, coding unit (CU), or even TU. However, this signaling can occupy a significant number of bits in the resulting encoded data stream, thus increasing its size.

Alternatively, the color transform may be derived based on a number of predefined or signaled conditions in the bitstream. In particular, specific color transforms may be pre-assigned to specific transform block sizes, coding unit sizes, or prediction modes (e.g., intra versus inter). For example, assuming the transform units of the luma and chroma data are aligned for a particular video sequence, if the luma transform to be used is of size 16x16, color transform A is used; if an 8x8 luma transform is to be used, color transform B is used; and for 32x32 or 4x4 transforms, no color transform is applied. If the transform units of the luma and chroma data are not aligned, an alternative but similar scheme of deriving the color transform from predefined conditions may be used to resolve the misalignment of the transform units.

The system may buffer or cache a number of predefined color transforms, along with the associated processing algorithms, during encoding or decoding, so that the system can store a codebook from which predefined color transforms can be looked up, e.g., via a look-up table (LUT). The system may also compute or predict color transforms and store them in a buffer for later lookup.

In some codec standards, prediction units (PUs) and TUs may be defined within a CU without a strict dependency between the two. Therefore, PUs and TUs may not be directly related in size. In other codec standards, if TUs are strictly defined within a PU, PU information such as prediction lists and reference indices may be used to derive the color transform.

For systems where complexity is not a concern, a combination of the above methods may be used. That is, for each CTU, CU, or transform block, the encoder may signal in the encoded data stream whether a previously defined/signaled color transform is used, or whether the color transform should be derived separately for the current unit based on neighborhood information. This allows the system to control decoder complexity, and avoids situations where there is not enough information from the neighbors to derive a color transform. Such situations are especially likely around object or color edges, or around noisy data, where the neighborhood data may be decorrelated. Adaptively computed color transforms may be computed and updated at less frequent intervals (e.g., once per CTU row, or even once per CTU) to reduce decoder complexity. The stability of the color transform can be increased by slowly adapting it using previously generated values. That is, computing the current color transform at unit (e.g., transform unit) n may be performed as:

Transform(n) = w0 * Transform(n-1) + w1 * ComputedTransform(n)

where ComputedTransform(n) is a transform estimated purely from local pixel group information.
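A small sketch of the weighted update just given; blending the matrices elementwise, and the example weight values, are assumptions here.

    import numpy as np

    # Sketch of the smoothed update described above:
    #   Transform(n) = w0 * Transform(n-1) + w1 * ComputedTransform(n)
    # Elementwise blending of the 3x3 matrices is assumed.
    def update_transform(prev_t, computed_t, w0=0.75, w1=0.25):
        return w0 * prev_t + w1 * computed_t

    # With w0 large relative to w1, Transform(n) stays close to the
    # neighboring Transform(n-1), changing slowly even if the local
    # estimate is noisy.
    t_prev = np.eye(3)
    t_local = 0.9 * np.eye(3)   # stand-in for a locally computed transform
    t_n = update_transform(t_prev, t_local)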
The two weights w0 and w1 may be predefined or signaled in the system, providing further flexibility in how the computation of the color transform is controlled. That is, increasing the value of w0 relative to w1 increases the dependence of the resulting color transform Transform(n) on the neighboring color transform Transform(n-1).

The encoding system may determine all the transforms needed to encode a video sequence by, for example, analyzing the image data in the sequence, and perform a cost-benefit evaluation to optimize encoding, decoding, data quality, and/or the size of the encoded data. For example, if the encoding system has sufficient computational resources, it may perform a "brute force" analysis by applying multiple candidate color transforms to all individual frames and transform units, and then, where rate-distortion is to be optimized, select for each transform unit the color transform that yields the lowest rate-distortion cost. However, such a "brute force" analysis requires substantial computational resources and is slow, and therefore may not be useful in applications where encoding must occur in near real time (e.g., in live video streaming).

Using a different color transform per block can affect other parts of the encoding and decoding process. In particular, entropy coding based on, for example, context adaptive binary arithmetic coding (CABAC) assumes that the coefficients in neighboring blocks are in the same color domain, so that the statistics of the entropy coding process can be accumulated accordingly, and deblocking uses the quantization parameter (QP) of each color component when filtering block edges.

However, this may not hold in systems using block-level adaptive color transforms, which can affect coding performance. In the case of entropy coding, the impact may be insignificant, and the difference in color spaces can therefore be ignored. Restricting the process to consider only neighboring data in the same color space could worsen complexity and implementation performance, because more contexts may need to be handled to compensate for every new color transform that may have been used. Therefore, the system may not need to alter the entropy coding process for adaptive color transforms.

On the other hand, changes of the adaptive color transform may be easier to address during deblocking. In particular, given the QP value used to code the transformed residual, when deriving the appropriate thresholds for deblocking each color component, the signaled QP value may be used while ignoring the color space in use, or the QP value may be approximated in the native color domain. For example, a simple approach is to apply the same color transform applied to the residual data to the quantizer values as well, or to define and signal an additional conversion that helps translate the transform-domain quantizer values used for the transformed residual into native-color-space quantizer values. For simplicity, the system may choose not to translate or adjust the quantizer values for adaptive color transforms.

According to one embodiment, as illustrated in FIG. 2, system 200 may include decoder 230, selectable residual inverse transformer 220, and integrator 240. Decoder 230 may receive and decode input data 210. The selectable residual inverse transformer 220 may be controlled by decoder 230 to perform a selectable inverse transform on the decoded input data, to generate an inverse transformed residual image.
The integrator 240 may combine the inverse transformed residual image with a predicted image of the current image area, to generate the restored current image area of output video 290. Decoder 230 may select the selectable inverse transform based on encoded information in input data 210 that identifies the selectable inverse transform and indicates that the selectable inverse transform of the current image area is different from the transform of a previous image area of output video 290.

Optionally, the system 200 may include a frame buffer 280 to store information of the output video 290, e.g., previously processed image data. This data in frame buffer 280 may be used by inter-frame prediction 250, controlled by decoder 230 to perform temporal prediction, i.e., to generate predicted image data for the current image area based on data from previous frames. Intra-frame prediction 260 may be controlled by decoder 230 to perform spatial prediction, i.e., to generate predicted image data for the current image area based on data from another portion of the current frame. The predicted image area of the current image area generated by inter-frame prediction 250 and/or intra-frame prediction 260 may be combined with (or added to) the inverse transformed residual image from the selectable residual inverse transformer 220 by integrator 240, to generate the restored current image area of output video 290. The system 200 may include an adjuster 270 that performs adjustments on the restored current image area of the output video 290. Adjuster 270 may include deblocking 272 and sample adaptive offset (SAO) 274. Adjuster 270 may output to output video 290 and/or frame buffer 280.

According to an embodiment, the current image area may be one of a frame, a slice, and a coding tree unit. The selectable inverse transform may include a color space transform. Decoder 230 may include an entropy decoder. The encoded information identifying the selectable inverse transform may specify coefficients of the selectable inverse transform. The encoded information identifying the selectable inverse transform may be contained in one of a sequence parameter set, a picture parameter set, and a slice header, ahead of the encoded residual image data for the current image area. The decoder 230 may include an inverse transformer 232 and/or an inverse quantizer 234, which may perform inverse quantization. The output video 290 may be connected to a display device (not shown) and displayed.

Decoder 230 may select and change the selectable inverse transform of the selectable residual inverse transformer 220, and correspondingly change, for example, the parameters of inter-frame prediction 250, intra-frame prediction 260, and adjuster 270, based on the encoded information in the received input data identifying the selectable inverse transform.

FIG. 3 illustrates a method 300 according to an embodiment. The method 300 may include block 310, in which an analyzer analyzes a current image area in an input video to select a transform. At block 320, a selectable transform is performed, by a selectable residual transformer controlled by the analyzer, on a residual image generated from the current image area and a predicted current image area, to generate a transformed residual image. At block 330, the transformed residual image is encoded by an encoder to generate output data.
According to one embodiment, the analyzer may control the encoder to encode information to identify the selectable transform and to indicate that the selectable transform of the current image area is different from the transform of a previous image area of the input video. According to one embodiment, the analyzer may analyze the input video and select an overall sequence color transform for the entire video sequence, and analyze and select residual color transforms for individual frames, slices, pixel blocks, CTUs, and so on. The analyzer may analyze the input video continuously as it is received and encoding proceeds, performing an on-the-fly selection of the color transform for each frame. Alternatively, the analyzer may analyze the entire input video sequence in full before selecting the color transforms and starting to encode.

FIG. 4 illustrates a method 400 according to an embodiment. The method 400 may include block 410, in which a decoder receives and decodes input data. At block 420, a selectable inverse transform is performed on the decoded input data, by a selectable residual inverse transformer controlled by the decoder, to generate an inverse transformed residual image. At block 430, the inverse transformed residual image is combined with a predicted image of the current image area by an integrator, to generate the restored current image area of the output video. According to one embodiment, the decoder may select the selectable inverse transform based on encoded information in the input data that identifies the selectable inverse transform and indicates that the selectable inverse transform of the current image area is different from a transform of a previous image area of the input video.

According to one embodiment, the selectable residual transformer 160 in FIG. 1 may perform a color transform in which one color component of the result is based on only one color component of the input. For example, the selectable residual transformer 160 may perform the following color transform:
[Equation: a causal color transform in which the first output component depends on a single input component, e.g., G' = G, Rb = B - G, Rr = R - G]
If the input data each have N bits, the color transform can be kept within N bits, including the sign, by simple quantization. The subtraction in the above computation can be handled in two ways: first, with a right-shift operation (i.e., the coded Rb may be derived as (B - G + 1) >> 1); second, with a clipping operation (i.e., min(max_range, max(min_range, B - G)), where min_range and max_range are the minimum and maximum values allowed in the transform, and may be pre-specified in the protocol, signaled by the encoding system, or computed dynamically, e.g., max_range = (1 << (N - 1)) - 1 and min_range = -max_range - 1).

The above transform can be advantageous because it is "causal" and corresponds to the order in which the color components in the image data may be decoded, e.g., often starting from green (or luma, in the case of the YCbCr or YCgCo/YCgCo-R color spaces), followed by B (or Cb), followed by R (or Cr). The first color component may depend on only one color component of the input data, and may be independent of the other (not yet coded) color components of the input data. After the first color component has been coded, however, it can be used as a factor in computing predictions of the other color components. A corresponding decoding system may implement the inverse color transform corresponding to the above color transform. This allows encoding and decoding systems to be implemented that operate on these color planes serially, processing the color components as they are sent or received in sequence, using the relatively simple computations shown above, without the extra delay of waiting for all color components to be queued and/or processed. The selectable residual transformer 160 in FIG. 1 may implement separate, or split, processing paths for each color component, in which the input data is split into individual color components and the resulting transformed color components are later merged by the encoder 170.

According to one embodiment, the selectable residual transformer 160 in FIG. 1 may be implemented using "closed-loop" optimization of the color transform. That is, the selectable residual transformer 160 may receive feedback data for use in the color transform. Alternatively, the selectable residual transformer 160 may perform the color transform using original samples as input data. For example, in the G, Rb, Rr transform, the original GBR color space data samples may be used to perform the color transform, with each new set of resulting transformed data computed independently from a new set of original GBR color space data samples.

Given the serial nature of the color transform shown in the example above, the green component data may be color transformed and encoded first, followed by the other colors. The selectable residual transformer 160 in FIG. 1 may use the reconstructed green component data as input for the color transforms of the other color components, for example using the following equations:

G* = IQT(QT(G')), where QT is the quantization function, IQT is the corresponding inverse quantization function, G' denotes the green residual data, and G* denotes the reconstructed green residual data;

Rb' = B - G*, where B denotes the blue component data and Rb' denotes the residual data of the Rb component;

Rb* = IQT(QT(Rb')), where Rb* denotes the reconstructed Rb residual data;

Rr' = R - G*, where R denotes the red component data and Rr' denotes the residual data of the Rr component;

Rr* = IQT(QT(Rr')), where Rr* denotes the reconstructed Rr residual data.
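A minimal sketch of the closed-loop G/Rb/Rr transform just described, assuming a uniform scalar quantizer with step size q as a stand-in for QT/IQT (the actual quantizer is codec-dependent); the variables g, b, r play the roles of the component values in the equations above.

    # Minimal sketch of the closed-loop transform above, with a uniform
    # scalar quantizer of step size q standing in for QT/IQT (assumption;
    # the real quantizer is codec-dependent).
    def qt(x, q):
        return round(x / q)           # quantization to an integer level

    def iqt(level, q):
        return level * q              # inverse quantization

    def forward_grbrr(g, b, r, q=4):
        g_rec = iqt(qt(g, q), q)      # G*: reconstructed green, as the decoder sees it
        rb = b - g_rec                # Rb' = B - G*, predicted from reconstructed green
        rr = r - g_rec                # Rr' = R - G*
        return qt(g, q), qt(rb, q), qt(rr, q)

    def inverse_grbrr(lg, lrb, lrr, q=4):
        g_rec = iqt(lg, q)            # decoder recovers G* first
        b_rec = iqt(lrb, q) + g_rec   # B = Rb* + G*
        r_rec = iqt(lrr, q) + g_rec   # R = Rr* + G*
        return g_rec, b_rec, r_rec

    # Round-trips up to quantization error, e.g. (100, 120, 90) -> (100, 120, 92).
    print(inverse_grbrr(*forward_grbrr(100, 120, 90)))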
With the above color transform, the encoded data components Rb', Rb*, Rr', and Rr* are generated based on the reconstructed green residual data. This can help the corresponding decoding system achieve better performance: because the decoding system has only the reconstructed color component data (such as G*) for the inverse color transform, and does not have the original color data samples, an encoding system that uses the reconstructed color component data will match the decoding system more closely, reducing any potential color component leakage caused in the quantization process.

It should be understood that the present invention is not limited to the described embodiments, and that any number of scenarios and embodiments in which conflicts may arise can be resolved. Although the present invention has been described with reference to several exemplary embodiments, it should be understood that the words that have been used are words of description and illustration, rather than words of limitation. As presently stated and as amended, changes may be made within the purview of the appended claims without departing from the scope and spirit of the invention in its aspects. Although the invention has been described with reference to particular means, materials, and embodiments, the invention is not intended to be limited to the particulars disclosed; rather, the invention extends to all functionally equivalent structures, methods, and uses, such as are within the scope of the appended claims.

Although a computer-readable medium may be described as a single medium, the term "computer-readable medium" includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term "computer-readable medium" shall also include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor, or that causes a computer system to perform any one or more of the embodiments disclosed herein.

The computer-readable medium may include a non-transitory computer-readable medium and/or a transitory computer-readable medium. In a particular non-limiting exemplary embodiment, the computer-readable medium may include a solid-state memory, such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium may be a random access memory or other volatile rewritable memory. Additionally, the computer-readable medium may include a magneto-optical or optical medium, such as a disk or tape, or another storage device to capture carrier wave signals, such as signals communicated over a transmission medium. Accordingly, the invention is considered to include any computer-readable medium or other equivalent and successor media in which data or instructions may be stored.

Although the present application describes specific embodiments that may be implemented as code segments in a computer-readable medium, it is to be understood that dedicated hardware implementations, such as application-specific integrated circuits, programmable logic arrays, and other hardware devices, can be constructed to implement one or more of the embodiments described herein. Applications that may include the various embodiments set forth herein may broadly include a variety of electronic and computer systems. Accordingly, the present application may encompass software, firmware, and hardware implementations, or combinations thereof.
This specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, to which the invention is not limited. These standards are periodically replaced by faster or more efficient equivalents that have essentially the same function. Accordingly, alternative standards and conventions with the same or similar function are considered equivalents thereof. The descriptions of the embodiments described herein are intended to provide a general understanding of the various embodiments. These descriptions are not intended to serve as a complete description of all elements and features of devices and systems that utilize the structures or methods described herein. Many other embodiments will be apparent to those skilled in the art upon review of this disclosure. Other embodiments may be utilized and derived from the present invention, such that structural and logical substitutions and changes may be made without departing from the scope of the present invention. Additionally, these illustrations are representative only and may not be drawn to scale. Certain proportions within these descriptions may be exaggerated, while other proportions may be minimized. Accordingly, the present disclosure and drawings are to be regarded as illustrative rather than restrictive. One or more embodiments of the invention may be referred to herein individually and/or collectively by the term "disclosure" which is for convenience only and is not intended to voluntarily limit the scope of this application any specific disclosure or inventive concept. Furthermore, although specific embodiments have been illustrated and described herein, it should be understood that any subsequent arrangements, designed to achieve the same or similar purpose, may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments and other embodiments not specifically described herein will be apparent to those skilled in the art upon review of the description. Furthermore, in the foregoing [embodiments], various features may be grouped together or described in a single embodiment for the purpose of streamlining the invention. This invention is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Indeed, as reflected in the following claims, the inventive subject matter may be directed to less than all features of any of the disclosed embodiments. Accordingly, the following claims are incorporated into the [Embodiments], with each claim standing on its own by defining the subject matter of separate claims. The subject matter disclosed above is to be regarded in an illustrative rather than a restrictive sense, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments that fall within the true spirit and scope of the present invention. Thus, to the maximum extent permitted by law, the scope of the present invention should be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing embodiments.

100: system
110: input video
120: frame buffer
130: analyzer
140: integrator
150: inter-frame prediction
152: intra-frame prediction
160: selectable residual transformer
170: encoder
172: transformer
174: quantizer
190: output data
200: system
210: input data
220: selectable residual inverse transformer
230: decoder
232: inverse transformer
234: inverse quantizer
240: integrator
250: inter-frame prediction
260: intra-frame prediction
270: adjuster
272: deblocking
274: sample adaptive offset (SAO)
280: frame buffer
290: output video
300: method
400: method

FIG. 1 illustrates an encoding system according to an embodiment of the present invention.
FIG. 2 illustrates a decoding system according to an embodiment of the present invention.
FIG. 3 illustrates an encoding method according to an embodiment of the present invention.
FIG. 4 illustrates a decoding method according to an embodiment of the present invention.


Claims (6)

1. A device for decoding video data, the device comprising: means for decoding encoded video data to determine transformed residual sample data and to determine color transform parameters for a current image area; means for determining, from the color transform parameters, a selected inverse color transform for the current image area; means for performing the selected inverse color transform on the transformed residual sample data to generate inverse transformed residual sample data; and means for combining the inverse transformed residual sample data with motion predicted image data to generate restored image data for the current image area of an output video.

2. The device of claim 1, wherein a first color component of the inverse transformed residual sample data is generated based on only one color component of the transformed residual sample data, and other color components of the inverse transformed residual sample data are predicted based on the first color component of the inverse transformed residual sample data.

3. The device of claim 1, wherein the encoded video data is split into individual color components, and wherein the processing paths for the respective color components are separate from one another.

4. The device of claim 1, wherein decoding the encoded video data comprises entropy decoding.

5. The device of claim 1, wherein the color transform parameters specify two coefficients of the selected inverse color transform.

6. The device of claim 1, wherein: a first color component of the inverse transformed residual sample data is generated based on only a first color component of the transformed residual sample data; a second color component is predicted by multiplying the first color component of the inverse transformed residual sample data by a first one of the color transform parameters; and a third color component is predicted by multiplying the first color component of the inverse transformed residual sample data by a second one of the color transform parameters.
TW111105943A 2013-05-30 2014-04-21 Adaptive color space transform coding TWI807644B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/905,889 2013-05-30
US13/905,889 US9225988B2 (en) 2013-05-30 2013-05-30 Adaptive color space transform coding
US13/940,025 US9225991B2 (en) 2013-05-30 2013-07-11 Adaptive color space transform coding
US13/940,025 2013-07-11

Publications (2)

Publication Number Publication Date
TW202224438A (en) 2022-06-16
TWI807644B (en) 2023-07-01

Family

ID=50771602

Family Applications (7)

Application Number Title Priority Date Filing Date
TW107126378A TWI717621B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW105119182A TWI601417B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW112122318A TW202341743A (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW110102519A TWI758081B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW111105943A TWI807644B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW103114429A TWI547151B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW106124602A TWI634782B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding

Family Applications Before (4)

Application Number Title Priority Date Filing Date
TW107126378A TWI717621B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW105119182A TWI601417B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW112122318A TW202341743A (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW110102519A TWI758081B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding

Family Applications After (2)

Application Number Title Priority Date Filing Date
TW103114429A TWI547151B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding
TW106124602A TWI634782B (en) 2013-05-30 2014-04-21 Adaptive color space transform coding

Country Status (9)

Country Link
US (1) US9225991B2 (en)
EP (2) EP3352459A1 (en)
JP (5) JP6397901B2 (en)
KR (9) KR102232022B1 (en)
CN (5) CN110460848B (en)
AU (6) AU2014272154B2 (en)
HK (1) HK1219189A1 (en)
TW (7) TWI717621B (en)
WO (1) WO2014193538A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9225991B2 (en) * 2013-05-30 2015-12-29 Apple Inc. Adaptive color space transform coding
US9225988B2 (en) 2013-05-30 2015-12-29 Apple Inc. Adaptive color space transform coding
US20140376611A1 (en) * 2013-06-21 2014-12-25 Qualcomm Incorporated Adaptive color transforms for video coding
TWI676389B (en) 2013-07-15 2019-11-01 美商內數位Vc專利控股股份有限公司 Method for encoding and method for decoding a colour transform and corresponding devices
US10178413B2 (en) 2013-09-12 2019-01-08 Warner Bros. Entertainment Inc. Method and apparatus for color difference transform
EP3114835B1 (en) 2014-03-04 2020-04-22 Microsoft Technology Licensing, LLC Encoding strategies for adaptive switching of color spaces
EP3565251B1 (en) 2014-03-04 2020-09-16 Microsoft Technology Licensing, LLC Adaptive switching of color spaces
CA2940015C (en) * 2014-03-27 2020-10-27 Microsoft Technology Licensing, Llc Adjusting quantization/scaling and inverse quantization/scaling when switching color spaces
US10687069B2 (en) 2014-10-08 2020-06-16 Microsoft Technology Licensing, Llc Adjustments to encoding and decoding when switching color spaces
GB2533109B (en) * 2014-12-09 2018-11-28 Gurulogic Microsystems Oy Encoder, decoder and method for data
GB2533111B (en) * 2014-12-09 2018-08-29 Gurulogic Microsystems Oy Encoder, decoder and method for images, video and audio
US10158836B2 (en) * 2015-01-30 2018-12-18 Qualcomm Incorporated Clipping for cross-component prediction and adaptive color transform for video coding
US10390020B2 (en) 2015-06-08 2019-08-20 Industrial Technology Research Institute Video encoding methods and systems using adaptive color transform
EP3297282A1 (en) * 2016-09-15 2018-03-21 Thomson Licensing Method and apparatus for video coding with adaptive clipping
US20210350210A1 (en) * 2018-07-30 2021-11-11 Intel Corporation Method and apparatus for keeping statistical inference accuracy with 8-bit winograd convolution
CN113841395B (en) * 2019-05-16 2022-10-25 北京字节跳动网络技术有限公司 Adaptive resolution change in video coding and decoding
CN114128258B (en) * 2019-07-14 2023-12-22 北京字节跳动网络技术有限公司 Restriction of transform block size in video codec
CN114651442A (en) 2019-10-09 2022-06-21 字节跳动有限公司 Cross-component adaptive loop filtering in video coding and decoding
CN117528080A (en) 2019-10-14 2024-02-06 字节跳动有限公司 Joint coding and filtering of chroma residual in video processing
EP4055827A4 (en) 2019-12-09 2023-01-18 ByteDance Inc. Using quantization groups in video coding
WO2021138293A1 (en) * 2019-12-31 2021-07-08 Bytedance Inc. Adaptive color transform in video coding
CN115443655A (en) 2020-06-09 2022-12-06 阿里巴巴(中国)有限公司 Method for processing adaptive color transform and low frequency inseparable transform in video coding
CN112449200B (en) * 2020-11-12 2023-01-31 北京环境特性研究所 Image compression method and device based on wavelet transformation
CN117579839B (en) * 2024-01-15 2024-03-22 电子科技大学 Image compression method based on rate-distortion optimized color space conversion matrix

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001054137A (en) * 1999-08-09 2001-02-23 Nippon Telegr & Teleph Corp <Ntt> Color image coder, its method, color image decoder and its method
US6754383B1 (en) 2000-07-26 2004-06-22 Lockheed Martin Corporation Lossy JPEG compression/reconstruction using principal components transformation
US20040125130A1 (en) * 2001-02-26 2004-07-01 Andrea Flamini Techniques for embedding custom user interface controls inside internet content
JP2003134352A (en) * 2001-10-26 2003-05-09 Konica Corp Image processing method and apparatus, and program therefor
CN1190755C (en) * 2002-11-08 2005-02-23 北京工业大学 Colour-picture damage-free compression method based on perceptron
US7469069B2 (en) * 2003-05-16 2008-12-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding image using image residue prediction
US7333544B2 (en) 2003-07-16 2008-02-19 Samsung Electronics Co., Ltd. Lossless image encoding/decoding method and apparatus using inter-color plane prediction
KR100624429B1 (en) * 2003-07-16 2006-09-19 삼성전자주식회사 A video encoding/ decoding apparatus and method for color image
EP1538844A3 (en) * 2003-11-26 2006-05-31 Samsung Electronics Co., Ltd. Color image residue transformation and encoding method
KR100723408B1 (en) * 2004-07-22 2007-05-30 삼성전자주식회사 Method and apparatus to transform/inverse transform and quantize/dequantize color image, and method and apparatus to encode/decode color image using it
JP4742614B2 (en) * 2005-02-25 2011-08-10 ソニー株式会社 Data conversion apparatus and method, data reverse conversion apparatus and method, information processing system, recording medium, and program
US7792370B2 (en) * 2005-03-18 2010-09-07 Sharp Laboratories Of America, Inc. Residual color transform for 4:2:0 RGB format
KR101246915B1 (en) * 2005-04-18 2013-03-25 삼성전자주식회사 Method and apparatus for encoding or decoding moving picture
JP4700445B2 (en) * 2005-09-01 2011-06-15 オリンパス株式会社 Image processing apparatus and image processing program
CN101218831B (en) * 2005-09-20 2010-07-21 三菱电机株式会社 Image decoding method and device
EP1977602B1 (en) 2006-01-13 2013-03-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Picture coding using adaptive colour space transformation
CN103297769B (en) * 2006-01-13 2016-09-07 Ge视频压缩有限责任公司 Use the picture coding of adaptive colour space transformation
KR101330630B1 (en) * 2006-03-13 2013-11-22 삼성전자주식회사 Method and apparatus for encoding moving picture, method and apparatus for decoding moving picture, applying adaptively an optimal prediction mode
JP5157140B2 (en) 2006-11-29 2013-03-06 ソニー株式会社 Recording apparatus, recording method, information processing apparatus, information processing method, imaging apparatus, and video system
US20100091840A1 (en) * 2007-01-10 2010-04-15 Thomson Licensing Corporation Video encoding method and video decoding method for enabling bit depth scalability
TW200845723A (en) 2007-04-23 2008-11-16 Thomson Licensing Method and apparatus for encoding video data, method and apparatus for decoding encoded video data and encoded video signal
CN102223525B (en) * 2010-04-13 2014-02-19 富士通株式会社 Video decoding method and system
TWI574550B (en) * 2011-10-31 2017-03-11 三星電子股份有限公司 Method for video decoding
JP6030989B2 (en) * 2013-04-05 2016-11-24 日本電信電話株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, program thereof, and recording medium recording the program
US9225991B2 (en) * 2013-05-30 2015-12-29 Apple Inc. Adaptive color space transform coding

Also Published As

Publication number Publication date
JP2022188128A (en) 2022-12-20
AU2014272154A1 (en) 2015-12-17
AU2021201660B2 (en) 2022-03-24
CN110460847A (en) 2019-11-15
JP2021002860A (en) 2021-01-07
KR20200043529A (en) 2020-04-27
CN110460850A (en) 2019-11-15
TWI601417B (en) 2017-10-01
CN110460850B (en) 2022-03-01
KR20200043528A (en) 2020-04-27
KR20240017093A (en) 2024-02-06
AU2017265177B2 (en) 2019-11-21
JP6397901B2 (en) 2018-09-26
CN110460849A (en) 2019-11-15
CN105308960A (en) 2016-02-03
CN110460849B (en) 2022-03-01
EP3352459A1 (en) 2018-07-25
TW202341743A (en) 2023-10-16
CN110460848B (en) 2022-02-18
JP2019198107A (en) 2019-11-14
TW201445982A (en) 2014-12-01
KR20170086138A (en) 2017-07-25
AU2017265177A1 (en) 2017-12-14
TW201840191A (en) 2018-11-01
KR20190020193A (en) 2019-02-27
KR20220104309A (en) 2022-07-26
KR101760334B1 (en) 2017-07-21
KR20160003174A (en) 2016-01-08
AU2019253875B2 (en) 2020-12-17
TWI717621B (en) 2021-02-01
KR102232022B1 (en) 2021-03-26
WO2014193538A1 (en) 2014-12-04
KR102423809B1 (en) 2022-07-21
JP2016523460A (en) 2016-08-08
TW202118301A (en) 2021-05-01
KR102230008B1 (en) 2021-03-19
EP3005697A1 (en) 2016-04-13
JP6768122B2 (en) 2020-10-14
AU2019253875A1 (en) 2019-11-14
TW201637453A (en) 2016-10-16
KR20210152005A (en) 2021-12-14
AU2020201214A1 (en) 2020-03-05
TWI807644B (en) 2023-07-01
KR102629602B1 (en) 2024-01-29
HK1219189A1 (en) 2017-03-24
JP2018110440A (en) 2018-07-12
AU2020201212B2 (en) 2020-12-03
KR20210031793A (en) 2021-03-22
TWI547151B (en) 2016-08-21
JP6553220B2 (en) 2019-07-31
AU2020201214B2 (en) 2020-12-24
KR101952293B1 (en) 2019-02-26
KR102103815B1 (en) 2020-04-24
TW201811054A (en) 2018-03-16
AU2020201212A1 (en) 2020-03-12
CN110460848A (en) 2019-11-15
AU2014272154B2 (en) 2017-08-31
TWI758081B (en) 2022-03-11
US20140355689A1 (en) 2014-12-04
KR102336571B1 (en) 2021-12-08
US9225991B2 (en) 2015-12-29
TWI634782B (en) 2018-09-01
AU2021201660A1 (en) 2021-04-08
CN105308960B (en) 2019-09-20
CN110460847B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
TWI758081B (en) Adaptive color space transform coding