TW201215152A - System and method for enhanced DMVD processing - Google Patents
- Publication number: TW201215152A
- Application number: TW100123109A
- Authority: TW (Taiwan)
Description
VI. Description of the Invention

[Technical Field]

The present invention relates to systems and methods for enhanced decoder-side motion vector derivation (DMVD) processing.

[Prior Art]

In a conventional video coding system, motion estimation (ME) may be performed at the encoder to obtain a motion vector for motion prediction of the current coding block. The motion vector may then be encoded into a binary stream and transmitted to the decoder, which allows the decoder to perform motion compensation for the current decoding block. In some advanced video coding standards, such as H.264/AVC, a macroblock (MB) may be partitioned into smaller blocks for coding, and a motion vector may be assigned to each sub-partitioned block. Consequently, if an MB is divided into 4x4 blocks, there may be up to 16 motion vectors for a predictive-coded MB and up to 32 motion vectors for a bi-predictive-coded MB, which can represent a significant overhead. Because the motion of a coding block has strong temporal and spatial correlation with the motion of its neighboring blocks, motion estimation can instead be performed at the decoding side, based on reconstructed reference pictures or reconstructed spatially neighboring blocks. This allows the decoder to derive the motion vector of the current block by itself, instead of receiving the motion vector from the encoder.
This decoder-side motion vector derivation (DMVD) approach increases the computational complexity of the decoder, but it can improve the efficiency of an existing video codec system by saving bandwidth. At the decoder side, if a block is coded using DMVD, its motion vector becomes available only after the decoder-side motion estimation has been performed. This can affect a parallel decoding implementation in two ways. First, if the decoder-side motion
estimation uses spatially neighboring reconstructed pixels, decoding of a DMVD block can begin only after all of its neighboring blocks that contain pixels used for motion estimation have been decoded. Second, if a block is coded in DMVD mode, its motion vector may be used for motion vector prediction of its neighboring blocks. The decoding of a neighboring block that uses the motion vector of the current DMVD-coded block for motion vector prediction can therefore start only after the motion estimation of the current DMVD block has finished. These dependencies can slow down decoding; in particular, they make decoder-side processing poorly suited to parallel DMVD implementations.
Additionally, in some implementations, motion estimation at the decoder side may require a search among candidate motion vectors within a search window. The search may be exhaustive or may rely on any of several known fast search algorithms. Even with a relatively fast search algorithm, a considerable number of candidates may have to be evaluated before the best candidate is found. This, too, represents an inefficiency in decoder-side processing.

SUMMARY OF THE INVENTION

An embodiment is described with reference to the accompanying drawings. While specific configurations and arrangements are discussed, it should be understood that this is done for purposes of illustration only. Persons skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will also be apparent to persons skilled in the relevant art that the techniques can be used in a variety of systems and applications other than those described herein.

Disclosed herein are methods and systems for enhancing the processing of a decoder in a video compression/decompression system.

The enhanced processing described herein can occur in the context of a video encoder/decoder system that implements video compression and decompression, respectively. Figure 1 depicts an exemplary H.264 video encoder architecture 100 that may include a self MV derivation module 140, where H.264 is a video codec standard. Current video information may be provided from a current video block 110 in the form of a plurality of frames. The current video may be passed to a differencing unit 111. The differencing unit 111 may be part of a differential pulse code modulation (DPCM) loop (also known as the core video coding loop), which may include a motion compensation stage 122 and a motion estimation stage 118. The loop may also include an intra prediction stage 120 and an intra interpolation stage 124.
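The core DPCM loop structure just described, prediction, residual formation, transform/quantization, and reconstruction, can be sketched in miniature as follows. This is a generic illustration of the loop, not the actual H.264 transform path; the uniform quantizer standing in for the transform/quantization stage is an assumption made for this sketch.

```python
import numpy as np

def dpcm_step(block, prediction, qstep=8):
    """One differencing-loop step: residual = block - prediction; the residual
    is uniformly quantized (standing in for transform + quantization); the
    decoder-side reconstruction adds the dequantized residual back to the
    prediction, mirroring the inverse quantization/transform units."""
    residual = block.astype(int) - prediction.astype(int)
    q = np.round(residual / qstep).astype(int)   # quantized residual (what is sent)
    recon = prediction + q * qstep               # reconstruction available to the loop
    return q, recon
```

When the residual is an exact multiple of the quantizer step, the reconstruction matches the input block exactly; otherwise the loop carries the usual quantization error.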
In some cases, an in-loop deblocking filter 126 may also be used in the loop. The current video may be provided to the differencing unit 111 and to the motion estimation stage 118. The motion compensation stage 122 or the intra interpolation stage 124 may produce an output through a switch 123, which may then be subtracted from the current video block 110 to produce a residual. The residual may then be transformed and quantized at a transform/quantization stage 112 and entropy encoded at block 114. A channel output results at block 116. The output of the motion compensation stage 122 or the intra interpolation stage 124 may also be provided to an adder 133, which may receive input from an inverse quantization unit 130 and an inverse transform unit 132. These latter two units may undo the transform and quantization of the transform/quantization stage 112, and the inverse transform unit 132 may provide dequantized and detransformed information back to the loop. The self MV derivation module 140 may implement the processing described herein for deriving a motion vector from previously decoded pixels. The self MV derivation module 140 may receive the output of the in-loop deblocking filter 126 and may provide an output to the motion
compensation stage 122.

Figure 2 illustrates an H.264 video decoder 200 having a self MV derivation module 210. Here, a decoder 200 corresponding to the encoder 100 of Figure 1 may include a channel input 238 coupled to an entropy decoding unit 240. The output of the decoding unit 240 may be provided to an inverse quantization unit 242 and an inverse transform unit 244, and to the self MV derivation module 210. The self MV derivation module 210 may be coupled to a motion compensation unit 248. The output of the entropy decoding unit 240 may also be provided to an intra interpolation unit 254, which may feed a selector switch 223.
The information from the inverse transform unit 244, together with the output of either the motion compensation unit 248 or the intra interpolation unit 254 as selected by the switch 223, may then be summed and provided to an in-loop deblocking unit 246, and fed back to the intra interpolation unit 254. The output of the in-loop deblocking unit 246 may then be fed to the self MV derivation module 210.

The self MV derivation module at the encoder may be kept synchronized with its counterpart at the video decoder side. The self MV derivation module may alternatively be applied to a generic video codec architecture, and is not limited to the H.264 coding architecture.

The encoders and decoders described above, and the processing performed by them, may be implemented in hardware, firmware, or software, or a combination thereof. In addition, any one or more features disclosed herein may be implemented in hardware, software, firmware, or combinations thereof, including discrete and integrated circuit logic, application-specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term "software," as used herein, refers to a computer program product including a computer-readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein.

Dependency on spatially neighboring reconstructed pixels

Decoder-side motion estimation (ME) is based on the assumption that the motion of the current coding block has strong correlation with the motion of its spatially neighboring blocks and with that of its temporally neighboring blocks in the reference pictures. Figures 3 through 6 show different decoder-side ME approaches that exploit these different kinds of correlation.

The mirror ME of Figure 3 and the projective ME of Figure
4 can be performed between two reference frames by exploiting temporal motion correlation. In the embodiment of Figure 3, there may be two bi-predictive frames (B-frames) 310 and 315 between a forward reference frame 320 and a backward reference frame 330. Frame 310 may be the current coding frame. When coding the current block 340, mirror ME may be performed to obtain a motion vector by carrying out searches in search window 360 of reference frame 320 and search window 370 of reference frame 330, respectively. As noted above, mirror ME may be performed with the two reference frames even though the current input block is not available at the decoder.

Figure 4 shows an exemplary projective ME process 400, which may use two forward reference frames, FW Ref0 (shown as reference frame 420) and FW Ref1 (shown as reference frame 430). These reference frames may be used to derive a motion vector for a current target block 440 in a current frame P (shown as frame 410). A search window 470 may be specified in reference frame 420, and a search path may be specified within search window 470. For each motion vector MV0 on the search path, its projective motion vector MV1 may be determined in search window 460 of reference frame 430. For each pair
of motion vectors, MV0 and its associated motion vector MV1, a metric, such as a sum of absolute differences, may be computed between (1) the reference block 480 pointed to by MV0 in reference frame 420 and (2) the reference block 450 pointed to by MV1 in reference frame 430. The motion vector MV0 that yields the best metric value, e.g., the smallest sum of absolute differences (SAD), may then be selected as the motion vector for the target block 440.

To improve the accuracy of the output motion vector of the current block, some implementations may include spatially neighboring reconstructed pixels in the measurement metric of the decoder-side ME. In Figure
5, decoder-side ME may be performed for spatially neighboring blocks by exploiting spatial motion correlation. Figure 5 shows an embodiment 500 that may use one or more neighboring blocks 540 in the current picture (or frame) 510, shown here above and to the left of the target block 530. This may allow generation of a motion vector based on one or more corresponding blocks 550 and 555 in a previous reference frame 520 and a subsequent reference frame 560, respectively, where the terms "previous" and "subsequent" refer to temporal order. The motion vector may then be applied to the target block 530. In one embodiment, a raster-scan coding order may be used to determine the spatially neighboring blocks above, to the left, above-left, and above-right of the target block. This approach may be used for B-frames, which are decoded using both preceding and following frames.

The approach exemplified by Figure 5 may be applied to the available pixels of spatially neighboring blocks in the current frame, as long as those neighboring blocks are decoded before the target block in the raster-scan coding order. Moreover, this approach may apply the motion search with respect to reference frames in the reference frame list of the current frame.

The processing of the embodiment of Figure 5 may take place as follows. First, one or more blocks of pixels may be identified in the current frame, where the identified blocks neighbor the target block of the current frame. A motion search for the identified blocks may then be performed, based on corresponding blocks in a temporally subsequent reference frame and in a previous reference frame. The motion search may yield motion vectors for the identified blocks. Alternatively, the motion vectors of the neighboring blocks may be determined before those blocks are identified.
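The pairwise block-matching metric used by the decoder-side ME approaches above, a SAD between the two reference blocks pointed to by a motion vector pair, can be sketched as follows. This is a simplified illustration rather than the patent's normative procedure: frames are modeled as 2-D arrays, and the paired vector MV1 is taken as the negation of MV0, which corresponds to the pure mirror case of reference frames equidistant from the current frame. The helper names are invented for this sketch.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def get_block(frame, y, x, size):
    """Extract a size x size block whose top-left corner is (y, x)."""
    return frame[y:y + size, x:x + size]

def mirror_me(fw_ref, bw_ref, y, x, size, search_range):
    """For the block at (y, x), test candidate vectors MV0 = (dy, dx) into the
    forward reference; the backward vector MV1 is taken as -MV0 (mirror case
    of equidistant references).  Return the MV0 whose two pointed-to reference
    blocks have the smallest SAD."""
    best_mv, best_cost = (0, 0), None
    h, w = fw_ref.shape
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y0, x0 = y + dy, x + dx          # block pointed to by MV0
            y1, x1 = y - dy, x - dx          # block pointed to by mirrored MV1
            if not (0 <= y0 <= h - size and 0 <= x0 <= w - size and
                    0 <= y1 <= h - size and 0 <= x1 <= w - size):
                continue
            cost = sad(get_block(fw_ref, y0, x0, size),
                       get_block(bw_ref, y1, x1, size))
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```

For a synthetic scene translating by a constant velocity per frame, the search recovers the true displacement with zero SAD, since the matched forward and backward blocks both contain the same content as the current block.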
These motion vectors may then be used to derive a motion vector for the target block, which may in turn be used for motion compensation of the target block. The derivation may be performed using any suitable process known to persons of ordinary skill in the art, for example, but not limited to, weighted averaging or median filtering.

If the current picture has both backward and forward reference frames in the reference buffer, the same method used for mirror ME may be used to obtain picture-level and block-level adaptive search ranges. Otherwise, if only forward reference pictures are available, the method described above for projective ME may be used to obtain the picture-level and block-level adaptive search ranges.

Corresponding blocks in the temporally previous and subsequent reconstructed frames may also be used to derive a motion vector. This approach is illustrated in Figure 6. To code a target block 630 in a current frame 610, already-decoded pixels are used, which may be found in a corresponding block 640 of a previous picture (shown here as frame 615) and in a corresponding block 665 of the next frame (shown here as picture 655). A first motion vector for the corresponding block 640 may be derived by performing a motion search through one or more blocks 650 of a reference frame (picture) 620. Block(s) 650 may neighbor a block in reference frame 620 that corresponds to block 640 of the previous picture 615. A motion search through one or more blocks 670 of another reference picture (i.e., frame) 660
may derive a second motion vector for the corresponding block 665 of the next frame 655. Block(s) 670 may neighbor, in the other reference picture 660, a block that corresponds to block 665 of the next frame 655. Based on the first and second motion vectors, forward and/or backward motion vectors for the target block 630 may be determined. These latter motion vectors may then be used for motion compensation of the target block.

The ME processing for this case may proceed as follows. First, a block may be identified in the previous frame, where this identified block corresponds to the target block of the current frame.
A first motion vector may be determined for this identified block of the previous frame, where the first motion vector may be defined with respect to a corresponding block in a first reference frame. A block may also be identified in a subsequent frame, where this block corresponds to the target block of the current frame. A second motion vector may be determined for the identified block of the subsequent frame, where the second motion vector may be defined with respect to a corresponding block in a second reference frame. One or two motion vectors for the target block may then be determined using the respective first and second motion vectors. Analogous processing may take place at the decoder.

When coding/decoding the current picture, the block motion vectors between the previous frame 615 and the reference frame 620 are available. Using these motion vectors, the picture-level adaptive search range may be determined in the manner described above for projective ME. In the case of mirror ME, the motion vectors of the corresponding block and of the blocks spatially neighboring the corresponding block may be used to derive the block-level adaptive search range.

Because spatially neighboring reconstructed pixels may be used in the decoder-side ME, the decoding of a block coded in DMVD mode can begin only after all of the required spatially neighboring pixels have been decoded. This decoding dependency affects the efficiency of a parallel implementation of block decoding.

To allow DMVD-coded blocks to be decoded in parallel, the dependency of the decoder-side ME on spatially neighboring reconstructed pixels may be removed. The mirror ME and projective ME of Figures 3 and 4 may then be performed on the two reference pictures only, and spatially neighboring reconstructed pixels may be excluded from the measurement metric of the decoder-side ME. The spatially neighboring block ME of Figure 5 may be functionally replaced by the temporally collocated block ME shown in Figure
6; that is, the decoder-side ME may be performed for collocated blocks in the reference pictures rather than for spatially neighboring blocks in the current picture.

This decoding strategy is illustrated in Figure 7. At 710, a DMVD-coded block may be received at the decoder. At 720, ME may be performed. This may be done using temporally neighboring reconstructed pixels in the reference pictures; spatially neighboring reconstructed pixels are not used. At 730, the DMVD-coded block may be decoded.

In an embodiment, this decoding may proceed in parallel with the decoding of non-DMVD-coded blocks. Because the reconstructed reference pictures are ready before the decoding of the current picture begins, and the decoder-side ME is performed only on the reference pictures, a DMVD-coded block has no decoding dependency on spatially neighboring reconstructed pixels. Consequently, DMVD-coded blocks and non-DMVD-coded inter-coded blocks may be decoded in parallel.

Motion vector prediction dependency

Although the systems and methods above can remove the decoding dependency on spatially neighboring reconstructed pixels, a motion vector prediction dependency remains in the decoding process. In the H.264/AVC standard, to remove motion vector redundancy, the motion
vector of a block may first be predicted from the motion vectors of its spatially or temporally neighboring blocks. The difference between the final motion vector and the predicted motion vector may then be encoded into the bitstream sent to the decoder side. At the decoder side, to obtain the final motion vector of the current block, the predicted motion vector may first be computed from the decoded motion vectors of spatially or temporally neighboring blocks, and the decoded motion vector difference may then be added to the predicted motion vector to obtain the final decoded motion vector of the current block.

If DMVD mode is used, the decoder derives the motion vector of a DMVD-coded block by itself.
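The decoder-side reconstruction just described, a predicted motion vector plus a decoded difference, can be sketched as follows. In H.264/AVC the predictor is typically the component-wise median of the motion vectors of neighboring partitions; the three-neighbor setup and function names here are illustrative simplifications, not the standard's full derivation rules.

```python
def median_mv(mvs):
    """Component-wise median of a list of (y, x) motion vectors."""
    def med(vals):
        s = sorted(vals)
        return s[len(s) // 2]
    return (med([mv[0] for mv in mvs]), med([mv[1] for mv in mvs]))

def reconstruct_mv(neighbor_mvs, decoded_mvd):
    """Final MV = predicted MV (median of neighbor MVs) + decoded MV difference."""
    pred = median_mv(neighbor_mvs)
    return (pred[0] + decoded_mvd[0], pred[1] + decoded_mvd[1])
```

For example, with neighbor motion vectors (2, 3), (4, -1), and (3, 5), the predictor is (3, 3); adding a decoded difference of (1, -2) yields a final motion vector of (4, 1).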
For a non-DMVD-coded block, however, the motion vector is still decoded in the manner described above. Now, if a block is coded in DMVD mode, its motion vector becomes available only after the decoder-side ME has been performed. If these motion vectors are to be used to predict the motion vectors of spatially neighboring blocks, the decoding of those neighboring blocks can start only after the decoder-side ME of the DMVD-coded block has completed. This motion vector prediction dependency affects the efficiency of a parallel implementation of block decoding.

As shown in Figure 8, when coding a current block such as block 810, the motion vectors of its four spatially neighboring blocks (A, B, C, and D) may be used to predict its motion vector. If any of blocks A, B, C, and D is coded in DMVD mode, one of the following schemes may be applied to remove the motion vector dependency on the DMVD block.

In one embodiment, if a spatially neighboring block is a DMVD block, its motion vector may be marked as unavailable in the motion vector prediction process. That is, the motion vector of the current block is predicted only from the motion vectors of non-DMVD-coded neighboring blocks. This is illustrated in Figure 9. At 910, a current non-DMVD block may be received along with one or more DMVD blocks. At 920, a determination may be made as to whether any spatially neighboring block of the current non-DMVD block is a DMVD block; in particular, whether a DMVD block occupies any of the positions A...D shown in Figure 8. If so, that DMVD block is marked as unavailable for motion vector prediction of the non-DMVD block. At 930, the motion vectors of the neighboring non-DMVD blocks may be used to predict the motion vector of the current non-DMVD block. At 940, the current non-DMVD block may be decoded.
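The marking scheme of Figure 9 can be sketched as follows: neighbors coded in DMVD mode are treated as unavailable, and the predictor is formed only from the remaining non-DMVD neighbors. The data layout and the fallback to a zero vector when no neighbor is usable are assumptions made for this illustration.

```python
def predict_mv_excluding_dmvd(neighbors):
    """neighbors: list of dicts like {"mv": (y, x), "is_dmvd": bool} for the
    spatial neighbor positions A..D.  DMVD-coded neighbors are marked
    unavailable for prediction; the predictor is the component-wise median of
    the remaining motion vectors (zero vector if none remain, an assumption
    for this sketch)."""
    usable = [n["mv"] for n in neighbors if not n["is_dmvd"]]
    if not usable:
        return (0, 0)
    ys = sorted(mv[0] for mv in usable)
    xs = sorted(mv[1] for mv in usable)
    return (ys[len(ys) // 2], xs[len(xs) // 2])
```

Because the predictor never depends on a DMVD neighbor's motion vector, the non-DMVD block can be decoded without waiting for the neighbor's decoder-side ME to finish.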
In an alternative embodiment, if a DMVD block spatially neighbors a non-DMVD block in any of the positions A...D (see Figure 8), a different method may be used to decode the non-DMVD block. This embodiment is illustrated in Figure 10. At 1010, the decoder receives a current non-DMVD block along with one or more spatially neighboring DMVD blocks. At 1020, if a spatially neighboring block is a DMVD block, a motion vector is calculated for that DMVD block. At 1030, the calculated motion vector may be used to predict the motion vector of the current non-DMVD block. At 1040, given this predicted motion vector, the current non-DMVD block may be decoded.

Because the calculated motion vector can be prepared before the decoder-side ME is performed, the decoding of a neighboring block, such as the current non-DMVD block, can start immediately without waiting for the decoder-side ME of the DMVD-coded block to complete. The DMVD-coded block and the current non-DMVD-coded block can then be decoded in parallel.

The motion vector of a neighboring DMVD block may be determined in any of several ways. For example, in one embodiment, the calculated motion vector of the DMVD block
may be a weighted average of its available spatially neighboring block motion vectors.

In an alternative embodiment, the calculated motion vector of the DMVD block may be a median-filtered value of its available spatially neighboring block motion vectors.

In an alternative embodiment, the calculated motion vector of the DMVD block may be a weighted average of the scaled available temporally neighboring block motion vectors.

In an alternative embodiment, the calculated motion vector of the DMVD block may be a median-filtered value of the scaled available temporally neighboring block motion vectors.

In an alternative embodiment, the calculated motion vector of the DMVD block may be a weighted average of its available spatially neighboring block motion vectors and the scaled available temporally neighboring block motion vectors.
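Two of the alternatives listed above, a weighted average and a median filter over a set of available neighboring motion vectors, can be sketched as follows. The specific weights, the integer rounding, and the candidate set are illustrative assumptions; the document itself does not fix them.

```python
def weighted_average_mv(mvs, weights):
    """Weighted average of (y, x) motion vectors, rounded to integer pel."""
    total = sum(weights)
    y = sum(w * mv[0] for mv, w in zip(mvs, weights)) / total
    x = sum(w * mv[1] for mv, w in zip(mvs, weights)) / total
    return (round(y), round(x))

def median_filter_mv(mvs):
    """Component-wise median of the candidate motion vectors."""
    def med(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) // 2
    return (med([mv[0] for mv in mvs]), med([mv[1] for mv in mvs]))
```

For candidates (2, 4), (4, 8), (6, 0) with weights 1, 2, 1, the weighted average is (4, 5), while the component-wise median is (4, 4); median filtering is less sensitive to a single outlying candidate.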
In an alternative embodiment, the calculated motion vector of the DMVD block may be a median-filtered value of its available spatially neighboring block motion vectors and the scaled available temporally neighboring block motion vectors.

With the schemes above, the motion vector prediction dependency on DMVD block motion vectors can be removed. Combined with the removal of the dependency on spatially neighboring reconstructed pixels, the decoder can decode inter-coded blocks in parallel, whether they are coded in DMVD mode or in non-DMVD mode. This allows greater use of parallel decoder implementations on multi-core platforms.

Fast candidate search for motion vectors

The ME for a DMVD block may be performed using a full search within a search window, or using any other fast motion search algorithm, as long as the encoder and decoder use the same motion search scheme. In an embodiment, a candidate-based fast ME process may be used. Here, the motion search checks only a relatively small set of candidate motion vectors, instead of checking every possibility in the search window. The encoder and decoder use the same candidates to avoid any mismatch.

The candidate motion vectors may be derived from the motion vectors of spatially coded neighboring blocks and temporally coded neighboring blocks. A candidate motion vector may be refined by performing a small-range motion search around it.

In one embodiment, all candidate motion vectors may be checked first, and the best one selected (e.g., the one producing the smallest sum of absolute differences). A small-range motion search may then be performed around this best candidate to obtain the final motion vector.

In another embodiment, a small-range motion search may be performed around each candidate motion vector to refine it, and the best refined candidate (e.g., the one with the smallest SAD) may be selected as the final motion vector.
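The first candidate-based variant described above (check all candidates, pick the best by its cost, then refine with a small-range search around that best candidate) can be sketched as follows. The cost function is abstracted as a callable so the sketch stays independent of the block layout; in practice it would be the SAD between the blocks a vector points to. Names and the +/-1 refinement window are illustrative assumptions.

```python
def candidate_search(candidates, cost, refine_range=1):
    """Pick the candidate MV with the lowest cost, then refine with a
    small-range search of +/- refine_range around it.  `cost` maps an (y, x)
    motion vector to a nonnegative number (e.g., a SAD)."""
    center = min(candidates, key=cost)            # best candidate by cost
    best_mv, best_cost = center, cost(center)
    for dy in range(-refine_range, refine_range + 1):   # small-range refinement
        for dx in range(-refine_range, refine_range + 1):
            mv = (center[0] + dy, center[1] + dx)
            c = cost(mv)
            if c < best_cost:
                best_cost, best_mv = c, mv
    return best_mv, best_cost
```

Because only the candidate set plus a small neighborhood is evaluated, far fewer positions are checked than in an exhaustive search over the whole window, while the encoder and decoder remain matched as long as both use the same candidate list.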
Implementation

Methods and systems are disclosed herein with the aid of functional building blocks illustrating their functions, features, and relationships. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of description. Alternative boundaries may be defined, so long as their specified functions and relationships are appropriately performed.

One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application-specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term "software," as used herein, refers to a computer program product including a computer-readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein.

A software or firmware embodiment of the processing described above is illustrated in FIG. 11. System 1100 may include a processor 1120 and a body of memory 1110, which may include one or more computer-readable media that may store computer program logic 1140. Memory 1110 may be implemented as, for example, a hard disk and drive, removable media such as a compact disc and drive, or a read-only memory (ROM) device. Processor 1120 and memory 1110 may communicate using any of several technologies known to one of ordinary skill in the art, such as a bus. The logic contained in memory 1110 may be read and executed by processor 1120. One or more I/O ports and/or I/O devices, shown collectively as I/O 1130, may also be connected to processor 1120 and memory 1110.
Computer program logic 1140 may include logic modules 1150 through 1170. In an embodiment, logic 1150 may be responsible for the processing described above for the case where the current block is a DMVD block. Logic 1160 may be responsible for the processing described above for the case where the current block is a non-DMVD block. Logic 1170 may be responsible for the implementation of the fast candidate search for motion vectors described above.

While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made herein without departing from the spirit and scope of the methods and systems disclosed. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.

[Brief Description of the Drawings]

FIG. 1 is a block diagram of a video encoder system, according to an embodiment.
FIG. 2 is a block diagram of a video decoder system, according to an embodiment.
FIG. 3 is a diagram illustrating mirror-type motion estimation (ME) at a decoder, according to an embodiment.
FIG. 4 is a diagram illustrating projective-type ME at a decoder, according to an embodiment.
FIG. 5 is a diagram illustrating ME based on spatially neighboring blocks at a decoder, according to an embodiment.
FIG. 6 is a diagram illustrating ME based on temporally collocated neighboring blocks at a decoder, according to an embodiment.
FIG. 7 is a flowchart illustrating motion estimation and decoding of a DMVD-coded block, according to an embodiment.
FIG. 8 illustrates a current block and the neighboring blocks available for the decoding of the current block, according to an embodiment.
FIG. 9 is a flowchart illustrating motion estimation and decoding of a non-DMVD-coded block, according to an embodiment.
FIG. 10 is a flowchart illustrating motion estimation and decoding of a non-DMVD-coded block, according to an alternative embodiment.
FIG. 11 is a diagram illustrating a software or firmware implementation of an embodiment.
[Description of Reference Numerals]

100: video encoder architecture
110: video block
111: differencing unit
112: transform/quantization stage
114: block
118: motion estimation stage
120: intra prediction stage
122: motion compensation stage
123: switch
124: intra interpolation stage
126: in-loop deblocking filter
130: inverse quantization unit
132: inverse transform unit
133: adder
140: self MV derivation module
200: video decoder
210: self MV derivation module
223: selector switch
238: channel input
240: entropy decoding unit
242: inverse quantization unit
244: inverse transform unit
246: in-loop deblocking unit
248: motion compensation unit
254: intra interpolation unit
310: bi-prediction frame
315: bi-prediction frame
320: forward reference frame
330: backward reference frame
340: current block
360: search window
370: search window
410: current frame
420: reference frame
430: reference frame
440: current target block
450: reference block
460: search window
470: search window
500: embodiment
510: current picture (or frame)
520: reference frame
530: target block
540: neighboring block
550: block
555: block
560: reference frame
610: current frame
615: frame
620: reference frame
630: target block
640: block
650: block
655: picture
660: reference frame
665: block
670: block
810: block
1100: system
1120: processor
1130: input/output
1140: computer program logic
1150: logic module
1160: logic module
1170: logic module
Claims (1)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US36456510P | 2010-07-15 | 2010-07-15 | |
PCT/CN2010/002107 WO2012083487A1 (en) | 2010-12-21 | 2010-12-21 | System and method for enhanced dmvd processing |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201215152A true TW201215152A (en) | 2012-04-01 |
TWI517671B TWI517671B (en) | 2016-01-11 |
Family
ID=46786668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW100123109A TWI517671B (en) | 2010-07-15 | 2011-06-30 | System and method for enhanced dmvd processing |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI517671B (en) |
-
2011
- 2011-06-30 TW TW100123109A patent/TWI517671B/en not_active IP Right Cessation
Also Published As
Publication number | Publication date |
---|---|
TWI517671B (en) | 2016-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9509995B2 (en) | System and method for enhanced DMVD processing | |
US11765380B2 (en) | Methods and systems for motion vector derivation at a video decoder | |
KR101393824B1 (en) | System and method for low complexity motion vector derivation | |
TWI495328B (en) | Methods and apparatus for adaptively choosing a search range for motion estimation | |
JP2019115061A (en) | Encoder, encoding method, decoder, decoding method and program | |
KR20200015734A (en) | Motion Vector Improvement for Multiple Reference Prediction | |
EP2595392A1 (en) | Method and apparatus for estimating motion vector using plurality of motion vector predictors, encoder, decoder, and decoding method | |
EP2380354A1 (en) | Video processing method and apparatus with residue prediction | |
JP2010288098A (en) | Device, method and program for interpolation of image frame | |
TWI517671B (en) | System and method for enhanced dmvd processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MM4A | Annulment or lapse of patent due to non-payment of fees |