TWI412282B - Video decoding method and apparatus for concealment of transmission errors - Google Patents
- Publication number: TWI412282B
- Authority: TW (Taiwan)
- Prior art keywords: pixel, block, motion vector, error, macroblock
Description
The present invention relates to a decoding apparatus and method that conceal (or recover) transmission errors at the receiving end when compressed video is transmitted over a (wireless or wired) network and errors occur in transit, thereby improving the quality of the displayed video.
The core of modern video compression is the removal of redundant data in the spatial domain and the temporal domain. Spatial-domain compression mainly combines the Discrete Cosine Transform (DCT) with quantization, while temporal-domain compression combines motion estimation with motion compensation. After receiving the signal, the receiver must decompress it to recover the original video; since decompression is the inverse of compression, the inverse DCT and inverse quantization are used to undo the spatial-domain compression, and motion compensation is used to undo the temporal-domain compression.
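As a rough illustration of this round trip, the sketch below applies a 2-D DCT and uniform quantization to one block and then inverts both steps; the 8×8 block size and the quantization step of 16 are illustrative assumptions, not values taken from this description.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # separable 2-D forward DCT (spatial-domain compression step)
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeff):
    # separable 2-D inverse DCT (spatial-domain decompression step)
    return idct(idct(coeff, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 pixel block
qstep = 16.0                                              # assumed quantization step

quantized = np.round(dct2(block) / qstep)       # DCT + quantization (encoder side)
reconstructed = idct2(quantized * qstep)        # inverse quantization + inverse DCT (decoder side)
print(np.abs(block - reconstructed).max())      # loss introduced by quantization alone
```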
On an ideal transmission channel, compressed data could in principle be delivered from the sender to the receiver without error. In practice most channels are not ideal: noise along the way corrupts the transmission, so the signal arriving at the receiver is not completely correct. Because multimedia video is highly compressed, data loss or corruption during transmission not only prevents the receiver from decoding the correct pictures, but can also cause severe error propagation owing to the spatial and temporal correlation of compressed video. Error propagation comes in two forms. In spatial-domain propagation, an error in the pixel at spatial coordinate (x0(t), y0(t)) of the picture at time t corrupts pixels at other coordinates of the same picture. In temporal-domain propagation, an error occurring at time t corrupts the video at times after t.
Interference on a wireless channel comes mainly from the multipath propagation of electromagnetic waves. Because radio waves encounter many obstacles along the channel (weather, buildings, and so on), they undergo refraction, diffraction, and scattering, so the receiver picks up, in addition to the direct signal, signals affected by these three phenomena. The resulting fluctuation in received signal strength is called fading; the variation can be characterized statistically, typically with a Rayleigh or Rician distribution. To improve transmitted video quality over non-ideal wireless channels, error resilience must be added at the transmitter or the receiver. The video decoder at the receiver is usually designed with error detection and error concealment mechanisms. Error concealment requires no extra data rate or channel bandwidth: by repairing corrupted regions at the decoder, it effectively prevents errors from spreading rapidly and preserves video quality. Its theoretical basis falls roughly into three categories: temporal-domain methods, spatial-domain methods, and combined temporal-spatial methods.
Looking at the development of video coding standards from H.261, H.262, and H.263 to MPEG-1, MPEG-2, MPEG-4, and H.264/AVC, all of them seek high-quality video at the lowest possible data rate, with H.264/AVC being the most recent standard. To cope with the various errors that can occur on the transmission channel, H.264 defines many error-resilience mechanisms. At the encoder there are, for example, flexible macroblock ordering (FMO), the parameter set structure, and data partitioning, while the decoder performs error concealment. The encoder-side mechanisms are introduced below.
FMO is a new error-resilience mechanism introduced by H.264/AVC. It goes beyond the traditional way of partitioning slice groups: macroblocks (each 16x16 pixels) with different characteristics within a picture can be assigned to different slice groups according to some logic, and each slice group is coded independently (motion estimation, entropy coding, and so on). FMO also has drawbacks, however, including reduced coding efficiency and added delay.
Another mechanism allows the encoder to place, in the same bit stream, redundant information for one or more macroblocks. The primary information and the redundant information can be transmitted together; the primary information uses a smaller quantization parameter and the redundant information a larger one, since the redundant information is usually of lower resolution. If the primary information is received successfully, the redundant information is discarded; if the primary information is corrupted, the redundant information is used for reconstruction.
During encoding, some information is more important than the rest: without the important parts, the whole transmitted data may become unusable. Data partitioning therefore divides the information into three classes of decreasing importance, as follows:
1) Class A: block types, quantization parameters, and motion vectors; this part is called the header information.
2) Class B: intra coded block patterns (Intra CBPs) and intra-coded coefficients; it requires Class A to be useful.
3) Class C: inter coded block patterns (Inter CBPs) and inter-coded coefficients; this part also requires Class A, but not Class B. It is the least important, because it carries no synchronization role and does not affect the decoding of other information.
When data partitioning is used, the encoder places the different classes of information in three separate buffers. At the decoder, the standard decoding procedure starts only when all parts are available; however, if Class B or Class C is lost, the remaining Class A information can still be used to improve error concealment. For example, since the coding type and motion vector of every block are known, a better concealment result can be achieved.
To perform error concealment at the decoder, error detection must come first, that is, detecting that a slice group or a macroblock cannot be decoded correctly. In H.264/AVC and newer video standards, coding is organized in units of slice groups, so errors also occur in units of slice groups. For example, if a slice group represents an entire row of macroblocks, the error unit is a row of macroblocks. If FMO coding is used with the macroblocks divided into two groups in a checkerboard pattern, each error unit is a single FMO group. Because these two configurations are the most common, the examples below use them. Whichever configuration is used, when an error occurs the decoder consults the correctly decoded macroblocks around the erroneous ones to assist recovery. For example, when an entire row of macroblocks forms one slice group, each erroneous macroblock can only use the boundary information of its upper and lower neighbors, whereas with checkerboard FMO slices the boundary information of the upper, lower, left, and right neighbors is available.
The most widely used macroblock recovery rule today is the Boundary Matching Algorithm (BMA). Using the boundary strips of the correctly decoded macroblocks adjacent to the erroneous macroblock (a 16x1-pixel strip in each direction), it performs motion estimation against the previous picture, finds the most similar position, records that position as the recovered motion vector, and then performs motion compensation with that vector to obtain the recovered pixels of the erroneous macroblock. BMA is essentially a motion estimation procedure at the decoder, similar to the one at the encoder; the differences are that BMA must use pixel data of other macroblocks (its own pixels do not exist because they were not decoded correctly) and that far fewer pixels are available. Consequently the motion vector obtained by BMA has limited accuracy, especially when a slice group consists of an entire row of macroblocks (only the pixels of the upper and lower neighbors can be used). Moreover, because BMA performs motion estimation and compensation independently for each erroneous macroblock, and no residual is available afterwards to correct the resulting distortion (the residual was also lost), the recovered results of neighboring erroneous macroblocks easily fail to match (for example, a straight line crossing two adjacent macroblocks cannot be stitched seamlessly). These imperfections in error concealment are exactly what the present invention aims to improve.
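As a rough sketch of the BMA idea, the function below recovers one lost 16×16 macroblock from the previous frame using only the pixel rows just above and below it. The exact matching criterion varies between BMA implementations, so the comparison used here (the neighbouring rows of the current picture against the top and bottom rows of each candidate block in the previous picture) is an illustrative assumption, and frame-border handling is omitted.

```python
import numpy as np

def bma_conceal(prev, cur, y0, x0, mb=16, search=16):
    """Conceal the lost macroblock whose top-left corner is (y0, x0) in `cur`."""
    top = cur[y0 - 1, x0:x0 + mb].astype(float)       # boundary strip above the lost MB (16x1)
    bottom = cur[y0 + mb, x0:x0 + mb].astype(float)   # boundary strip below the lost MB (16x1)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y0 + dy, x0 + dx
            cand_top = prev[py, px:px + mb].astype(float)              # candidate's own top row
            cand_bottom = prev[py + mb - 1, px:px + mb].astype(float)  # candidate's own bottom row
            cost = np.abs(top - cand_top).sum() + np.abs(bottom - cand_bottom).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    dy, dx = best_mv
    # motion compensation: copy the best-matching block from the previous picture
    cur[y0:y0 + mb, x0:x0 + mb] = prev[y0 + dy:y0 + dy + mb, x0 + dx:x0 + dx + mb]
    return best_mv
```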
[1] S. Wenger, "H.264/AVC over IP," IEEE Trans. on Circuits and Systems for Video Technology, Vol. 13, No. 7, pp. 645-656, July 2003.
The present invention adopts a temporal-domain error concealment method. The focus is on channel errors under non-FMO coding (for example, an error affecting a slice consisting of an entire row of macroblocks in the picture) and on improving the traditional boundary matching algorithm (BMA). The task is more challenging because only the boundary pixels of the macroblocks above and below the erroneous macroblock, together with the previous picture, can be used to estimate the motion vector of the erroneous macroblock in the current picture. A distinguishing feature of the method is the use of variable block sizes for error concealment: concealment of a single macroblock is no longer restricted to 16×16 pixels; instead the block size is adapted according to a decision, so that 8×8, 16×8, 8×16, or 16×16 may all serve as the unit of concealment.
The error model assumed by the invention is independent slices (here each slice contains one row of macroblocks). The core technique is a dynamic programming algorithm, and the flow is roughly: 1) run the modified boundary matching algorithm on all macroblocks of the erroneous slice to obtain initial motion vectors; 2) run the accelerated dynamic programming based on the initial motion vectors of step 1; 3) iterate step 2 to achieve concealment at smaller block sizes; 4) integrate the motion vectors of the small blocks to form the concealment of the macroblock.
Refer to Figure 1, a flowchart of the error concealment method according to an embodiment of the invention. The received coded video bit stream is first decoded and errors are detected (101). The modified boundary matching algorithm for 16×16 macroblocks (102) and the dynamic programming based on 16×16 macroblocks (103) are then applied, and a check is made whether the motion vectors (MVs) estimated for the neighboring macroblocks by the modified boundary matching algorithm are small and consistent (104). If yes, the erroneous 16×16 macroblock is reconstructed (108); if no, the dynamic programming iteration based on 8×8 blocks (105) and the motion vector integration of the 8×8 blocks (106) are performed first, and then the whole erroneous 16×16 macroblock is reconstructed (107).
When the information to the left and right of a macroblock is missing, the traditional boundary matching algorithm uses only the information of the upper and lower neighboring macroblocks for matching; this lack of information reduces matching accuracy (that is, the estimated motion vector is less precise). The present invention therefore uses a modified boundary matching algorithm. First, for each macroblock in an erroneous slice, the gray-level edge strengths of the pixels adjacent to its top and bottom are computed and summed into T_diff, as in formula (1). Figure 2 illustrates the edge strength computation at the boundaries of the upper and lower neighboring macroblocks: the erroneous block 202 is bordered above and below by many pixels 201, and each pixel contributes to the summed edge strength through formula (1).
Here E_up denotes the edge strength of the neighboring macroblock above the erroneous macroblock, E_low that of the macroblock below, (x, y) is the coordinate of the top-left corner of each erroneous macroblock, and m and n are the lengths, in the y-axis and x-axis directions respectively, of the upper and lower regions in Figure 2 over which the edge strength is computed.
A T_diff value can be computed for every erroneous macroblock. Boundary matching is performed first on the erroneous macroblock with the largest T_diff; the motivation is that a macroblock with a large T_diff shares more complex features with its neighboring macroblocks, which yields more accurate results for the subsequent boundary matching. The boundary matching criterion adopted by the invention is given in formula (2): among all candidate motion vectors, the one that minimizes D_{p,q}(v) is chosen.
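Formula (1) itself appears only as an image in this text, so the sketch below merely illustrates the role of T_diff: an edge-strength total over the m rows directly above and below the lost macroblock, approximated here by summed absolute horizontal gray-level differences (that gradient choice is an assumption, not the patent's exact definition).

```python
import numpy as np

def t_diff(cur, y0, x0, mb=16, m=2):
    """Assumed edge-strength measure for the rows bordering a lost macroblock."""
    up = cur[y0 - m:y0, x0:x0 + mb].astype(float)             # m rows above the lost MB (E_up region)
    low = cur[y0 + mb:y0 + mb + m, x0:x0 + mb].astype(float)  # m rows below the lost MB (E_low region)
    e_up = np.abs(np.diff(up, axis=1)).sum()
    e_low = np.abs(np.diff(low, axis=1)).sum()
    return e_up + e_low   # lost macroblocks with larger T_diff are concealed first
```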
where the terms of formula (2) denote, respectively, the differences obtained by boundary matching the erroneous macroblock against partial pixels of the macroblocks above and below it, and the differences obtained by matching against the temporarily reconstructed pixel information of its left and right neighboring macroblocks; v is a candidate motion vector, MAP(p, q) is the weight of the corresponding neighboring macroblock, defined in formula (3); (p, q) is the macroblock index of the erroneous macroblock, and (p, q-1) and (p, q+1) are the macroblock indices of its left and right neighboring macroblocks.
The result of formula (2) serves as the initial motion vector of the current erroneous macroblock, whose pixels are then temporarily reconstructed. The reconstructed macroblock can now provide information to help its left and right, not yet reconstructed, neighbors perform boundary matching (a temporarily reconstructed macroblock has MAP(p, q) = 0.5), so that the initial motion vectors of the individual macroblocks can be found. In addition, since the motion vectors of neighboring macroblocks should not differ much, the motion vector of a neighboring macroblock can be used as the starting value and the search range expanded outward from it, instead of performing motion estimation over the full ±16-pixel range for every macroblock. Compared with the traditional boundary matching algorithm (which neither uses the left/right matching terms nor uses T_diff to order the recovery), this method integrates more information and thereby improves the accuracy of the motion vector estimate. After the initial motion vector of one erroneous macroblock has been estimated, the macroblock with the largest T_diff among the remaining erroneous macroblocks is processed in the same way, and this repeats until initial motion vectors have been found for all erroneous macroblocks. Once every macroblock in the erroneous slice has an initial motion vector, the dynamic programming stage can begin for the next level of optimization.
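The concealment order just described can be sketched as below, with callables standing in for the parts defined by formulas (1)-(3): `t_diff_of` returns a macroblock's T_diff, `modified_bma` minimizes the matching cost of formula (2) over the candidate vectors (reusing neighbours that were already recovered), and `reconstruct` performs the temporary motion-compensated reconstruction. These helper names are assumptions for illustration.

```python
def conceal_slice(error_mbs, t_diff_of, modified_bma, reconstruct):
    remaining = set(error_mbs)
    initial_mv = {}
    while remaining:
        mb = max(remaining, key=t_diff_of)     # macroblock with the most detailed boundary first
        mv = modified_bma(mb, initial_mv)      # may use left/right neighbours recovered earlier
        initial_mv[mb] = mv
        reconstruct(mb, mv)                    # temporary reconstruction (weight 0.5 for neighbours)
        remaining.remove(mb)
    return initial_mv                          # one initial motion vector per erroneous macroblock
```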
Basically, dynamic programming is a procedure that divides a problem into many subproblems and finds the optimal answer to each of them. In essence it can be described by a multi-stage trellis. Figure 3 illustrates the dynamic programming used for motion vector reconstruction: it contains the start point 301 and the end point 314, with many stages between them, and each stage contains many nodes (302, 306, and 310 in the first stage; 303, 307, and 311 in the second stage; 304, 308, and 312 in stage N-1; 305, 309, and 313 in stage N). Nodes of adjacent stages are connected by edges, and every node and every edge carries a cost: the node cost D_bd and the edge cost D_sm. Dynamic programming therefore finds, from the start point 301 through one node of every intermediate stage to the end point 314, the path that minimizes the total cost D_total. Formula (4) gives the total cost D_total.
D_total = α·D_bd + (1 − α)·D_sm (4)
where α is a weight in the range 0.0 to 1.0.
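A Viterbi-style sketch of this trellis search is given below, under the assumption that `stages` holds, for each erroneous macroblock in the slice, its list of candidate motion vectors, and that `node_cost` and `edge_cost` stand for the boundary-matching cost D_bd and the smoothness cost D_sm of formulas (5)-(10); the α weighting follows formula (4).

```python
def dp_best_path(stages, node_cost, edge_cost, alpha=0.5):
    """Return one motion vector per stage minimising alpha*D_bd + (1-alpha)*D_sm."""
    # cost[v] = cheapest accumulated cost of any path ending at node v of the current stage
    cost = {v: alpha * node_cost(0, v) for v in stages[0]}
    back = []
    for s in range(1, len(stages)):
        new_cost, choice = {}, {}
        for v in stages[s]:
            best_prev = min(cost, key=lambda u: cost[u] + (1 - alpha) * edge_cost(s, u, v))
            new_cost[v] = (cost[best_prev]
                           + (1 - alpha) * edge_cost(s, best_prev, v)
                           + alpha * node_cost(s, v))
            choice[v] = best_prev
        cost = new_cost
        back.append(choice)
    # backtrack the minimum-cost path from the last stage to the first
    path = [min(cost, key=cost.get)]
    for choice in reversed(back):
        path.append(choice[path[-1]])
    return list(reversed(path))
```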
The technique developed in this invention assumes that the error unit is a slice, so every erroneous macroblock can be regarded as one stage, and all possible candidate motion vectors as its nodes. The node cost is therefore defined as the boundary matching cost, and the edge cost as the smoothness cost. The boundary matching cost is given by formulas (5), (6), and (7), and the smoothness cost by formulas (8), (9), and (10).
Here the terms denote the matching errors between the pixel values of the upper and lower boundaries and the (p-1, q)-th and (p+1, q)-th macroblocks, respectively. Figure 4 illustrates the computation of the boundary matching cost. It contains two pictures, at time t-1 (401) and at time t (402). The time t-1 picture contains the region of a 16×16 macroblock excluding its top and bottom rows (403), the top row of the 16×16 macroblock (405), and the bottom row of the 16×16 macroblock (406). The time t picture contains the 16×16 macroblock (404), the bottom row of the neighboring macroblock above it (407), and the top row of the neighboring macroblock below it (408).
Here the terms denote the pixel difference values of row 0 and row 15, respectively, while v_1 and v_2 are the candidate motion vectors of the (p, q)-th and (p, q+1)-th macroblocks. Figure 5 illustrates the smoothness cost. It contains two pictures, at time t-1 (501) and at time t (502); the time t-1 picture contains 16×16 macroblocks 503 and 504, and the time t picture contains 16×16 macroblocks 505 and 506.
Error concealment must run in real time. The computational complexity of dynamic programming is basically M²×N, where M is the motion vector search range (for example, M = 32×32 means a single vector is searched over -16 to +16 pixels) and N is the number of stages; the amount of computation is therefore considerable and cannot be used directly for error concealment. The method of the invention therefore accelerates the dynamic programming. In principle dynamic programming must compute costs for every stage and every node and then find the cheapest path, but in some applications certain nodes of a stage can be excluded in advance; once the nodes are reduced, the computation drops accordingly. As illustrated in Figure 6, each of the five stages has six nodes: nodes 601-606 in the first stage, 607-612 in the second, 613-618 in the third, 619-624 in the fourth, and 625-630 in the fifth. The original complexity is thus 5×6². If the very expensive nodes in Figure 6, that is, the nodes that cannot lie on the optimal path (shown in white), can be identified beforehand, only the black nodes need to be processed, and the complexity drops to 5×3². This reduction of the dynamic programming complexity is the acceleration concept of the invention.
In the invention the search range of each stage's (that is, each erroneous macroblock's) motion vector is -16 to +16 pixels in both the x-axis and y-axis directions, which corresponds to 1024 nodes per stage. However, some candidate motion vectors can be screened out by boundary matching: in regions such as the background or slowly moving objects, boundary matching yields a small motion vector, and the larger motion vectors (nodes) can then be removed rather than fed into the dynamic programming. We can therefore add a range constraint to the candidate motion vectors in formulas (5) and (8), as shown in formulas (11) and (12) below.
v1_initial − β1 < v1 < v1_initial + β1 ,  v2_initial − β1 < v2 < v2_initial + β1 (12)
Here v_initial is the initial motion vector of the (p, q)-th macroblock obtained by boundary matching when computing the node cost, v1_initial and v2_initial are the initial motion vectors of the (p, q)-th and (p, q+1)-th macroblocks obtained by boundary matching when computing the edge cost, and β1 is the motion vector search range.
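The pruning of formulas (11) and (12) can be sketched as below: only candidates within β1 of the initial vector found by the modified boundary matching enter the trellis, clipped to the full ±16-pixel window. The default β1 value here is an illustrative assumption.

```python
def pruned_candidates(v_initial, beta1=4, full_range=16):
    """Candidate motion vectors kept as nodes for one stage of the trellis."""
    vy0, vx0 = v_initial
    return [(vy, vx)
            for vy in range(max(-full_range, vy0 - beta1), min(full_range, vy0 + beta1) + 1)
            for vx in range(max(-full_range, vx0 - beta1), min(full_range, vx0 + beta1) + 1)]
```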
Most traditional error concealment algorithms are built on macroblocks of 16×16 pixels. However, for complex regions of the picture or irregular object motion, restoration based on 16×16 blocks is constrained to a single motion vector per block and cannot achieve fine-grained results. We therefore split the 16×16 block into 8×8 blocks for the dynamic programming. The purpose of splitting is to handle complex regions or irregular motion, but some regions of a picture do not need it (for example, a static background); forcing the split there would instead produce mismatched distortion at the junctions of the four 8×8 blocks. In H.264 every 4×4 sub-block has a motion vector. Our method collects the motion vectors of all 4×4 sub-blocks adjacent to the top and bottom of the erroneous macroblock and checks their magnitudes; if more than 90% of them are below a threshold, the 8×8 iterative dynamic programming is not performed, because the erroneous slice is judged to belong to the background and need not be split into 8×8 blocks for restoration; the 16×16 method is used directly. The iterative procedure that applies 8×8 dynamic programming is hereafter called iterative dynamic programming.
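The block-size decision can be sketched as follows; the 90% ratio comes from the text, while the motion-vector magnitude threshold is not stated and is an assumed parameter.

```python
import numpy as np

def needs_8x8_refinement(neighbor_4x4_mvs, mv_threshold=1.0, ratio=0.9):
    """Return True if the 8x8 iterative dynamic programming should be run.

    `neighbor_4x4_mvs` holds the motion vectors of all 4x4 sub-blocks adjacent
    to the top and bottom of the erroneous slice."""
    mags = np.linalg.norm(np.asarray(neighbor_4x4_mvs, dtype=float), axis=1)
    # if more than `ratio` of the neighbouring vectors are small, treat the slice
    # as background and skip the 8x8 refinement
    return np.mean(mags < mv_threshold) <= ratio
```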
The block unit of iterative dynamic programming is 8×8 pixels, while an erroneous macroblock is 16×16 pixels, so the original slice must be divided into an upper and a lower sub-row. Only one sub-row is processed at a time while the other is held fixed; when a pass finishes, the roles of the two sub-rows are swapped. The overall steps are outlined below (a code sketch of this alternation follows the list):
Step 1: The motion vectors (MVs) of the lower sub-row blocks are fixed, and dynamic programming is run on the upper sub-row.
Step 2: When the upper sub-row pass finishes, the MV of every block in the upper sub-row is updated.
Step 3: The MVs of the upper sub-row blocks are fixed, and dynamic programming is run on the lower sub-row.
Step 4: When the lower sub-row pass finishes, the MVs of the lower sub-row are updated.
Step 5: If steps 1-4 have not yet been executed the prescribed number of times, return to Step 1.
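A sketch of this alternation is given below, with `run_dp_on` standing in for one 8×8 dynamic programming pass over a sub-row (the cost of formula (13)); the iteration count is a free parameter, as the text only requires a certain number of passes.

```python
def iterative_dp(upper_mvs, lower_mvs, run_dp_on, iterations=3):
    """Alternately refine the upper and lower sub-rows of 8x8 motion vectors."""
    for _ in range(iterations):
        upper_mvs = run_dp_on(upper_mvs, fixed=lower_mvs)  # steps 1-2: optimise upper sub-row
        lower_mvs = run_dp_on(lower_mvs, fixed=upper_mvs)  # steps 3-4: optimise lower sub-row
    return upper_mvs, lower_mvs                            # step 5: stop after the set count
```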
Iterative dynamic programming is a procedure executed repeatedly; the motion vector information is updated after every pass, so each pass pushes the result toward the correct information. The detailed steps are as follows:
Step 1': Estimate the initial motion vector of every 16×16 macroblock and assign it as the initial motion vector of each of its four 8×8 blocks, as shown in Figure 7, which shows the same time t before and after the macroblock is split: the picture before splitting (701) and after splitting (702). Before splitting, picture 701 contains 16×16 macroblocks 703, 704, and 705; after splitting, picture 702 contains 8×8 blocks 706, 707, and 708. The motion vectors in Figure 7(a) are the results of the 16×16 macroblock dynamic programming; Figure 7(b) shows the 8×8 blocks to be processed taking the results of Figure 7(a) as their initial values.
Step 2': First fix the motion vectors of the 8×8 blocks of the lower sub-row and run dynamic programming on the blocks of the upper sub-row, with the path cost defined by formula (13). The difference from the earlier dynamic programming is that the block below the 8×8 block being processed is itself an erroneous 8×8 block, so the cost computed against the lower block uses the smoothness cost.
C_total = λ·C_bd + (1 − λ)·(C_smh + C_smv) (13)
where C_total is the total path cost, C_bd the node cost, C_smh and C_smv the horizontal and vertical smoothness costs, and λ a weight between 0.0 and 1.0. The individual costs are introduced below.
1. The node cost C_bd is given by formula (14). Figure 8 illustrates the boundary matching of an 8×8 block; it contains the picture at time t-1 (801) with an 8×8 block (803) and the picture at time t (802) with an 8×8 block (804).
where v_initial is the initial motion vector and β2 the search range.
2. The horizontal smoothness cost C_smh and the vertical smoothness cost C_smv are given by formulas (15) and (16) and illustrated in Figure 9. Figure 9(a) contains two pictures, at time t-1 (901) and at time t (902); the time t-1 picture contains 8×8 blocks 903 and 904, and the time t picture contains 8×8 blocks 905 and 906. Figure 9(b) contains the picture at time t-1 (907) with 8×8 blocks 909 and 910, and the picture at time t (908) with 8×8 blocks 911 and 912.
where v_1 and v_2 are the candidate motion vectors of the (p, q)-th and (p, q+1)-th macroblocks, v_bm is the motion vector temporarily estimated for the (p+1, q)-th macroblock, and v1_initial and v2_initial are the initial motion vectors of the (p, q)-th and (p, q+1)-th macroblocks.
Step 3': In Step 2' the 8×8 blocks of the upper sub-row find their best path, and their initial motion vectors are updated. The updated motion vectors of the upper sub-row are then fixed, and dynamic programming is run on the lower sub-row. The procedure is essentially the same as Step 2' with the roles of top and bottom reversed; its cost terms are introduced below.
1. The node cost C_bd is given by formula (21) and illustrated in Figure 10, which contains the picture at time t-1 (1001), the picture at time t (1002), an 8×8 block in the time t-1 picture (1003), and an 8×8 block in the time t picture (1004).
where v_initial is the initial motion vector and β2 the search range.
2. The horizontal smoothness cost C_smh and the vertical smoothness cost C_smv are given by formulas (22) and (23) and illustrated in Figure 11. Figure 11(a) contains the picture at time t-1 (1101), the picture at time t (1102), 8×8 blocks 1103 and 1104 in the time t-1 picture, and 8×8 blocks 1105 and 1106 in the time t picture. Figure 11(b) contains the picture at time t-1 (1107), the picture at time t (1108), 8×8 blocks 1109 and 1110 in the time t-1 picture, and 8×8 blocks 1111 and 1112 in the time t picture.
where v_3 and v_4 are the candidate motion vectors of the (p, q)-th and (p, q+1)-th macroblocks, v_up is the motion vector estimated in the previous step for the (p-1, q)-th macroblock above, and v3_initial and v4_initial are the initial motion vectors of the (p, q)-th and (p, q+1)-th macroblocks.
Step 4': In Step 3' the 8×8 blocks of the lower sub-row find their best path; that is, every 8×8 block of the lower sub-row obtains a temporarily optimal motion vector that replaces its initial motion vector. The updated motion vectors of the lower sub-row are then fixed. If the number of iterations has not reached the set value, return to Step 2' and continue; if the predetermined number of iterations has been reached, go to Step 5'.
Step 5': End the iterative dynamic programming procedure; each 16×16 macroblock now has four motion vectors.
After the iterative dynamic programming, the motion vectors of all 8×8 blocks must go through an integration test. Although these motion vectors were obtained through optimization, two adjacent 8×8 blocks whose motion vectors are very similar can be merged: for example, two 8×8 blocks can be merged into an 8×16 or 16×8 block, and four 8×8 blocks can be merged into one 16×16 block. The invention integrates the motion vectors of the 8×8 blocks by comparing boundary matching costs; the steps are as follows.
Step 1'': Compute the variance of the four corrected motion vectors of the 8×8 blocks inside a 16×16 macroblock. If the variance is below a threshold, merge the four 8×8 blocks into one 16×16 macroblock and directly use the previously estimated 16×16 motion vector as the corrected motion vector of the merged 16×16 macroblock; otherwise go to Step 2''.
Step 2'': For the four corrected 8×8 motion vectors, test whether any two horizontally or vertically adjacent 8×8 blocks can be merged into a 16×8 or 8×16 block. If the merge test passes, the average of the corrected motion vectors of the two merged 8×8 blocks becomes the corrected motion vector of the merged block; if it fails, the original 8×8 blocks are kept. For example, in Figure 12 the MV of block A (top-left 8×8 block 1201 of the 16×16 macroblock) is close to that of block B (top-right 8×8 block 1202), so A and B can be merged (top-left 8×8 block 1205 and top-right 8×8 block 1206 of the 16×8 block); likewise the MVs of C (bottom-left 8×8 block 1203) and D (bottom-right 8×8 block 1204) are close, so C and D can be merged (bottom-left 8×8 block 1207 and bottom-right 8×8 block 1208 of the 16×8 block). Note that the invention does not allow (A, D) or (B, C) to be grouped.
Step 3'': For the two blocks to be merged, compute the boundary matching difference after merging (the corrected motion vector of the merged block being the average of the corrected motion vectors of the two original 8×8 blocks), as shown in Figure 13. Figure 13(a) contains a 16×8 block in the time t picture (1307), the picture at time t-1 (1308), the picture at time t (1309), and a 16×8 block in the time t-1 picture (1310); it represents one 16×8 block using the merged corrected motion vector to compute the boundary matching difference. Figure 13(b) contains 8×8 blocks 1301 and 1302 in the time t-1 picture, 8×8 blocks 1303 and 1304 in the time t picture, the picture at time t-1 (1305), and the picture at time t (1306); it represents the two 8×8 blocks each computing the boundary matching difference with its own motion vector, the two differences being summed. The two results are compared: if the difference of Figure 13(a) is lower, reconstruction uses one 16×8 block; otherwise it uses the two 8×8 blocks with v_A and v_B. The test for merging into an 8×16 block is similar and is not repeated here.
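The integration of Steps 1''-3'' can be sketched as below, assuming the motion vectors are 2-D, `mv_16x16` is the vector estimated earlier for the whole macroblock, and `pair_test(a, b)` implements the boundary-comparison test of Step 3'' (returning True when the merged block's matching error is lower); the variance threshold is not specified in the text and is an assumed parameter.

```python
import numpy as np

def integrate_mvs(mvs_8x8, mv_16x16, variance_threshold, pair_test):
    """Merge the four 8x8 motion vectors A, B, C, D of one 16x16 macroblock.

    A = top-left, B = top-right, C = bottom-left, D = bottom-right; diagonal
    pairs (A, D) and (B, C) are never merged."""
    mvs = {k: np.asarray(v, dtype=float) for k, v in mvs_8x8.items()}
    stacked = np.stack([mvs[k] for k in 'ABCD'])
    if stacked.var(axis=0).sum() < variance_threshold:       # step 1'': merge all four
        return {'ABCD': np.asarray(mv_16x16, dtype=float)}
    merged = dict(mvs)                                        # step 2'': try allowed pairs
    for a, b in (('A', 'B'), ('C', 'D'), ('A', 'C'), ('B', 'D')):
        if a in merged and b in merged and pair_test(a, b):   # step 3'': merged error is lower
            merged[a + b] = (mvs[a] + mvs[b]) / 2.0           # average of the two corrected MVs
            del merged[a], merged[b]
    return merged                                             # remaining keys: merged or single blocks
```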
In one embodiment, the iterative dynamic programming algorithm of the invention is compared with the traditional boundary matching algorithm (BMA) and with reference [2]. The experiment uses packet loss rates (PLRs) of 5%, 10%, and 15%, with each packet representing one slice (one row of macroblocks). The experimental data are given in Tables 1 to 4: Tables 1-3 show the results for different quantization parameters at the encoder, and Table 4 the execution times of the different methods. Because the proposed method takes the smoothness cost into account and optimizes all the macroblocks of each slice jointly, the restored neighboring blocks fit together better (that is, the PSNR is the highest), and it also runs faster than the method of [2].
[2] Xueming Qian, Guizhong Liu, and Huan Wang, "Recovering connected error region based on adaptive error concealment order determination," IEEE Trans. on Multimedia, Vol. 11, pp. 683-695, June 2009.
101 ... Decoding of the received encoded bit stream and error detection
102 ... Modified boundary matching algorithm for 16×16 macroblocks
103 ... Dynamic programming based on 16×16 macroblocks
104 ... Are the MVs of the neighboring macroblocks small and consistent
105 ... Dynamic programming iteration based on 8×8 blocks
106 ... Motion vector integration of 8×8 blocks
107 ... Reconstruction of the erroneous 16×16 macroblock
108 ... Reconstruction of the erroneous 16×16 macroblock
201 ... Pixel
202 ... Erroneous macroblock
301 ... Start point
302 ... First-stage node of the dynamic programming
303 ... Second-stage node of the dynamic programming
304 ... Stage N-1 node of the dynamic programming
305 ... Stage N node of the dynamic programming
306 ... First-stage node of the dynamic programming
307 ... Second-stage node of the dynamic programming
308 ... Stage N-1 node of the dynamic programming
309 ... Stage N node of the dynamic programming
310 ... First-stage node of the dynamic programming
311 ... Second-stage node of the dynamic programming
312 ... Stage N-1 node of the dynamic programming
313 ... Stage N node of the dynamic programming
314 ... End point
401 ... Picture at time t-1
402 ... Picture at time t
403 ... Region of the 16×16 macroblock in the time t-1 picture excluding its top and bottom rows
404 ... 16×16 macroblock in the time t picture
405 ... Top row of the 16×16 macroblock in the time t-1 picture
406 ... Bottom row of the 16×16 macroblock in the time t-1 picture
407 ... Bottom row of the neighboring macroblock above the 16×16 macroblock in the time t picture
408 ... Top row of the neighboring macroblock below the 16×16 macroblock in the time t picture
501 ... Picture at time t-1
502 ... Picture at time t
503 ... 16×16 macroblock in the time t-1 picture
504 ... 16×16 macroblock in the time t-1 picture
505 ... 16×16 macroblock in the time t picture
506 ... 16×16 macroblock in the time t picture
601, 602, 603, 604, 605, 606 ... First-stage nodes
607, 608, 609, 610, 611, 612 ... Second-stage nodes
613, 614, 615, 616, 617, 618 ... Third-stage nodes
619, 620, 621, 622, 623, 624 ... Fourth-stage nodes
625, 626, 627, 628, 629, 630 ... Fifth-stage nodes
701 ... Picture at time t
702 ... Picture at time t
703 ... 16×16 macroblock in the time t picture
704 ... 16×16 macroblock in the time t picture
705 ... 16×16 macroblock in the time t picture
706 ... 8×8 block in the time t picture
707 ... 8×8 block in the time t picture
708 ... 8×8 block in the time t picture
801 ... Picture at time t-1
802 ... Picture at time t
803 ... 8×8 block in the time t-1 picture
804 ... 8×8 block in the time t picture
901 ... Picture at time t-1
902 ... Picture at time t
903 ... 8×8 block in the time t-1 picture
904 ... 8×8 block in the time t-1 picture
905 ... 8×8 block in the time t picture
906 ... 8×8 block in the time t picture
907 ... Picture at time t-1
908 ... Picture at time t
909 ... 8×8 block in the time t-1 picture
910 ... 8×8 block in the time t-1 picture
911 ... 8×8 block in the time t picture
912 ... 8×8 block in the time t picture
1001 ... Picture at time t-1
1002 ... Picture at time t
1003 ... 8×8 block in the time t-1 picture
1004 ... 8×8 block in the time t picture
1101 ... Picture at time t-1
1102 ... Picture at time t
1103 ... 8×8 block in the time t-1 picture
1104 ... 8×8 block in the time t-1 picture
1105 ... 8×8 block in the time t picture
1106 ... 8×8 block in the time t picture
1107 ... Picture at time t-1
1108 ... Picture at time t
1109 ... 8×8 block in the time t-1 picture
1110 ... 8×8 block in the time t-1 picture
1111 ... 8×8 block in the time t picture
1112 ... 8×8 block in the time t picture
1201 ... Top-left 8×8 block of the 16×16 macroblock
1202 ... Top-right 8×8 block of the 16×16 macroblock
1203 ... Bottom-left 8×8 block of the 16×16 macroblock
1204 ... Bottom-right 8×8 block of the 16×16 macroblock
1205 ... Top-left 8×8 block of the 16×8 block
1206 ... Top-right 8×8 block of the 16×8 block
1207 ... Bottom-left 8×8 block of the 16×8 block
1208 ... Bottom-right 8×8 block of the 16×8 block
1301 ... 8×8 block in the time t-1 picture
1302 ... 8×8 block in the time t-1 picture
1303 ... 8×8 block in the time t picture
1304 ... 8×8 block in the time t picture
1305 ... Picture at time t-1
1306 ... Picture at time t
1307 ... 16×8 block in the time t picture
1308 ... Picture at time t-1
1309 ... Picture at time t
1310 ... 16×8 block in the time t-1 picture
Figure 1: Flowchart of the error concealment method of the present invention
Figure 2: Edge strength computation at the boundaries of neighboring macroblocks
Figure 3: Dynamic programming for motion vector reconstruction
Figure 4: Boundary matching
Figure 5: Smoothness matching
Figure 6: Removal of unsuitable nodes
Figure 7: Initial motion vector assignment for the iterative dynamic programming
Figure 8: Boundary matching of an 8×8 block (upper sub-row)
Figure 9(a): Horizontal smoothness cost of an 8×8 block (upper sub-row)
Figure 9(b): Vertical smoothness cost of an 8×8 block (upper sub-row)
Figure 10: Boundary matching of an 8×8 block (lower sub-row)
Figure 11: Smoothness cost of an 8×8 block (lower sub-row)
Figure 11(a): Horizontal smoothness cost of an 8×8 block (lower sub-row)
Figure 11(b): Vertical smoothness cost of an 8×8 block (lower sub-row)
Figure 12: Grouping of blocks with similar motion vectors
Figure 13(a): Motion vector integration by 16×8 block boundary matching
Figure 13(b): Motion vector integration by 8×8 block boundary matching