TW201215163A - Method of frame error concealment in scalable video decoding - Google Patents

Method of frame error concealment in scalable video decoding Download PDF

Info

Publication number
TW201215163A
TW201215163A TW99131632A
Authority
TW
Taiwan
Prior art keywords
block
reference frame
zeroth
pixel
frame
Prior art date
Application number
TW99131632A
Other languages
Chinese (zh)
Other versions
TWI426785B (en)
Inventor
Sheau-Fang Lei
Yu-Sheng Lin
Original Assignee
Univ Nat Cheng Kung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Cheng Kung filed Critical Univ Nat Cheng Kung
Priority to TW99131632A priority Critical patent/TWI426785B/en
Publication of TW201215163A publication Critical patent/TW201215163A/en
Application granted granted Critical
Publication of TWI426785B publication Critical patent/TWI426785B/en


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a method of frame error concealment in scalable video decoding for reconstructing a lost frame. The lost frame is divided into a plurality of blocks. First, the method detects whether a frame has been lost between a zeroth reference frame and a first reference frame. When a lost frame is detected between the zeroth reference frame and the first reference frame, it obtains a block of the lost frame and the position of that block. It then calculates the temporal distances between the block and the zeroth reference frame and between the block and the first reference frame, generating a zeroth time gap and a first time gap, respectively. If the zeroth time gap is smaller than the first time gap, it sets the first reference frame and a first corresponding reference frame as the current reference frame set and uses this set to calculate the position of a compensation block. Finally, it expands the block to generate an expansion area and reconstructs the block based on the expansion area.

Description

201215163

VI. Description of the Invention

[Technical Field]

The present invention relates to the technical field of image processing, and more particularly to a whole-frame error concealment method for scalable video decoding.

[Prior Art]

H.264/AVC video coding introduced the concept of the network abstraction layer (NAL). The NAL improves the network friendliness of the coded bitstream: both byte-stream-format and packet-transport systems can use it in a straightforward way. Scalable video coding (SVC), as an extension of H.264/AVC, naturally inherits this characteristic.

The reliability of transmission over today's networks is affected by many factors; unstable channel bandwidth or channel noise, for example, may cause transmitted packets to arrive incomplete or to be lost altogether. A lost packet leaves the coded video incomplete, so that part or all of a picture is missing when the video is decoded. In some low-bit-rate video transmission systems, such as 3G communication networks, the amount of coded data per picture is usually small, so the loss of a single packet can easily cause the loss of an entire frame (whole frame lost).

Frame error concealment repairs or conceals the missing or erroneous parts of the video caused by transmission errors, using the correctly received information. Error concealment belongs to the post-processing stage of the decoder, and one important class of concealment methods operates in the temporal domain.

Temporal replacement (TR) is a conventional temporal error concealment method. It sets the motion vector of a lost block directly to the zero vector; that is, the lost block is replaced by the content of the co-located block at the same spatial position in the reference frame. This method is generally suitable only when the concealed region contains no motion.

Another conventional temporal error concealment method uses the boundary matching algorithm (BMA) to improve on temporal replacement. The motion vector of the lost block is selected from the motion vectors of the surrounding blocks, choosing the one that minimizes the boundary distortion between the compensated block and its neighbors.

Concealment methods based on the boundary matching algorithm all require the information of the surrounding blocks in order to decide the reconstructed motion vector of a lost block. In a low-bit-rate transmission environment such as a mobile network, however, a transmission error often causes the loss of an entire frame; in that case the information of the surrounding blocks is lost as well and cannot be obtained.

To address this problem, two basic methods for whole-frame loss were proposed in the SVC standard draft meetings: frame copy (FC) and the temporal direct method (TD).

When a frame is lost, frame copy (FC) simply copies the content of a reference frame as the content of the current frame, and is therefore very easy to implement. However, FC is suitable only when the temporal distance between the lost frame and its reference frame is small and the scene changes little; if the difference between the two is large, concealing the error by frame copy produces a large error and considerably degrades the reconstructed picture quality.

When a whole frame is lost, all information of the current frame, including motion vectors, residuals and block coding modes, must be regarded as unavailable. However, by exploiting the temporal correlation of the video, the lost motion vectors can be estimated, and motion compensation can then reconstruct the lost frame from its reference frames. The temporal direct method (TD) uses this concept to conceal a whole lost frame.

As shown in Fig. 1, a B frame is lost at time t; denote it Ft. Its list 1 reference frame is denoted Ft1 and its list 0 reference frame Ft0. The temporal distance between Ft and Ft1 is TRd, the distance between Ft and Ft0 is TRb, and TRp0 is the distance between Ft1 and Ft0, that is, TRp0 = TRb + TRd. For any block Bt of Ft, a corresponding block BC at the same spatial position can be found in Ft1, together with a motion vector vC,0 pointing from Ft1 to Ft0, namely the list 0 motion vector of BC. Since Ft lies between Ft0 and Ft1 in time, and the motion of a scene is continuous over a short interval, vC,0 can be interpolated in time to obtain the motion vector v0 of Bt toward Ft0 and the motion vector v1 of Bt toward Ft1:

v0 = (TRb / TRp0) × vC,0,  v1 = −(TRd / TRp0) × vC,0.  (1)
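For illustration, the interpolation of equation (1) can be sketched in Python; this is a minimal sketch with integer temporal distances and a motion vector given as an (x, y) pair, and the function and variable names are illustrative rather than taken from the patent:

```python
def td_interpolate(v_c0, trb, trp0):
    """Temporal direct interpolation, equation (1).

    v_c0: list 0 motion vector of the co-located block in Ft1,
          pointing from Ft1 to Ft0, as an (x, y) pair.
    trb:  temporal distance between the lost frame Ft and Ft0.
    trp0: temporal distance between Ft1 and Ft0 (TRb + TRd).
    Returns (v0, v1): motion vectors of the lost block toward
    Ft0 and Ft1 respectively.
    """
    trd = trp0 - trb
    v0 = (trb * v_c0[0] / trp0, trb * v_c0[1] / trp0)
    v1 = (-trd * v_c0[0] / trp0, -trd * v_c0[1] / trp0)
    return v0, v1

# With TRb = TRd = 1 (the lost frame midway between its references),
# the motion of the co-located block is split evenly between v0 and v1.
```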

When Ft0 is the list 0 reference frame of Ft, the vectors v0 and v1 of every block of Ft can be computed by equation (1), and bidirectional prediction with motion compensation can then be used to reconstruct the lost frame.

The temporal direct method (TD) described above thus provides a way to reconstruct the motion vectors of a lost frame from their temporal relationships and thereby achieve error concealment. In some situations, however, TD still leaves room for improvement.

Consider a hierarchical B-picture structure with a group of pictures (GOP) of size 8, whose inter-frame prediction relationships are shown in Fig. 2. Fig. 2 is a schematic diagram of the hierarchical B-picture structure with a GOP of 8. When Frame 6 is lost during transmission, the TD method gives Ft = Frame 6, Ft1 = Frame 8 and Ft0 = Frame 0. Here the temporal distance between Ft1 and Ft0 (TRp0 = 8) is much larger than the distance between Ft and Ft1 (TRd = 2), so the motion vector computed with equation (1) will contain a large error.

Fig. 3 is a schematic diagram of the motion vector derivation of the conventional extended temporal direct method. As shown in Fig. 3, the list 0 reference frame of Ft is denoted Ft0, and the list 1 reference frame of Ft0 is denoted Ft01. For any block Bt of Ft, a corresponding block BC' can likewise be found in Ft0, together with a motion vector vC,1 pointing from Ft0 to Ft01. The temporal distance between Ft and Ft0 is TRb, the distance between Ft and Ft01 is TRd, and TRp1 is the distance between Ft0 and Ft01. By interpolating vC,1 in time, the motion vector v0 of Bt toward Ft0 and the motion vector v1 of Bt toward Ft01 are obtained:

v0 = −(TRb / TRp1) × vC,1,  v1 = (TRd / TRp1) × vC,1.  (2)

In the example above this gives Ft = Frame 6, Ft0 = Frame 4 and Ft01 = Frame 8. Now Ft0 and Ft01 span a smaller temporal distance (TRp1 = 4), so a motion vector with a smaller error can be computed. The extended temporal direct method (ETDM) therefore first computes TRp0 and TRp1 before estimating the motion vectors: if TRp0 is smaller, it estimates the motion vectors with equation (1); otherwise it uses equation (2). In this way ETDM improves on TD.

The conventional temporal direct method (TD) and extended temporal direct method (ETDM) described above exploit the continuity of the video in the temporal domain: they use the motion vectors of the preceding and following reference frames to compute the motion vectors of the lost frame and thus perform error concealment. In both TD and ETDM, however, the motion vector of every block of the lost frame is taken from the motion vector of the block at the corresponding position in those reference frames.

Fig. 4 is a schematic diagram of the conventional motion compensation concept. Consider the situation of Fig. 4: the lost frame contains a block Bt whose top-left pixel has horizontal and vertical positions x and y in space, so that the block is written Bt(x, y).
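The ETDM choice between the two reference pairs can be sketched as follows; this is a minimal illustration with integer temporal distances and (x, y) motion vectors, and the argument layout and names are illustrative, not from the patent:

```python
def scale_mv(v, num, den):
    """Scale a motion vector (x, y) by the ratio num/den."""
    return (v[0] * num / den, v[1] * num / den)

def etdm_motion_vectors(v_c0, pair0, v_c1, pair1):
    """Extended temporal direct method: compare the temporal spans
    TRp0 and TRp1 and interpolate with the closer reference pair.

    pair0 = (TRb0, TRd0, TRp0) for the (Ft0, Ft1) pair of eq. (1),
    v_c0 being the co-located motion vector pointing Ft1 -> Ft0;
    pair1 = (TRb1, TRd1, TRp1) for the (Ft0, Ft01) pair of eq. (2),
    v_c1 being the co-located motion vector pointing Ft0 -> Ft01.
    """
    trb0, trd0, trp0 = pair0
    trb1, trd1, trp1 = pair1
    if trp0 <= trp1:                    # equation (1): scale the backward MV
        v0 = scale_mv(v_c0, trb0, trp0)
        v1 = scale_mv(v_c0, -trd0, trp0)
    else:                               # equation (2): scale the forward MV
        v0 = scale_mv(v_c1, -trb1, trp1)
        v1 = scale_mv(v_c1, trd1, trp1)
    return v0, v1
```

In the GOP-8 example of the text, pair0 spans 8 frames and pair1 only 4, so the second branch is taken and the motion vector error stays small.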

The block at the same spatial position in the list 1 reference frame is called the corresponding block of Bt and is written BC1(x, y), where the superscript 1 denotes the list 1 reference frame.

In the temporal direct method (TD) and the extended temporal direct method (ETDM), the motion vector v0 pointing to the list 0 reference frame and the motion vector v1 pointing to the list 1 reference frame are computed from the list 0 motion vector vC,0 of the corresponding block BC1, from which equation (1) gives:

v0 = (TRb / TRp0) × vC,0,  v1 = −(TRd / TRp0) × vC,0.

Let v0x and v0y denote the horizontal and vertical components of v0, and v1x and v1y those of v1. The reference block pointed to by v0 in the list 0 reference frame has its top-left pixel at horizontal and vertical positions x + v0x and y + v0y, so that block is written B(x + v0x, y + v0y); the reference block pointed to by v1 in the list 1 reference frame has its top-left pixel at x + v1x and y + v1y, so that block is written B(x + v1x, y + v1y).

The block Bt(x, y) is then reconstructed from B(x + v0x, y + v0y) and B(x + v1x, y + v1y) by bidirectional motion compensation; that is, every pixel of the block is obtained by averaging the corresponding pixels inside B(x + v0x, y + v0y) and B(x + v1x, y + v1y). This compensation in effect treats the blocks Bt(x, y), B(x + v0x, y + v0y) and B(x + v1x, y + v1y) as blocks holding the same content at different times, and the trajectory of this moving block, represented by its top-left pixel, runs from (x + v0x, y + v0y) through (x, y) to (x + v1x, y + v1y).

The preceding description analyzes the motion trajectory represented by the reference motion vectors obtained by the temporal direct method (TD) and the extended temporal direct method (ETDM), that is, the trajectory after error concealment. Since the motion vectors v0 and v1 used in the compensation above are both derived from vC,0, the trajectory represented by vC,0 itself is analyzed next.

Let vC,0x and vC,0y denote the horizontal and vertical components of vC,0. Since vC,0 is the motion vector of the block BC1(x, y) on the list 1 reference frame pointing to the list 0 reference frame, the reference block of BC1 in the list 0 reference frame has its top-left pixel at (x + vC,0x, y + vC,0y), and that block is written B(x + vC,0x, y + vC,0y). The motion vector of the lost frame can be obtained by interpolating vC,0 in time, giving a vector with horizontal and vertical components written v̄x and v̄y; the reference block in the lost frame therefore has its top-left pixel at (x − v̄x, y − v̄y), and that block is written B(x − v̄x, y − v̄y).

Likewise, the blocks B(x + vC,0x, y + vC,0y), B(x − v̄x, y − v̄y) and BC1(x, y) can be regarded as blocks holding the same content at different times, and the trajectory of this moving block, represented by its top-left pixel, runs from (x + vC,0x, y + vC,0y) through (x − v̄x, y − v̄y) to (x, y). Since this trajectory is constructed from the known motion vector vC,0, it can be regarded as the actual motion trajectory.

From the above it is clear that the motion trajectory implied by temporal direct (TD) or extended temporal direct (ETDM) concealment is not exactly the same as the actual motion trajectory. When the blocks on the two trajectories belong to the same object in the picture, or the two move consistently, the motion vectors at different positions of the picture differ little, and performing motion compensation along the error-concealed TD or ETDM trajectory will not harm the reconstruction of the picture. But when the blocks on the two trajectories belong to different objects whose motions are inconsistent, with relative position changes, the motion vector computed by TD or ETDM will differ from the correct motion vector; motion compensation along the error-concealed trajectory may then reconstruct an erroneous picture and degrade the reconstruction.

Table 1 compares the motion vectors computed by the temporal direct method (TD) or the extended temporal direct method (ETDM) with the correct, loss-free motion vectors, and counts the differences between them. Let the motion vector computed by the extended temporal direct method be MVETDM and the correct vector MVCRCT. The squared distance between MVETDM and MVCRCT is compared with a threshold RTH, and the proportion of blocks whose squared distance exceeds the threshold is counted. As shown in Table 1, when RTH is set to 256, that is, when the distance between MVETDM and MVCRCT exceeds 16, up to 47.23% of the blocks, depending on the characteristics of the sequence, have a distance larger than 16. When RTH is set to 64, that is, a distance larger than 8, the proportion reaches 58.40% at the highest and is still 17.52% at the lowest. These data show that the vectors computed by TD or ETDM do differ considerably from the correct vectors, so the conventional whole-frame error concealment methods still leave room for improvement.
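The bidirectional compensation used by these temporal methods, in which every pixel of the reconstructed block is the average of the corresponding pixels of the two displaced reference blocks, can be sketched as follows; integer-pel motion is assumed for simplicity (H.264 also defines fractional-pel interpolation), and the names are illustrative:

```python
def bidirectional_compensate(ref0, ref1, x, y, v0, v1, m, n):
    """Bidirectionally compensate an m-by-n block whose top-left corner
    is at (x, y). ref0 and ref1 are 2-D lists of luma samples indexed
    [row][col]; v0 and v1 are integer (vx, vy) motion vectors toward
    the list 0 and list 1 reference frames. Each output pixel is the
    rounded average (a + b + 1) >> 1 of the two displaced samples.
    """
    out = [[0] * m for _ in range(n)]
    for j in range(n):
        for i in range(m):
            p0 = ref0[y + v0[1] + j][x + v0[0] + i]
            p1 = ref1[y + v1[1] + j][x + v1[0] + i]
            out[j][i] = (p0 + p1 + 1) >> 1
    return out
```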
Table 1 (proportion of blocks whose squared distance between MVETDM and MVCRCT exceeds the threshold RTH):

RTH | BUS | CITY | CREW | FOOTBALL
256 | 18.97% | 10.09% | 30.58% | 47.23%
64 | 23.92% | 24.21% | 45.06% | 58.40%

RTH | FOREMAN | ICE | MOBILE | SOCCER
256 | 16.09% | 12.27% | 4.36% | 31.07%
64 | 30.44% | 19.17% | 17.52% | 42.57%

Table 1

[Summary of the Invention]

The main purpose of the present invention is to provide a whole-frame error concealment method for scalable video decoding that is built on a completely new architecture and, at the same time, obtains more correct reconstruction results than the prior art.

According to a feature of the present invention, a whole-frame error concealment method for scalable video decoding is proposed. It is applied to the decoding of a video frame sequence to reconstruct a lost frame temporally located between a zeroth reference frame and a first reference frame. The lost frame is divided into a plurality of blocks. The zeroth reference frame belongs to a zeroth list (list 0) and the first reference frame belongs to a first list (list 1); the zeroth reference frame has a zeroth corresponding reference frame in the first list, and the first reference frame has a first corresponding reference frame in the zeroth list. The method comprises: (A) detecting whether there is a lost frame between the zeroth reference frame and the first reference frame; (B) when step (A) detects a lost frame between the zeroth reference frame and the first reference frame, obtaining a block of the lost frame; (C) obtaining the position of the block; (D) calculating the temporal distances of the block to the zeroth reference frame and to the first reference frame, respectively, to generate a zeroth time gap and a first time gap; (E) determining whether the zeroth time gap is smaller than the first time gap; (F) when the zeroth time gap is determined to be smaller than the first time gap, setting the first reference frame and the first corresponding reference frame as the current reference frame set and calculating the compensation block position with the current reference frame set; and (G) expanding the block to generate an expansion area and reconstructing the block based on the expansion area.

[Embodiments]

Fig. 5 is a flow chart of a whole-frame error concealment method for scalable video decoding according to the present invention. The method is applied in the video frame sequence decoding of a video decoding device to reconstruct a lost frame Ft temporally located between a zeroth reference frame Ft0 and a first reference frame Ft1; the lost frame is divided into a plurality of blocks.

Fig. 6 is a schematic diagram of a relationship within a group of pictures (GOP) according to the present invention. Fig. 7 is a schematic diagram of another relationship within a GOP according to the present invention. As shown in Fig. 6, when for example frame 6 (frame number 6) is the lost frame Ft, the zeroth list (list 0) is the set of frames whose frame numbers precede frame 6, and the first list (list 1) is the set of frames whose frame numbers follow frame 6. In Fig. 6, the zeroth list is the set of frames 0 to 5, and the first list is the set of frames 7 to 8.

The zeroth reference frame Ft0 belongs to the zeroth list (list 0) and the first reference frame Ft1 belongs to the first list (list 1). The zeroth reference frame Ft0 has a zeroth corresponding reference frame Ft00 in the first list (the frame with frame number 8), and the first reference frame Ft1 has a first corresponding reference frame Ft10 in the zeroth list (the frame with frame number 0). The situation of Fig. 7 can be inferred from Fig. 6 by those skilled in the art and is not repeated here.

First, in step (A), it is detected whether there is a lost frame Ft between the zeroth reference frame Ft0 and the first reference frame Ft1.
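The detection of step (A), under which a frame is declared lost when the frame numbers of the received frames are not contiguous, can be sketched as follows; the helper name is illustrative, not an API from the patent:

```python
def find_lost_frames(received_frame_numbers):
    """Return the frame numbers missing between consecutively
    received frames, i.e. the frames flagged as lost in step (A)."""
    lost = []
    nums = sorted(received_frame_numbers)
    for a, b in zip(nums, nums[1:]):
        lost.extend(range(a + 1, b))
    return lost

# With frames 0 to 5, 7 and 8 received, frame 6 is reported lost.
```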
Whether a frame has been lost can be judged by checking whether the frame numbers of the zeroth reference frame Ft0 and the first reference frame Ft1 are consecutive: when the frame numbers between Ft0 and Ft1 are not consecutive, a lost frame Ft exists and step (B) is executed; otherwise the method returns to step (A).

When step (A) detects a lost frame Ft between the zeroth reference frame Ft0 and the first reference frame Ft1, a block of the lost frame Ft is obtained in step (B).

In step (C), the position (x, y) of the block is obtained.

In step (D), a zeroth time gap and a first time gap are calculated for the block: the zeroth time gap is the temporal distance between Ft and the zeroth reference frame Ft0, and the first time gap is the temporal distance between Ft and the first reference frame Ft1. In one embodiment, the zeroth time gap can be computed as the frame-number difference between Ft and Ft0, and the first time gap as the frame-number difference between Ft and Ft1.

In step (E), it is determined whether the zeroth time gap is smaller than the first time gap.

When the zeroth time gap is determined to be smaller than the first time gap, the first reference frame Ft1 and the first corresponding reference frame Ft10 are set as the current reference frame set in step (F), and the compensation block position is calculated with the current reference frame set.

Fig. 8 is a schematic diagram of the present invention for the case in which the zeroth time gap is smaller than the first time gap. In step (F), the first corresponding reference frame Ft10 is set as the reference frame of the zeroth list (list 0) and the first reference frame Ft1 is set as the reference frame of the first list (list 1); the motion vectors v0 and v1 of the block relative to the corresponding blocks of the zeroth list (list 0) and the first list (list 1) are calculated, and the block positions are then set according to these motion vectors.

The positions set are the position of the block itself, the position of the block corresponding to the zeroth list (list 0), and the position of the block corresponding to the first list (list 1).

The motion vector v0 of the block relative to the corresponding block of the zeroth list (list 0) and the motion vector v1 of the block relative to the corresponding block of the first list (list 1) are respectively:

v0 = (TRb / TRp0) × vC,0,  v1 = −(TRd / TRp0) × vC,0

Here vC,0 is the motion vector of the first reference frame Ft1 relative to the zeroth list (list 0), TRb is the temporal distance between the lost frame Ft and the first corresponding reference frame Ft10, TRd is the temporal distance between the lost frame Ft and the first reference frame Ft1, and TRp0 is the temporal distance between Ft1 and Ft10. The position of the block corresponding to the zeroth list (list 0) is set to (x + vC,0x, y + vC,0y), and the position of the block corresponding to the first list (list 1) is set to (x, y), where vC,0x and vC,0y denote the horizontal and vertical components of vC,0.

In step (G), the block is expanded to generate an expansion area, and the block is reconstructed based on the expansion area.

When the zeroth time gap is determined not to be smaller than the first time gap, the zeroth reference frame Ft0 and the zeroth corresponding reference frame Ft00 are set as the current reference frame set in step (H); the compensation block position is calculated with the current reference frame set, and step (G) is then executed.

Fig. 9 is a schematic diagram of the present invention for the case in which the zeroth time gap is not smaller than the first time gap. In step (H), the zeroth reference frame Ft0 is set as the reference frame of the zeroth list (list 0) and the zeroth corresponding reference frame Ft00 is set as the reference frame of the first list (list 1); the motion vectors v0 and v1 of the block relative to the corresponding blocks of the zeroth list (list 0) and the first list (list 1) are calculated, and the position of the block, the position of the block corresponding to the zeroth list, and the position of the block corresponding to the first list are set according to these motion vectors.

The motion vector v0 of the block relative to the corresponding block of the zeroth list (list 0) and the motion vector v1 of the block relative to the corresponding block of the first list (list 1) are respectively:

v0 = −(TRb / TRp1) × vC,1,  v1 = (TRd / TRp1) × vC,1

Here vC,1 is the motion vector of the zeroth reference frame Ft0 relative to the first list (list 1), TRb is the temporal distance between the lost frame Ft and the zeroth reference frame Ft0, TRd is the temporal distance between the lost frame Ft and the zeroth corresponding reference frame Ft00, and TRp1 is the temporal distance between Ft0 and Ft00. The position of the block corresponding to the zeroth list (list 0) is set to (x, y), and the position of the block corresponding to the first list (list 1) is set to (x + vC,1x, y + vC,1y), where vC,1x and vC,1y denote the horizontal and vertical components of vC,1.

In step (I), it is determined whether the block is the last block of the lost frame Ft. If it is, the whole-frame error concealment method ends; if not, the next block is obtained and the method returns to step (C).

Fig. 10 is a schematic diagram of the overlap and uncovered areas after motion compensation according to the present invention. Since the present invention uses the motion trajectory of the motion vector of the corresponding block on the reference frame of the lost frame to decide the position of the reconstructed block, partially overlapping blocks (overlap area) and pixels that are not covered (uncovered area) occur when the frame is reconstructed, as shown in Fig. 10. To solve this problem, the motion compensation part is modified in step (G).

To handle motion compensation in the overlap areas, it must be known whether the pixel position being compensated has already been compensated during the motion compensation of a previous block; whether the current pixel position lies in an expansion area also affects the result of the compensation. A matrix PxCheck[pic_width][pic_height] is therefore used to record the current state of every pixel, where PxCheck[x][y] = 0 indicates that the pixel at position (x, y) has not yet been reconstructed; PxCheck[x][y] = −1 indicates that the pixel at (x, y) has been reconstructed but lies in an extend area; and PxCheck[x][y] = 1 indicates that the pixel at (x, y) has been reconstructed and lies in a non-extend area. At the start, PxCheck[x][y] is initialized to 0 for all pixels.

Fig. 11 is a detailed flow chart of step (G) of the present invention. The block consists of M × N pixels; in the H.264 standard, the block size can be 4×4, 8×4, 4×8, 8×8, 16×8, 8×16 or 16×16 pixels. To avoid leaving pixels uncovered during reconstruction, the block is expanded in step (G) from M × N pixels to the expansion area of (M + abs(v0x) + abs(v1x)) × (N + abs(v0y) + abs(v1y)) pixels, where v0x and v0y are the sizes of the motion vector v0 on the x axis and y axis, and v1x and v1y are the sizes of the motion vector v1 on the x axis and y axis.

In step (G1), a pixel of the expansion area is obtained, the expansion area being composed of the block and an extend area. Fig. 12 is a schematic diagram of the expansion area of the present invention; as shown in Fig. 12, the extend area is the hatched part that does not belong to the block.

In step (G2), the position (x, y) of the pixel is obtained.

In step (G3), it is determined whether the position (x, y) of the pixel lies in the extend area. The extend area in step (G3) is decided by the directions of the motion vectors; Figs. 13 and 14 show the distributions of the extend area. Fig. 13 is a schematic diagram of the relationship between the error-concealed block and the reference blocks according to the present invention, based on the list 0 reference frame (TRp0 > TRp1). Fig. 14 is a schematic diagram of the relationship between the error-concealed block and the reference blocks according to the present invention, based on the list 1 reference frame (TRp0 <= TRp1).

According to the relative positions of the reconstructed block and the corresponding block, eight cases can be distinguished, summarized in Table 2, where case 0 to case 3 are based on the list 0 reference frame (TRp0 > TRp1) and case 4 to case 7 are based on the list 1 reference frame (TRp0 <= TRp1).

Reference frame | Case | Condition for falling in the extend area
list 0 (TRp0 > TRp1) | 0 | i < abs(v1x) || i >= (M + abs(v1x)) || j < abs(v1y) || j >= (N + abs(v1y))
list 0 (TRp0 > TRp1) | 1 | i < abs(v0x) || i >= (M + abs(v0x)) || j < abs(v1y) || j >= (N + abs(v1y))
list 0 (TRp0 > TRp1) | 2 | i < abs(v1x) || i >= (M + abs(v1x)) || j < abs(v0y) || j >= (N + abs(v0y))
list 0 (TRp0 > TRp1) | 3 | i < abs(v0x) || i >= (M + abs(v0x)) || j < abs(v0y) || j >= (N + abs(v0y))
list 1 (TRp0 <= TRp1) | 4 | i < abs(v0x) || i >= (M + abs(v0x)) || j < abs(v0y) || j >= (N + abs(v0y))
list 1 (TRp0 <= TRp1) | 5 | i < abs(v1x) || i >= (M + abs(v1x)) || j < abs(v0y) || j >= (N + abs(v0y))
list 1 (TRp0 <= TRp1) | 6 | i < abs(v0x) || i >= (M + abs(v0x)) || j < abs(v1y) || j >= (N + abs(v1y))
list 1 (TRp0 <= TRp1) | 7 | i < abs(v1x) || i >= (M + abs(v1x)) || j < abs(v1y) || j >= (N + abs(v1y))

(i, j) are the coordinates relative to the top-left corner of the current compensation range.

Table 2

According to Table 2, step (G3) can determine whether the position (x, y) of the pixel lies in the extend area.

If step (G3) determines that the position (x, y) of the pixel lies in the extend area, it is further determined in step (G4) whether the pixel already has a reconstructed value and lies in the non-extend area of a previously reconstructed block, that is, whether PxCheck[x][y] is 1.

If it is determined that the pixel already has a reconstructed value and lies in the non-extend area of the previously reconstructed block, it is determined in step (G5) whether the pixel is the last pixel of the expansion area; if it is, step (G) ends, and if not, the next pixel is obtained and step (G2) is executed.

If step (G4) determines that the pixel does not have a reconstructed value lying in the non-extend area of a previously reconstructed block, it is determined in step (G6) whether the pixel has not yet been reconstructed, that is, whether PxCheck[x][y] is 0.

If step (G6) determines that the pixel has not yet been reconstructed, the pixel is reconstructed in step (G7). Step (G7) comprises steps (G71) and (G72). In step (G71), PxCheck[x][y] is set to −1. In step (G72), the value of the pixel is set to (pelC0 + pelC1 + 1) >> 1, and step (G5) is executed, where pelC0 is the corresponding pixel value of the block BC0, pelC1 is the corresponding pixel value of the block BC1, and >> is the bitwise right-shift operation.

If step (G6) determines that the pixel has already been reconstructed, the pixel lies in the extend area of a previously reconstructed block; in step (G8) the pixel is reconstructed and averaged with the reconstructed value of the extend area of the previously reconstructed block, and step (G5) is then executed. In step (G8), the value of the pixel is set to (2 × pelprev + pelC0 + pelC1 + 3) >> 2, where pelprev is the previous reconstructed value of the pixel, pelC0 is the corresponding pixel value of the block BC0, pelC1 is the corresponding pixel value of the block BC1, and >> is the bitwise right-shift operation.

If step (G3) determines that the position of the pixel does not lie in the extend area, it is determined in step (G9) whether the pixel already has a reconstructed value and lies in the non-extend area of a previously reconstructed block, that is, whether PxCheck[x][y] is 1; if it is, step (G8) is executed, and if not, PxCheck[x][y] is set to 1 and step (G72) is executed.

Since the temporal direct method (TD) and the extended temporal direct method (ETDM) perform motion compensation along the error-concealed motion trajectory, they cannot obtain correct reconstruction results in all cases; when the objects in the picture do not move consistently, compensating along the error-concealed trajectory, as TD and ETDM do, produces errors in the reconstructed result. When compensating the lost frame, the error concealment method of the present invention selects the actual motion trajectory rather than the error-concealed trajectory to decide the block to be compensated, and can therefore obtain more correct reconstruction results than the conventional temporal direct method (TD) and extended temporal direct method (ETDM). Compared with the temporal direct method (TD), the error concealment method of the present invention improves the peak signal-to-noise ratio (PSNR) by 0.32 dB on average; compared with the extended temporal direct method (ETDM), it improves the PSNR by 0.2 dB on average.

From the above description it can be seen that, in purpose, means and effect, the present invention clearly differs from the prior art and is of great practical value. It should be noted, however, that the many embodiments described above are given only for ease of explanation; the scope of the claimed invention is defined by the appended claims and is not limited to the above embodiments.

[Brief Description of the Drawings]

Fig. 1 is a schematic diagram of the motion vector derivation of the conventional temporal direct method.
Fig. 2 is a schematic diagram of the hierarchical B-picture structure with a group of pictures of 8.
Fig. 3 is a schematic diagram of the motion vector derivation of the conventional extended temporal direct method.
Fig. 4 is a schematic diagram of the conventional motion compensation concept.
Fig. 5 is a flow chart of a whole-frame error concealment method for scalable video decoding according to the present invention.
Fig. 6 is a schematic diagram of a relationship within a group of pictures according to the present invention.
Fig. 7 is a schematic diagram of another relationship within a group of pictures according to the present invention.
Fig. 8 is a schematic diagram of the present invention for the case in which the zeroth time gap is smaller than the first time gap.
Fig. 9 is a schematic diagram of the present invention for the case in which the zeroth time gap is not smaller than the first time gap.
Fig. 10 is a schematic diagram of the overlap and uncovered areas after motion compensation according to the present invention.
Fig. 11 is a detailed flow chart of step (G) of the present invention.
Fig. 12 is a schematic diagram of the expansion area of the present invention.
Figs. 13 and 14 are schematic diagrams of the distribution of the extend area according to the present invention.

[Description of Main Element Symbols]

Step (A) to Step (I)
Step (G1) to Step (G9)
10 is an illustration of the overlapping and non-contained regions after the motion compensation of the present invention. Since the present invention determines the position of the reconstructed block by using the movement trajectory of the motion vector of the corresponding block on the reference frame of the lost face, when the picture is reconstructed, there will be a part of the block as shown in FIG. The overlap area and some pixels do not contain (unc〇ver ^"). In order to solve this problem, the motion compensation part is modified in step (G). In order to deal with the motion compensation situation of the overlap region, it is necessary to Knowing whether the pixel position being compensated has been compensated for when the front block is motion compensated, and whether the current pixel position is in an extended area (expansi〇n area) also affects the compensation result. Therefore, using a matrix PxCheck[pic_width] [pic_height] to record the current state of each pixel. Where PxCheck[x][y] = 0 ' indicates that the pixel at (X, y) position has not been reconstructed; PxCheck[x][y] = -1, table The pixel 18 201215163 at the (X,y) position has been reconstructed 'but belongs to an extended area; pxcheck[x][y] =1 'the table at the (X,y) position has been reconstructed, Non-extended area. At the beginning, PxCheck[x][y] of all pixels is initialized to 〇. Figure 11 is a detailed flowchart of step (G) of the present invention. The block consists of N x N pixels In the standard of Η·264, the size of the block can be 4x4, 8x4, 4x8, 8x8, 16x8, 8x16, 16x16 pixels. In order to avoid pixels not being included in the reconstruction, the block is used in the step. 
The pixel is expanded to (M + abs(Vh) + abs(Vh)) x(N + abs(v〇) + abs(V/y)) for the extended region of the pixel (expansi〇n area) 'where vGjc is The motion vector ^ is the size of the χ axis, v is the motion vector, not the size of the y axis, where ν / Λ is the size of the motion vector heart on the X axis, and the size of the motion vector heart on the y axis. In step (G1) A pixel of the extended area (expansi〇n) is obtained, wherein the extended area is composed of the square and an extended area. FIG. 12 is a schematic diagram of the extended area of the present invention, and the extended area is known from the figure [2] (extend area) means a portion of the diagonal line that is not the square. In step (G2), the position of the pixel (xy) is obtained. In step (G3), it is determined whether the position (x, y) of the pixel is located in the extended area. wherein the determination of the extended area in step (G3) is determined by the direction of the motion vector, FIG. Figure 14 shows the distribution of the extended area. Figure 13 is a schematic illustration of the relationship between the error-masked block and the reference block of the present invention, which is based on the zeroth form (10)Q) with reference to (4) base 201215163 (TRp0>TRpl). Figure 14 is a diagram showing the relationship between the error-masked block and the reference block of the present invention, based on the first form (list 1) reference frame (TRp0< = TRpl). According to the position of the reconstruction block and the corresponding block, it can be divided into 8 cases, which can be summarized as shown in Table 2, where case 0 (case 0) to case 3 (case 3) is the zeroth form (list 0) Based on the reference frame (TRpO > TRpl). Case 4 (case 4) to, J Case 7 (case 7) is based on the first form (list 1) reference frame (TRpO<=TRpl). The reference frame case falls within the condition of the extended area. 
The zeroth form (list 0) TRp〇>TRpl 0 /<abs(v;i) || />=(M+abs(v/J) ||y< Abs(v0,) ||y>=(N+abs(v/>,)) 1 /<abs(vat) || />=(M+abs(vto)) Hy^absiv^) | |y>=(N+abs(v/r)) 2 i'<abs(vAt) || j>=(M+abs(v^)) ||;<abs(v0>,) || y>=(N+abs(v〇),)) 3 i<abs(vat) || i>=(M+abs(vto)) H^absiv^,) ||;>=(N+abs (v〇r)) First form (list 1) TRp〇<=TRpi 4 i<abs(vat) || i>=(M+abs(v〇.r)) ||_/<abs( V0>/)丨|y>=(N+abs(v0)) 5 /<abs(v,x) || z>=(M+abs(v,J) ||;'<abs(v0&gt ;,) ||7>=(N+abs(v0>,)) 6 /<abs(vfe) || />=(M+abs(v0t)) ||y<abs(v/y) ||y>=(N+abs(v/v)) 7 i'<abs(v/^) || />=(M+abs(v/t)) H^absiv;,) || y>=(N+abs(v/y)) (Ο) is the coordinate with respect to the top left corner of the current compensation range _ Table 2 According to Table 2, step (G3) can determine the position of the pixel (x, y) Whether it is located in the extended area. If step (G3) determines that the position (x, y) of the pixel is located in the extended area (not part of 4X4) In step (G4), it is determined whether the pixel has a reconstructed value and is located in a non-extended region of the previously reconstructed block. It is determined whether PxCheck[x][y] is 1. 20 201215163 If 4 pixels have been determined Reconstructing the value and located in the non-extended region of the previously reconstructed block 'in step (G5)' to determine whether the pixel is the last pixel of the expansion area, and if so, ending step (G), if not 'acquiring - Pixel' and perform step (G2). If step (G4) determines that the pixel has no reconstructed value, and is not in the non-extended region of the previous f-block, in step _, it is determined whether the pixel has not been reconstructed yet. It is determined whether PxCheck[x][y] is 〇 〇 If step (G6) determines that the pixel has not been reconstructed, the pixel is reconstructed in step (G7). Step (G7) includes a step (G7〇 and step (G72). In step (G71), setting pxcheckw[y] to b (6) 2), the value of the m pixel is set to pe CO + Pelc| + 1 )>>1 and perform step (G5). Among them is the block Be. 
Relative to the pixel value of Deqin Du Yuying, pelcl is the block Bcl relative w "Cow, value, >> is the bit right shift operation (right Shlft). = (G6) determines that the pixel has been Reconstruction indicates that the pixel is already in the middle, and the reconstruction area of the block is reconstructed in the step (G8) area, and the reconstruction value of the extended grave of the previous reconstructed block is averaged. Step (G5) is performed. In step (G8), the value of the pixel is set to & Ρ 1 Pelc〇+ pelc 丨+ 3)>>2, and the reconstructed value is the previous block of the pixel B ^ C〇 corresponds to the pixel value, Pe丨£| is [block ^^ corresponding pixel value," is (right shift) 〇々 bit 兀 right shift operation KG9) 're-determine whether the pixel has a reconstruction value and 21 201215163 Located in the non-extended area of the first reconstruction block. It is judged whether PxCheck[x][y] is 1 'if yes, execute step (eg), if not, set PxCheck[x][y] to I, And perform the step (G72). Since the time domain direct method (TD) and the extended time domain direct method (ETDM) are performed by the error-masked mobile obstruction Compensation, and this does not achieve correct reconstruction results in all cases. When the objects in the picture are not moving uniformly, the Time Domain Direct Method (TD) and the Extended Time Domain Direct Method (ETDM) use error-masked The moving trajectory will have an error on the reconstructed result. The error concealing method of the present invention selects the actual moving trajectory instead of the mobile trajectory masked by the error s in the motion compensation of the lost picture to judge the compensation. The object block can obtain correct reconstruction results by the well-known time domain direct method (TD) and extended time domain direct method (ETDM). The error concealment method of the present invention is better than the time domain direct method (TD) in signal to noise ratio (PSNR). 
There is an improvement of the average 0 32. Compared with the extended time domain direct method (ETDM), the error concealment method of the present invention has an improvement of the average 〇 2 center in the signal-to-noise ratio (PSNR). The invention provides a novel architecture full-screen error concealing method for adjustable image decoding, which can be known that: S-known time domain direct method (TD) and extended time domain direct method (ETDM) obtain positive From the above, it can be seen that the present invention is highly practical in terms of its purpose, means and efficacy, and is extremely different from the characteristics of the prior art. However, it should be considered that the above embodiments are only For the convenience of the description, the scope of the claims is intended to be limited to the above-mentioned embodiments, and is not limited to the above embodiments. [Figure 1 is a conventional time domain direct Schematic diagram of the motion vector derivation of the method. FIG. 2 is a schematic diagram of a hierarchical B picture structure in which a picture group is 8. FIG. 3 is a schematic diagram of motion vector derivation of a conventional extended time domain direct method. Figure 4 is a schematic diagram of a conventional motion compensation concept. Fig. 5 is a flow chart of a full picture error concealment method for adjustable image decoding according to the present invention. Fig. 6 is a schematic diagram showing the relationship of picture groups of the present invention. Fig. 7 is a schematic diagram showing another relationship of the picture group of the present invention. In the present invention, when the zeroth time difference is smaller than the first time difference, FIG. 9 is a schematic diagram of the present invention when the zeroth time difference is not less than the first time difference. BRIEF DESCRIPTION OF THE DRAWINGS Figure 11 is a detailed flow chart of the step (G) of the present invention. 
Figure 12 is a schematic view of the extended region of the present invention. Figure 13 and Figure 14 are schematic views showing the distribution of the extended region of the present invention. Explanation of main component symbols] Step (A) to Step (I) Step (G1) to Step (G9) 23
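The temporal scaling that locates the two compensation blocks for a block of the lost frame can be sketched as follows. This is a minimal illustration only: the function names, the integer motion-vector units, and the floor-division rounding are assumptions of this sketch, not the patent's reference implementation; the scaling pair follows the v_c0 / v_c1 formulas of the specification.

```python
# Illustrative sketch of the temporal motion-vector scaling used to locate
# the list-0 and list-1 compensation blocks for a lost frame. All names and
# the integer-MV convention are assumptions, not the patent's own code.

def scale_motion_vector(v_col, trb, trd):
    """Scale the co-located motion vector v_col = (vx, vy) toward both lists.

    trb: time gap between the lost frame and the list-0 reference frame.
    trd: time gap between the lost frame and the list-1 reference frame.
    Returns (v_c0, v_c1), the list-0 and list-1 motion vectors of the block.
    """
    vx, vy = v_col
    v_c0 = (trb * vx // trd, trb * vy // trd)                    # (TRb/TRd) * v_col
    v_c1 = ((trb - trd) * vx // trd, (trb - trd) * vy // trd)    # ((TRb-TRd)/TRd) * v_col
    return v_c0, v_c1

def compensation_positions(block_pos, v_c0, v_c1):
    """Positions of the block in the list-0 and list-1 reference frames."""
    x, y = block_pos
    p0 = (x + v_c0[0], y + v_c0[1])   # block position in list 0
    p1 = (x + v_c1[0], y + v_c1[1])   # block position in list 1
    return p0, p1
```

For example, a co-located vector (8, 4) with trb = 1 and trd = 2 scales to v_c0 = (4, 2) and v_c1 = (-4, -2), shifting a block at (16, 16) to (20, 18) and (12, 14) in the two reference frames.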

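The overlap-aware per-pixel reconstruction of step (G) can be sketched with the three PxCheck states described above (0 = not yet reconstructed, -1 = reconstructed in an extend area, 1 = reconstructed in a block proper). The function and variable names, and the [row][column] indexing used here, are assumptions of this sketch rather than the patent's own code; the averaging formulas are those of steps (G72) and (G8).

```python
# Illustrative sketch of the overlap-aware pixel write of step (G).
# States follow the PxCheck description; names are assumptions.

NOT_RECONSTRUCTED = 0
EXTEND_AREA = -1
BLOCK_AREA = 1

def reconstruct_pixel(frame, px_check, x, y, pel_c0, pel_c1, in_extend_area):
    """Write one compensated pixel at (x, y), resolving overlaps by averaging.

    pel_c0 / pel_c1: co-located pixel values in the two reference blocks.
    in_extend_area: result of the step (G3) test for this position.
    """
    new_val = (pel_c0 + pel_c1 + 1) >> 1              # step (G72)
    state = px_check[y][x]
    if in_extend_area:
        if state == BLOCK_AREA:                       # steps (G4)/(G5):
            return                                    # block-proper value wins
        if state == NOT_RECONSTRUCTED:                # steps (G6)/(G7)
            px_check[y][x] = EXTEND_AREA              # step (G71)
            frame[y][x] = new_val
        else:                                         # step (G8): average with
            prev = frame[y][x]                        # the earlier extend value
            frame[y][x] = (2 * prev + pel_c0 + pel_c1 + 3) >> 2
    else:                                             # step (G9)
        if state == BLOCK_AREA:                       # two block-proper writes
            prev = frame[y][x]                        # overlap: average (G8)
            frame[y][x] = (2 * prev + pel_c0 + pel_c1 + 3) >> 2
        else:
            px_check[y][x] = BLOCK_AREA
            frame[y][x] = new_val
```

Note that the (G8) formula (2p + pel_c0 + pel_c1 + 3) >> 2 is exactly the rounded average of the previous value p and the new value (pel_c0 + pel_c1 + 1) >> 1, which is why extend-area pixels written twice converge toward the mean of both compensations.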
Claims (1)

VII. Claims:
1. A full-frame error concealment method for scalable video decoding, applied in the decoding of a video frame sequence to reconstruct a lost frame temporally located between a zeroth reference frame and a first reference frame, the lost frame being divided into a plurality of blocks, the zeroth reference frame belonging to a zeroth list, the first reference frame belonging to a first list, the zeroth reference frame having a corresponding zeroth corresponding reference frame within the first list, and the first reference frame having a corresponding first corresponding reference frame within the zeroth list, the method comprising:
(A) detecting whether there is a lost frame between the zeroth reference frame and the first reference frame;
(B) when a lost frame between the zeroth reference frame and the first reference frame is detected in step (A), acquiring a block of the lost frame;
(C) acquiring the position of the block;
(D) calculating a zeroth time gap and a first time gap of the block with respect to the zeroth reference frame and the first reference frame, respectively;
(E) determining whether the zeroth time gap is smaller than the first time gap;
(F) when it is determined that the zeroth time gap is smaller than the first time gap, setting the first reference frame and the first corresponding reference frame as a current reference frame set, and calculating a compensation block position from the current reference frame set; and
(G) expanding the block to generate an expansion area, and reconstructing the block according to the expansion area.
2. The full-frame error concealment method for scalable video decoding of claim 1, further comprising:
(H) when it is determined that the zeroth time gap is not smaller than the first time gap, setting the zeroth reference frame and the zeroth corresponding reference frame as the current reference frame set, calculating the compensation block position from the current reference frame set, and executing step (G).
3. The full-frame error concealment method for scalable video decoding of claim 2, further comprising:
(I) determining whether the block is the last block of the lost frame; if so, ending the full-frame error concealment method; if not, acquiring a next block and returning to step (C).
4. The full-frame error concealment method for scalable video decoding of claim 3, wherein in step (F) the first corresponding reference frame is set as the reference frame of the zeroth list, the first reference frame is set as the reference frame of the first list, the motion vectors of the block relative to the corresponding blocks of the zeroth list and the first list are respectively calculated, and the block position, the block position corresponding to the zeroth list, and the block position corresponding to the first list are set according to the motion vectors.
5. The full-frame error concealment method for scalable video decoding of claim 4, wherein the motion vector v_c0 of the block relative to the corresponding block of the zeroth list and the motion vector v_c1 of the block relative to the corresponding block of the first list are respectively:
v_c0 = (TRb × v_col,0) / TRd
v_c1 = ((TRb − TRd) × v_col,0) / TRd
wherein v_col,0 is the motion vector of the first reference frame corresponding to the zeroth list, TRb is the time gap between the lost frame and the first corresponding reference frame, and TRd is the time gap between the lost frame and the first reference frame.
6. The full-frame error concealment method for scalable video decoding of claim 5, wherein in step (H) the zeroth reference frame is set as the reference frame of the zeroth list, the zeroth corresponding reference frame is set as the reference frame of the first list, the motion vectors of the block relative to the corresponding blocks of the zeroth list and the first list are respectively calculated, and the block position, the block position corresponding to the zeroth list, and the block position corresponding to the first list are set according to the motion vectors.
7. The full-frame error concealment method for scalable video decoding of claim 6, wherein the motion vector v_c0 of the block relative to the corresponding block of the zeroth list and the motion vector v_c1 of the block relative to the corresponding block of the first list are respectively:
v_c0 = (TRb × v_col,1) / TRd
v_c1 = ((TRb − TRd) × v_col,1) / TRd
wherein v_col,1 is the motion vector of the zeroth reference frame corresponding to the first list (list 1), TRb is the time gap between the lost frame and the zeroth reference frame, and TRd is the time gap between the lost frame and the zeroth corresponding reference frame.
8. The full-frame error concealment method for scalable video decoding of claim 7, wherein in step (G) the block is expanded from M × N pixels to the expansion area of (M + abs(v_0x) + abs(v_1x)) × (N + abs(v_0y) + abs(v_1y)) pixels, wherein v_0x is the magnitude of the motion vector v_c0 on the x-axis, v_0y is the magnitude of the motion vector v_c0 on the y-axis, v_1x is the magnitude of the motion vector v_c1 on the x-axis, and v_1y is the magnitude of the motion vector v_c1 on the y-axis.
9. The full-frame error concealment method for scalable video decoding of claim 8, wherein step (G) further comprises:
(G1) acquiring a pixel of the expansion area, wherein the expansion area consists of the block and an extend area;
(G2) acquiring the position of the pixel;
(G3) determining whether the position of the pixel lies in the extend area;
(G4) if step (G3) determines that the position of the pixel lies in the extend area (the portion outside the block proper), further determining whether the pixel already has a reconstructed value and lies in the non-extend region of a previously reconstructed block;
(G5) if it is determined that the pixel already has a reconstructed value and lies in the non-extend region of the previously reconstructed block, further determining whether the pixel is the last pixel of the expansion area; if so, ending step (G); if not, acquiring a next pixel and executing step (G2).
10. The full-frame error concealment method for scalable video decoding of claim 9, further comprising the following steps:
(G6) if step (G4) determines that the pixel has no reconstructed value and does not lie in the non-extend region of a previously reconstructed block, further determining whether the pixel has not yet been reconstructed;
(G7) if step (G6) determines that the pixel has not yet been reconstructed, reconstructing the pixel and executing step (G5);
(G8) if step (G6) determines that the pixel has already been reconstructed, meaning that the pixel has a reconstructed value and lies in the extend area of the previously reconstructed block, reconstructing the pixel, averaging it with the reconstructed value of the extend area of the previously reconstructed block, and then executing step (G5).
11. The full-frame error concealment method for scalable video decoding of claim 10, further comprising the following step:
(G9) if step (G3) determines that the position (x, y) of the pixel does not lie in the extend area, further determining whether the pixel already has a reconstructed value and lies in the non-extend region of the previously reconstructed block; if so, executing step (G8); if not, executing step (G7).

VIII. Drawings (see next page):
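The expansion-area geometry of claim 8 can be illustrated with a short sketch. The growth of the block from M × N to (M + |v_0x| + |v_1x|) × (N + |v_0y| + |v_1y|) follows the claim directly; the membership test below is one plausible reading of the Table 2 pattern, in which the chosen horizontal and vertical components depend on the case (an assumption here, since the exact per-case components follow Figs. 13 and 14). Function names are illustrative.

```python
# Worked example of the claim-8 expansion-area geometry (names illustrative).

def expansion_size(m, n, v_c0, v_c1):
    """Size of the expansion area for an M x N block and MVs v_c0, v_c1."""
    w = m + abs(v_c0[0]) + abs(v_c1[0])
    h = n + abs(v_c0[1]) + abs(v_c1[1])
    return w, h

def in_extend_area(i, j, m, n, vx, vy):
    """Table-2-style test: (i, j) is relative to the upper-left corner of the
    current compensation range; vx, vy are the case's chosen MV components."""
    return i < abs(vx) or i >= m + abs(vx) or j < abs(vy) or j >= n + abs(vy)
```

For a 4×4 block with v_c0 = (2, -1) and v_c1 = (-1, 3), the expansion area is 7 × 8 pixels, and only the central 4 × 4 window of offsets survives the extend-area test.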
TW99131632A 2010-09-17 2010-09-17 Method of frame error concealment in scable video decoding TWI426785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99131632A TWI426785B (en) 2010-09-17 2010-09-17 Method of frame error concealment in scable video decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99131632A TWI426785B (en) 2010-09-17 2010-09-17 Method of frame error concealment in scable video decoding

Publications (2)

Publication Number Publication Date
TW201215163A true TW201215163A (en) 2012-04-01
TWI426785B TWI426785B (en) 2014-02-11

Family

ID=46786669

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99131632A TWI426785B (en) 2010-09-17 2010-09-17 Method of frame error concealment in scable video decoding

Country Status (1)

Country Link
TW (1) TWI426785B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9842595B2 (en) 2012-09-24 2017-12-12 Samsung Electronics Co., Ltd. Frame error concealment method and apparatus, and audio decoding method and apparatus
US10096324B2 (en) 2012-06-08 2018-10-09 Samsung Electronics Co., Ltd. Method and apparatus for concealing frame error and method and apparatus for audio decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001546A1 (en) * 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
CN1213613C (en) * 2003-09-12 2005-08-03 浙江大学 Prediction method and apparatus for motion vector in video encoding/decoding

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10096324B2 (en) 2012-06-08 2018-10-09 Samsung Electronics Co., Ltd. Method and apparatus for concealing frame error and method and apparatus for audio decoding
US10714097B2 (en) 2012-06-08 2020-07-14 Samsung Electronics Co., Ltd. Method and apparatus for concealing frame error and method and apparatus for audio decoding
US9842595B2 (en) 2012-09-24 2017-12-12 Samsung Electronics Co., Ltd. Frame error concealment method and apparatus, and audio decoding method and apparatus
US10140994B2 (en) 2012-09-24 2018-11-27 Samsung Electronics Co., Ltd. Frame error concealment method and apparatus, and audio decoding method and apparatus

Also Published As

Publication number Publication date
TWI426785B (en) 2014-02-11


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees