TW200803523A - Frame level multimedia decoding with frame information table - Google Patents

Frame level multimedia decoding with frame information table

Info

Publication number
TW200803523A
Authority
TW
Taiwan
Prior art keywords
information
layer
multimedia material
processing
error
Prior art date
Application number
TW96112157A
Other languages
Chinese (zh)
Inventor
Fang Shi
Seyfullah Halit Oguz
Vijayalakshmi R Raveendran
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc
Publication of TW200803523A

Abstract

Apparatus and method to decode video data while maintaining a target video quality using an integrated error control system including error detection, resynchronization and error recovery are described. Robust error control can be provided by a joint encoder-decoder functionality including multiple error resilience designs. In one aspect, error recovery may be an end-to-end integrated multi-layer error detection, resynchronization and recovery mechanism designed to achieve reliable error detection and error localization. The error recovery system may include cross-layer interaction of error detection, resynchronization and error recovery subsystems. In another aspect, error handling of a scalable coded bitstream is coordinated across a base-layer and enhancement layer of scalable compressed video.

Description

IX. Description of the Invention

[Technical Field]

This disclosure is directed to multimedia signal processing and, more specifically, to video encoding and decoding.

[Prior Art]

Multimedia signal processing systems, such as video encoders, may encode multimedia data using encoding methods based on international standards such as the MPEG-x and H.26x standards. Such encoding methods generally are directed to compressing the multimedia data for transmission and/or storage. Compression, broadly speaking, is the process of removing redundancy from the data.

A video signal may be described in terms of a sequence of pictures, which include frames (an entire picture) or fields (e.g., an interlaced video signal comprises fields of alternating odd or even lines of a picture). As used herein, the term "frame" refers to a picture, a frame, or a field. A frame may be made up of various smaller portions of video data, including individual pixels, groups of pixels commonly referred to as blocks, and groups of blocks commonly referred to as slices. Video encoding methods compress video signals by using lossless or lossy compression algorithms to compress each frame. Intra-frame coding (referred to herein as intra-coding) refers to encoding a frame using only that frame. Inter-frame coding (referred to herein as inter-coding) refers to encoding a frame based on other, "reference," frames. For example, video signals often exhibit spatial redundancy, in which portions of video frame samples near each other in the same frame have at least portions that match, or at least approximately match, each other. In addition, frames often exhibit temporal redundancy, which can be removed using techniques such as motion compensated prediction.

A multimedia bitstream targeted to a single application, such as a video bitstream, can be encoded (e.g., using scalable coding) into two or more separate layers, such as a base layer and one or more enhancement layers. These layers can then be used to provide scalability, for example temporal and/or SNR (signal-to-noise ratio) scalability. Scalable coding is useful in dynamic channels, where the scalable bitstream can be adapted to match fluctuations in the network bandwidth. In error-prone channels, scalable coding can add robustness through unequal error protection of the base layer and the enhancement layer.

Wireless channels are prone to errors, including bit errors and packet losses. Because video compression inherently removes redundancy, the compressed data becomes critical: the loss of any portion of this data during transmission affects the reconstructed video quality at the decoder. The impact is aggravated if the lost data is part of a reference portion used for motion compensated prediction and/or spatial prediction, causing temporal and/or spatial error propagation. Scalable coding can also aggravate error propagation; for example, if the enhancement layer data depends on the base layer, loss of the base layer can render correctly received enhancement layer data useless. Furthermore, synchronization may be lost at the decoder, which, due to context-dependent coding and predictive coding, results in an even larger portion of the video being lost for display. If large portions of the video are lost due to errors, then error detection, resynchronization, and recovery may be difficult or impossible for the decoder application. What is needed is a reliable error control system that includes, at least in part, error detection, resynchronization, and/or error recovery that makes maximum use of the received information.

[Summary of the Invention]

The systems, methods, and devices of this disclosure each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure as expressed by the claims that follow, its more prominent features will now be discussed briefly.

After considering this discussion, and particularly after reading the section entitled "Detailed Description of Certain Aspects," one will understand how the example features of this disclosure provide advantages for multimedia encoding and decoding, including, for example, improved error concealment and/or improved efficiency.

A method of processing multimedia data is provided. The method includes receiving multimedia data; organizing descriptive information about the multimedia data in a first layer, where the descriptive information is related to processing of the multimedia data in a second layer; and providing instructions related to the processing of the multimedia data in the second layer based at least in part on the descriptive information.

An apparatus for processing multimedia data is provided. The apparatus includes a receiver configured to receive multimedia data; an information organizer configured to organize descriptive information about the multimedia data in a first layer, where the descriptive information is related to processing of the multimedia data in a second layer; and an error control decision subsystem configured to provide instructions related to the processing of the multimedia data in the second layer based at least in part on the descriptive information.

A machine-readable medium comprising program code is provided. The program code, when executed on one or more machines, causes the one or more machines to perform program operations. The program code includes code for receiving multimedia data; code for organizing descriptive information about the multimedia data in a first layer, where the descriptive information is related to processing of the multimedia data in a second layer; and code for providing instructions related to the processing of the multimedia data in the second layer based at least in part on the descriptive information.

A method of processing multimedia data is provided. The method includes receiving multimedia data; processing the multimedia data in an upper layer; instructing a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.

An apparatus for processing multimedia data is provided. The apparatus includes a receiver configured to receive multimedia data; an upper-layer decoder subsystem configured to process the multimedia data in an upper layer and to instruct a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and a lower-layer decoder subsystem configured to process the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.

A machine-readable medium comprising program code is provided. The program code, when executed on one or more machines, causes the one or more machines to perform program operations. The program code includes code for receiving multimedia data; code for processing the multimedia data in an upper layer; code for instructing a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and code for processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.

A method of processing multimedia data is provided. The method includes receiving multimedia data; receiving descriptive information about the multimedia data from a first layer, where the descriptive information is related to processing of the multimedia data in a second layer; and processing the multimedia data in the second layer based at least in part on the received descriptive information.

An apparatus for processing multimedia data is provided. The apparatus includes a receiver configured to receive multimedia data, and a decoder configured to receive descriptive information about the multimedia data from a first layer, where the descriptive information is related to processing of the multimedia data in a second layer, and to process the multimedia data in the second layer based at least in part on the received descriptive information.

A machine-readable medium comprising program code is provided. The program code, when executed on one or more machines, causes the one or more machines to perform program operations. The program code includes code for receiving multimedia data; code for receiving descriptive information about the multimedia data from a first layer, where the descriptive information is related to processing of the multimedia data in a second layer; and code for processing the multimedia data in the second layer based at least in part on the received descriptive information.

[Detailed Description]

The following detailed description is directed to certain specific exemplary aspects of this disclosure. Use of the phrases "one aspect," "another aspect," "a further aspect," "an aspect," "some aspects," "certain aspects," and the like is not intended to mean that the various aspects, or elements within them, are mutually exclusive. Accordingly, various aspects and elements of aspects may be omitted and/or combined and still fall within the scope of the application. The various aspects of this disclosure may, however, be embodied in many different ways as defined and covered by the claims. In this description, reference is made to the drawings, wherein like parts are designated with like numerals throughout.

Aspects include systems and methods of improving processing in the encoder and the decoder of a multimedia transmission system. The multimedia data may include one or more of motion video, audio, still images, or any other suitable type of audio-visual data. Aspects include a system and method of decoding multimedia data using an integrated error control system that includes error recovery methods. Robust error control can be provided by a joint encoder-decoder functionality that includes multiple error resilience designs.

測及錯誤定位的端至端整相於達成可靠錯誤伯 機制。亦已發現由在錯誤價測、再同步及復原 《現.猎由在育料處理期間實施某-跨層互動 =成處理效能方面的益處。在另_錄中,跨越可縮放 視訊之基礎層及増強層協調可縮放編碼位线之錯誤 處理。 圖1為說明根據-項態樣之多媒體通信系統1〇〇之功能方 塊圖。系統1〇〇包括一經由網路14〇與一解碼器設備15〇通 L之編碼器设備1丨〇。在一個實例中,編碼器設備自一外 部源102接收多媒體信號並編碼該信號以用於在網路140上 傳輸。 在此實例中,編碼器設備11〇包含一耦接至一記憶體114 及一收發器116的處理器112。該處理器112編碼來自多媒 體資料源之資料並將其提供給收發器1 i 6以用於經由網路 140傳達。 在此實例中,解碼器設備150包含一耦接至一記憶體154 及一收發器156的處理器152。處理器152可包括一通用處 理器及/或一數位信號處理器及/或一特殊應用硬體處理器 120028.doc -11- 200803523The end-to-end phasing of the measurement and error localization is achieved by a reliable error. It has also been found to be beneficial in terms of error price measurement, resynchronization and recovery of the current performance of a certain cross-layer interaction = during processing. In another recording, the error processing of the scalable coded bit lines is coordinated across the base layer and the bare layer of the scalable video. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a functional block diagram showing a multimedia communication system according to an item aspect. The system 1 includes an encoder device 1 that communicates with a decoder device 15 via a network 14A. In one example, the encoder device receives a multimedia signal from an external source 102 and encodes the signal for transmission over the network 140. In this example, the encoder device 11A includes a processor 112 coupled to a memory 114 and a transceiver 116. The processor 112 encodes the data from the multimedia data source and provides it to the transceiver 1 i 6 for communication via the network 140. In this example, the decoder device 150 includes a processor 152 coupled to a memory 154 and a transceiver 156. The processor 152 can include a general purpose processor and/or a digital signal processor and/or a special application hardware processor. 120028.doc -11- 200803523

中之一或多者。記憶體154可包括固態或基於碟片之儲存 器或任何可讀及可寫隨機存取記憶體設備中之一或多者。 收發器156係經組態以經由網路140接收多媒體資料並使其 可用於處理器152以供解碼用。在一個實例中,收發器156 包括一無線收發器。網路14〇可包含有線或無線通信系統 及/或無線系統中之一或多者,其中該有線或無線通信系 統包括乙太網路、電話(例如,P0TS)、電纜、電線及光纖 _ 糸"’先中之一或多者,該無線系統包含分碼多重存取(CDMA 或CDMA2000)通信系統、分頻多重存取(Fdma)系統、諸 如GSM/GPRS(通用封包無線電服務)/EDGE(增強型資料 GSM環境)之分時多重存取(TDMA)系統、tetra(陸地中 、、麈無線電)行動電話糸統、寬頻分碼多重存取(WCdma)系 統、高資料速率(IxEV-DO或1乂£^00金牌多播)系統、 IEEE 802.11系統、MediaFL〇系統、DMB系統、正交分頻 多重存取(OFDM)系統或DVB-Η系統中之一或多者。 φ 因為無線通道經歷隨機位元錯誤與突發錯誤,所以錯誤 復原設計用於有效處理此等錯誤類型兩者。已發現:藉由 使用一整合多層錯誤控制系統,可有效處理兩種類型錯誤 類I已赉現·藉由在應用層處使用空間或時間錯誤隱藏 T有效處理〜響隔離之視訊部分(包括(例如)一或多個圖 元,或甚至包括一或多個實體層封包(PLP)之丟失)的隨機 位元錯誤。然而,借助於嵌入於如下文論述之傳送及同步 層中的錯誤控制模組可更有效地處理導致丟失多個連續 PLP之突發錯誤。 120028.doc •12- 200803523One or more of them. Memory 154 can include one or more of a solid state or disk based storage or any readable and writable random access memory device. Transceiver 156 is configured to receive multimedia material via network 140 and make it available to processor 152 for decoding. In one example, transceiver 156 includes a wireless transceiver. The network 14 may include one or more of a wired or wireless communication system including an Ethernet, a telephone (eg, P0TS), a cable, a wire, and an optical fiber _ 糸" 'One or more of the first, the wireless system includes a code division multiple access (CDMA or CDMA2000) communication system, a frequency division multiple access (Fdma) system, such as GSM/GPRS (General Packet Radio Service) / EDGE Time-division multiple access (TDMA) system (enhanced data GSM environment), tetra (land, 麈 radio) mobile phone system, wideband code division multiple access (WCdma) system, high data rate (IxEV-DO Or one or more of a 金£^00 Gold Medal Multicast) system, an IEEE 802.11 system, a Media FL® system, a DMB system, an Orthogonal Frequency Division Multiple Access (OFDM) system, or a DVB-Η system. φ Because the wireless channel experiences random bit errors and burst errors, the error recovery design is designed to effectively handle both of these error types. It has been found that by using an integrated multi-layer error control system, two types of error classes I can be effectively handled. By using space or time error concealing T at the application layer, the video portion is effectively processed (including ( For example, one or more primitives, or even random bit errors that include the loss of one or more physical layer packets (PLPs). However, burst errors that result in the loss of multiple consecutive PLPs can be handled more efficiently by means of error control modules embedded in the transport and synchronization layers as discussed below. 120028.doc •12- 200803523

圖2為包括諸如圖1中所說明的系統中之編碼器設備110 及解碼器設備150中之一跨層錯誤控制系統的用於劃分任 務的多層協定堆疊之實例的方塊圖。參看圖丨及圖2,諸如 編碼器設備110及解碼器設備150之通信設備可使用一用於 分配處理任務的多層協定堆疊。編碼器設備110及解碼器 設備150中之上層組件可包括多個應用程式,例如視訊或 音訊編碼器及/或解碼器。某些實施例可包括意欲同時解 碼之多個資訊流。在此等狀況下,亦可在上層組件中執行 多個流之同步任務。在編碼器設備11〇中,一上層組件可 提供經由無線網路及/或有線網路14〇傳輸之位元流中的編 碼時序資訊。在解碼器設備150中,上層組件可剖析多個 資訊流’使得相關應用程式大約在同時將其解碼。 編碼器設備110之上層組件分佈於應用層2〇5及同步層 210中之一或多者中。編碼器設備11〇之下層組件分佈於傳 送層215、流及/或媒體存取控制(MAC)層22〇及實體層 中之一或多者中。類似地,解碼器設備15〇之上層組件分 佈於應用層230及同步層235中之一或多者中。解碼器設備 150之下層組件分佈於傳送層240、流及/或媒體存取控制 (MAC)層245及實體層250中之一或多者中。熟習此項技術 者將瞭解此等層且熟悉其中各種任務之分配。應注意,如 本文中所使用的術語"上層"及”下層"為相對術語。舉例而 言’可根據應用層230將同步層235稱為下層,但可根據傳 送層240將其稱為上層。 跨越此實例中之該等層中的每一層提供編碼器設備〗i 〇 120028.doc -13- 2008035232 is a block diagram of an example of a multi-layer protocol stack for partitioning tasks including a cross-layer error control system, such as one of encoder device 110 and decoder device 150 in the system illustrated in FIG. Referring to Figures 2 and 2, a communication device such as encoder device 110 and decoder device 150 can use a multi-layer protocol stack for distributing processing tasks. The upper layer components of encoder device 110 and decoder device 150 may include multiple applications, such as video or audio encoders and/or decoders. Some embodiments may include multiple streams of information that are intended to be decoded simultaneously. In these situations, multiple stream synchronization tasks can also be performed in the upper component. In the encoder device 11, an upper layer component can provide code timing information in the bit stream transmitted over the wireless network and/or the wired network 14〇. In decoder device 150, the upper layer component can parse multiple streams of information so that the associated application decodes it at about the same time. The upper layer components of the encoder device 110 are distributed in one or more of the application layer 2〇5 and the synchronization layer 210. The lower layer components of the encoder device 11 are distributed among one or more of the transport layer 215, the stream and/or the medium access control (MAC) layer 22, and the physical layer. Similarly, the upper layer components of the decoder device 15 are distributed among one or more of the application layer 230 and the synchronization layer 235. The lower layer components of the decoder device 150 are distributed among one or more of the transport layer 240, the stream and/or the medium access control (MAC) layer 245, and the physical layer 250. Those skilled in the art will be aware of these layers and are familiar with the assignment of various tasks. It should be noted that the terms "upper layer" and "lower layer" as used herein are relative terms. For example, the synchronization layer 235 may be referred to as the lower layer according to the application layer 230, but may be referred to according to the transport layer 240. For the upper layer. Encoder device is provided for each of the layers in this example. i 〇120028.doc -13- 200803523

中之一錯誤彈性系統255。編碼器設備ιι〇中之下層袓件可 包括提供錯誤彈性之各種機制。下層組件中提供的此等錯 誤彈性機制可包括—❹個錯㈣制料_、交錯機制 及熟習此項技術者已知之其他機制。解碼器設備15〇中之 下層組件可包括賦能彳貞測並校正錯誤之彳目應錯誤解碼組 件。某些經由有線及/或無線網路14G引人之錯誤可能無法 由解碼器設備15〇之下層組件校正。對於彼等無法校:之 錯誤,諸如由編碼器設備110之下層組件請求重新編 誤部分之解決方案可能對於某些情況係不可行的。 編碼器設備m之上層組件可將與通信之各種層 的、關於多媒體資料之封包的描述性資訊附於標頭中。在 某些實例中’在各種層處執行封包化以允許多個資料流在 編碼過程中被分離(被剖析)並在解碼期間至少部分地使用 由編碼器之各種層添加的標頭資訊將其重新組合。舉例而 言’同步層2U)可添加識別與可同時解碼多種類型封包之 多個解碼H組件相關聯之多種類型封包的標頭資訊。同步 層標頭資訊可包括識別-資料序列時間、—資料序列持二 時間、目標解碼器組件(例如,音訊、視訊及閉合字幕)’ 圖框號碼、封包數目及其他資訊的欄位。在某些實例中, 同步層封包可為可變長度。此可歸因於各種編碼機制,例 如包括可變長度編碼機制之數位壓縮機制。 傳送層215亦可將描述性資訊在傳送標頭中附至傳送層 封包中。傳送層封包可為固定長度,以支援各種錯誤編: 機制、調變機制及使用固定長度封包之其他機_,卜傳送標 120028.doc -14- 200803523 頭可含有識別由一單個同步層封包剖析的傳送層封包之數 目的資訊。若同步層封包為可變長度,則需要含有資料的 傳送層封包之數目亦可為可變的。 在一項態樣中,至少某些包括於傳送及/或同步標頭中 之資訊可包括於一目錄中。該目錄可包括與諸如應用層 205、同步層210、傳送層21 5及其他層之各種層相關的標 頭資訊。可將該目錄傳達至解碼器。資訊可由解碼器設備 用於復原各種錯誤(包括識別錯誤接收之錯誤封包的大 小、識別下一可用封包以再同步及其他)。來自標頭目錄 之標頭資訊可用於替代資料流中之丟失或錯誤的原始標頭 資訊。標頭目錄之其他細節可見於2006年9月25日申請之 且標題為"VIDEO ENCODING METHOD ENABLING HIGHLY EFFICIENT PARTIAL DECODING OF H.264 AND OTHER TRANSFORM CODED INFORMATION”之申請案第 ll/527,022號中,該申請案讓渡給其受讓人且以引用之方 式全部併入本文中。 在此實例中,跨越每一層提供在解碼器設備150中之錯 誤復原系統260。解碼器設備150可包括提供錯誤復原之各 種機制。此等錯誤復原機制可包括下層錯誤偵測及校正組 件(諸如李德-所羅Η編碼(Reed-Solomon coding)及/或渦輪 編碼(Turbo-coding))以及用於替代及/或隱藏不可由下層方 法校正之資料的上層錯誤復原及/或錯誤隱藏機制。應用 層230中之各種錯誤復原組件可得益於可用於諸如同步層 235及傳送層240之下層的資訊。資訊可包含在傳送層標 120028.doc -15- 200803523 頭^步層標頭、標頭目錄(若其可用)中,或該資訊可在 解碼盗處基於對接收之資料的估計而產生。 如上文所論述,編碼器設備110中之錯誤彈性系統加及 解碼器設m5G中之錯誤復原系統形成本文中稱為錯誤 ϋ系統的端至端整合多層錯誤偵測、再同步及復原機 制。現在將論述錯誤控制系統之細節。 應注意··可省略、重新配置、分開及/或組合圖丨及圖2 中所不之編碼器設備丨1〇或解碼器設備15〇中之一或多個元 件。 圖3Α為一說明可用於諸如圖j中所說明之系統丨〇〇之系統 的解碼器設備150之一態樣的功能方塊圖。在此態樣中, 解碼器150包含一接收器元件3〇2、一資訊組織器元件 3〇4、一錯誤控制決策元件3〇6及一多媒體解碼器元件 308 〇 該接收器302接收經編碼之視訊資料(例如,由圖1之編 碼器110編碼的資料)。接收器3〇2可經由諸如圖1之網路 140的有線或無線網路接收經編碼之資料。在一項態樣 中’接收之資料包括表示源多媒體資料之變換係數。將變 換係數變換成一其中相鄰樣本之相關性顯著減少的域。舉 例而言,影像通常在空間域中展現一高度的空間相關性。 另一方面,變換係數通常彼此正交,從而展現零相關。可 用於多媒體資料之變換的某些實例包括(但不限於): DCT(離散餘弦變換)、DFT(離散傅立葉變換)、哈德瑪得 (Hadamai:d)(或沃爾什-哈德瑪得(Walsh-Hadamard))變換、 120028.doc -16- 200803523 離散子波變換、DST(離散正弦變換)、哈爾(細)變換、斜 變換、KL(卡,忽南-拉維(Karhunen_L〇eve))變換及諸如 H.264中所使用的整數變換n變㈣於變換多媒體樣 本之矩陣或陣列。通常使用二維矩陣,但亦可使用—維陣 列。 接收之資料亦包括指示如何編碼經編碼之區塊的資訊。 此資訊可包括諸如運動向量及圖框序號之框間編碼參考資 訊,及包括區塊大小及空間預測方向性指示符的框内 參考資訊及其他。某-接收之資料包括指示如何由某一位 元數來估計每-變換係數的量化參數、指示經變換之矩陣 中多少變換係數不為零的非零指示符及其他。 貧訊組織器元件304自位元流收集關於多媒體資料之描 述性資訊。在-項‘態樣中,f訊組織器3〇4解譯傳送及同 步層標頭資料以供進-步處理。傳送標頭可經處理以確定 圖框及超級圖框邊界,其中超級圖框為通常可獨立解碼之 -組圖框。超級圖框可包括覆蓋—範圍在約Q2秒至約2〇 秒之間之固定時間週期的圖框。超級圖框大小可經選擇以 允許-合理賴取時間。傳送標頭亦可經處理以確定圖框 長度及位元流中之圖框的字組偏移以處理自流/ m A c層接 收的錯誤PLP。同步層標頭可經處理以:提取圖框號碼並 解譯基礎及增強圖框、在錯誤情況下提取插人呈現時間戮 記(presentation time stamp)所需之圖框速率及/或插入並得 到經由圖框速率上轉換(FRUC)而插入之圖框的pTs。同步 標頭亦可經處理以:提取視訊圖框之呈現時間戳記以與相 120028.doc • 17· 200803523 關聯之音訊圖框同步;在導致解竭器中丟失同步的錯誤情 況下提取隨機存取點位置以標記下一再同步點。若如上文 所論述之標頭目錄可用,則資訊組織器3〇4亦可自該標頭 目錄收集資訊。One of the errors in the elastic system 255. The lower layer of the encoder device ιι〇 can include various mechanisms for providing error resilience. Such error resilience mechanisms provided in the underlying components may include - erroneous (four) recipes, interlacing mechanisms, and other mechanisms known to those skilled in the art. The lower component of the decoder device 15 can include an error-correcting component that enables the guessing and correction of errors. Some errors introduced via the wired and/or wireless network 14G may not be corrected by the lower layer components of the decoder device 15. For their inability to correct: a solution such as requesting a recompilation portion by a component below the encoder device 110 may not be feasible in some cases. 
The layer component above the encoder device m can attach descriptive information about the packets of the multimedia material to the various layers of the communication in the header. In some instances 'encapsulation is performed at various layers to allow multiple streams to be separated (parsed) during encoding and to be at least partially used during decoding by header information added by various layers of the encoder Regroup. For example, the 'synchronization layer 2U' may add header information identifying a plurality of types of packets associated with a plurality of decoding H components that can simultaneously decode multiple types of packets. The synchronization layer header information may include fields of identification-data sequence time, data sequence duration, target decoder components (e.g., audio, video, and closed captions) frame number, number of packets, and other information. In some instances, the synchronization layer packet can be of variable length. This can be attributed to various encoding mechanisms, such as digital compression mechanisms including variable length encoding mechanisms. Transport layer 215 may also attach descriptive information to the transport layer packet in the transport header. The transport layer packet can be fixed length to support various error coding: mechanism, modulation mechanism and other machines using fixed length packets _, 卜 transmit standard 12028.doc -14- 200803523 header can contain identification by a single synchronization layer packet parsing Information on the number of transport layer packets. If the synchronization layer packet is of variable length, the number of transport layer packets that need to contain data may also be variable. In one aspect, at least some of the information included in the transmit and/or sync headers can be included in a directory. The directory may include header information associated with various layers such as application layer 205, synchronization layer 210, transport layer 215, and other layers. This directory can be communicated to the decoder. The information can be used by the decoder device to recover various errors (including identifying the size of the error packet received by the error, identifying the next available packet for resynchronization, and others). Header information from the header directory can be used to replace missing or erroneous original header information in the data stream. Further details of the header directory can be found in application No. ll/527,022, filed on September 25, 2006, entitled "VIDEO ENCODING METHOD ENABLING HIGHLY EFFICIENT PARTIAL DECODING OF H.264 AND OTHER TRANSFORM CODED INFORMATION" The application is assigned to its assignee and is hereby incorporated by reference in its entirety. In this example, error recovery system 260 in decoder device 150 is provided across each layer. Decoder device 150 may include providing error recovery Various mechanisms. These error recovery mechanisms may include underlying error detection and correction components (such as Reed-Solomon coding and/or Turbo-coding) and for replacement and/or Or hiding upper error recovery and/or error concealment mechanisms of data that are not calibrated by the underlying method. The various error recovery components in application layer 230 may benefit from information that may be used, for example, between synchronization layer 235 and transport layer 240. 
Included in the transport layer standard 12028.doc -15- 200803523 header layer header, header directory (if available), or the information can be decoded in the pirate Based on the estimation of the received data. As discussed above, the error resilience system in the encoder device 110 and the error recovery system in the decoder set m5G form an end-to-end integrated multi-layer error referred to herein as an error system. Detection, resynchronization and recovery mechanisms. Details of the error control system will now be discussed. It should be noted that the encoder device can be omitted, reconfigured, separated and/or combined with the encoder and the decoder shown in Figure 2. One or more of the elements of the device 15A. Figure 3A is a functional block diagram illustrating one aspect of a decoder device 150 that may be used in a system such as the one illustrated in Figure j. In this aspect The decoder 150 includes a receiver component 〇2, an information organizer component 〇4, an error control decision component 〇6, and a multimedia decoder component 308. The receiver 302 receives the encoded video material ( For example, the data encoded by the encoder 110 of Figure 1. The receiver 〇2 can receive the encoded material via a wired or wireless network, such as the network 140 of Figure 1. In one aspect, the data received package Transform coefficients representing source multimedia data. Transform transform coefficients into a domain in which the correlation of adjacent samples is significantly reduced. For example, images typically exhibit a high degree of spatial correlation in the spatial domain. Orthogonal to each other to exhibit zero correlation. Some examples of transforms that can be used for multimedia data include (but are not limited to): DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform), Hadmai (d) (Hadamai:d) Or Walsh-Hadamard transformation, 120028.doc -16- 200803523 Discrete wavelet transform, DST (discrete sine transform), Hal (fine) transform, oblique transform, KL (card, The Karhunen_L〇eve transform and the integer transform used in H.264, n, are used to transform a matrix or array of multimedia samples. A two-dimensional matrix is usually used, but a dimensional array can also be used. The information received also includes information indicating how to encode the coded block. This information may include inter-frame coded reference information such as motion vectors and frame numbers, as well as in-frame reference information including block size and spatial prediction directional indicators and others. The certain-received data includes a quantization parameter indicating how to estimate the per-transform coefficient from a certain number of bits, a non-zero indicator indicating how many transform coefficients in the transformed matrix are not zero, and others. The poor organizer component 304 collects descriptive information about the multimedia material from the bitstream. In the "item" aspect, the f-organizer 3〇4 interprets the transmission and synchronization layer header data for further processing. The transfer header can be processed to determine the frame and superframe boundaries, where the superframe is a group frame that can usually be decoded independently. The superframe may include a cover-frame of a fixed time period ranging from about Q2 seconds to about 2 seconds. The super frame size can be selected to allow - reasonable time. 
The transmit header can also be processed to determine the frame length and the block offset of the frame in the bitstream to handle the error PLP received by the self-flow/m A c layer. The synchronization layer header can be processed to: extract the frame number and interpret the base and enhance the frame, extract the frame rate required for the presentation time stamp and/or insert and obtain in the case of an error. The pTs of the frame inserted via frame rate up conversion (FRUC). The synchronization header can also be processed to: extract the presentation timestamp of the video frame to synchronize with the audio frame associated with the phase 12028.doc • 17·200803523; extract random access in the event of an error that causes synchronization in the decompressor Point location to mark the next resynchronization point. If the header directory as discussed above is available, the information organizer 3〇4 can also collect information from the header directory.

除自標頭及標頭目錄收集資訊外,資訊組織器3〇4亦可 產生關於視訊資料之描述性資訊。各種標頭總和檢查碼、 有效負載總和檢查碼及錯誤控制機制可皆用於識別資料之 哪一部分係錯誤的。所產生之資訊可包括識別資料之此等 錯誤部分的賴。錯誤資料可為—錯誤分佈量㈣一錯誤 率量測。可在自圖框層至切片層(切#為―組經編碼之圖 疋區塊)、圖元區塊層或甚至一圖元層的任一層上組織錯 誤資料。此等類型之關於錯誤資料的描述性資訊可用於定 位並確定錯誤範圍。下文中將論述可由資訊組織器则識 J 、、扁澤收集、保持、旗標表示或產生的資訊之類型的 細節。 抑在一項態樣中,錯誤控制決策元件3〇6使用由資訊組織 器3〇4收集及/或產生的描述性資訊(例如,以表格形式儲 存)以提供與多媒體資料之處理相關的指令。錯誤控制決 策兀件306分析描述性資訊,以便定位錯誤並確定視訊之 那些部分被影響及此等部分錯誤至何程度。使用此資訊, 、曰誤控制决策元件3〇6可確定一用於處理該等錯誤條件之 錯誤控制方法。在另一態樣中,錯誤控制決策元件自 上層接收反饋資訊。該反饋資訊可包括與上層中多媒體的 處理相關聯之資訊。該反饋資訊可包括向上傳遞至上層的 120028.doc -18· 200803523 描述性資訊中之不正確的資訊。此資訊可用於校正儲存於 下層中之表。另外,反饋資訊可包括處理時間、處理動 作、處理狀態及其他資訊。可由錯誤控制決策元件3〇6來 分析此類資訊以確定如何指示上層。 錯誤控制決策元件306分析已收集之資訊,以決定當多 媒體資料轉遞至上層時上層應如何處理該多媒體資料。決 策可包括選擇若干錯誤控制方法中之一或多者。錯誤控制 _ 方法可包括視訊資料之錯誤部分的空間及/或時間錯誤隱 藏。錯誤控制方法亦可包括錯誤復原技術,其中將錯誤資 料分析為可基於可用於上層應用的内容或其他資訊而以某 一方式作出補救。可使用的時間錯誤隱藏之極形㈣⑽邮 form)已知為圖框速率上變換或fruc。fruc基於其他圖 汇(通兩為;^跨待建構之圖框的兩個圖框)建構一新圖框。 當貝料之錯誤部分(例如,經確定可視情況而隱藏的一圖 P刀 單個圖框或大量圖框)處於一可管理的程 _ 度時,錯誤控制決策元件3〇6可指示上層使用$間及/或時 間錯誤隱藏、錯誤復原或FRUC,以及其他錯誤控制機 制。然而,若錯誤資料之範圍太廣,則錯誤控制元件可指 丁上層跳過錯决部分之解碼。下文論述錯誤控制決策元件 306確疋如何指示上層時所使用的細節。 夕媒體解碼益疋件3〇8執行與可包括音訊、視訊閉合字 =二他的夕媒體位元流之解勒關的功能。多媒體解碼 次:對應於用於編碼資料的編碼操作之逆操作。經編碼 可為框間編碼貪料(例如,時間預測之資料)及/或框 120028.doc -19 - 200803523 内編石馬資料。參看圖2,可在多層(諸如,傳送層240 /同 步層235及應用層25〇)處執行由多媒體解碼器3〇8執行的功 月b。傳达層功能可包括用於校正錯誤並識別不可校正錯誤 的錯鈇偵測及校正機制。可將經識別之不可校正錯誤傳達 至資訊組織器304以將其包括於如上文所論述之描述性資 吾中同步層功能可包括緩衝多個位元流之接收的資料直 有同v >料準備被解碼為止。此時,將所有同步資料 φ #遞至應用層解碼器以進行幾乎同時解碼。應用層功能可 包括音訊、視訊及閉合字幕位元流的解壓縮。各種解壓縮 功月b可包括解量化及用於重建視訊資料之逆變換。在一項 態樣中,在資訊組織器304及錯誤控制決策元件3〇6已執行 上=所論述之功能後,視訊解碼器元件308之應用層以解 碼次序每次一圖框地接收視訊圖框。 在某些態樣中,可省略、重新配置及/或組合圖从之解 碼器150的元件中之一或多者。該等元件可以硬體、軟 ⑩ 冑_體、中間體、微碼或其任何組合形式來實施。下文 將根據圖5A至圖5C中所說明之方法來論述由解碼器15〇之 元件執行的動作之細節。 圖3B為一說明可用於一諸如圖i中所說明之系統的解碼 器=備之電腦處理器系統之實例的方塊圖。此實例之解碼 器认備150包括一預處理器元件32〇、一隨機存取記憶體 (RAM)兀件322、一數位信號處理器(DSp)元件μ4及一視 訊核心元件326。 該預處理器320用於一項態樣中以執行由圖从中之各種 120028.doc 200803523 :所執仃的動作中之-或多個動作。預處理器剖析視訊 位兀流並將資料寫入至讀322。另外,在一項態樣中, 預處理器320實施資訊組織器234、錯誤控制決策元件3〇6 =多媒體解竭器3G8之預處理部分(例如,錯誤隱藏、錯誤 後原等動作。藉由在預處理器32g中執行此等更有效、 更少計算強度之動作’可在高效視訊核心326中以因果次 序執行更多汁算強度之視訊解碼。 DSP 324擷取儲存於RAM 322中之經剖析的視訊資料並 ::重新',且織以由視訊核心326來處理。視訊核心W6執行 解里化(亦已知為再縮放或縮放)、逆變換及解區塊功能以 及其他視訊解壓縮功能。視訊核心通常以高度最佳化及流 水線方式來實施。歸因於此,當關果次序料視訊資料 時:該視訊資料可以最快方式被解碼。藉由在預處理器中 執行無序剖析、錯誤偵測、資訊組織或錯誤控制,在慮及 改良之總解碼效能的情況下,對於視訊核心中的解碼保持 因果次序。 如上文所論述,為了錯誤控制,資訊組織器元件可 收集描述性資訊、將其組織成一表並將該表轉遞至上層。 描述性貧訊之一來源為附加至各種封包化層之封包的各種 標頭。圖4展示多層封包機制之一實例的說明。此實例封 包機制用於解釋錯誤控制系統之某些態樣,但亦可使用其 他封包機制。傳送層及同步層為一定框(framing)&總和檢 查碼協定。彼等層其提供一分層機制以在各種層級處(包 括(例如)在一超級圖框(選定數目之經處理的圖框)層級 120028.doc 200803523 處、在一視訊存取單元(VAU)層級處或在一 PLP層級處)偵 測錯誤。因此,可在此等層中之任一層或所有層處執行有 效錯誤定位。包含一單個視訊圖框之VAU在同步層封包上 方之應用層處提供完整性檢查之額外層級。 在此實例中,應用層封包405 A及405B可為固定及/或可 變長度封包。應用層封包405A及405B各自可為一完整的 視訊圖框或VAU。同步層附加一同步層標頭(SH)410至每 一應用層封包405A及405B,從而導致產生同步層封包 406八及4063(圖4中之同步層封包406人及4066包括一同步 層標頭410並分別包括應用層封包405 A及405B)。接著將同 步層封包406 A及406B輸入至傳送層。在此實例中,傳送 層封包為固定長度。傳送層將同步層封包拆分成對應於傳 送層封包大小的部分並將傳送層標頭(TH)415附加至所得 傳送層封包。傳送層亦可將一圖框總和檢查碼(FCS)附加 至每一同步層封包(未圖示)。FCS可用於偵測同步層封包 中之錯誤。在此實例中,將包含應用層封包405A之同步層 封包406A分成兩個傳送層封包420A及420B,其中封包 4206包括同步層封包406人之剩餘部分4258及同步層封包 406B之第一部分425C。在此實例中,在下一同步層封包 406B開始之前,將額外傳送層標頭415附加至傳送層封包 420B之部分425C。一第三傳送層封包420D含有同步層封 包406B之下一部分425D。 同步層標頭410及傳送層標頭415可含有目標為使一解碼 器能夠重新組合同步層封包及應用層封包的相似資訊。一 120028.doc -22- 200803523 &頭y包括諸如封包尺寸、封包數目、封包中之標頭的位 置、資料序列時間、資料序列持續時間、圖框時間、圖框 w馬&機存取點旗標、圖框速率及/或一組中之相關聯 封包之數目的資訊。另外,標頭資訊可包括將相關聯封包 識別為屬於視訊位元流、音訊位元流及/或閉合字幕位元 流的流識別資訊。現將論述傳送及同步層標頭之一特定實 例。 、 φ 傳送層之一功能為提供一優於流/MAC層之基於八位元 組之服務的封包服務。傳送層亦提供在存在實體層錯誤的 險況下確定其有效負載封包(圖4中所示之實例中的VAU)之 •邊I的機制。多個定框協定可與傳送層相關聯使用。與傳 ^層相關聯之定框協定指定用於組合其有效負載封包之規 貝建立待傳遞至應用層中之解碼器的封包。定框協定亦 才曰疋用於處理PLP錯誤及可由解碼器所預期的結果行為之 規則。 • 、表1中給_出傳送層標頭415中之某些欄位的一例示性格 式在此只例中,定框協定規則提供一 122字組固定長度 
之PLP除扣不有效負載(此實例中之VAU)之開頭及末端 以外’傳送標頭亦用於輸送錯誤PLP至上層。 表1 攔位 LENGTH LAST 表1中之傳送標 類型 UNIT ⑺ BIT(1) 頭為一子組長。 範圍 0-121 0/1 七位元LENGTH欄位指 120028.doc -23- 200803523 示以字組計的有效負載之長度且具有一自0至121個字組 (由於PLP為122個字組長且標頭為一個字組,因此最大值 為121)的範圍。設定為一之LAST攔位指示此傳送層封包 含有VAU之最後片斷。在此實例中,若PLP經確定為錯誤 的(如由總和檢查碼及/或錯誤校正機制中之一或多者所確 定),則傳送層將LENGTH欄位之值設定為122,從而將整 個PLP標記為不可用於其所轉遞至的上層。 表2中給出同步層標頭410中之某些欄位的例示性格式。 同步層封包形成對應於視訊的傳送層之有效負載。在一個 實例中,視訊之圖框形成一同步層封包。在表2中所示之 實例中,同步層封包標頭410為一固定4字組標頭且相應同 步層封包為對應於一個視訊圖框之可變長度有效負載。包 括於表2之同步標頭欄位中的資訊可包括諸如以下資訊的 資訊:視訊圖框類型、圖框速率、呈現時間戳記、隨機存 取旗標、超級圖框中之圖框號碼,及資料是否與一基礎或 增強層位元流相關聯及其他。 表2 攔位名稱 攔位類型 描述 Stream一ED UNIT(2) 00-視訊;01-音訊;10-閉合字幕 PTS UINT(14) 呈現時間戳記 Frame一ID FRAMEJD—TYPE ⑺ Frame一Number 及 Enhancement一Flag Frame—Number : SF中的當前圖框號碼 Enh. Flag:0-基礎; 1-增強層 RAP 一FLAG BIT(l) 隨機存取點 1 PAD FRAME_RATE UINT(3) 1-KAJr 000-15 fps,001-30 〇>s 等 RESERVED UINT(5) 保留位元 120028.doc -24- 200803523In addition to collecting information from the header and header catalogs, the information organizer 3〇4 can also generate descriptive information about the video material. Various header sum check codes, payload sum check codes, and error control mechanisms can be used to identify which part of the data is incorrect. The information generated may include the identification of such erroneous portions of the data. The error data can be - error distribution amount (four) - error rate measurement. Error data can be organized on any layer from the frame layer to the slice layer (cut # is a group coded block), a primitive block layer, or even a primitive layer. These types of descriptive information about the error data can be used to locate and determine the scope of the error. Details of the types of information that can be collected, maintained, flagged, or generated by the information organizer, J., Bianze, will be discussed below. In one aspect, the error control decision element 〇6 uses descriptive information collected and/or generated by the information organizer 〇4 (eg, stored in a table format) to provide instructions related to the processing of the multimedia material. . The error control decision component 306 analyzes the descriptive information to locate errors and determine which portions of the video are affected and to what extent these partial errors are. Using this information, the error control decision element 3〇6 can determine an error control method for handling such error conditions. In another aspect, the error control decision component receives feedback information from the upper layer. The feedback information may include information associated with the processing of multimedia in the upper layer. This feedback may include up-to-date information in the descriptive information of the upper layer of 120028.doc -18· 200803523. This information can be used to correct the tables stored in the lower layer. In addition, feedback information can include processing time, processing actions, processing status, and other information. Such information can be analyzed by the error control decision element 3〇6 to determine how to indicate the upper layer. The error control decision component 306 analyzes the collected information to determine how the multimedia material should be processed when the multimedia material is forwarded to the upper layer. The decision may include selecting one or more of a number of error control methods. The error control _ method may include spatial and/or temporal error concealment of the error portion of the video material. 
The error control method can also include an error recovery technique in which the error data is analyzed to be remedied in a manner that can be based on content or other information available to the upper application. The available time error concealed pole shape (4) (10) mail form) is known as frame rate up-conversion or fruc. Fruc builds a new frame based on other graphs (two through; ^ two frames across the frame to be constructed). The error control decision element 3〇6 may indicate that the upper layer uses $ when the wrong portion of the bead material (for example, a picture P-single frame or a large number of frames that are determined to be visually hidden) is in a manageable range. Inter- and/or time error concealment, error recovery or FRUC, and other error control mechanisms. However, if the scope of the erroneous data is too wide, the error control element can refer to the decoding of the upper part skipping the wrong part. The error control decision element 306 is discussed below to determine how to use the details of the upper layer. The eve media decoding benefit 3〇8 performs and can include audio, video closed words = two of his eve media bit stream solution. Multimedia Decoding Secondary: Corresponds to the inverse of the encoding operation used to encode the material. Encoded may be inter-frame coding (for example, time prediction data) and/or framed stone information in the box 120028.doc -19 - 200803523. Referring to Fig. 2, the power b performed by the multimedia decoder 〇8 can be performed at a plurality of layers such as the transfer layer 240/synchronization layer 235 and the application layer 25A. The communication layer function can include error detection and correction mechanisms for correcting errors and identifying uncorrectable errors. The identified uncorrectable error can be communicated to the information organizer 304 to include it in the descriptive resources as discussed above. The synchronization layer function can include buffering the receipt of the plurality of bitstreams directly with v > The material is ready to be decoded. At this time, all the synchronization data φ # is passed to the application layer decoder for almost simultaneous decoding. Application layer functions include decompression of audio, video, and closed caption bitstreams. The various decompressions can include dequantization and inverse transformations used to reconstruct the video data. In one aspect, after the information organizer 304 and the error control decision element 〇6 have performed the functions discussed above, the application layer of the video decoder component 308 receives the videogram at a frame in decoding order. frame. In some aspects, one or more of the elements of decoder 100 may be omitted, reconfigured, and/or combined. The elements can be implemented in the form of a hard body, a soft body, an intermediate, a microcode, or any combination thereof. Details of the actions performed by the elements of the decoder 15 will be discussed below in accordance with the methods illustrated in Figures 5A-5C. Figure 3B is a block diagram showing an example of a computer processor system that can be used in a decoder such as the one illustrated in Figure i. The decoder component 150 of this example includes a preprocessor element 32A, a random access memory (RAM) component 322, a digital signal processor (DSp) component μ4, and a video core component 326. 
The pre-processor 320 is used in an aspect to perform - or multiple actions in the actions performed by the various 120028.doc 200803523 from the figure. The preprocessor parses the video stream and writes the data to read 322. In addition, in one aspect, the preprocessor 320 implements the information organizer 234, the error control decision element 3〇6 = the pre-processing part of the multimedia decompressor 3G8 (for example, error concealment, error after the original action, etc.) Performing such more efficient, less computational intensive actions in the pre-processor 32g can perform more video decoding of the intensity in the causal order in the efficient video core 326. The DSP 324 retrieves the memory stored in the RAM 322. The parsed video data is: "re-', and is processed by the video core 326. The video core W6 performs de-living (also known as re-scaling or scaling), inverse transform and deblocking functions, and other video decompression. Function: The video core is usually implemented in a highly optimized and pipelined manner. Due to this, when the video sequence is recorded, the video data can be decoded in the fastest way. By performing disorder in the preprocessor Parsing, error detection, information organization, or error control maintains a causal order for decoding in the video core, taking into account the improved overall decoding performance. As discussed above, for the sake of error Control, the information organizer component collects descriptive information, organizes it into a table and forwards the table to the upper layer. One source of descriptive poverty is the various headers attached to the packets of various packetized layers. Figure 4 shows Description of an example of a multi-layered packet mechanism. This example packetization mechanism is used to explain some aspects of the error control system, but other packet mechanisms can also be used. The transport layer and the synchronization layer are framing & checksum protocol The layers provide a layering mechanism at various levels (including, for example, at a super frame (selected number of processed frames) level 12,026.doc 200803523, in a video access unit (VAU) Detecting errors at the level or at a PLP level. Therefore, effective error location can be performed at any or all of these layers. The application layer of the VAU containing a single video frame above the synchronization layer packet An additional level of integrity check is provided. In this example, application layer packets 405 A and 405B may be fixed and/or variable length packets. Application layer packets 405A and 405B may each be a The entire video frame or VAU. The synchronization layer adds a synchronization layer header (SH) 410 to each application layer packet 405A and 405B, resulting in synchronization layer packets 406 and 4063 (the synchronization layer packet 406 in FIG. 4) And 4066 includes a synchronization layer header 410 and includes application layer packets 405 A and 405B respectively. The synchronization layer packets 406 A and 406B are then input to the transport layer. In this example, the transport layer packet is of a fixed length. The synchronization layer packet is split into portions corresponding to the size of the transport layer packet and a transport layer header (TH) 415 is attached to the resulting transport layer packet. The transport layer may also attach a frame sum check code (FCS) to each sync. Layer package (not shown). FCS can be used to detect errors in the synchronization layer packet. 
In this example, the synchronization layer packet 406A containing the application layer packet 405A is divided into two transport layer packets 420A and 420B, wherein the packet 4206 includes the remaining portion 4258 of the synchronization layer packet 406 and the first portion 425C of the synchronization layer packet 406B. In this example, an additional transport layer header 415 is appended to portion 425C of transport layer packet 420B prior to the start of next sync layer packet 406B. A third transport layer packet 420D contains a portion 425D below the sync layer packet 406B. Synchronization layer header 410 and transport layer header 415 may contain similar information targeted to enable a decoder to recombine synchronization layer packets and application layer packets. A 120028.doc -22- 200803523 & head y includes such as packet size, number of packets, location of the header in the packet, data sequence time, data sequence duration, frame time, frame w horse & machine access Information on the number of points, the frame rate, and/or the number of associated packets in a group. Additionally, the header information can include stream identification information identifying the associated packet as belonging to the video bitstream, the audio bitstream, and/or the closed caption bitstream. A specific example of a transport and synchronization layer header will now be discussed. One of the functions of the φ transport layer is to provide a packet service that is superior to the octet-based service of the stream/MAC layer. The transport layer also provides a mechanism for determining the edge I of its payload packet (the VAU in the example shown in Figure 4) in the presence of a physical layer error. Multiple framing protocols can be used in association with the transport layer. The framing protocol associated with the transport layer specifies a protocol for combining its payload packets to establish a packet to be delivered to the decoder in the application layer. The boxing agreement is also used to deal with PLP errors and the rules of behavior that can be expected by the decoder. • An exemplary format for some of the fields in the transport layer header 415 in Table 1. In this example, the framing protocol provides a 122-character fixed-length PLP de-assert payload (this In addition to the beginning and end of the VAU in the example, the 'transfer header' is also used to transport the wrong PLP to the upper layer. Table 1 Block LENGTH LAST The label in Table 1 Type UNIT (7) The BIT(1) header is a subgroup length. Range 0-121 0/1 Seven-bit LENGTH field refers to 12028.doc -23- 200803523 shows the length of the payload in words and has a number from 0 to 121 words (since the PLP is 122 words long and The header is a block, so the maximum is 121). The LAST check bit set to one indicates that this transport layer packet contains the last fragment of the VAU. In this example, if the PLP is determined to be erroneous (as determined by one or more of the sum check code and/or the error correction mechanism), the transport layer sets the value of the LENGTH field to 122, thereby The PLP is marked as unavailable for the upper layer to which it is forwarded. An illustrative format for certain fields in the synchronization layer header 410 is given in Table 2. The synchronization layer packet forms a payload corresponding to the transport layer of the video. In one example, the video frame forms a synchronization layer packet. 
In the example shown in Table 2, the sync layer packet header 410 is a fixed 4-word header and the corresponding sync layer packet is a variable length payload corresponding to a video frame. The information included in the sync header field of Table 2 may include information such as video frame type, frame rate, presentation time stamp, random access flag, frame number in the super frame, and Whether the data is associated with a base or enhancement layer bit stream and others. Table 2 Block Name Block Type Description Stream-ED UNIT(2) 00-Video; 01-Audio; 10-Closed Caption PTS UINT(14) Presentation Timestamp Frame-ID FRAMEJD-TYPE (7) Frame-Number and Enhancement-Flag Frame—Number : Current frame number in SF Enh. Flag: 0-base; 1-Enhancement layer RAP-FLAG BIT(l) Random access point 1 PAD FRAME_RATE UINT(3) 1-KAJr 000-15 fps,001 -30 〇>s and other RESERVED UINT(5) reserved bits 12028.doc -24- 200803523

Stream一ID欄位用於私示複數個多媒體流中 <有效負載 資料與其相關聯的一者(例如,音訊、視訊、閉人^ 料等)。PTS欄位用於指示可用於同步音訊、禎 ’ 疋δ礼等之呈現 時間。Frame—ID攔位包括一循環圖框號碼(例如 一 /個表示 圖框0-127的位元元)部分及一指示資料為基礎層或為辦強 層資料的增強位元。若未使用可縮放編碼,則可省略增強 位元。RAP一FLAG欄位用於指示圖框是否可由一解碼設備The Stream-ID field is used to privately display the <one of the payload data associated with the plurality of multimedia streams (e.g., audio, video, closed, etc.). The PTS field is used to indicate the presentation time available for synchronizing audio, ’ 疋 疋 礼, etc. The Frame_ID block includes a loop frame number (e.g., a bit representing the frame 0-127) and an enhancement bit indicating that the data is the base layer or the strong layer data. If no scalable encoding is used, the enhancement bit can be omitted. The RAP-FLAG field is used to indicate whether the frame can be decoded by a device.

用作-隨機存取點。可轉考任何其他先前或將來圖^或 視訊之其他部分而解碼隨機存取點。腸则―反細搁位指 示複數個可能之圖框速率中之一者。圖框速率範圍可自每 移約15個圖框或更低變至每秒約60個圖框或更高。 SERVED欄位可用於傳達熟習此項技術者可發丨見益處的 其他類型的資訊。 除傳迗‘頭貪訊及同步標頭資訊以外,用於資訊組織元 =之描述性賫訊的另_來源可為—標頭目錄(如上文所描 述)払碩目錄為作為(在一個實例中)與視訊及/或音訊位 元流分開之套次%, 万貝訊(side information)傳輸之複製標頭資訊 的表。標頭目錄資訊諸如表3中所列出。 120028.doc -25- 200803523 表3 攔位名稱 MESSAGEJD MEDIA一TYPE NUM_VSL^RECORDS VSL_REC0RDsUsed as a - random access point. The random access point can be decoded by referring to any other previous or future picture or other part of the video. Intestines—reverse placements indicate one of a number of possible frame rates. The frame rate range can be changed from about 15 frames per frame or lower to about 60 frames per second or higher. The SERVED field can be used to convey other types of information that can be seen by those skilled in the art. In addition to the confession of 'head greed and synchronization header information, another source of descriptive information for the information organization element= can be the header directory (as described above) as a directory (in one instance) Medium) A set of copies of the header information separated by the video and/or audio bit stream, and a copy of the header information transmitted by the side information. Header directory information is listed in Table 3. 120028.doc -25- 200803523 Table 3 Block Name MESSAGEJD MEDIA TYPE NUM_VSL^RECORDS VSL_REC0RDs

攔位類型 UINT(8) UINT(2) UINT(l) VSL一RECORD一TYPE RAP-FLAG_BITS BIT(60) B_FRAME_FLAG_BITS BIT(60) RESERVED BIT(3)Intercept Type UINT(8) UINT(2) UINT(l) VSL-RECORD-TYPE RAP-FLAG_BITS BIT(60) B_FRAME_FLAG_BITS BIT(60) RESERVED BIT(3)

攔位值 5:視訊同步目錄 0:視訊同步目錄訊息 0:1 VSL一records; 1:2 VSL—records VSL_record 包含, 一 1. 圖框速率 2. 圖框號碼 3. 第一圖框PTS 4. 最後圖框PTS SF中之RAP圖框位置位元映射 SF中之B圖框位置位元映射 TBDBlock value 5: Video sync directory 0: Video sync directory message 0: 1 VSL-records; 1:2 VSL-records VSL_record contains, 1. Frame rate 2. Frame number 3. First frame PTS 4. The last frame PTS SF in the RAP frame position bit map SF B frame position bit map TBD

The header directory may be transmitted as a variable-length payload. Much of its information is a copy of information already carried in the various headers of the packetization scheme (e.g., frame rate, presentation time stamps, random access points). Additional information may be included, however. This additional information may include a B_FRAME_FLAG_BITS field indicating the positions of B frames within the superframe. A superframe typically begins with an independently decodable frame, such as an intra-coded frame. The other frames in the superframe typically comprise unidirectionally predicted portions (referred to herein as P frame portions, or simply P frames) and bidirectionally predicted portions (referred to herein as B frame portions, or simply B frames). In the example of Table 3, the random access points within the superframe are mapped into the RAP_FLAG_BITS field.

The header directory thus provides header information plus additional information about the positions of certain frames (e.g., B frames) within the superframe. This information can be used to replace header information lost due to errors, and it enables the information organizer component 304 to determine the likely identity of erroneous portions of data that could not otherwise be identified.

FIG. 5A is a flowchart illustrating an example of a method of processing multimedia data in a system such as that illustrated in FIG. 1. Process 500 begins at block 505, where the decoder device receives encoded multimedia data. The encoded multimedia data may be in the form of compressed data associated with a multimedia bitstream. The decoder device may receive the multimedia data over wired and/or wireless networks such as the network 140 shown in FIG. 1. The multimedia data may comprise multiple synchronized and/or unsynchronized bitstreams, including but not limited to audio, video, closed captioning, and the like. The multimedia data may comprise multiple packetization layers, including application layer packets, synchronization layer packets, and transport layer packets, and each of these layers may include header information as discussed above, such as the information listed in Tables 1 and 2. The video data may be arranged in portions such as frames, slices, and blocks of pixels, and frames may be grouped into superframes. The received multimedia data may also include a header directory as discussed above, containing information such as that listed in Table 3, and it may be encoded in scalable layers such as a base layer and an enhancement layer. The receiver component 302 of the decoder device 150 in FIG. 3A may perform these functions at block 505.

After the multimedia data is received at block 505, process 500 continues at block 510, where the decoder device organizes descriptive information about the received multimedia data. As discussed above with reference to FIG. 3A, the information organizer component 304 collects descriptive information about the multimedia data from the bitstream. Transport headers may be processed to determine frame and superframe boundaries, as well as frame lengths and their locations within the bitstream. Synchronization layer headers may be processed to extract frame numbers, to interpret base layer and enhancement layer information for scalable bitstreams, to extract the frame rate, and/or to interpolate and obtain the presentation time stamp (PTS) of a frame; they may also be processed to extract presentation time stamps and random access points. If a header directory as discussed above was received at block 505, the information identified, compiled, collected, maintained, flagged, or generated at block 510 may also be obtained from that directory.
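A minimal sketch of the kind of bookkeeping that block 510 implies, assuming a simplified transport packet carrying only a frame-boundary flag and a payload length (the actual header fields of Tables 1 and 2 are not reproduced here):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical transport-layer packet view; only the two fields used for
// this illustration are shown.
struct TransportPacket {
    bool     frame_boundary;  // set on the first packet of a video frame
    uint16_t payload_length;  // bytes of sync-layer payload in this packet
};

struct FrameSpan {
    size_t first_packet;  // index of the first transport packet of the frame
    size_t byte_length;   // total payload bytes (the "frame length" FIT field)
};

// Block 510 (sketch): walk the transport packets of one superframe and
// recover frame boundaries and lengths before any video decoding occurs.
std::vector<FrameSpan> FindFrameSpans(const std::vector<TransportPacket>& pkts) {
    std::vector<FrameSpan> spans;
    for (size_t i = 0; i < pkts.size(); ++i) {
        if (pkts[i].frame_boundary || spans.empty()) {
            spans.push_back({i, 0});
        }
        spans.back().byte_length += pkts[i].payload_length;
    }
    return spans;
}
```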

The descriptive information organized at block 510 may also include information about erroneous data, such as an error distribution measurement or an error rate measurement. Error data may be organized at any level, from the frame level to the slice level (a slice being a group of encoded blocks of pixels), the block level, or even the pixel level. These types of descriptive information can be used to localize errors and to determine their extent. An example of a table of descriptive information that may be organized at block 510 is now discussed. Table 4 lists an example of a frame information table that may be generated at block 510; similar tables may also be organized at other levels, such as slices or blocks.

Table 4
Frame number | Layer | Frame length | PTS  | Frame type | RAP flag | PLP error distribution | PLP error rate | Action
1            | base  | L1           | PTS1 | I          | 1        | Error_dist_1           | 15%            | TBD
2            | base  | L2           | PTS2 | P          | 0        | Error_dist_2           | 10%            | TBD
3            | base  | L3           | PTS3 | P          | 0        | Error_dist_3           | 0%             | TBD
4            | base  | L4           | PTS4 | P          | 0        | Error_dist_4           | 40%            | TBD

The frame number, layer (e.g., base or enhancement), frame length, PTS, frame type, and RAP flag fields can be obtained from synchronization layer headers that are known to be error-free. If a header directory was received at block 505, these fields may also be obtained from the directory. If several erroneous frames are concatenated together (e.g., because of a corrupted synchronization header), the frame length field may be set to a value equal to the total length of the concatenated frames. The frame type field may be used to indicate, for example, an I frame, a P frame, or a B frame. Some of these fields may be impossible to fill because of data corruption.

The PLP error distribution field provides descriptive information about the location of the erroneous data within a detected frame. Each frame may be composed of several PLPs, as described above with reference to FIG. 4. The Error_dist_n variable indicates which portions of the PLPs contain erroneous data. Several methods of indicating the error distribution can be used. For example, the error distribution may be rounded up to 1/16 of a frame and represented by a two-byte Error_dist_n variable, where each bin, or bit, indicates the presence of an erroneous PLP in the corresponding 1/16 of the frame: a value of 1 indicates an erroneous PLP in that range, and 0 indicates an error-free PLP portion. If several frames are concatenated together, the PLP error distribution captures the total error distribution of all PLPs in the concatenated frames.
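A brief sketch of the two-byte error distribution field described above, under the assumption that erroneous PLPs are reported as byte ranges within the frame payload (the struct and function names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One physical layer packet (PLP) of a frame, as seen at the transport layer.
struct PlpStatus {
    size_t offset;    // byte offset of this PLP within the frame payload
    size_t length;    // payload bytes carried by this PLP
    bool   in_error;  // checksum failed or packet missing
};

// The frame is divided into 16 equal parts; bit k of Error_dist_n is set
// when any erroneous PLP overlaps the k-th sixteenth of the frame.
uint16_t BuildErrorDist(const std::vector<PlpStatus>& plps, size_t frame_bytes) {
    uint16_t dist = 0;
    if (frame_bytes == 0) return dist;
    for (const PlpStatus& p : plps) {
        if (!p.in_error || p.length == 0) continue;
        size_t first_bin = (p.offset * 16) / frame_bytes;
        size_t last_bin  = ((p.offset + p.length - 1) * 16) / frame_bytes;
        for (size_t b = first_bin; b <= last_bin && b < 16; ++b) {
            dist |= static_cast<uint16_t>(1u << b);
        }
    }
    return dist;
}

// Companion PLP error rate (percentage of erroneous PLPs) for the FIT.
int PlpErrorRatePercent(const std::vector<PlpStatus>& plps) {
    if (plps.empty()) return 0;
    size_t bad = 0;
    for (const PlpStatus& p : plps) bad += p.in_error ? 1 : 0;
    return static_cast<int>((bad * 100) / plps.size());
}
```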

At this point in process 500, the last field of the frame information table listed in Table 4, the Action field, has not yet been filled in; it can be determined at block 515 based on the other information contained in the table. The frame information table may be stored in the memory component 154 of the decoder device 150 in FIG. 1. The information organizer 304 of the decoder device 150 in FIG. 3A may perform these functions at block 510.

After the descriptive information is organized at block 510, process 500 continues at block 515, where the decoder device provides instructions related to the processing of the multimedia data in a second layer. The second layer may be an upper layer or a lower layer. The examples discussed above involve a lower layer (e.g., the transport and/or synchronization layer) providing instructions to an upper layer (e.g., the application layer). The methods discussed below, however, show that an upper layer may also provide instructions to a lower layer based on descriptive information obtained in the upper layer.

In one aspect, the decoder device provides instructions related to an error control method to be performed in another layer (e.g., the application layer). Error control methods may include various error recovery techniques, in which an attempt is made to repair the variable values contained in the erroneous data. Such methods may include using the header directory discussed above (if it was received at block 505) to identify the size of the frame payloads of sequence layer packets; the header directory may contain information identifying the encoding type, the number and size of transport layer packets, timing information, and the like.

Another form of error control that may be performed is error concealment. Error concealment techniques generally estimate pixel values from other pixel values that have already been received and/or decoded, and may use temporal and/or spatial concealment. For example, if a portion of a P frame is erroneous, temporal concealment based on a previously decoded frame may be selected; if a portion of a B frame is erroneous, temporal prediction from two other received and/or decoded frames may be used.

Another form of error control is FRUC (frame rate up-conversion), in which a complete frame is constructed from one or more other frames. FRUC may use temporal concealment techniques similar to those used for portions of a frame, simply applied to the entire frame.

In one aspect, the error control decision component 306 of the decoder device 150 in FIG. 3A performs these actions at block 515. The error control decision component uses the error distribution characteristics organized at block 510 to determine which of the various error control techniques to recommend and the details of how they should be applied. In some situations, the error control decision component 306 may determine that no error control technique is viable and may recommend skipping the decoding of one or more frames; in that case, the last successfully decoded frame may be displayed instead. In one aspect, the error control method determined at block 515 is stored in the Action field of the frame information table. The frame information table is passed to the layer in which the error control method is performed. The video decoder retrieves the Action entry for the corresponding frame from the frame information table and uses it as a starting point to guide the decoding process. It should be noted that certain blocks of process 500 may be combined, omitted, rearranged, or any combination thereof.

FIG. 5B is a flowchart illustrating another example of a method 520 of processing multimedia data in a system such as that illustrated in FIG. 1. Method 520 may be performed in the application layer of a decoder device whose lower layers perform the method 500 of FIG. 5A. Method 520 begins at block 525, where multimedia data is received at the layer performing method 520. The multimedia data may be a portion of the data, such as a frame, a slice, or a block of pixels. In one aspect, the portions received at block 525 have already been assembled at a lower layer, such as a synchronization layer that conveys and/or combines transport layer packets to form a complete synchronization layer packet. A complete synchronization layer packet may be a complete frame or some other decodable portion of the video. In some aspects, the portions of multimedia data are received at block 525 in the order in which they are to be displayed in the multimedia sequence. The multimedia decoder subsystem 308 of the decoder device 150 shown in FIG. 1 may perform the actions at block 525.

After the multimedia data is received at block 525, the decoder layer performing process 520 receives, at block 530, descriptive information about the multimedia data from a first layer. The first layer may be a lower layer (e.g., the transport layer or the synchronization layer). The descriptive information received at block 530 may be the information identified, compiled, collected, maintained, flagged, or generated at block 510 of process 500 discussed above. It may take the form of a frame information table with entries such as those shown in Table 3 or Table 4, and it may include a recommended Action related to processing the multimedia data. The multimedia decoder subsystem 308 of the decoder device 150 shown in FIG. 1 may perform the actions at block 530.

After the multimedia data is received at block 525 and the descriptive information is received at block 530, process 520 continues at block 535, where the second layer processes the received multimedia data based at least in part on the received descriptive information. If the descriptive information contains a recommended Action, the decoder subsystem performing process 520 may or may not use the recommendation. As discussed above, the recommended action may comprise one or more error control techniques, including but not limited to error recovery techniques, error concealment techniques, or skipping decoding. The decoder device may or may not follow the recommendation, depending on what data can be recovered during error recovery. For example, the lower layer process that organized the descriptive information received at block 530 may not have been able to identify how many frames are contained in an erroneous portion of data, whereas the upper layer error recovery technique may be able to identify the number of frames in that portion and may choose to perform error recovery or concealment techniques that were not recommended in the Action field of the frame information table. The multimedia decoder subsystem 308 of the decoder device 150 shown in FIG. 1 may perform the actions at block 535. It should be noted that certain blocks of process 520 may be combined, omitted, rearranged, or any combination thereof.

FIG. 5C is a flowchart illustrating another example of a method 540 of processing multimedia data in a system such as that illustrated in FIG. 1. Method 540 begins at block 545, where the decoder device receives encoded multimedia data. The actions performed at block 545 may be similar to those performed at block 505 of process 500 illustrated in FIG. 5A, and the receiver component 302 of the decoder device 150 in FIG. 3A may perform them. The remaining actions of process 540 comprise actions 550 performed at a lower layer and actions 570 performed at an upper layer. The lower layer actions 550 include actions that may be similar to certain actions performed in process 500 of FIG. 5A, and the upper layer actions 570 include actions that may be similar to certain actions performed in process 520 of FIG. 5B.

The method 540 illustrated in FIG. 5C may be performed by a multi-layer multimedia decoder subsystem such as that shown in FIG. 6. In one aspect, a multimedia decoder 600 comprises a lower layer media module subsystem 605 in the transport and synchronization layers, and an upper layer subsystem located in the application layer. The media module subsystem 605 may include the information organizer 304 and the error control decision subsystem 306 illustrated in FIG. 3A. The application layer includes a multimedia decoder comprising a video decoding layer (VDL) 610 and an error control subsystem 615. As indicated by the upward arrow 620, the lower layer media module provides descriptive information and/or instructions to the upper layer; as indicated by the arrow 625, the upper layer subsystems 610 and 615 may provide feedback to the lower layer.
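The exchange between the two layers (descriptive information and a recommended Action passed upward, with the upper layer free to override it, as in blocks 530 and 535 and the arrows 620 and 625) can be sketched as follows. The struct, enum, and function names are illustrative assumptions rather than part of the described system.

```cpp
#include <cstdint>
#include <optional>

// Recommended "Action" values carried in the FIT (names are illustrative).
enum class Action { kDecodeNormally, kErrorConcealment, kFruc, kSkip };

// One FIT entry handed from the lower (transport/sync) layer to the
// application layer together with the assembled frame payload.
struct FitEntry {
    uint32_t pts = 0;
    char     frame_type = 'P';    // 'I', 'P' or 'B'
    uint16_t error_dist = 0;      // Error_dist_n bitmap
    int      plp_error_rate = 0;  // percent
    Action   recommended = Action::kDecodeNormally;
};

// Sketch of blocks 535/580: the upper layer may follow the lower layer's
// recommendation or substitute its own decision once it has inspected the
// payload (e.g., after discovering how many frames an erroneous region
// really contains).
Action ChooseAction(const FitEntry& fit, std::optional<Action> upper_layer_override) {
    return upper_layer_override.value_or(fit.recommended);
}
```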
Referring to FIG. 5C, after the encoded multimedia data is received at block 545, process 540 continues at block 555, where the lower layer organizes descriptive information about the received multimedia data. The actions performed at block 555 may be similar to those performed at block 510 of process 500 illustrated in FIG. 5A, and the descriptive information may include any of the information discussed above, such as the information illustrated in Tables 3 and 4.

After the descriptive information is organized at block 555, process 540 continues at block 560, where instructions related to the processing of the multimedia data are determined. The instructions may be determined based on the error distribution and the other descriptive information organized at block 555. In addition, in process 540 the lower layer receives feedback from the upper layer. The feedback may include information related to the processing of the multimedia data in the upper layer, such as the processing time of particular portions of the data, the processing actions performed in the upper layer (e.g., error control actions), and the processing status (e.g., which frames have been decoded and displayed). The feedback may be used to reorganize the descriptive information at block 555. Details of methods for determining the instructions at block 560 are discussed below. The error control decision subsystem 306 of the decoder device 150 (FIG. 3A) may perform the actions at block 560.

At block 565, the lower layer subsystem provides the descriptive information and/or the instructions related to the processing of the multimedia data to the upper layer subsystem. At block 575, the upper layer subsystem receives the descriptive information and/or instructions. The multimedia decoder subsystem 308 may perform the actions at blocks 565 and 575.

After the descriptive information and/or instructions are received at block 575, process 540 continues at block 580, where the upper layer subsystem processes the multimedia data based on the instructions and/or the descriptive information. The actions performed at block 580 may be similar to those performed at block 535 of the method 520 illustrated in FIG. 5B. If the descriptive information contains a recommended Action, the decoder subsystem performing process 540 may or may not use it. As discussed above, the recommended action may comprise one or more error control techniques, including but not limited to error recovery, error concealment, or skipping decoding, and the decoder device may or may not follow the recommendation depending on what data can be recovered during error recovery. For example, the lower layer process that organized the descriptive information at block 555 may not have been able to identify how many frames are contained in an erroneous portion of data, whereas the upper layer error recovery technique may be able to identify the number of frames in that portion and may choose to perform error recovery or concealment techniques that were not recommended in the Action field of the frame information table. The multimedia decoder subsystem 308 of the decoder device 150 shown in FIG. 1 may perform the actions at block 580.

Process 540 continues at block 585, where the upper layer multimedia decoder feeds information back to the lower layer based on the processing performed in the upper layer actions 570. The feedback may include the processing time needed to decode a certain portion of the multimedia data, or the processing time for completely decoding a portion of the data. By comparing the completed processing times with the presentation time stamps of the new multimedia data received at block 545, the lower layer process can detect, based on past processing performance, that the upper layer is falling behind, and can instruct the upper layer to skip certain frames (e.g., B frames). The feedback information received at the lower layer may be organized into the descriptive information organized at block 555.

The feedback may also include details about the processing actions performed in the upper layer. For example, the feedback may indicate the specific error control techniques and/or normal decoding actions performed for particular frames or other portions of the multimedia data. The feedback may also include processing status, such as the successful or unsuccessful decoding of a frame. By including the processing action and processing status feedback in the data organized at block 555, the lower layer can adjust the instructions determined at block 560 based on the updated descriptive information. If processing is backed up, the lower layer may instruct the upper layer to skip the decoding of certain frames, such as B frames or enhancement layer data. The multimedia decoder subsystem 308 of the decoder device 150 shown in FIG. 1 may perform the actions at block 585. It should be noted that certain blocks of process 540 may be combined, omitted, rearranged, or any combination thereof.

FIG. 7 is a flowchart illustrating an example of a method of organizing the descriptive information that can be used to perform certain actions of the methods illustrated in FIGS. 5A and 5C. Process 700 may be performed to organize descriptive information at block 510 of the process 500 illustrated in FIG. 5A, or at block 555 of the process 540 illustrated in FIG. 5C. Process 700 begins at block 705, where a superframe of multimedia data, received for example at block 505 of process 500, is stored in a memory buffer. A superframe is a group of frames that can typically be decoded independently. A superframe may comprise the frames covering a fixed time period, for example in a range from about 0.2 seconds to about 2.0 seconds; a superframe may also be sized by a fixed number of constituent frames, in which case it covers a variable time period. The superframe size may be chosen to allow a reasonable acquisition time.

After the superframe of multimedia data is stored at block 705, process 700 continues at block 710, where it is determined whether the data includes multiple layers (e.g., a base layer and one or more enhancement layers). If only a single data layer is encoded in the superframe, process 700 continues at block 715A; if two or more data layers are encoded in the superframe, process 700 continues at block 715B. The superframe header may contain a flag indicating whether multiple layers are present. At block 715A or 715B, the frame information table (FIT) is initialized; initialization may set the fields to certain default values. After the FIT is initialized, process 700 proceeds to block 720A or block 720B, depending on whether the superframe contains multiple layers. In either case, the information contained in the optional header directory is entered at block 720A or 720B; the header directory may contain any of the information discussed above.

After the FIT is initialized at block 715A or 715B and the optional header directory is entered at block 720A or 720B, process 700 proceeds to loop over the superframe at blocks 725-740 or blocks 745-760, respectively. At blocks 730 and 750, the decoder device identifies complete video access units (VAUs), which it can recognize from the available header information. The header information may include, for example, any of the fields of the transport headers or synchronization headers shown in Tables 1 and 2 (or of any other header), and the information in the optional header directory may also be used. The VAUs in process 700 are assumed to be frames, but other portions such as slices or blocks could also be identified at block 730 or 750. After a complete VAU is identified, the erroneous portions of the video data within it are identified at block 735 or 755, for example from header checksum failures or transport layer checksum failures; many techniques for detecting erroneous data are known to those skilled in the art. The erroneous portions can be used to compile the error distribution information of the FIT (see the PLP error distribution and PLP error rate fields in Table 4). After the erroneous portions of the VAU are identified at block 735 or 755, the FIT information is organized at block 740 or 760. The information in the FIT may include any of the information discussed above in connection with Table 4. Process 700 continues to loop over the superframe (blocks 725-740 or blocks 745-760) until the end of the superframe is identified at decision block 725 or 745. When the end of the superframe is identified, process 700 continues to block 800, where the error control actions are determined. The information organizer component 304 of the decoder device 150 in FIG. 3A may perform the actions of process 700. It should be noted that certain blocks of process 700 may be combined, omitted, rearranged, or any combination thereof.
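A compact sketch of the per-layer loop of process 700, assuming the VAUs and their error statistics have already been extracted by the transport layer (all type and function names here are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-ins for the objects handled by process 700.
struct Vau {                  // one complete video access unit (a frame here)
    uint32_t pts;
    char     frame_type;      // 'I', 'P' or 'B'
    uint16_t error_dist;      // from transport-layer error detection
    int      plp_error_rate;  // percent of erroneous PLPs
};

struct FitRow {
    uint32_t pts;
    char     frame_type;
    uint16_t error_dist;
    int      plp_error_rate;
    char     action;          // filled in later by process 800
};

// Sketch of blocks 725-740 / 745-760: for each complete VAU identified from
// the headers (blocks 730/750), record its error information (blocks 735/755)
// in a new FIT row (blocks 740/760).
std::vector<FitRow> BuildFit(const std::vector<Vau>& vaus) {
    std::vector<FitRow> fit;      // blocks 715A/715B: initialized empty
    fit.reserve(vaus.size());
    for (const Vau& v : vaus) {
        fit.push_back({v.pts, v.frame_type, v.error_dist, v.plp_error_rate, '?'});
    }
    return fit;
}
```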

FIGS. 8A and 8B are flowcharts illustrating an example of a method of determining the error control actions used in the method illustrated in FIG. 7. Process 800 may also be performed to determine error control actions and to provide the corresponding instructions at block 515 of the process 500 illustrated in FIG. 5A, or at blocks 560 and 565 of the process 540 illustrated in FIG. 5C. In one aspect, process 800 is used to determine the error control actions to be provided to the upper layer, for example based on the error distribution and/or the error rate of a multimedia frame. In another aspect, process 800 identifies other portions of the multimedia data that the upper layer may use when performing the recommended error control actions.

As discussed above, multimedia data may be encoded in multiple layers, such as a base layer (e.g., the most significant bits) and one or more enhancement layers (e.g., the least significant bits); an enhancement layer may also contain all of the data for certain frames. In that case, the FIT contains entries for both base layer and enhancement layer portions, and the data of either layer may be erroneous.
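Where the superframe is scalably coded, the FIT can simply carry a layer tag on each entry, and an action decision may consult both entries for a given frame. A minimal sketch (names are illustrative):

```cpp
#include <cstdint>
#include <vector>

enum class Layer { kBase, kEnhancement };

// FIT entries for a scalably coded superframe carry a layer tag in addition
// to the per-frame fields of Table 4 (only a field subset is shown).
struct LayeredFitRow {
    Layer    layer;
    uint32_t frame_number;
    char     frame_type;      // 'I', 'P' or 'B'
    int      plp_error_rate;  // percent
};

// Convenience lookup used when deciding an action for a frame: both the
// base-layer and enhancement-layer entries for the same frame number are
// consulted, since either (or both) may be erroneous.
std::vector<const LayeredFitRow*> EntriesForFrame(
        const std::vector<LayeredFitRow>& fit, uint32_t frame_number) {
    std::vector<const LayeredFitRow*> out;
    for (const LayeredFitRow& row : fit) {
        if (row.frame_number == frame_number) out.push_back(&row);
    }
    return out;
}
```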

FIG. 9 illustrates a structure including an example of physical layer packets for the scalable coded base layer and enhancement layer of a system such as that illustrated in FIG. 1. The base layer 900 contains multiple PLPs 910, and each PLP 910 contains a transport layer header 915, a synchronization layer header 920, and a transport layer checksum trailer 925. The base layer may contain, for example, the most significant bits of an I frame 930 (labeled F1) and of a P frame 935 (labeled F3). The enhancement layer 950 likewise contains PLPs 910, transport headers 915, synchronization headers 920, and transport layer checksums 925. The enhancement layer in this example contains the least significant bits of the I frame 930 (labeled F1') and of the P frame 935 (labeled F3'). In addition, the enhancement layer 950 contains the synchronization layer packets of a complete B frame 940 (labeled F2) that is bidirectionally predicted from the I frame 930 and the P frame 935; the base layer and enhancement layer pairs F1, F1' and F3, F3' are combined and decoded before the B frame F2 is constructed. Process 800 is designed to take this form of scalable coding into account.

Process 800 begins at block 805, where the decoder device integrates header information into the portions of the FIT that contain erroneous VAUs, such as those identified at blocks 735 and 755 of the process 700 illustrated in FIG. 7. The header information may be obtained from correctly received transport and/or synchronization layer headers, or from the header directory if one was received. If the erroneous data cannot be isolated to the data of particular PLPs (e.g., because synchronization was lost), the header directory information can be used to identify the PLP boundaries and possibly to fill in the PLP error distribution and/or PLP error rate fields of the FIT. If the header directory is not available, the PLP error rate may be set to 100%. The exemplary method described here uses the PLP error rate in determining which error control Action to recommend; those skilled in the art will, however, appreciate ways of using the PLP error distribution information and other forms of error data in determining the Action.

After the FIT fields related to the erroneous frames are filled in at block 805, process 800 proceeds to loop over the frames of the superframe, starting at block 810. At decision block 810, the decoder device checks the FIT PLP error rate data and determines whether the number of consecutively lost (i.e., erroneous) frames is greater than a threshold lost_th. If the number of consecutively lost frames exceeds the threshold, process 800 continues at block 815, where the Action field of the lost frames in the FIT is set to a value recommending that decoding of the lost frames be skipped. The lost_th threshold may be set to the number of frames beyond which the other error control techniques are determined to be ineffective, or degraded enough that they are not warranted. The threshold lost_th may be in the range of about 3 frames to about 6 frames: the performance of temporal concealment typically degrades when it is performed across a temporal distance greater than about 3 frames at a frame rate of 30 frames per second, and faster frame rates allow larger thresholds, such as about 6 to about 12 frames at a frame rate of 60 frames per second. After the Action for the lost frames is set to skip at block 815, process 800 continues to decision block 820. If the end of the superframe has been reached, the process continues with the remainder of the process illustrated in FIG. 8B; if more frames remain to be processed, process 800 returns to decision block 810.

At decision block 810, if the number of consecutively lost frames does not exceed the threshold (including the case of a completely error-free frame), process 800 continues at decision block 825, where the frame type field of the FIT is used to determine whether the current frame is a B frame. The error control actions performed for B frames differ, in this example, from those performed for P frames and I frames. If the current frame is not a B frame, process 800 continues at decision block 830, where the PLP error rate (PLP_ERR) is compared with a threshold P_TH. The threshold P_TH sets the limit on the PLP error rate below which the normal error concealment techniques (e.g., spatial and temporal error concealment) are effective. The P_TH threshold may be in the range of about 20% to about 40%. If the PLP error rate exceeds the P_TH threshold, the Action for the current frame is set equal to skip at block 835; if the PLP error rate does not exceed the threshold, the Action for the current frame is set at block 840 to a value indicating that normal error concealment (EC) should be performed. After the Action for the current frame is set at block 835 or block 840, process 800 continues to decision block 820 and, as discussed above, loops back to block 810 if more frames remain in the superframe. Returning to decision block 825, if the current frame is determined to be a B frame, process 800 continues to decision block 845.

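Before turning to the B frame branch, the first-pass decisions just described (blocks 810 through 840) can be summarized in a short sketch. The concrete threshold values are illustrative choices from the ranges given above, and a frame is treated as lost here when its PLP error rate is 100%.

```cpp
#include <cstddef>
#include <vector>

enum class Action { kNormalDecode, kErrorConcealment, kFruc, kSkip };

struct Frame {
    char frame_type;      // 'I', 'P' or 'B'
    int  plp_error_rate;  // percent; 100 when the frame is entirely lost
};

constexpr int kLostTh = 3;   // illustrative value from the 3-6 frame range (30 fps)
constexpr int kPTh    = 30;  // illustrative value from the 20%-40% range

// Sketch of the first pass of process 800 for non-B frames: long runs of
// lost frames are skipped outright; otherwise the PLP error rate decides
// between normal error concealment and skipping.
std::vector<Action> FirstPass(const std::vector<Frame>& frames) {
    std::vector<Action> actions(frames.size(), Action::kNormalDecode);
    size_t run_start = 0;  // first frame of the current run of lost frames
    int    run_len   = 0;
    for (size_t i = 0; i < frames.size(); ++i) {
        bool lost = frames[i].plp_error_rate >= 100;
        if (lost) {
            if (run_len == 0) run_start = i;
            ++run_len;
        } else {
            run_len = 0;
        }
        if (run_len > kLostTh) {                              // blocks 810 -> 815
            for (size_t j = run_start; j <= i; ++j) actions[j] = Action::kSkip;
        } else if (frames[i].frame_type != 'B') {             // blocks 825 -> 830
            if (frames[i].plp_error_rate > kPTh) {
                actions[i] = Action::kSkip;                   // block 835
            } else if (frames[i].plp_error_rate > 0) {
                actions[i] = Action::kErrorConcealment;       // block 840
            }
        }
        // B frames are handled by the separate branch beginning at
        // decision block 845 (not shown here).
    }
    return actions;
}
```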
In the illustrated example, a B frame is assumed to lie between an I frame and a P frame, or between two P frames. If the Action of the previous frame was determined to be skip, then at block 850 process 800 also sets the Action of the current B frame to skip: because the data for predicting the current B frame from the skipped frame is unavailable, normal construction of the B frame is not feasible, and the other error concealment options are degraded as well.

Returning to decision block 845, if the Action of the previous frame was not determined to be skip, the process continues to block 855, where the PLP error rate is compared with another threshold, B_TH. If the PLP error rate is greater than B_TH, the Action for the current frame is set to FRUC at block 860; otherwise the Action for the current frame is set to normal error concealment. Normal error concealment for a B frame in this example is temporal prediction from two decoded frames, typically comprising one frame before and one frame after the B frame, although two preceding or two following frames may also be used. Spatial concealment using the error-free portions of the current B frame may also be used together with the normal error concealment determined here. Because there are two reference frames to choose from, and both need not be used for the prediction, the threshold B_TH may be set higher than the threshold P_TH used for P frames. In some situations, however, FRUC may be more robust and may conceal better than normal error concealment, and the value of B_TH may then be set lower than P_TH. The values of B_TH and P_TH may depend on conditions such as the type of channel and the way errors are introduced. Concealment for FRUC may be similar to normal B frame error concealment, but it is performed for the entire frame.

After Action decisions have been made for all frames in the superframe, process 800 continues from decision block 820 to block 870 in FIG. 8B. The loop from blocks 875 to 895 fills the FIT with the information the upper layer can use to carry out the determined Actions. At block 870, process 800 starts another pass through the FIT (beginning at the start) and loops over all base layer and enhancement layer frames.

At decision block 875, if the current frame is neither a B frame nor a frame to be concealed using FRUC, the process continues at block 880, where the FIT is filled with a variable skip_num used for temporal error concealment. The skip_num variable indicates the number of frames back in time to the frame from which the current frame will be predicted using temporal error concealment.

FIG. 10A graphically illustrates the positions of a current P frame 1005 and a previously decoded P frame 1010 located three frames before the current frame. In this example, the skip_num variable is set equal to three. The P frames 1015 and 1020 skipped by the decoder will therefore not be used. Instead, the motion vector 1025 of the current P frame 1005 can be scaled (see the scaled motion vector 1030) so that it points into the previously decoded P frame 1010. FIG. 10A illustrates the frames as one-dimensional, but they are in fact two-dimensional, and the motion vector 1025 points to a two-dimensional position in the earlier frame. In the example of FIG. 10A, an object 1035 in frame 1010 has moved upward by frame 1005. If the motion of the object is relatively constant, linear extrapolation of the motion vector 1025 can point accurately to the correct position in frame 1010, repositioning the object 1035 upward in frame 1005. The displayed position of the object 1035 may remain constant in the skipped frames 1015 and 1020.

Returning to FIG. 8B, after the skip_num variable of the current frame is determined, process 800 continues at block 885, where the flag MV_FLAG of the frame indicated by the skip_num variable is set to a value indicating that the upper layer decoder should retain the decoded values of that frame for future error concealment. FIG. 10B graphically illustrates the flagging of decoded frames used for error concealment of other erroneous frames: the decoded frame 1040 is flagged for concealing the erroneous frame 1045 using normal error concealment, and the decoded frames 1050 and 1055 are both flagged for performing FRUC for the frame 1060. These are only examples, and other combinations of preceding and/or following frames may be used for normal error concealment and for FRUC.

Returning to FIG. 8B, after the MV_FLAG of the frames to be used for concealing the current frame is set, process 800 continues at block 895, where the decoder checks whether the end of the superframe has been reached. If the end of the superframe has been detected, process 800 ends for the current superframe; if more frames remain in the superframe, process 800 returns to decision block 875 to loop over the remaining frames.

At block 875, if the current frame is a B frame or a frame to be concealed using FRUC, process 800 continues at block 890, where the variables B_NUM and b_num, which locate the two frames used to perform the bidirectional prediction, are determined. FIG. 10C graphically illustrates the variables used to indicate the positions of the two decoded frames used to conceal an erroneous frame using FRUC (the same variables can be used for an erroneous B frame). The current erroneous frame 1065 has been determined to be concealed using FRUC. The variable b_num is set to two to indicate that a previous frame 1070, located two frames away, is the first reference frame. In the illustrated example, the decoded frame 1075 was predicted from the frame 1070 using the received motion vector 1085, and the variable B_NUM is set equal to three to indicate that the frame 1075, located three frames before the frame 1070, is the second reference frame. The received motion vector 1085 can be scaled (yielding the scaled motion vector 1090) so that it points into the erroneous frame 1065. The decoded portions located in the frames 1075 and 1070 by the received motion vector 1085 can then be used to conceal the portion of the frame 1065 located by the scaled motion vector 1090. In this example, the B frame 1080 is not used to conceal the frame 1065 (B frames are generally not used for prediction). Typically, the closest correctly decoded frames are used to perform the error concealment, and both decoded frames may also lie before or after the erroneous frame being concealed.
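A brief sketch of the motion vector scaling implied by FIGS. 10A and 10C. The linear extrapolation assumes roughly constant object motion, and the function names and the ratio form used for the two-reference case are illustrative assumptions.

```cpp
struct MotionVector { int x; int y; };

// Temporal concealment addressing as in FIG. 10A: when skip_num frames lie
// between the current frame and the retained reference, a motion vector
// coded relative to the immediately preceding frame is scaled linearly so
// that it points into that earlier reference frame.
MotionVector ScaleForSkippedFrames(MotionVector mv, int skip_num) {
    // skip_num = 1 means the immediately preceding frame is the reference.
    return MotionVector{mv.x * skip_num, mv.y * skip_num};
}

// For FRUC / B frame concealment (FIG. 10C), two decoded references are
// addressed by b_num and B_NUM; the same linear idea applies when a received
// vector spanning one temporal distance must be rescaled to the distance
// actually used for concealment.
MotionVector RescaleToDistance(MotionVector mv, int coded_distance, int used_distance) {
    if (coded_distance == 0) return MotionVector{0, 0};
    return MotionVector{mv.x * used_distance / coded_distance,
                        mv.y * used_distance / coded_distance};
}
```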
After the FIT is filled with the B_NUM and b_num variables at block 890, the process continues to loop through blocks 875-895 until the FIT for the entire superframe has been filled, at which point process 800 ends. In one aspect, processes 700 and 800 are used to fill the FIT for all frames in the superframe before the individual frames and the FIT are forwarded to the upper layer for decoding; in this way, the frames can be forwarded in the order in which they are decoded, and frames that should be skipped may or may not be forwarded. In another aspect, once both process 700 and process 800 have been completed for a frame, the frame and the corresponding FIT entry can be forwarded to the upper layer. The error control decision subsystem 306 of the decoder device 150 in FIG. 3A may perform the actions of process 800. The exemplary processes 700 and 800 use frames as the VAU; a VAU may, however, also be a slice or a block of pixels, and the FIT may be filled for those portions rather than for frames. It should be noted that certain blocks of processes 700 and 800 may be combined, omitted, rearranged, or any combination thereof.

FIG. 11 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes: means for receiving multimedia data; means for organizing, in a first layer, descriptive information about the multimedia data, where the descriptive information relates to the processing of the multimedia data in a second layer; and means for providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the second layer. In some examples of this aspect, the receiving means comprises a receiver 1102, the organizing means comprises an information organizer subsystem 1104, and the providing means comprises an error control decision subsystem 1106.

FIG. 12 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes the same means as FIG. 11. In some examples of this aspect, the receiving means comprises a module for receiving 1202, the organizing means comprises a module for organizing information 1204, and the providing means comprises a module for providing instructions 1206.

FIG. 13 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes: means for receiving multimedia data; means for processing the multimedia data in an upper layer; means for instructing a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and means for processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer. In some examples of this aspect, the receiving means comprises a receiver 1302, the upper layer processing means comprises an application layer multimedia decoder subsystem 1308, the instructing means comprises the application layer multimedia decoder subsystem 1308, and the lower layer processing means comprises a transport/synchronization layer multimedia decoder subsystem 1306.

FIG. 14 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes the same means as FIG. 13. In some examples of this aspect, the receiving means comprises a module for receiving 1402, the upper layer processing means comprises a module for processing multimedia in the upper layer 1408, the instructing means comprises the module 1408, and the lower layer processing means comprises a module for processing multimedia data in the lower layer 1406.

FIG. 15 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes: means for receiving multimedia data; means for receiving, from a first layer, descriptive information about the multimedia data, where the descriptive information relates to the processing of the multimedia data in a second layer; and means for processing the multimedia data in the second layer based at least in part on the received descriptive information. In some examples of this aspect, the means for receiving multimedia data comprises a receiver 1502, the means for receiving descriptive information comprises a multimedia decoder 1508, and the processing means comprises the multimedia decoder 1508.

FIG. 16 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia data in a system such as that illustrated in FIG. 1. This aspect includes the same means as FIG. 15. In some examples of this aspect, the means for receiving multimedia data comprises a module for receiving 1602, the means for receiving descriptive information comprises a module for decoding multimedia 1608, and the processing means comprises the module for decoding multimedia 1608.
In the example shown, it is assumed that the B frame is between the I frame and the P frame, or between the two p frames. If the previous frame, action &quot; is determined to be a &quot;action&quot;, then at block 850, process 800 sets the &quot;action&quot; of the current B frame to skip. Since the data from the ## prediction current B frame is not available, the normal construction of the B frame is not feasible and can also downgrade other error concealment options. Returning to decision area 堍 845, Ge 1 , right is not the first frame, action &quot; determined to skip, then process _ proceeds to block (5), where the PLP error rate and the other A threshold value b-twist comparison. If PL? is wrong, then at block 86, for the current frame, the action &quot; is again FRUC, otherwise at block (6), the action will be for the current frame. It is also hidden for the wrong hanging. The normal error for the close box in the & instance is hidden as a time prediction from the two decoded frames. These frames usually include 120028.doc -41- 200803523 with a frame before the beta frame and a frame after the B frame. However, two previous or two subsequent B-frames can also be used. The space concealment using the error-free portion of the current b-frame can also be used with the hang-up error concealment determined at block 86〇. Since there are two reference frames to choose from and there is no need to use both in the prediction, the threshold B_TH can be higher than the threshold P-TH for the p-frame. However, in some cases, fruC may be more robust and may be better hidden than normal error concealment and thus the value of B_τη may be set to a value lower than P-TH. The values of B-TH and Ρ-ΤΉ may depend on the type of channel conditions and how to introduce the wrong condition D. The hiding of FRUC may be similar to the normal B-frame error concealment, but it is performed on all frames. After the &quot;action has been made to all of the frames in the superframe, after decision, at decision block 820, process 8 continues to block 87 of Figure 8B. The loop from blocks 875 to 895 fills the FIT table with the resources available to the upper layer to perform the determined &quot;action,&quot;. At block 870, process 800 initiates another pass via FIT (starting at the beginning) and processes all of the base and enhancement layer frames with a loop. At decision block 875, if the current frame is not a beta frame or a frame to be hidden using FRUC, then the process continues at block 88, where a time error concealment is used. The variable skip one num to fill the FIT table. The skip-num variable indicates the number of frames in the current frame from the time frame, and the current frame is predicted by using the time error to hide the number of frames. Figure 10A graphically illustrates the position of the current p-frame 1〇〇5 and a previously decoded p-frame 1〇1〇 of the two frames preceding the current frame. In the example 120028.doc -42- 200803523, the skip-num variable is set equal to three. Therefore, P frames 1〇15 and 1020 skipped by the decoder will not be used. Instead, the motion vector 1025 of the current p-frame 1005 (see the scaled motion vector ι〇3〇) can be scaled to point to the previously decoded P-frame 1010. Figure 10A illustrates the frame as one-dimensional, but it is actually two-dimensional and the motion vector 1025 points to the two-dimensional position in the previous frame. In the example of Fig. 10A, the object 1〇35 in the frame 1〇10 is moved upward in the frame 1005. 
If the motion of the object is relatively constant, the linear extrapolation of the motion vector 1025 can be accurately pointed to the correct position in the frame , 1 , so that the object 1035 is repositioned upward in the frame 1 〇〇 5 . The display position of the object 1 〇 35 can be kept in the skipped frames 1015 and 1020. Returning to Figure 8B, after determining the skip-num variable of the current frame, process 800 continues at block 885, where the flag MV_FLAG of the frame indicated by the skip-num variable is set to A value indicating that the upper decoder should save the decoded value of the frame for future error concealment. Figure 10B graphically illustrates the flag representation of the decoded frame for error concealment of other error frames. In the example of Figure 10B, the decoded frame 1040 is flagged for hiding the error frame 1045 using normal error concealment. The decoded frames 1050 and 1055 are all flagged for execution of the FRUC for block 1060. These other combinations of only the examples and the preceding and/or following frames can be used for normal error concealment and FRUC. Returning to Figure 8B, after setting the MV_FLAG to be used to hide the frame of the current frame, process 800 continues at block 895. At block 895, the decoder checks to see if the end of the superframe has been reached. If the end of the superframe has been detected, the process 800 ends with the current superframe. If the more 120028.doc -43- 200803523 frame remains in the super frame, then process 800 returns to decision block 875 to process the remaining frames with a loop. At block 875, the right current frame is a b frame or a fruC: hidden frame is used. Then the process 800 continues at block 890, where the location is determined. The positions of the two frames are used to perform bidirectionally predicted variables B-NUM and b-num. Figure 〇C illustrates, by way of illustration, the variables used to indicate the location of the two decoded frames used to locate the error frame of the FRUC 13 (the same variable can be used for an erroneous B-frame). The current error frame 1〇65 has been confirmed to be hidden using FRUC. The variable b_num is set to two to indicate that a previous frame 1070 located outside of the two frames is the first reference frame. In the illustrated example, block 1075 is predicted from block 1070 using the received motion vector 1 〇 85. The variable B-NUM is set equal to 3 to indicate that the frame 1 〇 75 of one of the three frames before the frame 1070 is the second reference frame. In the illustrated example, the decoded frame 1075 is predicted from block 1 070 using the received motion vector 1085. The received motion vector 1085 (resulting in a scaled motion vector 1〇9〇) can be scaled to point to error frame 1065. The decoded portion of motion vector 1 〇 85 received by frames 1075 and 1070 can then be used to hide a portion of frame 1065 that is positioned by the scaled motion vector i 〇 9 。. In this example, b frame 1〇8〇 is not used to hide frame 1065 (B frame is usually not used for prediction). Normally, the closest correctly decoded frame will be used to perform error concealment. Both decoded frames can also be in front of or behind the hidden error frame. After filling the FIT with the B-NUM and b-num variables at block 890, the process continues to process the block 875-895 with the loop until the FIT of the entire super frame is filled with 120028.doc •44-200803523. Process 800 ends. 
In one aspect, before transferring individual frames and FITs to the upper layer for decoding, use Process 7〇〇 and Process 8〇〇 to populate FIT for all frames in the Super Frame. In this way, the frames can be forwarded in the order in which they were decoded. In addition, frames that should be skipped may or may not be forwarded. In another aspect, once both process 700 and process 800 are completed for a frame, the corresponding entries of the frame and FIT can be forwarded to the upper layer. The error control decision subsystem 〇6 of the decoder device 150 in FIG. 3A can perform the actions of the process 800. The illustrative processes 700 and 800 use the frame as a VAU. However, the VAU can also be a block of slices or primitives and can be filled with FIT corresponding to these parts instead of the frame. It should be noted that certain blocks of processes 700 and 800 may be combined, omitted, reconfigured, or any combination thereof. 11 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia material in a system such as that illustrated in FIG. The malicious sample includes: means for receiving multimedia material; means for organizing descriptive information about the multimedia material in the first layer, wherein the descriptive information is related to processing of the multimedia material in the second layer; and Means for providing instructions related to processing of multimedia material in the second layer based at least in part on the descriptive information. Some examples of this aspect include: wherein the receiving component includes a receiver 1102; wherein the tissue component includes an information organizer subsystem 1104; and wherein the providing component includes an error control decision subsystem 1106. Figure 12 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia material in a system such as that illustrated in Figure 1. The 120028.doc -45-200803523 aspect includes: means for receiving multimedia material; means for organizing descriptive information about the multimedia material in the first layer, wherein the descriptive information and the multimedia material in the second layer Processing associated; and means for providing instructions related to processing of the multimedia material in the second layer based at least in part on the descriptive information. Some examples of this aspect include: wherein the receiving component includes a module 1202 for receiving; wherein the tissue component includes a module 1204 for organizing information; and wherein the providing component includes a Module 1206 of the instruction. Figure 13 is a functional block diagram illustrating another example of a decoder device 15 that can be used to process multimedia material in a system such as that illustrated in Figure 1. The malicious sample includes: means for receiving multimedia material; means for processing multimedia material in the upper layer; means for indicating a layer based at least in part on information associated with processing of the multimedia material in the upper layer; The means for processing the multimedia material in the lower layer based at least in part on the information associated with the processing of the multimedia material in the upper layer. 
Some examples of this aspect include: # wherein the receiving component includes a receiver 1302; wherein the upper layer processing component includes an application layer multimedia decoder subsystem 13A8, wherein the means for indicating includes application layer multimedia decoding The subsystem 1308; and the lower processing element thereof includes a transport/synchronization layer multimedia decoder subsystem 1306 °. FIG. 14 is a diagram illustrating another decoder device 150 that can be used to process multimedia material in a system such as that illustrated in FIG. A functional block diagram of an example. This aspect includes: means for receiving multimedia material; means for processing multimedia material in an upper layer; for at least in part based on information associated with processing of multimedia data in the upper layer of the medium 12028.doc • 46-200803523 Denoting the components of the layer; and means for processing the multimedia material in the lower layer based at least in part on the information associated with the processing of the multimedia material in the upper layer. Some examples of this aspect include: wherein the edge receiving component includes a module i 4 〇 2 for receiving; wherein the upper layer processing component includes a module 1408 for processing multimedia in the upper layer; wherein the The component includes a module 1408' for processing multimedia in the lower layer and the lower processing component includes a module 1406 for processing multimedia material in the lower layer. 15 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia material in a system such as that illustrated in FIG. The aspect includes: means for receiving multimedia data; means for receiving descriptive information about the multimedia material from the first layer, wherein the descriptive information is related to processing of the multimedia material in the second layer; and The means for processing the multimedia material in the second layer based at least in part on the received descriptive information. Some examples of such aspects include: wherein the means for receiving multimedia material comprises a receiver 1502; wherein the means for receiving descriptive information comprises a multimedia decoder 1508; and wherein the processing component comprises a multimedia decoder 1508. 16 is a functional block diagram illustrating another example of a decoder device 150 that can be used to process multimedia material in a system such as that illustrated in FIG. The aspect includes: means for receiving multimedia data; means for receiving descriptive information about the multimedia material from the first layer, wherein the descriptive information is related to processing of the multimedia material in the second layer; and The 12028.doc-47-200803523 component of the multimedia material is processed in the second layer based at least in part on the received descriptive information. Some examples of the aspect include: the means for receiving multimedia material includes a module 1602 for receiving; wherein the means for receiving descriptive information comprises a module for decoding multimedia 16 8 And the processing component therein includes a module 1608 for decoding multimedia. Those of ordinary skill in the art will appreciate that information and signals can be represented using any of a variety of different technologies and methods. 
Those of ordinary skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

It will be further appreciated by those skilled in the art that the various illustrative logical blocks, modules, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, middleware, microcode, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.

The various illustrative logical blocks, components, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, an optical storage medium, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside in the wireless modem as discrete components.

The previous description of the disclosed examples is provided to enable any person skilled in the art to make or use the disclosed methods and apparatus. Various modifications to these examples will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other examples and additional elements may be added. Thus, methods and apparatus to perform highly efficient and robust error control of multimedia data have been described.
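To round out the picture of FIGS. 13 and 14, where an upper (application) layer decoder both acts on what the lower (transport/synchronization) layer hands it and indicates its own processing back down, the short sketch below shows one possible shape of that exchange. The class name, the instruction strings, and the idea of reporting processing time, action, and state as a dictionary are assumptions for illustration only, not the interfaces defined in this disclosure.

```python
import time
from typing import Dict


class ApplicationLayerDecoder:
    """Upper-layer decoder sketch: act on a per-frame instruction, then report status."""

    def handle(self, frame_id: int, instruction: str) -> Dict[str, object]:
        start = time.perf_counter()
        if instruction == "decode":
            action, state = "decoded", "ok"
        elif instruction == "conceal":
            action, state = "concealed", "ok"
        elif instruction == "interpolate":
            action, state = "interpolated", "substituted"
        else:
            action, state = "skipped", "dropped"
        # The report below is what the upper layer could use to "indicate" the lower
        # layer: processing time, the action taken, and the resulting processing state.
        return {
            "frame_id": frame_id,
            "processing_time_ms": (time.perf_counter() - start) * 1000.0,
            "processing_action": action,
            "processing_state": state,
        }


if __name__ == "__main__":
    decoder = ApplicationLayerDecoder()
    for frame_id, instruction in [(0, "decode"), (1, "conceal"), (2, "skip")]:
        print(decoder.handle(frame_id, instruction))
```

The feedback dictionary is deliberately minimal; the point is only that the indication flowing from the upper layer to the lower layer can carry one or more of a processing time, a processing action, and a processing state, mirroring the cross-layer interfaces sketched in the two figures.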
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a multimedia communication system according to one aspect.

FIG. 2 is a block diagram illustrating an example of a multi-layer protocol stack, including an inter-layer error control system, used by the encoder device 110 and the decoder device 150 in the system illustrated in FIG. 1.

FIG. 3A is a block diagram illustrating an example of a decoder device that may be used in a system such as that illustrated in FIG. 1.

FIG. 3B is a block diagram illustrating an example of a computer processor system of a decoder device that may be used in a system such as that illustrated in FIG. 1.

FIG. 4 is an illustration of an example of a multi-layer packetization scheme.

FIG. 5A is a flowchart illustrating an example of a method of processing multimedia data in a system such as that illustrated in FIG. 1.

FIG. 5B is a flowchart illustrating another example of a method of processing multimedia data in a system such as that illustrated in FIG. 1.

FIG. 5C is a flowchart illustrating another example of a method of processing multimedia data in a system such as that illustrated in FIG. 1.

FIG. 6 is a block diagram of an example of a multi-layer multimedia decoder that may be used to perform the method illustrated in FIG. 5C.

FIG. 7 is a flowchart illustrating an example of a method of organizing descriptive information for performing certain acts of the methods illustrated in FIGS. 5 and 5C.

FIGS. 8A and 8B are flowcharts illustrating an example of a method of determining an error control action in the method illustrated in FIG. 7.

FIG. 9 illustrates the structure of an example of physical layer packets containing a scalably coded base layer and enhancement layer for a system such as that illustrated in FIG. 1.

FIG. 10A graphically illustrates the positions of a current P frame and a previously decoded P frame located two frames before the current frame.

FIG. 10B graphically illustrates flags used to indicate decoded frames that are used for error concealment of other, erroneous frames.

FIG. 10C graphically illustrates variables used to indicate the positions of two decoded frames used to conceal an erroneous frame using FRUC.

FIG. 11 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.

FIG. 12 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.

FIG. 13 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.

FIG. 14 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.

FIG. 15 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.

FIG. 16 is a functional block diagram illustrating another example of a decoder device 150 that may be used in a system such as that illustrated in FIG. 1.
[Major component symbol description]

100 Multimedia communication system
102 External source
110 Encoder device
112 Processor
114 Memory
116 Transceiver
140 Network
150 Decoder device
152 Processor
154 Memory
156 Transceiver
205 Application layer
210 Synchronization layer
215 Transport layer
220 Stream and/or media access control (MAC) layer
225 Physical layer
230 Application layer
235 Synchronization layer
240 Transport layer
245 Stream and/or media access control (MAC) layer
250 Physical layer
255 Error resilience system
260 Error recovery system
302 Receiver element
304 Information organizer element
306 Error control decision element
308 Multimedia decoder element
320 Preprocessor element
322 Random access memory (RAM) element
324 Digital signal processor (DSP) element
326 Video core component
405A, 405B Application layer packets
406A, 406B Synchronization layer packets
410 Synchronization layer header
415 Transport layer header
420A, 420B, 420D Transport layer packets
425B Remaining portion of synchronization layer packet 406A
425C First portion of synchronization layer packet 406B
425D Next portion of synchronization layer packet 406B
600 Multimedia decoder
605 Lower-layer media module subsystem
610 Video decoding layer (VDL)
615 Error control subsystem
620 Arrow
625 Arrow
905 Base layer
910 PLP
915 Transport layer header
920 Synchronization layer header
925 Transport layer checksum tail
930 Frame
935 Frame
940 B frame
950 Enhancement layer
1005 Current P frame
1010 Decoded P frame
1015, 1020 Skipped P frames
1025 Motion vector
1030 Scaled motion vector
1035 Object
1040 Decoded frame
1045 Erroneous frame
1050, 1055 Decoded frames
1060 Frame
1065 Current erroneous frame
1070 Previous frame
1075 Decoded frame
1080 B frame
1085 Motion vector
1090 Scaled motion vector
1102 Receiver
1104 Information organizer subsystem
1106 Error control decision subsystem
1202 Module for receiving
1204 Module for organizing information
1206 Module for providing instructions
1302 Receiver
1306 Transport/synchronization layer multimedia decoder subsystem
1308 Application layer multimedia decoder subsystem
1402 Module for receiving
1406 Module for processing multimedia data in a lower layer
1408 Module for processing multimedia in an upper layer
1502 Receiver
1508 Multimedia decoder
1602 Module for receiving
1608 Module for decoding multimedia

Claims (84)

X. Patent application scope:
1. A method of processing multimedia data, comprising: receiving the multimedia data; organizing, in a first layer, descriptive information about the multimedia data, wherein the descriptive information is related to processing of the multimedia data in a second layer; and providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the second layer.
2. The method of claim 1, further comprising passing the descriptive information to the second layer.
3. The method of claim 1, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
4. The method of claim 1, wherein the multimedia data comprises erroneous data, the method further comprising organizing the descriptive information to include information indicating an error distribution of the erroneous data in the multimedia data.
5. The method of claim 4, further comprising determining the instructions based at least in part on the error distribution information.
6. The method of claim 1, further comprising changing the processing of the multimedia data in the second layer based at least in part on the instructions.
7. The method of claim 1, wherein the descriptive information comprises metadata.
8. The method of claim 1, further comprising determining an error control method based at least in part on error distribution information, wherein the instructions provided to the second layer are related to the determined error control method.
9. The method of claim 8, wherein the determined error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
10. An apparatus for processing multimedia data, comprising: a receiver configured to receive the multimedia data; an information organizer configured to organize descriptive information about the multimedia data in a first layer, wherein the descriptive information is related to processing of the multimedia data in a second layer; and an error control decision subsystem configured to provide, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the second layer.
11. The apparatus of claim 10, wherein the information organizer is further configured to pass the descriptive information to the second layer.
12. The apparatus of claim 10, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
13. The apparatus of claim 10, wherein the multimedia data comprises erroneous data, and the multimedia data processor is further configured to organize the descriptive information to include information indicating an error distribution of the erroneous data in the multimedia data.
14. The apparatus of claim 13, wherein the error control decision subsystem is further configured to determine the instructions based at least in part on the error distribution information.
15. The apparatus of claim 10, further comprising a multimedia decoder configured to change the processing of the multimedia data in the second layer based at least in part on the instructions.
16. The apparatus of claim 10, wherein the descriptive information comprises metadata.
17. The apparatus of claim 10, wherein the error control decision subsystem is further configured to determine an error control method based at least in part on error distribution information, wherein the instructions provided to the second layer are related to the determined error control method.
18. The apparatus of claim 17, wherein the determined error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
19. An apparatus for processing multimedia data, comprising: means for receiving the multimedia data; means for organizing descriptive information about the multimedia data in a first layer, wherein the descriptive information is related to processing of the multimedia data in a second layer; and means for providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the second layer.
20. The apparatus of claim 19, further comprising means for passing the descriptive information to the second layer.
21. The apparatus of claim 19, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
22. The apparatus of claim 19, wherein the multimedia data comprises erroneous data, and wherein the organizing means organizes the descriptive information to include information indicating an error distribution of the erroneous data in the multimedia data.
23. The apparatus of claim 22, further comprising means for determining the instructions based at least in part on the error distribution information.
24. The apparatus of claim 19, further comprising means for changing the processing of the multimedia data in the second layer based at least in part on the instructions.
25. The apparatus of claim 19, wherein the descriptive information comprises metadata.
26. The apparatus of claim 19, further comprising means for determining an error control method based at least in part on error distribution information, wherein the instructions provided to the second layer are related to the determined error control method.
27. The apparatus of claim 26, wherein the determined error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
28. A machine-readable medium comprising program code that, when executed on one or more machines, causes the one or more machines to perform program operations, the program code comprising: code for receiving multimedia data; code for organizing descriptive information about the multimedia data in a first layer, wherein the descriptive information is related to processing of the multimedia data in a second layer; and code for providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the second layer.
29. The machine-readable medium of claim 28, further comprising code for passing the descriptive information to the second layer.
30. The machine-readable medium of claim 28, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
31. The machine-readable medium of claim 28, wherein the multimedia data comprises erroneous data, the medium further comprising code for organizing the descriptive information to include information indicating an error distribution of the erroneous data in the multimedia data.
32. The machine-readable medium of claim 31, further comprising code for determining the instructions based at least in part on the error distribution information.
33. The machine-readable medium of claim 28, further comprising code for changing the processing of the multimedia data in the second layer based at least in part on the instructions.
34. The machine-readable medium of claim 28, wherein the descriptive information comprises metadata.
35. The machine-readable medium of claim 28, further comprising code for determining an error control method based at least in part on error distribution information, wherein the instructions provided to the second layer are related to the determined error control method.
36. The machine-readable medium of claim 35, wherein the determined error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
37. A method of processing multimedia data, comprising: receiving the multimedia data; processing the multimedia data in an upper layer; indicating a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.
38. The method of claim 37, further comprising organizing, in the lower layer, descriptive information about the multimedia data based at least in part on the information associated with the processing of the multimedia data in the upper layer.
39. The method of claim 38, further comprising providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the upper layer.
40. The method of claim 38, wherein the descriptive information comprises metadata.
41. The method of claim 38, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
42. The method of claim 37, wherein indicating the lower layer comprises passing information comprising one or more of a processing time, a processing action, and a processing state.
43. An apparatus for processing multimedia data, comprising: a receiver configured to receive the multimedia data; an upper layer decoder subsystem configured to process the multimedia data in an upper layer and to indicate a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and a lower layer decoder subsystem configured to process the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.
44. The apparatus of claim 43, further comprising an information organizer configured to organize, in the lower layer, descriptive information about the multimedia data based at least in part on the information associated with the processing of the multimedia data in the upper layer.
45. The apparatus of claim 44, further comprising an error control decision subsystem configured to provide, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the upper layer.
46. The apparatus of claim 44, wherein the descriptive information comprises metadata.
47. The apparatus of claim 44, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
48. The apparatus of claim 43, wherein the upper layer decoder subsystem is further configured to indicate the lower layer by passing information comprising one or more of a processing time, a processing action, and a processing state.
49. An apparatus for processing multimedia data, comprising: means for receiving the multimedia data; means for processing the multimedia data in an upper layer; means for indicating a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and means for processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.
50. The apparatus of claim 49, further comprising means for organizing, in the lower layer, descriptive information about the multimedia data based at least in part on the information associated with the processing of the multimedia data in the upper layer.
51. The apparatus of claim 50, further comprising means for providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the upper layer.
52. The apparatus of claim 50, wherein the descriptive information comprises metadata.
53. The apparatus of claim 50, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
54. The apparatus of claim 49, wherein the means for indicating the lower layer comprises means for passing information comprising one or more of a processing time, a processing action, and a processing state.
55. A machine-readable medium comprising program code that, when executed on one or more machines, causes the one or more machines to perform program operations, the program code comprising: code for receiving multimedia data; code for processing the multimedia data in an upper layer; code for indicating a lower layer based at least in part on information associated with the processing of the multimedia data in the upper layer; and code for processing the multimedia data in the lower layer based at least in part on the information associated with the processing of the multimedia data in the upper layer.
56. The machine-readable medium of claim 55, further comprising code for organizing, in the lower layer, descriptive information about the multimedia data based at least in part on the information associated with the processing of the multimedia data in the upper layer.
57. The machine-readable medium of claim 56, further comprising code for providing, based at least in part on the descriptive information, instructions related to the processing of the multimedia data in the upper layer.
58. The machine-readable medium of claim 56, wherein the descriptive information comprises metadata.
59. The machine-readable medium of claim 56, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
60. The machine-readable medium of claim 55, further comprising code for indicating the lower layer by passing information comprising one or more of a processing time, a processing action, and a processing state.
61. A method of processing multimedia data, comprising: receiving the multimedia data; receiving, from a first layer, descriptive information about the multimedia data, wherein the descriptive information is related to processing of the multimedia data in a second layer; and processing the multimedia data in the second layer based at least in part on the received descriptive information.
62. The method of claim 61, further comprising: receiving, in the second layer, at least one instruction, wherein the instruction is based at least in part on the descriptive information; and changing the processing of the multimedia data in the second layer based at least in part on the received instruction.
63. The method of claim 62, wherein the received instruction is related to an error control method.
64. The method of claim 63, wherein the error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
65. The method of claim 61, wherein the descriptive information comprises metadata.
66. The method of claim 61, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
67. An apparatus for processing multimedia data, comprising: a receiver configured to receive the multimedia data; and a decoder configured to receive, from a first layer, descriptive information about the multimedia data, wherein the descriptive information is related to processing of the multimedia data in a second layer, and configured to process the multimedia data in the second layer based at least in part on the received descriptive information.
68. The apparatus of claim 67, wherein the decoder is further configured to receive, in the second layer, at least one instruction, wherein the instruction is based at least in part on the descriptive information, and the decoder is further configured to change the processing of the multimedia data in the second layer based at least in part on the received instruction.
69. The apparatus of claim 68, wherein the received instruction is related to an error control method.
70. The apparatus of claim 69, wherein the error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
71. The apparatus of claim 67, wherein the descriptive information comprises metadata.
72. The apparatus of claim 67, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
73. An apparatus for processing multimedia data, comprising: means for receiving the multimedia data; means for receiving, from a first layer, descriptive information about the multimedia data, wherein the descriptive information is related to processing of the multimedia data in a second layer; and means for processing the multimedia data in the second layer based at least in part on the received descriptive information.
74. The apparatus of claim 73, further comprising: means for receiving, in the first layer, at least one instruction, wherein the instruction is based at least in part on the descriptive information; and means for changing the processing of the multimedia data in the second layer based at least in part on the received instruction.
75. The apparatus of claim 74, wherein the received instruction is related to an error control method.
76. The apparatus of claim 75, wherein the error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
77. The apparatus of claim 73, wherein the descriptive information comprises metadata.
78. The apparatus of claim 73, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
79. A machine-readable medium comprising program code that, when executed on one or more machines, causes the one or more machines to perform program operations, the program code comprising: code for receiving multimedia data; code for receiving, from a first layer, descriptive information about the multimedia data, wherein the descriptive information is related to processing of the multimedia data in a second layer; and code for processing the multimedia data in the second layer based at least in part on the received descriptive information.
80. The machine-readable medium of claim 79, further comprising: code for receiving, in the second layer, at least one instruction, wherein the instruction is based at least in part on the descriptive information; and code for changing the processing of the multimedia data in the second layer based at least in part on the received instruction.
81. The machine-readable medium of claim 80, wherein the received instruction is related to an error control method.
82. The machine-readable medium of claim 81, wherein the error control method comprises one or more of error recovery, error concealment, and insertion of a frame.
83. The machine-readable medium of claim 79, wherein the descriptive information comprises metadata.
84. The machine-readable medium of claim 79, wherein the descriptive information comprises one or more of frame characteristic information, base or enhancement data identification information, timing information, an encoding type, a frame type, synchronization information, and predictive coding related information.
TW96112157A 2006-04-04 2007-04-04 Frame level multimedia decoding with frame information table TW200803523A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US78944306P 2006-04-04 2006-04-04

Publications (1)

Publication Number Publication Date
TW200803523A true TW200803523A (en) 2008-01-01

Family

ID=39590584

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96112157A TW200803523A (en) 2006-04-04 2007-04-04 Frame level multimedia decoding with frame information table

Country Status (2)

Country Link
AR (1) AR060365A1 (en)
TW (1) TW200803523A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI424748B (en) * 2010-04-09 2014-01-21 Newport Media Inc Buffer size reduction for wireless analog tv receivers

Also Published As

Publication number Publication date
AR060365A1 (en) 2008-06-11

Similar Documents

Publication Publication Date Title
US8358704B2 (en) Frame level multimedia decoding with frame information table
JP5296123B2 (en) Improved error resilience using out-of-band directory information
JP5738929B2 (en) Decoder architecture for optimal error management in streaming multimedia
JP5937275B2 (en) Replace lost media data for network streaming
JP4982024B2 (en) Video encoding method
EP2754302B1 (en) Network streaming of coded video data
TWI325706B (en) Method and apparatus for processing multimedia data
TWI253868B (en) Picture coding method
JP2009284518A (en) Video coding method
KR101336243B1 (en) Transport stream structure for transmitting and receiving video data in which additional information is inserted, method and apparatus thereof
KR101959319B1 (en) Video data encoding and decoding methods and apparatuses
US20070174752A1 (en) Content distribution method, encoding method, reception/reproduction method and apparatus, and program
Dufaux et al. JPWL: JPEG 2000 for wireless applications
JP2004519908A (en) Method and apparatus for encoding MPEG4 video data
TW200803523A (en) Frame level multimedia decoding with frame information table
JP2007208418A (en) Inspection information generating apparatus, transmitter, and relaying apparatus
KR100899814B1 (en) Method and Apparatus for encoding and decoding using Redundant Coded Picture stemmed from selective Primary Coded Picture
Micanti et al. Digital Cinema package transmission over wireless IP networks
Lee et al. Efficient video data recovery for 3G-324M telephony over WCDMA networks
Superiori et al. Fehlerverschleierungsanalyse in H264/Advanved Video Coding-codierten Videosequenzen