TW200535686A - Decoder for BD-ROM, and method for generating image signals - Google Patents



Publication number
TW200535686A
TW200535686A TW094101185A TW94101185A
Authority
TW
Taiwan
Prior art keywords
pixel
information
area
page
image
Prior art date
Application number
TW094101185A
Other languages
Chinese (zh)
Inventor
Gestel Wilhelmus Jacobus Van
Original Assignee
Koninkl Philips Electronics Nv
Priority date
Filing date
Publication date
Application filed by Koninkl Philips Electronics Nv
Publication of TW200535686A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43074 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on the same device, e.g. of EPG data or interactive icon with a TV program
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/42 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of patterns using a display memory without fixed position correspondence between the display memory contents and the display position on the screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44004 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504 Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/641 Multi-purpose receivers, e.g. for auxiliary information
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/02 Handling of images in compressed format, e.g. JPEG, MPEG
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/10 Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/12 Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125 Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/06 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour palettes, e.g. look-up tables
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising

Abstract

A decoder (50) is described for receiving and decoding an information stream which contains page composition information, region composition information, object data, and possibly colour look-up table indication information, and for generating image signals on the basis of said information, which information defines at least one page (60) in at least one plane of the contents of a display image. The decoder comprises: a colour look-up table memory (38) containing at least one colour look-up table (CLUT) having a colour look-up table ID; an object buffer (36) for storing decoded object data; a composition buffer (35) for storing page composition information, region composition information, and colour look-up table indication information; and an image controller (51) adapted to generate image signals (RGB+α) directly on the basis of the object data in the object buffer (36) and at least one colour look-up table (CLUT) in the colour look-up table memory (38), using the information in the composition buffer (35).

Description

IX. Description of the Invention

[Technical Field of the Invention]

The present invention generally relates to the field of displaying information on display screens such as monitors and televisions.

More particularly, the present invention relates to a decoder in a display apparatus, which decoder has an input for receiving encoded information relating to video, subtitles and graphics, and is designed to decode and process this information. More particularly, the present invention relates to a method of reducing the size of the buffers used in such display apparatus.

Optical discs and disc drives have been developed according to different standards or formats, for example the Compact Disc (CD) standard, the Digital Versatile Disc (DVD) standard, etc. A relatively new standard is BD (Blu-ray Disc). Specifically, the present invention relates to the field of reading pre-recorded read-only discs (BD-ROM), and will be explained below for this exemplary application; it should be noted, however, that it is not intended to limit the scope of the present invention to BD-ROM.

[Prior Art]

It is well known that a movie (or other visual image) comprises a series of successively displayed pictures. Typically, each picture corresponds to a display time of 40 ms (in other words, a movie typically contains 25 pictures per second). This display time will also be referred to as the "frame interval". A display screen comprises an array of pixels arranged in horizontal rows and vertical columns. In digital video, the information is provided in encoded form from, for example, a disc drive reading a BD-ROM disc to the display apparatus. The display apparatus receives the encoded information and, in a decoding process, calculates pixel information for an entire picture and stores it in a buffer memory. The actual process of displaying the corresponding picture involves reading the buffer, calculating successive pixel drive signals, and successively presenting the pixel drive signals to the screen, the pixels usually being written one by one along a horizontal row, and row by row from top to bottom.

Since the techniques of encoding and decoding pictures and of displaying pictures on a screen are well known per se, they need not be described in more detail here. It should be noted that an important format for encoding pictures is the MPEG format.

Encoded information may be received wirelessly, as in the case of digital broadcasting. Encoded information may also be obtained by reading a storage medium such as an optical disc. Apparatus is commercially available which allows users to make their own recordings on writable discs. Likewise, audio or video publishing companies publish pre-recorded discs; such discs are read-only discs (ROM). Play-back apparatus allowing users to play such discs is commercially available. In such play-back apparatus (hereinafter simply referred to as a player), a disc-drive assembly reads and decodes the data recorded on the disc and generates a video stream and/or an audio stream, which stream is suitable for display via a display device (such as a television set, a monitor, loudspeakers, etc.). Since the present invention does not relate to audio, audio will be ignored in the following discussion.

A movie may comprise several elements, explained as follows:

Moving pictures, i.e. the actual pictures of the movie to be shown on the television screen. In the following, these will also be referred to as the video picture or the main display image.

Subtitles. A subtitle is a graphical representation of text, which is graphically overlaid on the main display image, comparable to a picture-in-picture display. Subtitles are displayed with frame accuracy (i.e. with a timing related to the timing of the main display) and during a specific period of time. For example, subtitles are used to:
- display a translation of spoken text, to assist people who have difficulty understanding the language of the spoken text;
- display a transcription of spoken text, to assist people with hearing difficulties;
- display a transcription of spoken text for karaoke purposes;
- display annotations; etc.

Graphics pictures. Graphics pictures are likewise overlaid on the main display image, comparable to a picture-in-picture display. A distinction is made between synchronous and asynchronous graphics.

Synchronous graphics are displayed with frame accuracy during a specific period of time. Synchronous graphics may contain static graphics or dynamic graphics, or a combination of both.

Static graphics comprise one image that is displayed during a number of main-display frame intervals. For example, static graphics are used to:
- display interactive buttons with button-state indications (such as "normal", "selected", "activated", etc.).

Dynamic graphics comprise a series of mutually different images, each image being displayed in combination with one or more main display images. For example, dynamic graphics are used to:
- display director's comments;
- highlight a part of the main display image (for instance during director's comments).

Asynchronous graphics are also displayed during a specific period of time, but do not require precise timing. The same examples as in the case of synchronous graphics apply.

The composition of a picture as a combination of several elements will be explained in more detail with reference to Figure 1. For this explanation, the concept of "planes" is introduced. Figure 1 shows three pictures (images) placed one behind the other, as if they were situated in a stack, i.e. in different "parallel" planes. The bottom picture is the video picture V; its corresponding plane will be referred to as the video plane. The top picture is the graphics picture G; its corresponding plane will be referred to as the graphics plane. The picture in the middle is the subtitle picture S; its corresponding plane will be referred to as the subtitle plane. To refer collectively to these pictures, which are the constituent elements of the final picture P (the "display image"), the phrase "element images" will be used. By definition, the video plane contains the "video image", the subtitle plane contains the "subtitle image", and the graphics plane contains the "graphics image".

Each element image V, S, G has a size corresponding to the display size of the display.

In the case of high-definition television (HDTV), the size of the display is 1920*1080 pixels.

For each pixel, three colour values (RGB or YUV) and a transparency level value are needed. The transparency level indicates the blending with the image of the next lower plane. The lowest plane does not need a transparency level. At one byte per variable, 4 bytes per pixel are needed. This would result in a large memory. This is why the information in a plane is stored in encoded form.

The amount of data needed per pixel in a plane depends on the encoding method:

Video picture: 16 bits per pixel. The YUV encoding uses a [4:2:2 or 4:2:0] sampling standard.

Subtitle picture: 8 bits per pixel [a colour look-up table with 256 entries, each entry yielding 4 bytes (RGB + alpha) at the output].

Graphics picture: 8 bits per pixel [a colour look-up table with 256 entries, each entry yielding 4 bytes (RGB + alpha) at the output].

While a plane is being read out, the conversion to 4 bytes per pixel (RGB + alpha) is performed.
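The CLUT expansion just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and variable names are assumptions, and the table contents are arbitrary; only the shape of the mechanism (8-bit index in, 4 output values out) follows the text. The convention that alpha = 1 means fully transparent matches the transparency convention used elsewhere in this description.

```python
# Sketch of the colour look-up table (CLUT) expansion described above:
# each subtitle/graphics pixel is stored as an 8-bit index into a
# 256-entry table whose entries hold 4 output values (R, G, B, alpha).
# Names and table contents are illustrative assumptions, not from the patent.

def make_clut():
    """Build a toy 256-entry CLUT; entry 0 is fully transparent (alpha = 1)."""
    clut = [(0, 0, 0, 1.0)]  # index 0: colour irrelevant, fully transparent
    for i in range(1, 256):
        clut.append((i, 255 - i, (i * 7) % 256, 0.0))  # arbitrary opaque colours
    return clut

def expand_pixels(indices, clut):
    """Translate 8-bit pixel indices into (R, G, B, alpha) tuples on read-out."""
    return [clut[i] for i in indices]

clut = make_clut()
row = expand_pixels([0, 1, 255], clut)  # three encoded pixels of one row
```

Storing 8 bits per pixel instead of 4 bytes, as the text notes, is what keeps the plane memory small; the expansion to RGB + alpha happens only at read-out time.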

The final display image P is constructed by adding the corresponding video, subtitle and graphics images on a pixel-by-pixel basis, taking into account the alpha values that indicate the transparency values. More particularly, to construct the RGB values of a certain pixel of the final display image P, the RGB values of the corresponding video pixel are multiplied by a factor αs (block 1), the RGB values of the corresponding subtitle pixel are multiplied by a factor (1−αs) (block 2), and the two results are added (block 3); this sum is multiplied by a factor αg (block 4), and to it is added (block 5) the RGB values of the corresponding graphics pixel multiplied (block 6) by a factor (1−αg). Here, αs defines the transparency value of the subtitle pixel, and αg defines the transparency value of the graphics pixel. For example, if the transparency value of a certain graphics pixel is αg = 1, that graphics pixel is fully transparent (i.e. invisible) and the pixels from the lower planes are fully visible; conversely, if αg = 0, that graphics pixel is fully opaque and the pixels from the lower planes are invisible.

An (MPEG) transport stream may contain a plurality of mutually different, user-selectable subtitle services, for example in different languages, with different aspect ratios, etc. Likewise, a transport stream may contain a plurality of mutually different, user-selectable graphics services. During play-back, only one of these subtitle services, as selected by the user, is decoded and displayed. Likewise, only one of these graphics services, as selected by the user, is decoded and displayed. Since the decoder models for the subtitle plane and the graphics plane are similar, the following explanation will specifically refer to the subtitle plane only.

A subtitle service comprises a plurality of successive subtitle pages, each page having a size corresponding to the size of the subtitle plane (i.e., in the case of HDTV, 1920 horizontal pixels by 1080 vertical pixels). Each page is displayed for a specific duration, which duration typically corresponds to a plurality of video frame intervals. This period of time is referred to as a page instance; during a page instance, the contents of the subtitle plane remain the same. The transport stream contains the information of a subtitle page only once, combined with indications of the start and end times of its display.

A page composition describes the layout of a page: one or more regions, each having a region identifier number. For example, a rectangular region is indicated by specifying the coordinates of two opposite corners. During a page instance, only the pixels lying within such a region contribute to the contents of the page layout. The effect is the same as if all other pixels were taken into account with a transparency value of 1 (fully transparent), but the use of regions saves computation time.

Each region lies entirely within the page area. Two or more regions may overlap; in an overlapping area, only the contents of one of these regions (the top region) are used for display.

A region points to an object, such as a photograph, a text document, etc. Different regions may point to the same object, but each region points to exactly one object. Each object has an object identifier number. The size of an object may be smaller or larger than the page size, but the size is sufficient to completely fill the region pointing to that object.
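The three-plane, pixel-by-pixel composition described earlier in this passage can be sketched as follows. This is a hedged illustration, not the patent's circuit: the function name and the sample values are assumptions; the sketch adopts the transparency convention stated in this description (alpha = 1 means fully transparent, alpha = 0 fully opaque).

```python
# Sketch of the per-pixel composition of the video (V), subtitle (S) and
# graphics (G) planes into the final display image P. alpha_s and alpha_g
# are transparency values: 1.0 = fully transparent, 0.0 = fully opaque
# (an assumption matching the convention used in this description).

def blend(video, subtitle, alpha_s, graphics, alpha_g):
    """Return the final RGB pixel of the display image P."""
    return tuple(
        (v * alpha_s + s * (1.0 - alpha_s)) * alpha_g + g * (1.0 - alpha_g)
        for v, s, g in zip(video, subtitle, graphics)
    )

# A fully transparent graphics pixel (alpha_g = 1) leaves the lower planes
# visible; a fully opaque one (alpha_g = 0) hides them.
p_transparent = blend((100, 100, 100), (200, 200, 200), 0.5, (50, 50, 50), 1.0)
p_opaque = blend((100, 100, 100), (200, 200, 200), 0.5, (50, 50, 50), 0.0)
```

With a half-transparent subtitle and fully transparent graphics, the output is the even mix of video and subtitle; with fully opaque graphics, only the graphics pixel survives.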

The transmission of the subtitle and graphics information in an MPEG transport stream is accomplished as follows. Page composition segments (PCS) are used to describe the page layout; region composition segments are used to describe the region properties; object definition segments are used to describe the objects; and CLUT segments are used to describe the colour look-up tables (CLUT); one or more CLUTs may be used within a page composition. Each segment is encapsulated in a PES packet, which PES packet contains PTS/DTS time stamps. The PES packets, in turn, are carried in TS packets with a PID in an elementary transport stream.

The above is illustrated in Figures 2A and 2B. Figure 2A shows a subtitle page 10 having two partially overlapping regions 11 and 12, the first region 11 lying on top of the second region 12. Figure 2A further shows two objects 13 and 14. The size of the first object 13 is smaller than the size of the page 10; the vertical size of the second object 14 is larger than that of the page. The first region 11 points to a part 15 of the first object 13; the second region 12 points to a part 16 of the second object 14.

Figure 2B illustrates the subtitle image in the subtitle plane, i.e. the contents of the subtitle page. A first part 21 of this subtitle image, defined by the first region 11, contains the part 15 of the first object 13. A second part 22 of this subtitle image, defined by the second region 12, contains the part 16 of the second object 14. The remaining part 23 of this subtitle image is empty.
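The composition data model described above (a page composition referring to regions, each region pointing to exactly one object) can be sketched with simple data structures. This is a hedged illustration only: the field names are assumptions, and the real segment syntax is defined by the applicable specification, not reproduced here; the reference numerals reuse those of Figures 2A and 2B.

```python
# Hedged sketch of the composition information described above.
# Field names are illustrative assumptions, not the real segment syntax.

from dataclasses import dataclass, field

@dataclass
class RegionComposition:
    region_id: int
    object_id: int   # each region points to exactly one object
    x: int           # position of the region on the page
    y: int

@dataclass
class PageComposition:
    pts: int                                     # presentation time stamp of the page
    regions: list = field(default_factory=list)  # top region listed first

# The page of Figure 2A: region 11 (on top) points to object 13,
# region 12 points to object 14.
page = PageComposition(pts=9000)
page.regions.append(RegionComposition(region_id=11, object_id=13, x=100, y=50))
page.regions.append(RegionComposition(region_id=12, object_id=14, x=300, y=40))
```

Because a region holds only an object identifier and a position, moving a region (or its window into an object) changes a few fields of composition data rather than any pixel data.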

物件獨立地儲存於解碼器中。在頁面重建期間,解碼器 知道對於頁面中的每—像素而言,需要讀取哪—物件之哪 -像素。舉例而言,當重建第一區域u内的第一像素A %解碼益將使用來自第一物件13之相應像素A,;當重建 第二區域12内的第二像素B時’解碼器將使用來自第二物 件14之相應像素B,;當重建區域23中不在任何區域内的第 二像素C時,解碼器將設定R=G=B=〇(黑色)。 此種做法的優點在於,物件僅需被傳輸及解瑪一次。可 藉由改變區域相對於頁面之相對位置(例如:矩形u相對 於頁面1G之位置)’或藉由改變區域相對於相應物件之相 對位置(例如:矩形15相對於物件13之位置)來㈣地實現 一些種類的動晝。 圖3為時相,其圖解說明傳輸、解碼及顯㈣件之時 序。水平軸線表示時間;時間單元指示訊框間隔。動作 ”傳輸”解碼丨,及,’顯示,,係指 扣不為具有起始時間及終止時 間的水平條。 98915.doc 200535686 在時刻tl、t2及t3傳輸資料段丁心及化,其分別有關於 頁面組合(頁面組合段)、區域組合(區域組合段)及色彩查 找表(CLUT段)。頁面組合段含有關於(例如)頁面之顯示時 間戳記(PTS),區域之數目及位置,及重疊關係(階層或相 互優先權)的資料。區域組合段含有關於(例如)相應物件 (物件ID)及相對於相應物件之區域的相對位置的資料。 CLUT段含有一具有CLUT-ID之色彩查找表CLUT。頁面組 合及/或區域組合可指示哪一 CLUT應被用於相應頁面、區 域及物件。 自時刻t4至時刻t5,傳輸第一物件資料段Td,該資料段 Td含有第一物件之資料、一解碼時間戳記⑴及一顯示 日守間戳兄(PTS) ’該解碼時間戳記決定何時應開始解碼 時,以便在顯示物件之前及時對其進行解碼,且該顯示時 間戳記決定何時應開始顯示物件。一接收到所有物件資 料’便可解碼第一物件(自時刻至t8,t6由DTS決定)。一 解碼完第一物件,便可顯示第一物件(自由頁面組合段中 白勺P T S所決定白勺時刻19開始)。 自時刻t7至時刻tlO,傳輸並接收含有第二物件之資料以 及相應的DTS及PTS之弟二物件資料段Te。一接收到所有 物件資料,便可解碼第二物件(自時刻111至112, 決定)。一解碼完第二物件,便可顯示第二物件(自新頁面 瞬間之頁面組合段中的顯示時間戳記所決定之時刻tl 3開 始)。自顯示第一物件至顯示第二物件的過渡可包含擦除 及/或退色效應,以使付顯示第二物件之起始時刻113早於 98915.doc 12 200535686 顯不第一物件之終止時刻tl4。該擦除及/或退色效應 =在時刻m傳輪之資料段Tf、Tg、Th來執行,此;二 ^別有關於頁面組合及區域組合在時刻tl3、U6及tl4的變 、圖4祝明根據先前技術之解碼器3〇的設計及操作。封包 哉別付(PID)濾波器31接收資料流,其後將資料臨時儲存 於傳輸緩衝器32内以使位元速率平均。該傳輸緩衝器可相 對^小,512位元組的尺寸便已足夠。接下來,將經編碼 =料儲存於編碼資料緩衝器33内;因此,該編碼資料緩 衝器33含有如圖3中所示的字幕段。 在由該等段之DTS決定之_,將其自編碼㈣緩衝器 33移除。將頁面組合段、區域組合段及色彩查找表轉移至 組合緩衝器35 ’且將物件界定段轉移至圖形處理器34。圖 形處理器34解碼物件段,且經解碼之物件段被儲存於物件 緩衝器36中。 一頁面組合段之PTS時間戳記指示相應頁面將於何時顯 不。圖形控制器39利用組合缓衝器35内的資訊,用來自物 件緩衝器36之資料來更新字幕平面37之區域。接下來,讀 出具有與視訊平面及圖形平面相同之位址的字幕平面”, 使用色彩查找表記憶體38内之色彩查找表,將來自字暮平 面37之8位元資料轉譯為32位元(8位元之四倍)RGB+a,其 被提供為輸出信號。 此已知設計的一缺點在於:無論何時發生任何變化,例 如當區域相對於頁面移動時或當物件相對於區域移動時, 98915.doc 13 200535686 均需要更新整個念 個子幕千面37(在hd視訊情況下包括 1920*1080個像素)。更新 义 而在子幕平面37可被讀出之 刖全部元成;另一方面,更新 祈過転不月匕在讀出字幕平面37 的同時開始。此咅喟兩, .心明而在與垂直消隱週期(blanking ―)相對於之非常短的時間間隔内執行更新過程。因 而,物件緩衝器,與字幕平面37之間的位元速率非常高。 若此位元速率變得過高,則字幕平面π中甚至可能需要雙 緩衝,從而要求兩倍的記憶體空間。 另外、、、二解碼之物件儲存於物件緩衝器%中且亦儲存於 字幕平面37中的事實構成一缺點,因為其要求相對多的記 憶體空間。 本發明之重要目標係克服或至少減少該等缺點中至少一 者。 【發明内容】 
根據本發明之一重要態樣,填充字幕平面37之步驟被避 免基於頁面組合資料、區域組合資料及物件參數,影像 控制器直接使用物件緩衝器中的資料。因此,可省略字幕 平面以節省記憶體空間。若像素在區域之外,則無需定址 、件爰衝器内之相應像素,且可將其透明度設定為炉1, 以使其值不重要。特定言之,無需刷新任何位於區域之外 勺《己隐體位置’在提供相應像素輸出信號時將相應透明度 值α没定為U完全透明)便已足夠。 【實施方式】 圖5為况明根據本發明之解碼器5〇之設計及操作的圖。 989l5.doc -14- 200535686 在比較圖5之解派$《Λ t 攄本㈣ 與圖4之已知解碼器30時,可看出根 月之解碼器50不包含任何字幕平面。元件31_36可Objects are stored separately in the decoder. During page reconstruction, the decoder knows which object to read for every pixel in the page. For example, when reconstructing the first pixel A% decoding benefit in the first region u, the corresponding pixel A from the first object 13 will be used; when reconstructing the second pixel B in the second region 12, the decoder will use The corresponding pixel B from the second object 14; when the second pixel C in the area 23 that is not in any area is reconstructed, the decoder will set R = G = B = 0 (black). The advantage of this approach is that the objects need to be transmitted and demarcated only once. You can change the relative position of the area relative to the page (for example: the position of the rectangle u with respect to the page 1G) 'or change the relative position of the area with respect to the corresponding object (for example: the position of the rectangle 15 relative to the object 13). To achieve some kind of moving day. Fig. 3 is a phase diagram illustrating the timing of transmission, decoding and display. The horizontal axis represents time; the time unit indicates the frame interval. The action "Transfer" decoding, and, 'show,' means that the buckle is not a horizontal bar with a start time and an end time. 98915.doc 200535686 At time t1, t2, and t3, the data segment Ding Xinhehua is transmitted, which is about page combination (page combination segment), area combination (region combination segment), and color lookup table (CLUT segment). 
The page group segment contains information about, for example, the page's display time stamp (PTS), the number and location of regions, and overlapping relationships (hierarchical or mutual priority). The area combination segment contains information about, for example, the corresponding object (object ID) and the relative position of the area relative to the corresponding object. The CLUT segment contains a color lookup table CLUT with a CLUT-ID. Page combinations and / or area combinations can indicate which CLUT should be used for the corresponding page, area, and object. From time t4 to time t5, the first object data segment Td is transmitted. The data segment Td contains the data of the first object, a decoding time stamp, and a display daytime stamp (PTS). When decoding is started so that the object is decoded in time before it is displayed, and the display time stamp determines when the object should start to be displayed. As soon as all object data is received, the first object can be decoded (from time to t8, t6 is determined by DTS). As soon as the first object is decoded, the first object can be displayed (time 19 determined by P T S in the free page combination segment). From time t7 to time t10, data including the second object and the corresponding second object data segment Te of DTS and PTS are transmitted and received. Once all the object data is received, the second object can be decoded (from time 111 to 112, decision). As soon as the second object is decoded, the second object can be displayed (starting from the time tl 3 determined by the display timestamp in the page combination segment of the new page moment). The transition from displaying the first object to displaying the second object may include erasing and / or fading effects, so that the starting time 113 of displaying the second object is earlier than 98915.doc 12 200535686 showing the ending time of the first object t14 . 
The erase and/or fade effects are effected by means of data segments Tf, Tg and Th, which contain new page composition and region composition data and take effect at times t13, t16 and t14.

Fig. 4 illustrates the design and operation of a decoder 30 according to the prior art. A packet identifier (PID) filter 31 receives the data stream, after which the data are temporarily stored in a transport buffer 32 in order to average out the bit rate. The transport buffer can be relatively small; a size of 512 bytes is sufficient. Next, the coded data are stored in a coded data buffer 33; the coded data buffer 33 thus contains the subtitle segments discussed above. At moments determined by the DTS of these segments, they are removed from the coded data buffer 33. The page composition segments, region composition segments and colour look-up tables are transferred to a composition buffer 35, while the object definition segments are transferred to a graphics processor 34. The graphics processor 34 decodes the object segments, and the decoded objects are stored in an object buffer 36.

The PTS of a page composition segment indicates when the corresponding page is to be displayed. A graphics controller 39 uses the information in the composition buffer 35 to update the regions of a subtitle plane 37 with information from the object buffer 36. Next, the subtitle plane is read out at the same addresses as the video plane and the graphics plane, and a colour look-up table in a colour look-up table memory 38 is used to translate the 8-bit data from the subtitle plane 37 into 32 bits (four times 8 bits) of RGB + α, which are provided as an output signal.

One disadvantage of this known design is that, whenever there is any change, for example when a region moves relative to the page or an object moves relative to a region, the entire subtitle plane 37 (comprising 1920 × 1080 pixels in the case of HD video) needs to be updated.
The update of the subtitle plane 37 must be completed before the plane is read out; on the other hand, the update can only be initiated once read-out of the subtitle plane 37 has finished. The update therefore has to be performed in a very short time interval, essentially the vertical blanking period. Consequently, the bit rate between the object buffer and the subtitle plane 37 is very high. If this bit rate becomes too high, double buffering of the subtitle plane 37 may even be required, doubling the required memory space. A further disadvantage is the fact that the decoded objects are stored both in the object buffer 36 and in the subtitle plane 37, which requires a relatively large amount of memory space.

An important object of the present invention is to overcome, or at least reduce, at least one of these disadvantages.

[Summary of the Invention]

According to an important aspect of the present invention, the step of filling the subtitle plane 37 is avoided: on the basis of the page composition data, the region composition data and the object parameters, the image controller uses the data in the object buffer directly. The subtitle plane can therefore be omitted, saving memory space. If a pixel lies outside every region, the corresponding pixel in the object buffer need not be addressed at all, and its transparency can be set to α = 1, so that its colour value is irrelevant. In particular, no memory locations outside the regions need to be refreshed; it suffices to set the corresponding transparency value α to 1 (fully transparent) when providing the corresponding pixel output signal.

[Embodiment]

Fig. 5 illustrates the design and operation of a decoder 50 according to the present invention. Comparing the decoder 50 of Fig. 5 with the known decoder 30 of Fig. 4, it can be seen that the decoder 50 according to the present invention does not contain a subtitle plane. Elements 31-36 may be identical to the corresponding elements described above and need not be described again. An image controller 51 stores the page composition data, the region composition data and the object parameters. When computing an output pixel, the image controller 51 uses the information known to it to fetch the correct pixel(s) from the correct position(s) of the correct object(s); this information includes the corresponding transparency value. Pixels outside the regions are made fully transparent (α = 1).
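The scale of the saving can be checked with quick arithmetic. The 8 bits per pixel follow from the CLUT-indexed plane of Fig. 4; the millisecond-order blanking window used below is an assumption of this sketch, not a figure from the text.

```python
# One full-HD subtitle plane at 8 bits (1 byte) per pixel, as in Fig. 4.
plane_bytes = 1920 * 1080
assert plane_bytes == 2_073_600   # about 2 MB, doubled again if double-buffered

# Rewriting that whole plane within an assumed ~1 ms vertical-blanking
# window would require a burst bit rate of more than 16 Gbit/s between
# the object buffer and the subtitle plane.
blanking_s = 1e-3                 # assumed blanking budget (not from the text)
refresh_bit_rate = plane_bytes * 8 / blanking_s
assert refresh_bit_rate > 16e9
```

Both figures disappear when the plane itself is omitted, since the image controller reads the (much smaller) object buffer only for pixels that actually lie inside a region.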

This avoids the need to provide an image memory and the need to refresh such an image memory at a very high bit rate.

The operation of the decoder according to the present invention will now be explained in more detail with reference to Figs. 6A and 6B. Pixel addresses relating to the page are defined in a rectangular coordinate system [x, y], the x coordinate corresponding to the horizontal position and the y coordinate to the vertical position. The page has a size of 1920 × 1080 pixels. The coordinates of the top-left corner are [0, 0] and of the top-right corner [1919, 0]; the coordinates of the bottom-left corner are [0, 1079] and of the bottom-right corner [1919, 1079].

Assume that the page composition data of a certain page 60 define two regions 61 and 62. The top-left corner of the first region 61, which has horizontal width w1 and vertical height h1, is located at [x1, y1] in the page. The top-left corner of the second region 62, which has horizontal width w2 and vertical height h2, is located at [x2, y2] in the page. The second region 62 overlaps the first region 61.

Assume further that two objects 71 and 72 are stored in the object buffer 36. The first object 71 has horizontal width Wobj1 and vertical height Hobj1; the second object 72 has horizontal width Wobj2 and vertical height Hobj2.

The first region 61 points to the first object 71, with Wobj1 ≥ w1 and Hobj1 ≥ h1. The second region 62 points to the second object 72, with Wobj2 ≥ w2 and Hobj2 ≥ h2. Pixel addresses relating to an object are defined in a rectangular coordinate system [p, q], the p coordinate corresponding to the horizontal position and the q coordinate to the vertical position; the address [p = 0, q = 0] corresponds to the top-left corner of the object. Assume that the top-left corner of the first region 61 points at address [p1, q1] (that is, an address of the first object 71), and that the top-left corner of the second region 62 points at address [p2, q2] (that is, an address of the second object 72). Since the objects 71 and 72 must completely cover the regions 61 and 62 that point to them, the following restrictions apply:

Wobj1 ≥ p1 + w1
Hobj1 ≥ q1 + h1
Wobj2 ≥ p2 + w2
Hobj2 ≥ q2 + h2

Fig. 7 is a flow chart illustrating the operation of the image controller 51 when performing the method 100 of generating image signals. Display starts with selecting the pixel in the top-left corner [step 101].

For this pixel, it is first determined whether the pixel lies within the "upper" region, that is, the second region 62 [step 111]; if not, it is determined whether the pixel lies within the "lower" region, that is, the first region 61 [step 121]. If the pixel lies within neither region, RGB signals with arbitrary values (for example R = G = B = 0) are generated [step 131], and the transparency level α is set to 1 [step 132], indicating that this pixel is fully transparent. In this case, therefore, the steps of consulting the object buffer 36 and of using the colour look-up table from the colour look-up table memory 38 to translate the pixel value into RGB signals can be omitted.

If, in step 111, the pixel is found to lie within the second region 62, the corresponding pixel address within the object to which the second region 62 points (that is, the second object 72) is calculated [step 112], the contents of this address are read from the object buffer 36 [step 113], and these contents are translated into RGB signals [step 114] using the colour look-up table from the colour look-up table memory 38 that has the CLUT ID indicated by the page composition and/or region composition data. In addition, the transparency level α is set to the transparency level defined for the second region 62 [step 115].

If, in step 121, the pixel is found to lie within the first region 61, the corresponding pixel address within the object to which the first region 61 points (that is, the first object 71) is calculated [step 122], the contents of this address are read from the object buffer 36 [step 123], and these contents are translated into RGB signals [step 124] using the colour look-up table from the colour look-up table memory 38 that has the CLUT ID indicated by the page composition and/or region composition data. In addition, the transparency level α is set to the transparency level defined for the first region 61 [step 125].
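The per-pixel decision just described (steps 111 to 132) can be put in executable form as a sketch. The dictionary layout for a region (page position, object address, transparency level and CLUT ID) is assumed for illustration and is not a structure defined by the patent.

```python
def subtitle_pixel(x, y, lower, upper, objects, cluts):
    """Return (R, G, B, alpha) for page pixel (x, y) of the subtitle layer.

    `upper` and `lower` describe regions 62 and 61: each maps a page
    rectangle of size (w, h) at page position (x, y) onto its object
    starting at object address (p, q), and names an object, a CLUT ID
    and a transparency level alpha.
    """
    for region in (upper, lower):                # step 111, then step 121
        if region and region["x"] <= x < region["x"] + region["w"] \
                  and region["y"] <= y < region["y"] + region["h"]:
            p = region["p"] + (x - region["x"])            # steps 112/122
            q = region["q"] + (y - region["y"])
            index = objects[region["object"]][q][p]        # steps 113/123
            r, g, b = cluts[region["clut_id"]][index]      # steps 114/124
            return (r, g, b, region["alpha"])              # steps 115/125
    return (0, 0, 0, 1.0)   # steps 131/132: arbitrary RGB, fully transparent
```

Because a pixel outside both regions returns immediately with α = 1, neither the object buffer nor a CLUT is consulted for it, which is exactly the saving described above.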
In step 140, the pixel signals RGB and α are output; this includes combining them with the pixels from the graphics plane and the video plane, which is not described here.

The above procedure is repeated for all pixels of a horizontal row: the x coordinate of the pixel is incremented [step 151] until the end of the row is reached [step 152]. The procedure is likewise repeated for all rows of the image: the y coordinate of the row is incremented [step 161] until the end of the page is reached [step 162]. At the end of the page, the procedure starts again from the beginning to generate the next image frame.

It should be noted that the above description of the operation of the image controller 51 remains valid when any of the page composition data, the region composition data or the object parameters is changed. Suppose, for example, that the second region 62 is shifted one pixel to the left. In a decoder according to the prior art, the entire subtitle plane 37 would have to be refreshed completely in the transition period between displaying pixel [1919, 1079] and displaying pixel [0, 0]. In the decoder according to the present invention, for pixels outside the second region 62, whether in its old or in its new position, the above procedure yields exactly the same results as before; the same conclusion applies to pixels inside the second region 62 in both its old and its new position. Only for pixels that are now inside the second region 62 but were previously outside it, or that are now outside the second region 62 but were previously inside it, does the above procedure yield different results, namely pixel data read from different positions of the object buffer 36 and/or a different value of the transparency level α.

It should be clear to those skilled in the art that the present invention is not limited to the exemplary embodiments discussed above.
Several changes and modifications may be made within the scope of the present invention as defined in the appended claims. For example, the composition buffer 35 and the image controller 51 may be integrated into a single unit.

Furthermore, the present invention has been explained above for the case in which an image contains three planes overlaid on one another, but the invention is equally applicable to the case in which an image contains more than three overlaid planes, and even to the case in which an image contains only one or two planes.

In the above, the present invention has been explained with reference to block diagrams illustrating functional blocks of the device according to the invention. It should be understood that one or more of these functional blocks may be implemented in hardware, the function of such a block being performed by individual hardware components, but it is equally possible to implement one or more of these functional blocks in software, so that the function of such a block is performed by one or more lines of a computer program or by a programmable device such as a microprocessor or microcontroller.

[Brief Description of the Drawings]

Fig. 1 is a block diagram illustrating the structure of a picture as a combination of several elements;
Figs. 2A and 2B are diagrams illustrating the construction of a picture using pages, regions and objects;
Fig. 3 is a timing diagram illustrating the timing of transmitting, decoding and displaying objects;

Fig. 4 is a diagram illustrating the design and operation of a decoder according to the prior art;
Fig. 5 is a diagram illustrating the design and operation of a decoder according to the present invention;
Figs. 6A and 6B are diagrams explaining page pixel addresses in relation to addresses within an object, according to the present invention;
Fig. 7 is a flow chart illustrating the operation of the image controller when performing a method of generating image signals according to the present invention.

[Description of Reference Numerals]

10 subtitle page
11 first region
12 second region
13 first object
14 second object
15 part of the first object
16 part of the second object
21 first part of the subtitle image
22 second part of the subtitle image

23 remainder of the subtitle image
30 decoder according to the prior art
31 packet identifier (PID) filter
32 transport buffer
33 coded data buffer
34 graphics processor
35 composition buffer
36 object buffer
37 subtitle plane
38 colour look-up table memory
39 graphics controller
50 decoder according to the present invention
51 image controller
60 page
61 first region
62 second region
71 first object
72 second object
100 method

Claims (1)

Scope of the patent application:

1. A decoder (50) for receiving and decoding an information stream containing page composition information, region composition information, object data and possibly colour look-up table indication information, and for generating image signals on the basis of that information, the information defining at least one page (60) in at least one plane of a display image; the decoder comprising:
- a colour look-up table memory (38) containing at least one colour look-up table (CLUT) with a colour look-up table ID;
- an object buffer (36) for storing decoded object data;
- a composition buffer (35) for storing the page composition information, the region composition information and the colour look-up table indication information;
- an image controller (51) adapted to generate image signals (RGB + α) directly, using the information in the composition buffer (35), on the basis of the object data in the object buffer (36) and at least one colour look-up table (CLUT) in the colour look-up table memory (38).

2. A decoder as claimed in claim 1, wherein the image controller (51), when generating image signals (RGB + α) for a certain pixel (x, y) in a certain plane of the display image, is adapted to:
(a) determine whether the pixel (x, y) lies within a visible part of a region (61; 62) (steps 111, 121);
(b) in case the pixel (x, y) does not lie within a visible part of any region (61; 62), generate output pixel signals (RGB + α) for this pixel with α = 1 (fully transparent) (step 132).

3. A decoder as claimed in claim 2, wherein, in case the pixel (x, y) does not lie within a visible part of any region (61; 62), the image controller (51) is adapted to generate output pixel signals (RGB + α) for this pixel with R = G = B equal to an arbitrary constant within the permissible range, preferably equal to zero (step 131).

4. A decoder as claimed in claim 1, wherein the image controller (51), when generating image signals (RGB + α) for a certain pixel (x, y) in a certain plane of the display image, is adapted to:
(c) determine whether the pixel (x, y) lies within a visible part of a region (62) (step 111);
(d) in case the pixel (x, y) lies within a visible part of a region (62), read object data from the object buffer (36);
(e) generate output pixel signals (RGB + α) for this pixel on the basis of the object data read from the object buffer (36).

5. A decoder as claimed in claim 4, wherein the image controller (51) is adapted to read the object data from the object buffer (36) at a corresponding address of the object (72) to which the region (62) containing the pixel (x, y) points, taking into account the position (x2, y2) of the region (62) relative to the page (60) and the position (p2, q2) of the region (62) relative to the object (72).

6. A decoder as claimed in claim 5, wherein the image controller (51) is adapted to read the object data from the object buffer (36) at the object (72) address (p = p2 + (x − x2), q = q2 + (y − y2)).
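As a numerical illustration of the address mapping recited in claims 5 and 6 (the concrete values below are invented for the example, not taken from the patent):

```python
def object_address(x, y, x2, y2, p2, q2):
    """Map page pixel (x, y), lying inside a region whose top-left page
    corner is at (x2, y2) and whose top-left points at object address
    (p2, q2), to the object-buffer address (p, q) of claim 6."""
    return (p2 + (x - x2), q2 + (y - y2))
```

For a region at page position (x2, y2) = (100, 200) pointing at object address (p2, q2) = (10, 20), page pixel (105, 210) reads the object buffer at address (15, 30), and the region's top-left pixel reads at (10, 20) itself.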
TW094101185A 2004-01-19 2005-01-14 Decoder for BD-ROM, and method for generating image signals TW200535686A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP04100142 2004-01-19

Publications (1)

Publication Number Publication Date
TW200535686A true TW200535686A (en) 2005-11-01

Family

ID=34802650

Family Applications (1)

Application Number Title Priority Date Filing Date
TW094101185A TW200535686A (en) 2004-01-19 2005-01-14 Decoder for BD-ROM, and method for generating image signals

Country Status (2)

Country Link
TW (1) TW200535686A (en)
WO (1) WO2005071660A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070052642A (en) * 2005-11-17 2007-05-22 엘지전자 주식회사 Method and apparatus for reproducing data and method for transmitting data
CN113010698B (en) * 2020-11-18 2023-03-10 北京字跳网络技术有限公司 Multimedia interaction method, information interaction method, device, equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745095A (en) * 1995-12-13 1998-04-28 Microsoft Corporation Compositing digital information on a display screen based on screen descriptor
KR20010034920A (en) * 1998-06-26 2001-04-25 매클린토크 샤운 엘 Terminal for composing and presenting mpeg-4 video programs
US6573905B1 (en) * 1999-11-09 2003-06-03 Broadcom Corporation Video and graphics system with parallel processing of graphics windows
US6518985B2 (en) * 1999-03-31 2003-02-11 Sony Corporation Display unit architecture
US6888577B2 (en) * 2000-01-24 2005-05-03 Matsushita Electric Industrial Co., Ltd. Image compositing device, recording medium, and program

Also Published As

Publication number Publication date
WO2005071660A1 (en) 2005-08-04
