TWI320925B - Apparatus for reproducing data from a storage medium storing image data and text-based subtitle data - Google Patents

Apparatus for reproducing data from a storage medium storing image data and text-based subtitle data

Info

Publication number
TWI320925B
TWI320925B (application TW094105743A)
Authority
TW
Taiwan
Prior art keywords
information
text
style
data
dialog
Prior art date
Application number
TW094105743A
Other languages
Chinese (zh)
Other versions
TW200529202A (en)
Inventor
Kil-Soo Jung
Sung-Wook Park
Kwang-Min Kim
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of TW200529202A
Application granted
Publication of TWI320925B


Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00: Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10: Digital recording or reproducing
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102: Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105: Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00: Record carriers by type
    • G11B2220/20: Disc-shaped record carriers
    • G11B2220/25: Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537: Optical discs
    • G11B2220/2541: Blu-ray discs; Blue laser DVR discs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • H04N5/84: Television signal recording using optical recording
    • H04N5/85: Television signal recording using optical recording on discs or drums
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804: Transformation of the television signal for recording, involving pulse code modulation of the colour picture signal components
    • H04N9/8042: Transformation of the television signal for recording, involving pulse code modulation of the colour picture signal components, involving data reduction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804: Transformation of the television signal for recording, involving pulse code modulation of the colour picture signal components
    • H04N9/806: Transformation of the television signal for recording, involving pulse code modulation of the colour picture signal components, with processing of the sound signal
    • H04N9/8063: Transformation of the television signal for recording, involving pulse code modulation of the colour picture signal components, with processing of the sound signal, using time division multiplex of the PCM audio and PCM video signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8227: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal, the additional signal being at least another television signal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8233: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, involving the multiplexing of an additional signal and the colour video signal, the additional signal being a character code signal

Abstract

A storage medium storing a multimedia image stream and a text-based subtitle stream, and a reproducing apparatus and reproducing method therefor, are provided, in which the text-based subtitle stream is recorded and reproduced separately from the multimedia image stream so that the subtitle data can be produced and edited easily and a caption can be provided in a plurality of languages. The storage medium stores: image data; and text-based subtitle data for displaying a caption on an image based on the image data, wherein the subtitle data includes one style information item specifying an output style of the caption and a plurality of presentation information items that are display units of the caption, and the subtitle data is recorded separately from the image data. Accordingly, a caption can be provided in a plurality of languages and can be produced and edited easily, and the output style of the caption data can be changed in a variety of ways. In addition, part of a caption can be emphasized, or a separate style that the user can change can be applied.
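The arrangement the abstract describes, one style information item followed by a plurality of presentation information items and kept apart from the image data, can be sketched as a small data model. This is an illustrative sketch only; the class and field names below are invented for clarity and are not the patent's actual binary format.

```python
from dataclasses import dataclass, field

@dataclass
class StyleInfo:
    # One output-style item, predefined by the producer of the disc.
    region_style_id: int
    font_size: int
    color: str

@dataclass
class PresentationInfo:
    # One display unit of the caption: its text plus output timing.
    text: str
    start_pts: int
    end_pts: int

@dataclass
class TextSubtitleStream:
    # The single style item governs every presentation item after it.
    style: StyleInfo
    presentations: list = field(default_factory=list)

stream = TextSubtitleStream(
    style=StyleInfo(region_style_id=0, font_size=32, color="white"),
    presentations=[
        PresentationInfo("Hello.", start_pts=0, end_pts=90_000),
        PresentationInfo("Goodbye.", start_pts=90_000, end_pts=180_000),
    ],
)
print(len(stream.presentations))
```

Because the subtitle stream sits outside the AV stream, adding another caption language means adding another such stream rather than remultiplexing the video.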

Description

16252pif.doc

IX. Description of the Invention

[Technical Field]

The present invention relates to the reproduction of multimedia images, and more particularly, to a storage medium on which a multimedia image stream and a text-based subtitle stream are recorded, and to a reproducing apparatus and a reproducing method for reproducing the text-based subtitle stream recorded on the storage medium.

[Prior Art]

To provide high-density multimedia images, a video stream, an audio stream, a presentation graphics stream providing subtitles, and an interactive graphics stream providing buttons and menus for interaction with the user are multiplexed into one main stream, known as an audio-visual ("AV") data stream, and recorded on a storage medium. In particular, the presentation graphics stream providing subtitles supplies bitmap images in order to display subtitles or captions on the image.

Besides its large size, bitmap caption data causes problems in producing subtitle or caption data, and editing caption data that has already been produced is just as difficult, because the caption data is multiplexed with the other streams, such as the video, audio, and interactive graphics streams. A further problem is that the output style of the caption data cannot be changed in a variety of ways, that is, one output style of a caption cannot be changed into another output style.

[Summary of the Invention]

The present invention provides a storage medium on which a text-based subtitle stream is recorded, and an apparatus and a method for reproducing the text-based subtitle data recorded on such a storage medium.

According to an aspect of the present invention, an apparatus for reproducing data from a storage medium storing image data and text-based subtitle data for displaying a caption on an image based on the image data includes: a video decoder which decodes the image data; and a subtitle decoder which converts presentation information into a bitmap image according to style information and synchronizes the output of the presentation information with the decoded image data, wherein the subtitle data includes the presentation information, which is a display unit of the caption, and the style information, which specifies the output style of the caption.

The subtitle decoder decodes the text-based subtitle data, which is recorded separately from the image data, and outputs the subtitle data overlaid on the decoded image data.

The style information and the presentation information are parsed and processed in units of packetized elementary streams (PESs). The style information forms one PES and is recorded ahead of the presentation information; the plurality of presentation information items are recorded in units of PESs after the style information, and the subtitle decoder applies the one style information item to the plurality of presentation information items.

The presentation information includes text information indicating the contents of the caption and composition information controlling the output of the bitmap image obtained by converting the text information, and the subtitle decoder controls the output time of the caption text on the screen by referring to the composition information.

The composition information specifies one or more window regions, each being an area of the screen in which the caption is output, and the subtitle decoder can output the converted text information in the one or more windows at the same time.

In the composition information, the output start time and the output end time of the presentation information are defined as time information on the global time axis used in the playlist, which is a reproduction unit of the image data, and the subtitle decoder synchronizes the output of the converted text information with the decoded image data by referring to the output start time and the output end time.

If the output end time of the currently reproduced presentation information item is the same as the output start time of the next presentation information item, the subtitle decoder reproduces the two presentation information items continuously; during such continuous reproduction, the subtitle decoder preserves its buffer between the output times instead of resetting it.

The style information is a set of output styles predefined by the producer of the storage medium and applied to the presentation information, and the subtitle decoder converts the presentation information items that follow the style information into bitmap images according to the style information.

The text information in the presentation information may include inline style information: the subtitle decoder applies predefined absolute values of font information to the corresponding part of the text as inline style information, overriding the part to which the region style information was applied, so as to emphasize that part of the caption.

The style information may further include user-changeable style information. After receiving, from the user, selection information choosing one of the user-changeable style information items, the subtitle decoder applies the style information predefined by the producer, then applies the inline style information, and finally applies the user-changeable style information item corresponding to the selection information to the text. As user-changeable style information, the subtitle decoder applies relative values of the font information predefined in the producer-defined style information items to the text.

If, in addition to the style information predefined by the producer, the storage medium permits the use of style information predefined in the reproducing apparatus, the subtitle decoder may apply that predefined style information to the text.

The style information also includes a set of color palettes to be applied to the presentation information, and the subtitle decoder converts all presentation information items that follow the style information into bitmap images using the colors defined in the color palettes. The presentation information may further include its own set of color palettes and a color update flag, separate from the set of color palettes included in the style information. If the color update flag is set to "1", the subtitle decoder applies the set of color palettes included in the presentation information; if the color update flag is set to "0", the subtitle decoder applies the original set of color palettes included in the style information.

By setting the color update flag to "1" and gradually changing the transparency values of the color palettes included in consecutive presentation information items, the subtitle decoder implements a fade-in/fade-out effect. When the fade-in/fade-out effect is complete, the subtitle decoder resets the color look-up table (CLUT) according to the original set of color palettes included in the style information.

In addition, the style information includes region information indicating the position of the window region in which the converted presentation information is to be output on the image, together with the font information needed to convert the presentation information into a bitmap image, and the subtitle decoder converts the presentation information into a bitmap image according to the region information and the font information. The font information includes at least one of the output start position, size, line spacing, font identifier, font style, and color of the converted presentation information.

The subtitle decoder refers to a font file indicated by the font identifier, the information on the font file being included in the attribute information of the recording unit in which the image data is stored.
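The three-stage style application described above, producer-defined region style first, then inline style over the emphasized span, then the user-selected changeable style applied last as relative adjustments, can be sketched as successive merges. All field names and values here are invented for illustration.

```python
def apply_styles(region_style, inline_style=None, user_style=None):
    """Compose the effective style in the prescribed order:
    region style -> inline style (absolute overrides) ->
    user-changeable style (relative numeric adjustments, last)."""
    effective = dict(region_style)          # 1) base region style
    if inline_style:                        # 2) inline absolute overrides
        effective.update(inline_style)
    if user_style:                          # 3) user deltas applied last
        for key, delta in user_style.items():
            effective[key] = effective.get(key, 0) + delta
    return effective

region = {"font_size": 32, "line_space": 40, "color": "white"}
inline = {"color": "yellow"}                 # emphasize part of the caption
user = {"font_size": 8, "line_space": -4}    # relative increase/decrease

print(apply_styles(region, inline, user))
# {'font_size': 40, 'line_space': 36, 'color': 'yellow'}
```

Applying the user deltas last matches the text: the user's choice adjusts, but never replaces, the producer's predefined styling.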

The reproducing apparatus may further include a buffer in which the subtitle decoder buffers the subtitle data and the font files. When subtitle data items in several languages are buffered, the apparatus receives, from the user, selection information for the desired language and reproduces, among the subtitle data items, the one corresponding to the selection information.

Another aspect of the present invention provides a method of reproducing data from a storage medium storing image data and text-based subtitle data, the method including decoding the image data, converting the presentation information into a bitmap image according to the style information, and synchronizing the output of the presentation information with the decoded image data, wherein the subtitle data includes the presentation information, which is a display unit of the caption, and the style information, which specifies the output style of the caption.

Still another aspect of the present invention provides a storage medium storing: image data; and text-based subtitle data for displaying a caption on an image based on the image data, wherein the subtitle data includes one style information item specifying the output style of the caption and a plurality of presentation information items that are display units of the caption, and the subtitle data is recorded separately from the image data.

Other objects and advantages of the invention will become apparent from the following detailed description of embodiments of the present invention.

[Embodiments]

The above and other objects, features, and advantages of the present invention will become more apparent from the following description of exemplary embodiments with reference to the attached drawings.

Referring to FIG. 1, a storage medium (for example, the medium 230) according to an exemplary embodiment of the present invention is constructed in multiple layers to manage the data structure 100 of the multimedia image streams recorded on it. The multimedia data structure 100 includes clips 110, playlists 120, movie objects 130, and an index table 140, where a clip 110 is a recording unit of a multimedia image, a playlist 120 is a reproduction unit of a multimedia image, a movie object 130 contains navigation commands for reproducing multimedia images, and the index table 140 is used to designate the movie object to be reproduced first and the titles of the movie objects 130.

A clip 110 is implemented as one object comprising a clip AV stream 112, which is an audio-visual (AV) data stream for a high picture-quality movie, and clip information 114 corresponding to the AV data stream. The AV data stream may be compressed according to a standard such as that of the Moving Picture Experts Group (MPEG); however, the present invention does not require that the AV data stream 112 be compressed. The clip information 114 includes the audio/video attributes of the AV data stream 112 and an entry-point map, in which information on the positions of randomly accessible entry points is recorded in units of predefined sectors.

A playlist 120 is a set of reproduction intervals of these clips 110, and each reproduction interval is referred to as a playitem 122.
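The entry-point map kept in the clip information can be illustrated with a nearest-preceding-entry lookup: to start reproduction at an arbitrary time, the player jumps to the last randomly accessible entry point at or before that time. The (presentation time, sector) pairs below are invented; the text only states that entry-point positions are recorded in units of predefined sectors.

```python
import bisect

# (presentation_time, sector) pairs, sorted by time -- hypothetical values.
entry_point_map = [(0, 0), (45_000, 120), (90_000, 260), (135_000, 410)]

def seek_sector(target_pts):
    """Return the sector of the last entry point at or before target_pts."""
    times = [pts for pts, _ in entry_point_map]
    i = bisect.bisect_right(times, target_pts) - 1
    if i < 0:
        raise ValueError("target precedes the first entry point")
    return entry_point_map[i][1]

print(seek_sector(100_000))  # between 90_000 and 135_000 -> sector 260
```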
A movie object 130 is formed of navigation programs, which start the reproduction of a playlist according to the user's demand, move between movie objects 130, or manage the reproduction of playlists. The index table 140, a table at the topmost layer of the storage medium, defines the start positions of all titles and menus so that a title or menu can be reproduced through a user operation such as a title search or a menu call; it also includes the start position information of the title or menu that is automatically reproduced first when the storage medium is loaded.

Among these items, the compression-coded AV data stream 210 and the text-based subtitle stream 220 will now be described in detail. Referring to FIG. 2, to solve the problems of bitmap caption data described above, a text-based subtitle stream 220 according to an embodiment of the present invention is provided separately from the clip AV data stream 210 recorded on the storage medium 230, for example, a digital versatile disc (DVD). The AV data stream 210 includes a video stream 202, an audio stream 204, a presentation graphics stream 206 for providing subtitle data, and an interactive graphics stream 208 for providing buttons and menus for interaction with the user; these streams are multiplexed into a moving-picture main stream, known as an audio-visual (AV) data stream, and recorded on the storage medium 230.

The text-based subtitle data 220 according to an embodiment of the present invention represents the data for providing the subtitles and captions of a multimedia image to be recorded on the storage medium 230, and can be implemented using a markup language, for example, the Extensible Markup Language (XML); however, the subtitles and captions of a multimedia image may also be provided using binary data. Hereafter, text-based subtitle data 220 providing the subtitles and captions of a multimedia image using binary data is referred to simply as a "text-based subtitle stream".
The presentation graphics stream 206 for providing subtitle data also provides bitmap subtitle data to display subtitles (or captions) on the screen. Since the text-based subtitle stream 220 is recorded separately from the AV data stream 210 and is not multiplexed with it, the size of the text-based subtitle stream 220 is not limited accordingly. Therefore, subtitles and captions can be provided in several languages, and moreover, the text-based subtitle stream 220 can be reproduced continuously and edited effectively without any difficulty.

The text-based subtitle stream 220 is subsequently converted into a bitmap graphic image and output on the screen overlaid on the multimedia image. The process of converting text-based subtitle data into a graphic bitmap image in this way is called rendering, and the text-based subtitle stream 220 includes the information required for rendering the caption text.

The text-based subtitle stream 220 including rendering information will now be described in detail with reference to FIG. 3, which is a diagram illustrating the data structure of the text-based subtitle stream 220 according to an embodiment of the present invention. Referring to FIG. 3, the text-based subtitle stream 220 includes a dialog style unit (DSU) 310 and a plurality of dialog presentation units (DPUs) 320 to 340. The DSU 310 and the DPUs 320 to 340 may also be called dialog units. Each of the dialog units 310 to 340 forming the text-based subtitle stream 220 is stored in the form of packetized elementary streams (PESs), or simply PES packets, 350. Likewise, the PESs of the text-based subtitle stream 220 are recorded and transmitted in units of transport packets (TPs) 362, and a sequence of consecutive TPs is called a transport stream (TS). However, as shown in FIG. 2, the text-based subtitle stream 220 according to an embodiment of the present invention is not multiplexed with the AV data stream 210 and is recorded on the storage medium 230 as a separate TS.

Referring to FIG. 3, one dialog unit is recorded in one PES packet 350 included in the text-based subtitle stream 220. The text-based subtitle stream 220 includes one DSU 310 arranged at the front and a plurality of DPUs 320 to 340 following the DSU 310. The DSU 310 includes information specifying the output style of the dialog in the caption displayed on the screen on which the multimedia image is reproduced, while the DPUs 320 to 340 include text information items on the contents of the dialog to be displayed and information on the respective output times.

FIG. 4 is a diagram showing a text-based subtitle stream 220 having the data structure of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 4, the text-based subtitle stream 220 includes one DSU 410 and a plurality of DPUs 420. In this exemplary embodiment, the number of DPUs is given as num_of_dialog_presentation_units; however, the number of DPUs need not be specified explicitly. An example of such a case uses syntax like while(processed_length < end_of_file).

The data structures of the DSU and the DPU will now be described in detail with reference to FIG. 5, which is a diagram showing the dialog style unit of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 5, a set dialog_styleset() 510 of dialog style information items is defined in the DSU 310, collecting the output style information items of the dialogs to be displayed as captions. The DSU 310 includes information on the position of the region in which a dialog is displayed in the caption, information required for rendering the dialog, information on the styles that the user can control, and so on. The details are described below.

FIG. 6 is a diagram illustrating an example data structure of a dialog style unit (DSU) according to an embodiment of the present invention. Referring to FIG. 6, the DSU 310 includes a palette collection 610 and a region style collection 620. The palette collection 610 is a set of color palettes defining the colors used in the caption. The color combinations and color information (such as transparency) included in the palette collection 610 can be applied to all the DPUs arranged after the DSU.
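The loop form mentioned for FIG. 4, while(processed_length < end_of_file), with one DSU in front and DPUs filling the rest of the stream, can be sketched over a list of already-demultiplexed PES payloads. The unit tags and payload shapes here are invented for the sketch.

```python
def parse_text_subtitle_stream(pes_payloads):
    """First PES packet must carry the dialog style unit (DSU);
    every remaining packet carries one dialog presentation unit (DPU)."""
    if not pes_payloads or pes_payloads[0][0] != "DSU":
        raise ValueError("stream must start with a dialog style unit")
    dsu = pes_payloads[0][1]
    dpus = []
    processed = 1
    while processed < len(pes_payloads):   # while not at end of stream
        tag, payload = pes_payloads[processed]
        if tag != "DPU":
            raise ValueError("only DPUs may follow the DSU")
        dpus.append(payload)
        processed += 1
    return dsu, dpus

units = [("DSU", {"styles": 2}), ("DPU", {"text": "Hi"}), ("DPU", {"text": "Bye"})]
dsu, dpus = parse_text_subtitle_stream(units)
print(len(dpus))  # 2
```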

The region style collection 620 is a set of output style information items for the respective dialogs forming the caption. Each region style includes region information 622 indicating the position at which the dialog is displayed on the screen, text style information 624 indicating the output style to be applied to each dialog text, and a user changeable style collection 626 indicating the styles, among those applied to each dialog text, that the user may change arbitrarily.

FIG. 7 is a diagram illustrating an example data structure of a dialog style unit according to another embodiment of the present invention. Referring to FIG. 7, unlike in FIG. 6, no palette collection is included; that is, no color palette collection is defined in the DSU, and the palette collection is instead defined in the DPUs described with reference to FIGS. 12A and 12B. The data structure of each region style 710 is the same as that described with reference to FIG. 6.

FIG. 8 is a schematic diagram showing the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention.

Referring to FIGS. 8 and 6, the DSU 310 includes a palette set 860 (610) and a plurality of region styles 820 (620). As described above, the palette set 610 is a collection of color palettes that define the colors used in the subtitles. The color combinations and color information (such as transparency) included in the palette set 610 can be applied to all of the DPUs that follow the DSU.

Meanwhile, each region style 820 (620) includes region information 830 (622) indicating the window region in which the subtitle is displayed on the screen, such as the X and Y coordinates, width, height and background color of that window region.

Likewise, each region style 820 (620) includes text style information 840 (624) indicating the output style to be applied to each dialog text. That is, it includes the X and Y coordinates of the position at which the dialog text is displayed within the window region, the output direction (for example, left-to-right or top-to-bottom), the text ordering, the line spacing, the identifier of the font to be referenced, the font style (for example, bold or italic), the font size, the font color information, and so on.

Furthermore, each region style 820 (620) may also include a user-changeable style collection 850 (626) indicating the styles that the user can change at will. The user-changeable style collection 850 (626) is optional. It can include change information for the window region position, the text output position, the font size, the line spacing and the like, among the text output style information items 840 (624). Each change information item can be expressed as a relative increase or decrease with respect to the corresponding value in the text style information 840 (624), to be applied to each dialog text.

To summarize, there are three types of style-related information: the region style information (region_style) 620 defined in the region styles, the inline style information (inline_style) 1510 used to emphasize part of a subtitle (explained later), and the user-changeable style information (user_changeable_style) 850. The order in which these information items are applied is as follows:

1) Basically, the region style information 620 defined in the region style is applied.
2) If there is inline style information, the inline style information 1510 is applied over the portion covered by the region style information, emphasizing that part of the subtitle text.
3) If there is user-changeable style information 850, it is applied last. The presence of user-changeable style information is optional.

Meanwhile, among the text style information items 840 (624) applied to each dialog text, the font file referenced through the font identifier (font_id) 842 can be defined as follows.

FIG. 9A is a schematic diagram showing an example clip information file 910 including the font sets referenced by the font information 842 of FIG. 8, according to an embodiment of the present invention.

Referring to FIGS. 9A, 8, 2 and 1, according to the present invention, StreamCodingInfo() 930, a stream coding information structure included in the clip information files 910 and 110, carries various information on the streams recorded on the storage medium: information on the video stream 202, the audio stream, the presentation graphics stream, the interactive graphics stream and the text subtitle stream. In particular, it includes language information (textST_language_code) 932 for the subtitles to be displayed, relating to the text subtitle stream 220. Likewise, the font name 936 and the file name 938 of the file storing the font information can be defined, corresponding to the font identifiers (font_id) 842 and 934 indicating the fonts to be referenced and displayed in FIG. 8. The method of finding the font file referenced through the identifier defined here is described in detail with reference to FIG. 10.

FIG. 9B is a schematic diagram showing an example clip information file 940 including the font sets referenced by the font information 842 of FIG. 8, according to another embodiment of the present invention.

Referring to FIG. 9B, the structure ClipInfo() is defined in the clip information files 910 and 940. The font sets referenced by the font information 842 of FIG. 8 are defined in this structure; that is, the font file name 952 corresponding to font_id 842, which indicates the font to be referenced and displayed in FIG. 8, is specified here. The method of finding the font file referenced through the identifier defined here is described below.

FIG. 10 is a schematic diagram showing the locations of the font files referenced by the font file names 938 and 952 of FIGS. 9A and 9B.

Referring to FIG. 10, a directory structure of the data recorded on the storage medium according to an embodiment of the present invention is shown. By using the directory structure, the location of a font file, such as 11111.font 1010 or 99999.font 1020 stored in the auxiliary data (AUXDATA) directory, can easily be found.

Meanwhile, the structure of a DPU forming a dialog unit is described with reference to FIG. 11. FIG. 11 is a schematic diagram showing an example data structure of the DPU 320 according to an embodiment of the present invention.

Referring to FIGS. 11 and 3, the DPU 320 includes text information on the content of the dialog to be output, palette reference information 1120 indicating the color palette to be referenced, and dialog region information 1130 for the dialog to be output on the screen. In particular, the dialog region information 1130 for the dialog to be output on the screen includes style reference information 1132 indicating the output style to be applied to the dialog and dialog text information 1134 indicating the text actually output on the screen. In this case, it is assumed that the color palette set indicated by the palette reference information 1120 is defined in the DSU (see 610 of FIG. 6).

Meanwhile, FIG. 12A is a schematic diagram for explaining an example data structure of the DPU 320 of FIG. 3 according to an embodiment of the present invention.

Referring to FIGS. 12A and 3, the DPU 320 includes time information 1210 indicating the time at which the dialog is to be output on the screen, a palette set 1220 defining a collection of color palettes, and dialog region information 1230 for the dialog to be output on the screen. In this case, the palette set 1220 is not defined in the DSU as in FIG. 11, but is defined directly in the DPU 320.

Meanwhile, FIG. 12B is a schematic diagram for explaining another example data structure of the DPU 320 of FIG. 3 according to an embodiment of the present invention.

Referring to FIG. 12B, the DPU 320 includes time information 1250 indicating the time at which the dialog is to be output on the screen, a color update flag 1260, a color palette set 1270 needed when the color update flag is set to 1, and dialog region information 1280 for the dialog to be output on the screen. In this case, a basic color palette set is defined in the DSU as in FIG. 11, while a further palette set is stored in the DPU 320. In particular, to express a fade-in/fade-out effect using continuous reproduction, the palette set 1270 expressing the fade-in/fade-out is defined in the DPU 320 in addition to the basic palette defined in the DSU, and the color update flag 1260 is set to 1. This is described in detail with reference to FIG. 19.

FIG. 13 is a schematic diagram showing the DPU 320 of FIGS. 11 through 12B according to an embodiment of the present invention.

Referring to FIGS. 13, 11, 12A and 12B, the DPU includes dialog start time information (dialog_start_PTS) and dialog end time information (dialog_end_PTS) 1310 as the time information indicating the time at which the dialog is to be output on the screen. Likewise, a dialog palette identifier (dialog_palette_id) is included as the palette reference information 1120; in the case of FIG. 12A, the color palette set 1220 can be included instead of the palette reference information 1120. Dialog text information (region_subtitle) 1334 is included as the dialog region information 1230 for the dialog to be output, together with a region style identifier (region_style_id) 1332 indicating the output style applied to it. The example in FIG. 13 is only one embodiment of the DPU, and a DPU having the data structures shown in FIGS. 11 through 12B can be implemented with various modifications.

FIG. 14 is a schematic diagram for explaining an example data structure of the dialog text information (region_subtitle) of FIG. 13.

Referring to FIG. 14, the dialog text information (1134 of FIG. 11, 1234 of FIG. 12A, 1284 of FIG. 12B and 1334 of FIG. 13) includes inline style information 1410, as an output style emphasizing part of the dialog, and the dialog text 1420.

FIG. 15 is a schematic diagram showing the dialog text information 1334 of FIG. 13 according to an embodiment of the present invention. As shown in FIG. 15, the dialog text information 1334 is implemented with inline style information (inline_style) 1510 and dialog text (text_string) 1520. It is preferable that information indicating the end of an inline style is also included in the embodiment of FIG. 15: unless the end of the inline style is defined, an inline style, once specified, might continue to be applied thereafter, contrary to what the producer intended.

Meanwhile, FIG. 16 is a schematic diagram for explaining the constraints on continuously reproducing consecutive dialog presentation units (DPUs).

Referring to FIGS. 16 and 13, when the DPUs described above need to be reproduced continuously, the following constraints apply.
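Gathering the DPU variants of FIGS. 11 through 13, a DPU can be modelled roughly as below. Folding the three layouts (palette reference, embedded palette, embedded palette with an update flag) into one data class is a simplification made here for illustration; the field names follow the description.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class RegionSubtitle:
    region_style_id: int        # selects a region style defined in the DSU
    # (inline_style, text_string) pairs; an empty inline style means the
    # region style applies unchanged to that span.
    text: List[Tuple[str, str]]

@dataclass
class DialogPresentationUnit:
    dialog_start_PTS: int                     # output start on the global time axis
    dialog_end_PTS: int                       # output end
    dialog_palette_id: Optional[int] = None   # FIG. 11: reference into the DSU palette set
    palette: Optional[list] = None            # FIGS. 12A/12B: palette carried in the DPU
    palette_update_flag: int = 0              # FIG. 12B: 1 = use the DPU palette (e.g. fade)
    regions: List[RegionSubtitle] = field(default_factory=list)
```

A FIG. 11-style DPU sets only `dialog_palette_id`; a FIG. 12B-style DPU sets `palette` and `palette_update_flag=1`.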

1) The dialog start time information (dialog_start_PTS) 1310 defined in a DPU indicates the time at which the dialog object starts to be output on the graphics plane (GP). The graphics plane (GP) is described in detail below with reference to FIG. 17.

2) The dialog start time information (dialog_start_PTS) 1310 defined in a DPU also indicates a time at which the text subtitle decoder, which processes the text-based subtitles, is reset. The text subtitle decoder is described in detail below with reference to FIG. 17.

3) When several of the DPUs described above need to be reproduced continuously, the dialog end time information (dialog_end_PTS) of the current DPU should be the same as the dialog start time information (dialog_start_PTS) of the next continuously reproduced DPU. That is, in FIG. 16, in order to reproduce DPU #2 and DPU #3 continuously, the dialog end time information included in DPU #2 should be identical to the dialog start time information included in DPU #3.

Meanwhile, it is preferable that a DSU according to the present invention satisfies the following constraints:

1) The text subtitle stream 220 includes one DSU.
2) The user-changeable style information items (user_control_style) included in all region styles (region_style) should be the same.

It is likewise preferable that a DPU according to the present invention satisfies the following constraint:

1) Window regions for at least two subtitles should be defined.

The structure of an example reproducing apparatus that reproduces the text subtitle stream 220 recorded on a storage medium with the data structure described above is described with reference to FIG. 17, as follows.

FIG. 17 is a diagram for explaining an example reproducing apparatus for a text subtitle stream according to an embodiment of the present invention.
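Constraint 3) above can be checked mechanically. A minimal sketch, with each DPU reduced to its (dialog_start_PTS, dialog_end_PTS) pair:

```python
def check_continuous_presentation(dpus):
    """Return True if, within a continuously presented run of DPUs, each
    DPU's dialog_end_PTS equals the next DPU's dialog_start_PTS.

    Each DPU is modelled here as a (dialog_start_PTS, dialog_end_PTS) pair;
    this reduction is an illustration, not the recorded DPU syntax.
    """
    for (_, end_pts), (next_start_pts, _) in zip(dpus, dpus[1:]):
        if end_pts != next_start_pts:
            return False
    return True
```

A single DPU (or an empty run) is trivially continuous, since there is no adjacent pair to violate the constraint.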

Referring to FIG. 17, the reproducing apparatus 1700 (also called a playback device) includes a buffer unit and a text subtitle decoder 1730. The buffer unit includes a font preloading buffer (FPB) 1712 for storing font files and a subtitle preloading buffer (SPB) 1710 for storing text subtitle files. The text subtitle decoder 1730 decodes the text subtitle stream recorded in advance on the storage medium and reproduces it as output through a graphics plane (GP) 1750 and a color look-up table (CLUT) 1760.

特別地’字幕預載緩衝器(subtitle preloading buffer, SPB) 1710會預載文字式字幕資料串流22〇而字體預載緩 衝器(font preloading buffer,FPB )1712 會預載字體資訊。 文字式字幕解碼器1730包括文字式幕處理器1732、 對 έ舌排列緩衝器(dialog composition buffer, DCB ) 1734、 對話緩衝器(dialog buffer, DB) 1736、文字式字幕轉換 (rendering)器1738、對話播放控制器1740以及點陣圖 物件緩衝器(bitmap object buffer, BOB ) 1742。 文字式幕處理器1732從字幕預載緩衝器(subtitie preloading buffer, SPB) 1710中接收文字式字幕資料串流 23 1320925 16252pif.doc 220、轉換上述關於包括在DSU的資訊的樣式以及包括在 DPU的對話輸出時間資訊至對話排列緩衝器(dial〇g composition buffer,DCB) 1734 並轉換包括在 DPU 的對話 文子資訊至對話緩衝器(dialog buffer,DB) 1736。In particular, the subtitle preloading buffer (SPB) 1710 preloads the text subtitle data stream 22, and the font preloading buffer (FPB) 1712 preloads the font information. The text subtitle decoder 1730 includes a text screen processor 1732, a dialog composition buffer (DCB) 1734, a dialog buffer (DB) 1736, a text subtitle conversion device 1738, The dialog playback controller 1740 and a bitmap object buffer (BOB) 1742. The text screen processor 1732 receives the text subtitle stream 23 1320925 16252pif.doc 220 from the subtiting preloading buffer (SPB) 1710, converts the above information about the information included in the DSU, and includes the pattern included in the DPU. The dialog outputs time information to the dialog array buffer (DCB) 1734 and converts the dialog message information included in the DPU to a dialog buffer (DB) 1736.

The dialog presentation controller 1740 controls the text subtitle renderer 1738 by using the style information held in the dialog composition buffer (DCB) 1734 and, by using the dialog output time information, controls the time at which the bitmap image rendered into the bitmap object buffer (BOB) 1742 is output to the graphics plane (GP) 1750.

Under the control of the dialog presentation controller 1740, the text subtitle renderer 1738 converts the dialog text information into a bitmap image (that is, performs rendering) by applying, to the dialog text information stored in the dialog buffer (DB) 1736, the corresponding font information item among the font information preloaded in the font preloading buffer (FPB) 1712. The rendered bitmap image is stored in the bitmap object buffer (BOB) 1742 and is output to the graphics plane (GP) 1750 under the control of the dialog presentation controller 1740. At this time, the colors specified in the DSU are applied by referring to the color look-up table (CLUT) 1760.

The information defined in the DSU by the producer can be used as the style-related information applied to the dialog text, and style-related information predefined by the user can also be applied. The reproducing apparatus 1700 shown in FIG. 17 applies the user-defined style information in preference to the style-related information defined by the producer.
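The dataflow just described — the processor splitting a DPU into composition information and text, and the renderer combining the text with the font named by the referenced region style — can be sketched as follows. The dict-based structures and the string "bitmap" are stand-ins for the real buffers and pixel data, used here for illustration only.

```python
def decode_dialog(dpu, dsu, font_files):
    """Simplified decoder dataflow: separate a DPU into composition
    information (style reference and timing) and dialog text, then 'render'
    the text with the font that the referenced region style names.

    `font_files` stands in for the font preloading buffer (FPB); rendering
    is faked as a tagged string instead of a real bitmap.
    """
    composition = {"style_id": dpu["region_style_id"],      # -> dialog composition buffer
                   "start_pts": dpu["dialog_start_PTS"],
                   "end_pts": dpu["dialog_end_PTS"]}
    text = dpu["text"]                                      # -> dialog buffer
    style = dsu["region_styles"][composition["style_id"]]
    font = font_files[style["font_id"]]                     # preloaded font information
    bitmap = f"[{font}:{style['font_size']}] {text}"        # -> bitmap object buffer
    return composition, bitmap
```

The controller would then use `composition["start_pts"]` and `composition["end_pts"]` to time the transfer of the bitmap to the graphics plane.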

As described with reference to FIG. 8, the region style information (region_style) defined in the DSU by the producer is applied as the base style-related information for the dialog text; if inline style information (inline_style) is included in a DPU whose dialog text uses that region style, the inline style information is applied to the corresponding portion, overriding the region style there. Likewise, if the producer additionally defines user-changeable styles in the DSU and the user selects one of them, the region style or inline style is applied first and the user-changeable information is applied last. Also, as described with reference to FIG. 15, it is preferable that information indicating the end of the application of an inline style is included in the inline style content.

Furthermore, the producer can specify whether style-related information defined in the reproducing apparatus itself may be used, separately from the style-related information defined by the producer and recorded on the storage medium.
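The three-step application order described above (region style as base, inline style overriding it for the span it covers, user-changeable deltas applied last) can be sketched as a small resolver. Representing the styles as plain dicts, and the user-changeable style as relative deltas, is an assumption made here for illustration.

```python
def resolve_style(region_style, inline_style=None, user_style=None):
    """Resolve the effective style for one text span.

    1) the producer's region style is the base,
    2) an inline style overrides it where present,
    3) a selected user-changeable style (relative deltas) is applied last.
    """
    resolved = dict(region_style)               # 1) base region style
    if inline_style:
        resolved.update(inline_style)           # 2) inline override
    if user_style:
        for key, delta in user_style.items():   # 3) user deltas applied last
            resolved[key] = resolved.get(key, 0) + delta
    return resolved
```

Because the user deltas are added after the inline override, a user's font-size change survives even when an inline style has replaced other attributes of the span.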

FIG. 18 is a diagram for explaining the preloading process of the text subtitle stream 220 in an example reproducing apparatus 1700 (for example, as shown in FIG. 17) according to an embodiment of the present invention.

Referring to FIG. 18, the text subtitle stream 220 shown in FIG. 2 is defined in a sub-path of the playlist described above. In the sub-path, a plurality of text subtitle streams 220 supporting several languages can be defined. Likewise, the font files applied to the text subtitles can be defined in the clip information file 910 or 940 described with reference to FIGS. 9A and 9B. Up to 255 text subtitle streams 220 that may be included in one storage medium can be defined in each playlist, and up to 255 font files included in one storage medium can likewise be defined. However, to guarantee seamless presentation, the size of a text subtitle stream 220 should be less than or equal to the size of the preloading buffer 1710 of the reproducing apparatus 1700 (for example, as shown in FIG. 17).

FIG. 19 is a diagram for explaining the reproduction process of a DPU in an example reproducing apparatus according to an embodiment of the present invention.

Referring to FIGS. 19, 13 and 17, the flow of reproducing a DPU is shown. The dialog presentation controller 1740 controls the time at which the rendered dialog is output on the graphics plane (GP) 1750 by using the dialog start time information (dialog_start_PTS) and dialog end time information (dialog_end_PTS) that specify the output time 1310 of the dialog included in the DPU. The dialog start time information specifies the time at which the transfer of the rendered dialog bitmap image from the bitmap object buffer (BOB) 1742, which is included in the text subtitle decoder 1730, to the graphics plane (GP) 1750 is completed. That is, by the dialog start time defined in the DPU, the bitmap information needed to compose the dialog must have been transferred to the graphics plane (GP) 1750 and be ready for use. Likewise, the dialog end time information specifies the time at which reproduction of the DPU is completed; at this time, the text subtitle decoder 1730 and the graphics plane (GP) 1750 are reset. Preferably, whether or not reproduction is continuous, the buffers of the text subtitle decoder 1730, such as the bitmap object buffer (BOB) 1742, are reset between the start time and the end time of a DPU.

However, when several DPUs need to be reproduced continuously, the text subtitle decoder 1730 and the graphics plane (GP) 1750 are not reset, and the contents stored in each buffer (such as the dialog composition buffer (DCB) 1734, the dialog buffer (DB) 1736 and the bitmap object buffer (BOB) 1742) are retained. That is, when the dialog end time information of the currently reproduced DPU is the same as the dialog start time information of the DPU continuously reproduced after it, the contents of each buffer are retained instead of being reset.

In particular, a fade-in/fade-out effect is an example of continuous reproduction applied over several DPUs. A fade-in/fade-out effect can be implemented by changing the color look-up table (CLUT) 1760 of a bitmap object that has been transferred to the graphics plane (GP) 1750. That is, the first DPU includes composition information such as colors, styles and output times, and the following consecutive DPUs have the same composition information as the first DPU but update only the color palette information.

In this case, the fade-in/fade-out effect is implemented by gradually changing the transparency (from 0% to 100%) among the color information items.

In particular, when the DPU data structure shown in FIG. 12B is used, the fade-in/fade-out effect can be implemented efficiently using the color update flag 1260. That is, if the dialog presentation controller 1740 checks and confirms that the color update flag included in the DPU is set to "0", that is, in the ordinary case where no fade-in/fade-out effect is needed, the colors included in the DSU are used. If, however, the dialog presentation controller 1740 checks and confirms that the color update flag 1260 included in the DPU is set to "1", that is, when a fade-in/fade-out effect is needed, the effect is implemented by using the color information 1270 included in the DPU instead of the color information in the DSU shown in FIG. 6. In this case, the fade-in/fade-out effect is implemented simply by adjusting the transparency of the color information 1270 included in the DPU.

After the fade-in/fade-out effect has been displayed, it is preferable to update the color look-up table (CLUT) 1760 back to the original color information included in the DSU. Otherwise, the color information once specified may continue to be applied, contrary to the producer's intent.
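A minimal sketch of the two behaviours just described: palette selection driven by the color update flag, and a fade implemented as a sequence of palettes that differ only in transparency. The (color, opacity-percent) pair representation is assumed here for illustration; real palette entries carry full color and transparency values.

```python
def select_palette(dsu_palette, dpu_palette, palette_update_flag):
    """Flag 0: keep using the palette defined in the DSU.
    Flag 1 (e.g. a fade): the palette carried in the DPU replaces it."""
    return dpu_palette if palette_update_flag == 1 else dsu_palette

def fade_palettes(base_palette, steps):
    """Fade-in sketch: successive palettes identical to the base except for
    transparency, stepped from fully transparent to fully opaque.
    Entries are (color_value, opacity_percent) pairs; only opacity changes."""
    return [
        [(color, round(100 * s / (steps - 1))) for color, _ in base_palette]
        for s in range(steps)
    ]
```

Playing `fade_palettes(...)` in reverse gives the corresponding fade-out; after the effect, the decoder would restore the CLUT to the original DSU palette, as the text recommends.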
FIG. 20 is a diagram for explaining the process of synchronizing a text subtitle stream with the moving-picture data and outputting it in an example reproducing apparatus according to an embodiment of the present invention.

Referring to FIG. 20, the dialog start time information and dialog end time information of the DPUs included in the text subtitle stream 220 should be defined as time points on the global time axis used in the playlist, so that the subtitles are synchronized with the output time of the AV data stream of the multimedia image. In this way, a discontinuity between the system time clock (STC) of the AV data stream and the dialog output time (PTS) of the text subtitle stream 220 is avoided.

FIG. 21 is a diagram for explaining the process of outputting a text subtitle stream to the screen in an example reproducing apparatus according to an embodiment of the present invention.

Referring to FIG. 21, it shows the flow of applying the rendering information 2102, which includes the style-related information; the flow of converting the text information 2140 into a bitmap image 2106; and the flow of outputting the rendered bitmap image at the corresponding position on the graphics plane (GP) 1750 according to the output position information (such as region_horizontal_position and region_vertical_position) included in the composition information 2108.

The rendering information 2102 presents style information such as the width and height of the region, the foreground color, the background color, the font name, and the font size. As described above, the composition information 2108 indicates the start time and end time of presentation, the horizontal and vertical position information of the window region, and so on, where the title is output in the window region on the graphics plane (GP) 1750.

FIG. 22 is a diagram illustrating the flow of rendering the text-based subtitle data stream 220 in the reproducing apparatus 1700 (shown in FIG. 17) according to an embodiment of the present invention.

Referring to FIGS. 22, 21, and 8, the window region specified by region_horizontal_position, region_vertical_position, region_width, and region_height is designated as the region in which the title is displayed on the graphics plane (GP) 1750, where region_horizontal_position, region_vertical_position, region_width, and region_height are the position information 830 defining the window region of the title in the DSU. The bitmap image of the rendered dialog is displayed from the start point specified by region_horizontal_position and region_vertical_position, which are the output position 840 of the dialog within the window region.

Meanwhile, the reproducing apparatus according to the present invention stores the style information (style_id) selected by the user in a system register area. FIG. 23 is a diagram illustrating an example state register configured in the example reproducing apparatus for reproducing the text-based subtitle data stream according to the present invention.

Referring to FIG. 23, the player status registers (hereinafter referred to as PSRs) store the style selected by the user in the twelfth register (style 2310). Accordingly, when the user selects a style, register 12 is referenced to apply it, and the register storing this information is updated.
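As a rough illustration (not part of the specification), the DSU position fields above can be modeled as a rectangle on the graphics plane, with the dialog bitmap drawn from the region's origin. Only the region_* field names come from the text; the class and method names below are hypothetical:

```python
# Illustrative sketch only: models how the DSU position fields (830) could
# locate the subtitle window on a graphics plane such as GP 1750.
# Everything except the region_* field names is hypothetical.
from dataclasses import dataclass

@dataclass
class RegionPosition:
    region_horizontal_position: int
    region_vertical_position: int
    region_width: int
    region_height: int

    def window_rect(self):
        """Rectangle (x0, y0, x1, y1) of the title window on the plane."""
        x = self.region_horizontal_position
        y = self.region_vertical_position
        return (x, y, x + self.region_width, y + self.region_height)

    def fits(self, gp_width, gp_height):
        """Check that the window lies inside a plane of the given size."""
        x0, y0, x1, y1 = self.window_rect()
        return 0 <= x0 and 0 <= y0 and x1 <= gp_width and y1 <= gp_height

pos = RegionPosition(100, 800, 1720, 200)
print(pos.window_rect())        # (100, 800, 1820, 1000)
print(pos.fits(1920, 1080))     # True
```

The rendered dialog bitmap would then be blitted starting at the rectangle's origin, matching the output position 840 described above.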
The method of reproducing the text subtitle data "22G re-lined and re-characterized data stream 220" according to the above-mentioned recording media subtitle data stream DO will be described below with reference to Fig. 24. Fig. 24^ The method of text subtitle data streaming coffee ^ In step 2410, from the storage medium 23 () (as shown in the figure = then the information and the text of the subtitles (10) stream 22 〇, ^ in the step period, according to the included coffee The conversion of the asset () (4) converts the title text included in the coffee message into a bitmap image. In step 2430, according to the time information and the location information, the converted bitmap image is displayed on the screen, wherein The time information and the location information are combined information included in the DPU information. As described above, the invention provides a storage medium that stores the text subtitles and the video material separately from the image. It also provides a reproduction device to reproduce the text subtitle data string. For example, it is easier to reduce the material of the reduction screen and edit the subtitle data. At the same time, because there is no 16252pif.doc limit= The title. xU is defined by a style information item and several broadcasts: it can be changed in the way of output to all play data, and can also define the inline and user-changeable styles that enhance the title file.连择::廿:t Use several adjacent playback information items to open the continuous reproduction of the title and make it possible to implement the fade/fade effect. The code can be made into a computer-readable code. The computer readable recording medium includes various types of recording media, and the computer readable recording medium includes magnetic, φη«<-γ-) 乂 and carrier (ie, through the Internet) Transmission), which can be used to share and read computer readable codes in a computer system through a network. 
The invention has been disclosed above in the preferred embodiment, but it is not used for Those skilled in the art can make some changes and refinements in the spirit of the present invention. For example, any "recording device" can be used to configure the text subtitle data in the same manner. Further, the subtitle data can be realized as a device for recording and/or reproducing the recording device by means of a reproducing device of no, part or 17 as shown in Figs. 3 and 4. The scope of a similar stylized computer for the purpose of the general purpose or specific purpose is not limited to the method disclosed. The invention is therefore to be construed as being limited by the scope of the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a schematic diagram for explaining a structure of a multimedia material recorded on a storage medium according to an embodiment of the present invention. 2 is a schematic diagram showing an exemplary lean structure of the clip AV stream and the text subtitle stream of FIG. 1 according to an embodiment of the present invention.

FIG. 3 is a diagram illustrating the data structure of a text-based subtitle stream according to an embodiment of the present invention.

FIG. 4 is a diagram illustrating a text-based subtitle stream having the data structure of FIG. 3 according to an embodiment of the present invention.

FIG. 5 is a diagram illustrating the dialog style unit of FIG. 3 according to an embodiment of the present invention.

FIG. 6 is a diagram illustrating an example data structure of a dialog style unit according to an embodiment of the present invention.

FIG. 7 is a diagram illustrating an example data structure of a dialog style unit according to another embodiment of the present invention.

FIG. 8 is a diagram illustrating the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention.

FIGS. 9A and 9B are diagrams illustrating example clip information files including a number of font sets referenced by font information according to an embodiment of the present invention.

FIG. 10 is a diagram showing the locations of a number of font files referenced by the font file information shown in FIGS. 9A and 9B.

FIG. 11 is a diagram illustrating an example data structure of the dialog presentation unit of FIG. 3 according to another embodiment of the present invention.

FIGS. 12A and 12B are diagrams illustrating example data structures of the dialog presentation unit of FIG. 3 according to other embodiments of the present invention.

FIG. 13 is a diagram illustrating the dialog presentation unit of FIGS. 11 through 12B according to an embodiment of the present invention.

FIG. 14 is a diagram illustrating an example data structure of the dialog text information of FIG. 13.

FIG. 15 is a diagram illustrating the dialog text information of FIG. 14 according to an embodiment of the present invention.

FIG. 16 is a diagram illustrating restrictions on continuously reproducing consecutive dialog presentation units (DPUs).

FIG. 17 is a diagram illustrating an example reproducing apparatus for a text-based subtitle stream according to an embodiment of the present invention.

FIG. 18 is a diagram illustrating a preloading procedure for a text-based subtitle stream in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 19 is a diagram illustrating a reproduction procedure for a dialog presentation unit (DPU) in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 20 is a diagram illustrating a procedure for synchronizing and outputting a text-based subtitle stream with moving-picture data in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 21 is a diagram illustrating a procedure for outputting a text-based subtitle stream to the screen in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 22 is a diagram illustrating a procedure for rendering a text-based subtitle stream in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 23 is a diagram illustrating an example state register configured in the example reproducing apparatus for reproducing a text-based subtitle stream according to an embodiment of the present invention.

FIG. 24 is a flowchart of a method of reproducing a text-based subtitle stream according to an embodiment of the present invention.

DESCRIPTION OF REFERENCE NUMERALS

100: multimedia data structure
110: clip

112: AV data stream
114: clip information
120: playlist
122: play item
130: movie object
140: table of contents
202: video stream
204: audio stream
206: presentation graphics stream
208: interactive graphics stream
210: AV data stream
220: text-based subtitle data
230: storage medium
310: dialog style unit (DSU)
320, 330, 340: dialog presentation units (DPUs)
350: PES packet
362: transport packets (TP)
410: DSU
420: DPU
610: palette set
620: region style set
622: region information
624: text style information
626: user-changeable style set
710: region style
820: region style
830: region information
840: text style information
850: user-changeable style set
860: palette set
910, 940: clip information files
1110: time information
1120: palette reference information
1130: dialog region information
1132: style reference information
1134: dialog text information
1210: time information
1220: palette set
1230: dialog region information
1232: style reference information
1234: dialog text information
1250: time information
1260: color update flag
1270: color palette set
1280: dialog region information
1282: style reference information
1284: dialog text information
1410: in-line style information
1420: dialog text
1700: reproducing apparatus
subtitle preloading buffer (SPB)
font preloading buffer (FPB)
1730: text-based subtitle decoder
1732: text subtitle processor
1734: dialog composition buffer (DCB)
1736: dialog buffer (DB)
1738: text subtitle renderer
1740: dialog presentation controller
1742: bitmap object buffer (BOB)
1750: graphics plane (GP)
1760: color look-up table (CLUT)

Claims (1)

Non-underlined amended claims for Patent Application No. 94105743; amendment date: October 6, ROC year 98 (2009). X. Claims:

1. An apparatus for reproducing data from a storage medium storing image data and text-based subtitle data for displaying a title on an image based on the image data, the apparatus comprising:
a video decoder which decodes the image data; and
a subtitle decoder which converts the text-based subtitle data, comprising a dialog presentation unit and a dialog style unit, into a bitmap image and outputs the converted dialog presentation unit in synchronization with the decoded image data,
wherein the dialog presentation unit includes text used in a dialog and output time information specifying a time at which the dialog is output to a screen,
wherein the dialog style unit includes style information specifying an output style of the text and palette information, the palette information including a set of a plurality of color palettes defining colors in which the text of the dialog is output to the screen, and an indicator selecting at least one of the set of the plurality of color palettes to be used for the text of the corresponding dialog when output to the screen.

2. The apparatus of claim 1, wherein the subtitle decoder decodes the text-based subtitle data recorded on the storage medium together with the image data and overlays the decoded subtitle data on the decoded image data.

3. The apparatus of claim 2, wherein the dialog presentation unit and the dialog style unit are formed in units of packetized elementary streams (PESs), and the subtitle decoder parses and processes the dialog presentation unit and the dialog style unit in units of PESs.

4. The apparatus of claim 1, wherein the subtitle decoder buffers the text and …

5. The apparatus of claim 1, wherein, when reproducing … among the text items …
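The palette mechanism of claim 1, a set of color palettes plus an indicator choosing which palette colors the dialog text, can be illustrated as follows. Only the palette-set and indicator idea comes from the claim; the container and key names are hypothetical:

```python
# Illustrative sketch of claim 1's palette information: a set of color
# palettes plus an indicator that selects the palette used for the dialog
# text. The dict/key names are hypothetical stand-ins.
palette_info = {
    "palettes": [
        {"fg": (255, 255, 255), "bg": (0, 0, 0)},    # palette 0: white on black
        {"fg": (255, 255, 0),   "bg": (0, 0, 128)},  # palette 1: yellow on blue
    ],
    "indicator": 1,   # selects the palette for this dialog's text
}

def colors_for_dialog(info):
    """Return the (fg, bg) colors the selected palette assigns to the text."""
    palette = info["palettes"][info["indicator"]]
    return palette["fg"], palette["bg"]

fg, bg = colors_for_dialog(palette_info)
print(fg, bg)   # (255, 255, 0) (0, 0, 128)
```

Switching the indicator is enough to recolor a dialog without re-rendering its glyph shapes, which is one plausible motivation for separating palette information from the rendered bitmap.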
TW094105743A 2004-02-28 2005-02-25 Apparatus for reproducing data from a storge medium storing imige data and text-based subtitle data TWI320925B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20040013827 2004-02-28
KR1020040032290A KR100727921B1 (en) 2004-02-28 2004-05-07 Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method thereof

Publications (2)

Publication Number Publication Date
TW200529202A TW200529202A (en) 2005-09-01
TWI320925B true TWI320925B (en) 2010-02-21

Family

ID=36760967

Family Applications (2)

Application Number Title Priority Date Filing Date
TW098133833A TWI417873B (en) 2004-02-28 2005-02-25 A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data
TW094105743A TWI320925B (en) 2004-02-28 2005-02-25 Apparatus for reproducing data from a storge medium storing imige data and text-based subtitle data


Country Status (10)

Country Link
JP (2) JP4776614B2 (en)
KR (1) KR100727921B1 (en)
CN (3) CN100479047C (en)
AT (1) ATE504919T1 (en)
DE (1) DE602005027321D1 (en)
ES (1) ES2364644T3 (en)
HK (3) HK1088434A1 (en)
MY (1) MY139164A (en)
RU (1) RU2490730C2 (en)
TW (2) TWI417873B (en)



Also Published As

Publication number Publication date
CN1774759A (en) 2006-05-17
RU2007146766A (en) 2009-06-20
JP2011035922A (en) 2011-02-17
TW200529202A (en) 2005-09-01
ES2364644T3 (en) 2011-09-08
JP5307099B2 (en) 2013-10-02
KR100727921B1 (en) 2007-06-13
MY139164A (en) 2009-08-28
CN101059984B (en) 2010-08-18
DE602005027321D1 (en) 2011-05-19
CN101360251B (en) 2011-02-16
CN101059984A (en) 2007-10-24
TWI417873B (en) 2013-12-01
ATE504919T1 (en) 2011-04-15
JP4776614B2 (en) 2011-09-21
CN100479047C (en) 2009-04-15
RU2490730C2 (en) 2013-08-20
KR20050088035A (en) 2005-09-01
HK1116588A1 (en) 2008-12-24
HK1126605A1 (en) 2009-09-04
HK1088434A1 (en) 2006-11-03
TW201009820A (en) 2010-03-01
CN101360251A (en) 2009-02-04
JP2007525904A (en) 2007-09-06
