TW201009820A - A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data - Google Patents

A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data

Info

Publication number
TW201009820A
TW201009820A TW098133833A TW98133833A
Authority
TW
Taiwan
Prior art keywords
information
dialog
text
style
data
Prior art date
Application number
TW098133833A
Other languages
Chinese (zh)
Other versions
TWI417873B (en)
Inventor
Kil-Soo Jung
Sung-Wook Park
Kwang-Min Kim
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of TW201009820A
Application granted
Publication of TWI417873B

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2541 Blu-ray discs; Blue laser DVR discs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/84 Television signal recording using optical recording
    • H04N5/85 Television signal recording using optical recording on discs or drums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/806 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal
    • H04N9/8063 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal using time division multiplex of the PCM audio and PCM video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8227 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being at least another television signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8233 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a character code signal

Abstract

A storage medium storing a multimedia image stream and a text-based subtitle stream, together with a reproducing apparatus and a reproducing method for it, are provided. Because the text-based subtitle stream is recorded separately from the multimedia image stream, the subtitle data can be easily produced and edited, and captions can be provided in a plurality of languages. The storage medium stores image data and text-based subtitle data for displaying a caption on an image based on the image data. The subtitle data includes one style information item specifying the output style of the caption and a plurality of presentation information items, each of which is a display unit of the caption, and the subtitle data is recorded separately from the image data. Accordingly, a caption can be provided in a plurality of languages, can be easily produced and edited, and its output style can be changed in a variety of ways. In addition, part of a caption can be emphasized, or a separate style that the user can change can be applied.
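The data model summarized in the abstract — one style information item followed by multiple presentation information items, kept apart from the AV stream — can be sketched as follows. This is an illustrative sketch only; the class and field names are invented for the illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StyleInformation:
    # The single style information item that specifies caption output styles.
    region_styles: List[dict] = field(default_factory=list)

@dataclass
class PresentationInformation:
    # One display unit of the caption, with its output window on the time axis.
    text: str
    out_start: float  # output start time
    out_end: float    # output end time

@dataclass
class TextSubtitleStream:
    # Stored separately from the multiplexed AV data stream.
    style: StyleInformation
    presentations: List[PresentationInformation]

stream = TextSubtitleStream(
    style=StyleInformation(region_styles=[{"font_size": 32}]),
    presentations=[
        PresentationInformation("Hello", 0.0, 2.0),
        PresentationInformation("World", 2.0, 4.0),
    ],
)
print(len(stream.presentations))  # 2
```

Because the one style item precedes all presentation items, a reader of such a stream can apply the same styles to every caption unit that follows.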

Description

VI. Description of the Invention

[Technical Field]

The present invention relates to the reproduction of multimedia images and, more particularly, to a storage medium on which a multimedia image stream and a text-based subtitle stream are recorded, and to a reproducing apparatus and a reproducing method for reproducing the text-based subtitle stream recorded on the storage medium.

[Prior Art]

To provide high-density multimedia images, a video stream, an audio stream, a presentation graphics stream providing subtitles, and an interactive graphics stream providing buttons and menus for interaction with the user are multiplexed into one main stream, known as an audio-visual (AV) data stream, and recorded on a storage medium. In particular, the presentation graphics stream providing subtitles supplies bitmap images in order to display subtitles or captions on an image.

Besides its large size, bitmap caption data is troublesome to produce, and editing caption data that has already been produced is also quite difficult, because the caption data is multiplexed with the other streams, such as the video, audio, and interactive graphics streams. A further problem is that the output style of the caption data cannot be changed in various ways, that is, one output style of a caption cannot be changed into another.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a storage medium on which a text-based subtitle stream is recorded, and a reproducing apparatus and method for reproducing the text-based subtitle data recorded on the storage medium.

According to an aspect of the present invention, there is provided an apparatus for reproducing data from a storage medium storing image data and text-based subtitle data for displaying a caption on an image based on the image data. The apparatus includes a video decoder for decoding the image data, and a subtitle decoder that converts presentation information into a bitmap image by using style information and controls output of the converted presentation information in synchronization with the decoded image data, wherein the text-based subtitle data includes the presentation information, which is a unit for displaying the caption, and the style information, which specifies the output style of the caption.

The subtitle decoder decodes the text-based subtitle data, which is recorded separately from the image data, and outputs the subtitle data overlaid on the decoded image data. The style information and the presentation information are formed in units of packetized elementary streams (PESs), and the subtitle decoder parses and processes the style information and the presentation information in PES units. The style information is formed as one PES and recorded in the front part of the subtitle data, a plurality of presentation information items follow it in PES units, and the subtitle decoder applies the one style information item to the plurality of presentation information items.

The presentation information includes text information indicating the contents of the caption and composition information controlling the output of the bitmap image obtained by converting the text information, and the subtitle decoder controls the time at which the converted text information is output by referring to the composition information. The composition information includes one or more window regions, a window region being the area of the screen in which the caption is output, and the subtitle decoder can output the converted text information in one or more windows at the same time.

The output start time and output end time of the presentation information in the composition information are defined as time information on the global time axis used by a playlist, which is a reproduction unit of the image data, and the subtitle decoder synchronizes the output of the converted text information with the output of the image data by referring to the output start time and the output end time.

If the output end time of the currently reproduced presentation information item is the same as the output start time of the next presentation information item, the subtitle decoder reproduces the two presentation information items continuously. If the next presentation information item does not require continuous reproduction, the subtitle decoder resets an internal buffer between the output start time and the output end time; if continuous reproduction is required, the buffer is retained rather than reset.

The style information is a set of output styles predefined by the producer of the storage medium and applied to the presentation information, and the subtitle decoder converts the presentation information items recorded after the style information into bitmap images according to the style information.

The text information in the presentation information includes the text to be converted into a bitmap image and inline style information to be applied to a specific portion of the text, and the subtitle decoder provides a function of emphasizing a portion of the text by applying the inline style information to that portion in addition to the style information predefined by the producer. As the inline style information, the subtitle decoder applies to the portion of the text either a value relative to the predefined font information or a predefined absolute value included in the style information predefined by the producer.

The style information may further include user-changeable style information. After receiving from the user selection information for one of the user-changeable style information items, the subtitle decoder first applies the style information predefined by the producer, then applies the inline style information, and finally applies the user-changeable style information item corresponding to the selection information to the text. As the user-changeable style information, the subtitle decoder applies values relative to the font information predefined in the producer-defined style information items. If, in addition to the style information predefined by the producer, the storage medium permits style information predefined in the reproducing apparatus, the subtitle decoder may also apply that predefined style information to the text.

The style information includes a set of color palettes to be applied to the presentation information, and the subtitle decoder converts all presentation information items following the style information into bitmap images using the colors defined in the color palettes. The presentation information may further include its own set of color palettes and a color update flag, separate from the set of color palettes included in the style information. If the color update flag is set to "1", the subtitle decoder applies the set of color palettes included in the presentation information; if the color update flag is set to "0", the subtitle decoder applies the original set of color palettes included in the style information.

By setting the color update flag to "1" and gradually changing the transparency values of the color palettes included in continuous presentation information items, the subtitle decoder implements a fade-in/fade-out effect; when the fade-in/fade-out effect is completed, the subtitle decoder resets the color look-up table (CLUT) according to the original set of color palettes included in the style information.

The style information also includes region information indicating the position of the window region in which the converted presentation information is to be output on the image, and the font information needed to convert the presentation information into a bitmap image, and the subtitle decoder converts the presentation information into a bitmap image by using the region information and the font information. The font information includes at least one of the output start position, output direction, alignment, line spacing, font identifier, font size, and color of the converted presentation information, and the subtitle decoder converts the presentation information into a bitmap image according to the font information.

As the font identifier, the subtitle decoder refers to information on a font file included in the clip information file, which stores attribute information of the recording unit of the image data. Before reproducing the image data, the subtitle decoder buffers the subtitle data and the font file it refers to. Furthermore, if a plurality of subtitle data items supporting several languages are recorded on the storage medium, the subtitle decoder receives selection information for a desired language from the user and reproduces the subtitle data item corresponding to the selection information.
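The style application order described in this section — producer-defined region style first, then inline style overriding part of it, then user-changeable style last — can be illustrated with a small sketch. The dictionary keys are hypothetical stand-ins for the style fields named in the text, not field names from the patent.

```python
def resolve_style(region_style, inline_style=None, user_style=None):
    """Apply the three style layers in the order the description gives:
    1) producer-defined region style, 2) inline style for the emphasized
    portion, 3) user-changeable style, expressed as relative deltas."""
    style = dict(region_style)          # 1) base style from the style unit
    if inline_style:
        style.update(inline_style)      # 2) override for the emphasized text
    if user_style:
        for key, delta in user_style.items():
            # 3) user changes are relative increases/decreases
            style[key] = style.get(key, 0) + delta
    return style

region = {"font_size": 32, "bold": False}
inline = {"bold": True}        # emphasized portion of the caption
user = {"font_size": 4}        # user enlarges the font by 4
print(resolve_style(region, inline, user))  # {'font_size': 36, 'bold': True}
```

Modeling the user-changeable layer as a relative delta matches the description, in which user changes are expressed as relative increases or decreases of the producer-defined values.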

According to another aspect of the present invention, there is provided a method of reproducing data from a storage medium storing image data and text-based subtitle data for displaying a caption on an image based on the image data. The method includes decoding the image data; reading style information; converting presentation information into a bitmap image according to the style information; and controlling output of the converted presentation information in synchronization with the decoded image data, wherein the text-based subtitle data includes the presentation information, which is a unit for displaying the caption, and the style information, which specifies the output style of the caption.

According to still another aspect of the present invention, there is provided a storage medium storing image data, and text-based subtitle data for displaying a caption on an image based on the image data, wherein the subtitle data includes one style information item specifying the output style of the caption and a plurality of presentation information items that are display units of the caption, and the subtitle data is recorded separately from the image data.

Other objects and advantages of the present invention are set out in the following detailed description and will be learned through the embodiments of the present invention.

[Embodiments]

In order that the above and other objects, features, and advantages of the present invention may be more clearly understood, a preferred embodiment is described in detail below with reference to the accompanying drawings.

Referring to FIG. 1, a storage medium according to an exemplary embodiment of the present invention (for example, the medium 230 shown in FIG. 2) is constructed in multiple layers to manage the multimedia data structure 100 of the multimedia image streams recorded on it. The multimedia data structure 100 includes clips 110, which are recording units of a multimedia image; playlists 120, which are reproduction units of a multimedia image; movie objects 130, which include navigation commands for reproducing the multimedia image; and a table of contents 140, which specifies the movie object to be reproduced first and the titles of the movie objects 130.

A clip 110 is implemented as one object including a clip AV stream 112, the audio-visual (AV) data stream of a high-quality movie, and clip information 114 corresponding to that AV data stream. The AV data stream may be compressed according to a standard such as that of the Motion Picture Experts Group (MPEG); for the purposes of the present invention, however, the clip 110 does not require the AV data stream 112 to be compressed. The clip information 114 includes the audio/video attributes of the AV data stream 112, an entry-point map, and the like, where information on the positions of random-access entry points is recorded in the entry-point map in units of predefined sectors.

A playlist 120 is a set of reproduction intervals of the clips 110, and each reproduction interval is referred to as a play item 122. A movie object 130 is formed of navigation programs, which start reproduction of a playlist 120, switch between movie objects 130, or manage reproduction of the playlists 120 according to the user's request.

The table of contents 140 is the table at the topmost layer of the storage medium and is used to define a plurality of titles and menus. The table of contents 140 includes the start position information of all the titles and menus, so that a title or menu selected through a user operation, such as a title search or a menu call, can be reproduced. It also includes the start position information of the title or menu that is automatically reproduced first when the storage medium is loaded into a reproducing apparatus.

Among these items, the data structure of the clip AV stream in which a multimedia image is compression-encoded is described in detail with reference to FIG. 2, which is a schematic diagram of an exemplary data structure of the AV data stream 210 and the text-based subtitle stream 220 of FIG. 1 according to an embodiment of the present invention.

Referring to FIG. 2, in order to solve the problems of bitmap-based caption data described above, the text-based subtitle stream 220 according to an embodiment of the present invention is provided separately from the clip AV data stream 210 recorded on a storage medium 230, for example a digital versatile disc (DVD). The AV data stream 210 includes a video stream 202, an audio stream 204, a presentation graphics stream 206 for providing subtitle data, and an interactive graphics stream 208 for providing buttons and menus for interaction with the user; these streams are multiplexed into a moving-picture main stream, known as an audio-visual (AV) data stream, and recorded on the storage medium 230.

The text-based subtitle data 220 according to an embodiment of the present invention represents data for providing the subtitles and captions of a multimedia image, is recorded on the storage medium 230, and can be implemented using a markup language, for example the Extensible Markup Language (XML). In the present embodiment, however, the subtitles and captions of the multimedia image are provided using binary data; hereinafter, text-based subtitle data 220 that provides the subtitles and captions of a multimedia image using binary data is simply referred to as a "text-based subtitle stream". The presentation graphics stream 206 for providing subtitle data also provides bitmap-based subtitle data for displaying subtitles (or captions) on the screen.

Since the text-based subtitle stream 220 is recorded separately from the AV data stream 210 and is not multiplexed with the AV data stream 210, the size of the text-based subtitle stream 220 is not limited. Accordingly, subtitles and captions can be provided in a plurality of languages; moreover, the text-based subtitle stream 220 can be reproduced continuously and edited effectively without any difficulty.

The text-based subtitle stream 220 is then converted into a bitmap graphic image and output on the screen overlaid on the multimedia image. The process of converting text-based subtitle data into a graphic bitmap image in this way is referred to as rendering, and the text-based subtitle stream 220 includes the information needed to render the caption text.

The text-based subtitle stream 220 including the rendering information is described in detail with reference to FIG. 3, which is a schematic diagram of the data structure of the text-based subtitle stream 220 according to an embodiment of the present invention.

Referring to FIG. 3, the text-based subtitle stream 220 according to an embodiment of the present invention includes a dialog style unit (DSU) 310 and a plurality of dialog presentation units (DPUs) 320 to 340. The DSU 310 and the DPUs 320 to 340 are also referred to as dialog units. Each of the dialog units 310 to 340 forming the text-based subtitle stream 220 is stored in the form of a packetized elementary stream (PES), or simply a PES packet 350. Likewise, the PESs of the text-based subtitle stream 220 are recorded and transmitted in units of transport packets (TPs) 362, and a sequence of TPs is referred to as a transport stream (TS). However, as shown in FIG. 2, the text-based subtitle stream 220 according to an embodiment of the present invention is not multiplexed with the AV data stream 210 and is recorded on the storage medium 230 as a separate TS.

Referring to FIG. 3, one dialog unit is recorded in one PES packet 350 included in the text-based subtitle stream 220. The text-based subtitle stream 220 includes one DSU 310 placed at the front and a plurality of DPUs 320 to 340 following the DSU 310. The DSU 310 includes information specifying the output style of the dialog in the caption displayed on the screen on which the multimedia image is reproduced. Meanwhile, the DPUs 320 to 340 include text information items on the dialog contents to be displayed and information on the respective output times.

FIG. 4 is a schematic diagram of a text-based subtitle stream 220 having the data structure of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 4, the text-based subtitle stream 220 includes one DSU 410 and a plurality of DPUs 420. In this exemplary embodiment, the number of DPUs is defined as num_of_dialog_presentation_units. However, the number of DPUs need not be specified explicitly; an example of that case uses a syntax such as while(processed_length<end_of_file).

The data structures of the DSU and the DPU are described in detail with reference to FIG. 5, which is a schematic diagram of the dialog style unit of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 5, a set of dialog style information items, dialog_styleset() 510, is defined in the DSU 310, in which the output style information items of the dialogs to be displayed as captions are collected. The DSU 310 includes information on the position of the region in which a dialog is displayed in the caption, information needed for rendering the dialog, information on styles that the user can control, and the like. The details are described below.

FIG. 6 is a schematic diagram of an exemplary data structure of a dialog style unit (DSU) according to an embodiment of the present invention. Referring to FIG. 6, the DSU 310 includes a palette collection 610 and a region style collection 620. The palette collection 610 is a set of color palettes defining the colors used in the caption. The color combinations and color information, such as transparency, included in the palette collection 610 can be applied to all of the DPUs placed after the DSU. The region style collection 620 is a set of output style information items of the respective dialogs forming the caption. Each region style includes region information 622 indicating the position at which the dialog is displayed on the screen, text style information 624 indicating the output style to be applied to each dialog text, and a user-changeable style collection 626 indicating the styles applied to each dialog text that the user may change at will.

FIG. 7 is a schematic diagram of an exemplary data structure of a dialog style unit according to another embodiment of the present invention. Referring to FIG. 7, unlike FIG. 6, the palette collection 610 is not included. That is, a set of color palettes is not defined in the DSU 310; instead, the palette collection 610 is defined in the DPU described with reference to FIGS. 12A and 12B. The data structure of each region style 710 is the same as the data structure described with reference to FIG. 6.

FIG. 8 is a schematic diagram of the exemplary dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention. Referring to FIGS. 8 and 6, the DSU 310 includes palette collections 860 and 610 and a plurality of region styles 820 and 620. As described above, the palette collection 610 is a set of color palettes defining the colors used in the caption, and the color combinations and color information, such as transparency, included in the palette collection 610 can be applied to all of the DPUs placed after the DSU.

Meanwhile, each of the region styles 820 and 620 includes region information 830 and 622 indicating information on the window region of the screen in which the caption is to be displayed, such as the X and Y coordinates, width, height, and background color of that window region. Likewise, each of the region styles 820 and 620 includes text style information 840 and 624 indicating the output style to be applied to each dialog text: the X and Y coordinates of the position in the window region at which the dialog text is to be displayed, the output direction (for example, left-to-right or top-to-bottom), alignment, line spacing, the font identifier to be referred to, the font style (for example, bold or italic), the font size, font color information, and the like.

Furthermore, each of the region styles 820 and 620 may also include a user-changeable style collection 850 and 626 indicating styles that the user may change at will; the user-changeable style collections 850 and 626 are optional. They may include change information for the window region position, the text output position, the font size, the line spacing, and so on, among the text output style information items 840 and 624. Each change information item can be expressed as a relative increase or decrease of the corresponding information in the output styles 840 and 624 and is applied to each dialog text.

To summarize, there are three types of style-related information: the style information (region_style) 620 defined in the region styles 820 and 620, the inline style information (inline_style) 1510 used to emphasize a portion of the caption (explained later), and the user-changeable style information (user_changeable_style) 850. The order in which these information items are applied is as follows:

1) Basically, the region style information 620 defined in the region style is applied.
2) If there is inline style information, the inline style information 1510 is applied so as to override the corresponding portion of the applied region style information and emphasize part of the caption text.
3) If there is user-changeable style information 850, it is applied last. The presence of user-changeable style information is optional.

Meanwhile, among the text style information items 840 and 624 to be applied to each dialog text, the font file information referred to through the font identifier (font_id) 842 can be defined as follows.

FIG. 9A is a schematic diagram of an exemplary clip information file 910 including a plurality of font collections referred to by the font information 842 of FIG. 8 according to an embodiment of the present invention. Referring to FIGS. 9A, 8, 2, and 1, according to the present invention, StreamCodingInfo() 930, a stream coding information structure included in the clip information files 910 and 110, contains various kinds of information on the streams recorded on the storage medium, that is, information on the video stream 202, the audio stream, the presentation graphics stream, the interactive graphics stream, and the text-based subtitle stream. In particular, with regard to the text-based subtitle stream 220, it includes information on the language in which the caption is to be displayed (textST_language_code) 932. Likewise, a font name 936 and a file name 938 of the file storing the font information can be defined, corresponding to the identifiers font_id 842 and 934 that indicate the fonts to be referred to, shown in FIG. 8. A method of finding the font file referred to by the font identifier defined here is described in detail with reference to FIG. 10.
8 is a schematic diagram showing the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention. Referring to Figures 8 and 6, DSU 310 includes palette sets 860 and 610 and a plurality of regional styles 820 and 620. As mentioned above, palette set 610 is a collection of color palettes that are used to define the colors used in the title. The color combinations and color information (such as transparency) included in the palette set 610 can be applied to all of the DPUs configured after the DSU. 14 201009820 ιυ '以-i>pifd〇c, each of the area styles 820 and 620 includes area information 83〇 and 622, which indicates information on the window area, wherein the title in the window area is to be displayed on the screen, and the area Information 830 and 622 include information on window areas such as χ, γ coordinates, width, and high background color, where the window area is not on the screen. Similarly, each of the regional styles 820 and 62 includes text style information detail and 624' indicating the output style to be applied to each dialog text. That is, the X, Υ coordinates, the output direction (for example, left to right or top to bottom), the sorting, the line spacing, the font identification word to be referred to, and the font style (for example, the position of the dialog text to be displayed in the above-mentioned window area) (for example) Blackbody or italic), font size and font color information, and more. Moreover, each of the regional styles 82A and 620 also includes user-changeable style sets 850 and 626' which indicate a style that the user can arbitrarily change. However, it is not necessary for the user to change the style sets 85〇 and 626. The user changeable style sets 85 〇 and 626 may include text output style information items 840 and 624, such as window area position, text output position, word line spacing, and the like. 
Each change information item can be represented as a relative increase or decrease in the associated information on output styles 840 and 625 for each dialog text. In summary, there are three types of style-related information: style information (region_style) 620 defined in the area styles 820 and 620, and _style information (inline-style) 151G (explained later) for enhancing the part of the title. And the user can change the style information (Shen Shen-gw, and the order in which the information item is applied is as follows: _ 15 201009820 io/D2-L>-pif.doc 1) Basically 'apply the area style defined in the area style 620. Gongbei § 2) If the style information is wired, the inline style information l5i〇 is applied to cover the part of the area style information application, and the title text is strengthened. 3) If there is a user who can change the style information 850, then this information is last. The user can change the presentation of the style information is not necessary. In the meantime, among the text style information items 840 and 624 to be applied to each dialog text, the font file information determined by the identification word of the font (font_id) 842 can be defined as follows. Figure 9A is a diagram showing an example clip information file 91A including a plurality of font sets referenced by the font information 842 of Figure 8 in accordance with an embodiment of the present invention. Referring to FIG. 9A, FIG. 8, FIG. 2 and FIG. 1, according to the present invention, various information recorded on a stream of a storage medium is included in StreamCodingInfo() 930, wherein StreamCodingInfo() 930 is included in the clip information file 910. And the stream encoding information structure in 110. That is, information including video stream 202, audio stream, playback graphics stream, interactive graphics stream, and text subtitle stream. 
In particular, it includes language information (textST_language_code) 932 for which a title is to be displayed, which is related to the text subtitle stream 220. Similarly, the font name 936 and the file name 938 of the file storage font information can be defined, which corresponds to the font_ids 842 and 934 indicating the identification words of the fonts to be displayed in Fig. 8. A method for finding a font file of an identification word to be referred to with a font defined herein will be described in detail in conjunction with FIG. 201009820 .

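The three style layers described above are applied in a fixed order: the region style is the base, inline styles override it for an emphasized span, and the user changeable style is applied last as relative increases or decreases. A minimal sketch of that layering follows; the dictionary-based representation and the field names are illustrative assumptions, not the on-disc syntax.

```python
def apply_styles(region_style, inline_style=None, user_changeable=None):
    """Compose the effective style for a piece of dialog text.

    Layering order, per the DSU/DPU model described above:
      1) region_style is the base,
      2) inline_style overrides fields for an emphasized span,
      3) user changeable values are applied last, as relative deltas.
    Field names here are illustrative, not the recorded syntax.
    """
    style = dict(region_style)            # 1) base region style
    if inline_style:
        style.update(inline_style)        # 2) inline override for a span
    if user_changeable:
        for key, delta in user_changeable.items():
            # 3) user changes are relative increases/decreases
            style[key] = style.get(key, 0) + delta
    return style

region = {"font_size": 32, "line_space": 40, "font_style": "normal"}
inline = {"font_style": "italic"}         # emphasize part of the subtitle
user = {"font_size": +4, "line_space": -2}

effective = apply_styles(region, inline, user)
```

With these inputs the effective style keeps the italic inline override while the user deltas adjust the size and spacing of the base region style.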
FIG. 9B is a diagram showing an example clip information file 940 that includes a plurality of font collections referenced by the font information 842 of FIG. 8 according to another embodiment of the present invention.

Referring to FIG. 9B, a structure ClipInfo() is defined in the clip information files 940 and 110. The plurality of font collections referenced by the font information 842 of FIG. 8 are defined in this structure. That is, the font file name 952 corresponding to font_id 842 is specified, where font_id 842 indicates the identifier of the font to be referenced and displayed in FIG. 8. A method of finding the font file corresponding to a font identifier defined here will be described in detail below.

FIG. 10 is a diagram showing the locations of the plurality of font files referenced by the font file names 938 and 952 of FIGS. 9A and 9B.

Referring to FIG. 10, a directory structure of the files recorded on the storage medium according to an embodiment of the present invention is shown. In particular, because the directory structure is used, the location of a font file, for example 11111.font 1010 or 99999.font 1020 stored in an auxiliary data (AUXDATA) directory, can easily be found.

Meanwhile, the structure of the DPU forming a dialog unit will be described in detail with reference to FIG. 11. FIG. 11 is a diagram showing an example data structure of the DPU 320 of FIG. 3 according to an embodiment of the present invention.

Referring to FIGS. 11 and 3, the DPU 320, which includes the text information of the dialog to be output and information on its display time, includes time information 1110 indicating the time at which the dialog is output on the screen, palette reference information 1120 specifying the color palette to be referenced, and dialog region information 1130 for the dialog to be output on the screen. In particular, the dialog region information 1130 for the dialog to be output on the screen includes style reference information 1132 specifying the output style to be applied to the dialog and dialog text information 1134 indicating the dialog text actually output on the screen. In this case, it is assumed that the color palette collection indicated by the palette reference information 1120 is defined in the DSU (see 610 of FIG. 6).

Meanwhile, FIG. 12A is a diagram showing an example data structure of the DPU 320 of FIG. 3 according to an embodiment of the present invention.

Referring to FIGS. 12A and 3, the DPU 320 includes time information 1210 indicating the time at which the dialog is to be output on the screen, a palette collection 1220 defining a set of color palettes, and dialog region information 1230 for the dialog to be output on the screen. In this case, the palette collection 1220 is not defined in the DSU as in FIG. 11, but is defined directly in the DPU 320.

Meanwhile, FIG. 12B is a diagram showing an example data structure of the DPU 320 of FIG. 3 according to an embodiment of the present invention.

Referring to FIG. 12B, the DPU 320 includes time information 1250 indicating the time at which the dialog is to be output on the screen, a color update flag 1260, a color palette collection 1270 that is needed when the color update flag is set to 1, and dialog region information 1280 for the dialog to be output on the screen. In this case, a color palette collection is also defined in the DSU as in FIG. 11, and another is stored in the DPU 320. In particular, in order to express a fade-in/fade-out effect using continuous reproduction, a palette collection 1270 for expressing the fade-in/fade-out is defined in the DPU 320 in addition to the basic palette defined in the DSU, and the color update flag 1260 is set to 1. This will be described in detail with reference to FIG. 19.

FIG. 13 is a diagram showing the DPU 320 of FIGS. 11 through 12B according to an embodiment of the present invention.

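The FIG. 12B-style DPU carries time information, a color update flag, an optional palette, and the dialog region information. One way to model it is sketched below; all field names, types, and the palette entry layout are assumptions for illustration, not the recorded binary syntax.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Entry = Tuple[int, int, int, int]   # assumed (Y, Cr, Cb, transparency)

@dataclass
class Region:
    region_style_id: int            # reference into the DSU's region styles
    text: str                       # dialog text actually shown on screen

@dataclass
class DPU:
    """Sketch of a FIG. 12B-style dialog presentation unit.

    When color_update_flag is 0 the decoder uses the palette defined
    in the DSU; when it is 1 the palette carried here is used instead
    (e.g. for a fade-in/fade-out). Field names are illustrative.
    """
    dialog_start_pts: int
    dialog_end_pts: int
    color_update_flag: int = 0
    palette: Optional[List[Entry]] = None    # needed only when flag == 1
    regions: List[Region] = field(default_factory=list)

    def active_palette(self, dsu_palette):
        if self.color_update_flag == 1 and self.palette is not None:
            return self.palette
        return dsu_palette

dsu_palette = [(235, 128, 128, 0)]           # fully opaque white, say
dpu = DPU(dialog_start_pts=900000, dialog_end_pts=1800000,
          color_update_flag=1, palette=[(235, 128, 128, 128)],
          regions=[Region(0, "Hello")])
```

Here `dpu.active_palette(dsu_palette)` returns the DPU's own half-transparent palette, while a DPU whose flag is 0 falls back to the DSU palette.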
Referring to FIGS. 13, 11, 12A, and 12B, the DPU includes dialog start time information (dialog_start_PTS) and dialog end time information (dialog_end_PTS) 1310 as the time information 1110 indicating the time at which the dialog is to be output on the screen. Likewise, a dialog palette identifier (dialog_palette_id) is included as the palette reference information 1120. In the case of FIG. 12A, the color palette collection 1220 can be included in place of the palette reference information 1120. Dialog text information (region_subtitle) 1334 is included as the dialog region information 1230 for the dialog to be output, and a region style identifier (region_style_id) 1332 is also included to specify the output style applied to it. The example of FIG. 13 is only one embodiment of the DPU, and a DPU having the data structure shown in FIGS. 11 through 12B can be implemented with various modifications.

FIG. 14 is a diagram showing an example data structure of the dialog text information (region_subtitle) of FIG. 13.

Referring to FIG. 14, the dialog text information (1134 of FIG. 11, 1234 of FIG. 12A, 1284 of FIG. 12B, and 1334 of FIG. 13) includes inline information 1410, as an output style for emphasizing a portion of the dialog, and dialog text 1420.

FIG. 15 is a diagram showing the dialog text information 1334 of FIG. 13 according to an embodiment of the present invention. As shown in FIG. 15, the dialog text information 1334 is implemented with inline style information (inline_style) 1510 and dialog text (text_string) 1520. Likewise, it is preferable that information indicating the end of an inline style be included in the embodiment of FIG. 15. Unless the end of an inline style is defined, an inline style, once specified, may continue to be applied afterwards, contrary to what the producer intended.

Meanwhile, FIG. 16 is a diagram for explaining constraints on continuously reproducing consecutive dialog presentation units (DPUs).

Referring to FIGS. 16 and 13, when the plurality of DPUs described above need to be reproduced continuously, the following constraints are required:

1) The dialog start time information (dialog_start_PTS) 1310 defined in a DPU indicates the time at which the dialog object starts to be output on the graphics plane (GP). The graphics plane (GP) will be described in detail below with reference to FIG. 17.
2) The dialog start time information (dialog_start_PTS) 1310 defined in a DPU indicates a time at which the text subtitle decoder that processes the text-based subtitle is reset. The text subtitle decoder will be described in detail below with reference to FIG. 17.
3) When the plurality of DPUs described above need to be reproduced continuously, the dialog end time information (dialog_end_PTS) of the current DPU should be the same as the dialog start time information of the next DPU to be continuously reproduced. That is, in FIG. 16, in order to continuously reproduce DPU #2 and DPU #3, the dialog end time information included in DPU #2 should be the same as the dialog start time information included in DPU #3.

Meanwhile, it is preferable that the DSU according to the present invention satisfy the following constraints:

1) The text-based subtitle stream 220 includes one DSU.
2) The user changeable style information items (user_control_style) included in all region styles (region_style) should be identical.

Meanwhile, it is preferable that the DPU according to the present invention satisfy the following constraint:

1) Window regions for at least two subtitles should be defined.

The structure of an example reproducing apparatus for reproducing data from a storage medium on which the data structure of the text-based subtitle stream 220 according to the present invention is recorded will now be described with reference to FIG. 17.

FIG. 17 is a diagram showing an example reproducing apparatus for a text-based subtitle stream according to an embodiment of the present invention.

Referring to FIG. 17, the reproducing apparatus 1700 (a so-called playback apparatus) includes a buffer unit and a text subtitle decoder 1730. The buffer unit includes a font preloading buffer (FPB) 1712 for storing font files and a subtitle preloading buffer (SPB) 1710 for storing text-based subtitle files, and the text subtitle decoder 1730 decodes and reproduces, as output, the text-based subtitle recorded in advance on the storage medium, by way of a graphics plane (GP) 1750 and a color look-up table (CLUT) 1760.

In particular, the subtitle preloading buffer (SPB) 1710 preloads the text-based subtitle data stream 220, and the font preloading buffer (FPB) 1712 preloads the font information.

The text subtitle decoder 1730 includes a text subtitle processor 1732, a dialog composition buffer (DCB) 1734, a dialog buffer (DB) 1736, a text subtitle renderer 1738, a dialog presentation controller 1740, and a bitmap object buffer (BOB) 1742.

The text subtitle processor 1732 receives the text-based subtitle data stream 220 from the subtitle preloading buffer (SPB) 1710, transfers the style-related information included in the DSU and the dialog output time information included in the DPU to the dialog composition buffer (DCB) 1734, and transfers the dialog text information included in the DPU to the dialog buffer (DB) 1736.

The dialog presentation controller 1740 controls the text subtitle renderer 1738 by using the style-related information included in the dialog composition buffer (DCB) 1734, and, by using the dialog output time information, controls the time at which the bitmap image rendered in the bitmap object buffer (BOB) 1742 is output to the graphics plane (GP) 1750.

Under the control of the dialog presentation controller 1740, the text subtitle renderer 1738 converts the dialog text information into a bitmap image (that is, performs rendering) by applying, to the dialog text information, the font information item corresponding to the dialog text information stored in the dialog buffer (DB) 1736, from among the font information items preloaded in the font preloading buffer (FPB) 1712. The rendered bitmap image is stored in the bitmap object buffer (BOB) 1742 and is output to the graphics plane (GP) 1750 under the control of the dialog presentation controller 1740. At this time, the colors specified in the DSU are applied by referring to the color look-up table (CLUT) 1760.

The information defined by the producer in the DSU can be used as the style-related information applied to the dialog text, and style-related information predefined by the user can also be applied. The reproducing apparatus 1700 shown in FIG. 17 applies the style information defined by the user in preference to the style-related information defined by the producer.

As described with reference to FIG. 8, the region style information (region_style) defined by the producer in the DSU is basically applied as the style-related information for the dialog text.

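The seamless-presentation constraint above requires that the dialog_end_PTS of the current DPU equal the dialog_start_PTS of the next DPU; only then does the decoder keep its buffers instead of resetting them. A small check of that condition might look as follows, with each DPU modeled simply as a (dialog_start_pts, dialog_end_pts) pair — a sketch, not the decoder's actual logic.

```python
def is_continuous(dpus):
    """Return True if consecutive DPUs can be presented seamlessly.

    Per the continuous-reproduction constraint, each DPU's
    dialog_end_PTS must equal the next DPU's dialog_start_PTS;
    the text subtitle decoder and graphics plane are then not reset
    between the units. DPUs are modeled as (start_pts, end_pts) pairs.
    """
    return all(cur[1] == nxt[0] for cur, nxt in zip(dpus, dpus[1:]))

# DPU #2 ends exactly where DPU #3 starts -> continuous reproduction
seamless = is_continuous([(0, 450000), (450000, 900000), (900000, 1350000)])
# A gap between units forces a decoder/graphics-plane reset instead
gapped = is_continuous([(0, 450000), (500000, 900000)])
```

This is the condition a fade-in/fade-out sequence of DPUs must satisfy, since the fade depends on the buffered bitmap surviving from one DPU to the next.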
If inline style information (inline_style) is included in a DPU, where the DPU includes dialog text to which region style information is applied, the inline style information (inline_style) is applied to the corresponding portion. Likewise, if the producer additionally defines user changeable styles in the DSU and one of the user changeable styles defined by the user is selected, the region style or inline style is applied first, and the user changeable information is applied last. Likewise, as described with reference to FIG. 15, it is preferable that information indicating the end of the application of an inline style be included in the contents of the inline style.

Furthermore, the producer can specify whether style-related information defined in the reproducing apparatus itself may be used, separately from the style-related information defined by the producer and recorded on the storage medium.

255個字體檔 圖18是用以根據本發明實施例說明在範例再生裝置 201009820 i〇z^/-iJ-pif.doc 案。然而,為了保證無間斷播放,文字式字幕串流22〇的 大小應該小於或等於再生裝置17〇〇 (例如圖17所示)的 預載緩衝器Π10的大小。 ' 圖19是用以根據本發明實施例說明在範例再生裝置 中DPU的再生程序的示意圖。 请參照圖19、圖13與圖π,顯示再生j)pu的流程。 播放控制器1740控制用於欲輸出在圖形平面 plane,GP ) 1750上的轉換(rendering )對話的時間,其係 藉由使用指定包括在DPU的對話的輸出時間131〇的對 開始時間資訊(dialog一start一PTS )與對話結束時間資訊 (dialog一end一PTS)。此時,當完成轉換儲存在點陣圖物 件緩衝器(bitmap object buffer,BOB) 1了42 的已轉換 (rendering)對話點陣圖影像至圖形平面(graphicsplane, GP ) 1750時,其中點陣圖物件緩衝器(bitmap 〇bject buffer, BOB) 1742包括在文字式字幕解碼器1730中,則對話開 始時間資訊會指定一時間。也就是,倘若是定義在DPU中 的對話開始時間時,則在完成轉換資訊至圖形平面 (graphics plane,GP) 1750之後建構對話所需的點陣圖資 訊會準備好被使用。同樣地,當再生DPU完成時,對話結 束時間資訊會指定一時間。此時,文字式字幕解碼器173〇 與圖形平面(graphics plane, GP) 1750會被重置。最佳的 疋’無論其為連續再生在文字式字幕解碼器1730的緩衝器 (像疋點陣圖物件緩衝器(bitmap object buffer, BOB ) 1742)也會在DPU的開始時間與結束時間之間被重置。 24 201009820 . 1 ν^^^,-jJ-pif.doc 然而,當需要數個DPU連續再生時,則文字式字幕解 碼器1730與圖形平面(graphics plane,GP ) 1750不會重置 且儲存在每個緩衝器(像是對話排列緩衝器(dial〇g composition buffer,DCB ) 1734、對話緩衝器(dialog buffer, DB )1736 與點陣圖物件緩衝器(bitmap object buffer,OBO ) 1742)中的内容會保留。也就是,當目前再生的Dpu的對 話結束時間資訊與之後連續再生的DPU的對話開始時間 ^訊相同時’則每緩衝器的内容會保留而不重置。 特別是,有淡入/淡出效果作為應用數個Dpu的連續 再生範例。淡入/淡出效果可藉由改變點陣圖物件的色彩對 照表(color l〇〇k-up table,CLUT) 1760 來實作,其中點陣 圖物件是轉換為圖形平面(graphics plane,Gp) 175〇。也 就是,第一 DPU包括組合資訊,像是顏色、樣式與輸出時 間,且之後的連續的數個DPU具有相同於第一 DPU的組 合資訊,但只更新色彩調色板資訊。在此案例中,藉由在 顏色資訊項目之中逐漸改變透明度(從0°/。至100%)來實 ❹ 作淡入/淡出效果。 特別疋’當使用如圖12B所示DPU的資料結構時, 淡炎出效果可有效地使用顏色更新旗標1260來實作。 也就是’倘若對話播放控制器174〇檢查與確認包括在Dpu 中的顏色更新旗標1260是設為,,〇,,時,也就是,倘若是一 般不需要淡入/淡出效果的案例中,則會基本地使用包括在 圖6所示的DSU中的顏色資訊。然而倘若董十話播放控制器 1740檢查與確認包括在DPU中的顏色更新旗標126〇是設 25 201009820 ιοζ^ζ-ij-pif.doc 為”1”時,也就是,倘若需要淡入/淡出效果時,則藉由使 用顏色資訊1270 (取代圖6所示的DSU中的顏色資訊) 來實作淡入/淡出效果。此時,藉由調整包括在DPU中的 顏色資訊1270的透明度來簡單地實作淡入/淡出效果。 在顯示淡入/淡出效果之後,最佳的是來更新色彩對照 表(color look-up table,CLUT) 1760 至包括在 DSU 中的 原始顏色資訊。這是因為除非更新色彩對照表(c〇1〇r l〇〇k_up table,CLUT ) 1760 ’否則一旦指定的顏色資訊可連 續地應用,而與生產商的期望相反。 φ 圖20是用以根據本發明實施例說明在範例再生裝置 中文字式字幕串流與動晝資料同步與輸出的程序的示意 圖。 請參照圖20’包括在文字式字幕資料串流22〇的Dpu 的對話開始時間資訊與對話結束時間資訊應該定義成使用 在播放清單中的全域時間軸上的時間點,以便與多媒體影 像的AV資料串流的輸出時間同步。因此,可避免Av資 
料串流的系統時間時鐘(System time也成,π。)與文字 式字幕資料串流220的對話輸出時間(dial〇g 〇啤说time pTS)之間的非連續。 ’ 圖21是用以根據本發明實施例說明在範例再生裝置 中輸出文字式字幕串流至螢幕的程序的的示意圖。 請參照圖21,其係顯示的是應用包括樣式相關資訊的 轉換(rendering)資訊2101的流程、文字資訊214〇轉換 成點陣圖影像2106的流程以及依據包括在組合資訊雇 26 201009820 1 -i3-pif.doc 中的輸出位置資訊(像是region_horizontal_p〇sition與_ region_vertical_position )將已轉換點陣圖影像輸出在圖形 平面(graphics plane, GP ) 1750上對應位置的流程。 轉換(rendering)資訊2102呈現樣式資訊,像是區域 的寬、高、前景的顏色、背景顏色、字體名稱與字體大小。 如上所述,組合資訊2108指示播放的開始時間與結束時 間、視窗區域的水平與垂直位置資訊等等,其中在視窗區 域中標題輸出在圖形平面(graphics plane,GP ) 1750上。 ® 圖22是用以根據本發明實施例說明在再生裝置17〇〇 (如圖17所示)中轉換(rendering)文字式字幕資料串流 220的流程的示意圖。 請參照圖22、21與圖8,藉由使用 region_horizontal_position 、region_vertical_position 、 region_width與region_height指定的視窗區域被指定成標 題顯示在圖形平面(graphics plane, GP) 1750上的一區域, 其中 region_horizontal position、regi〇n_vertical position、 • region-width與regi〇n_height是用於定義在DSU的標題的 視窗區域的位置資訊830。已轉換(rendering)對話的點 陣圖影像是從藉由regi〇nJi〇rizontal_position與 region一vertical_p〇siti〇n所指定的開始點位置被顯示,其中 region—horizontal_p〇sition 與 regi〇n_vertical_position 是視 窗區域中對話的輸出位置840。 其間’根據本發明再生裝置儲存由使用者選擇的樣式 資訊(style_id)在系統暫存區中 '圖23是根據本發明實 27 201009820 l ooi>pif.doc 施例繪示配置在範例再生裝置 串流的範繼_存器的示意圖祕再生文字式字幕資料 請參照圖23,狀態暫存器 六 稱懸)儲存由使用者在第’以下簡 =樣。因此,例如:二擇即的使^ 1700(如圖17所不)執行選單呼 从/裝置 式資訊改變按鈕,使用者之前遘沾搂'、之後按下樣 H Λ _存資訊的暫存11會被改變。 依,上,錄文字式字幕資料串流22()的儲存媒體與 再生文字式字幕資料串流220的具吐胺要水 、、 I音?9Π W , 的再生裝置來再生文字式字 幕資枓串抓220的方法將配合_ 24描述如下。圖2 據本發明實施例再生文字式字幕資料串流22()的方法的流 程圖。 在步驟2410中,從儲存媒體23〇 (如2所示)讀取 包括DSU資訊與DPU資訊的文字式字幕資料串流22〇, 且在步驟2420中’依據包括在Dsu資訊中的轉換 (rendering)資訊將包括在DPU資訊中的標題文字轉換成 點陣圖影像。在步驟2430中,根據時間資訊與位置資訊將 已轉換點陣圖影像輸出在螢幕上,其中時間資訊與位置資 訊為包括在DPU資訊中的組合資訊。 如上所述,本發明提供一儲存媒體,其將文字式字幕 資料串流與影像資料分開儲存。本發明也提供一再生裝置 與再生此文字式字幕資料串流的方法,以致於字幕資料的 製作與已製作字幕資料的編輯可以更容易。同時,因為不 201009820 . 
i oz jz-iJ-pif-doc 限制字幕資料項目的數目,所以可提供_語 此外,由於字幕資料是以一個樣式資訊 放資訊項目來形成,所以用應用至全部播放資料 式可事先定義並可以各種方式改變,且也可㈣ 部分的線内資訊與使用者可改變樣式。 知題 再者,藉域用數购近減資訊項目可開 連續再生並可使用此來實作淡入/淡出效果。 $的 本發明可實作成在電腦可讀記錄媒體上的程 係可藉由-般電腦讀取。電腦可軌錄親包括各式可链 存電腦可讀資㈣記錄雜’㈣可讀記錄媒 降 =存媒體(例如R0M、軟碟、硬碟)、光學儲存 如CD-ROM、DVD)以及載波(亦即透過網際網路傳輪(幻 同時’電腦可讀記錄媒體可透過鴨分享在電腦系統; 可以分散方式儲存與執行電腦可讀碼。255 Font Files FIG. 18 is a diagram for explaining an example reproduction device 201009820 i〇z^/-iJ-pif.doc according to an embodiment of the present invention. However, to ensure uninterrupted playback, the size of the text subtitle stream 22〇 should be less than or equal to the size of the preload buffer Π 10 of the reproducing device 17 (e.g., as shown in FIG. 17). Figure 19 is a diagram for explaining a reproduction procedure of a DPU in an exemplary reproduction apparatus according to an embodiment of the present invention. Referring to Fig. 19, Fig. 13, and Fig. pi, the flow of reproducing j)pu is shown. The playback controller 1740 controls the time for the rendering dialogue to be output on the graphics plane plane, GP) 1750, by using the pair of start time information (dialog) specifying the output time of the dialog included in the DPU (dialog) A start-PTS) and dialog end time information (dialog-end-PTS). At this time, when the conversion is stored in the bitmap object buffer (BOB) 1 42 of the rendered dialogue bitmap image to the graphics plane (GP) 1750, where the bitmap The object buffer (BB) is included in the text subtitle decoder 1730, and the session start time information is specified for a time. That is, if the dialog start time is defined in the DPU, the bitmap information required to construct the dialog after completing the conversion of the information to the graphics plane (GP) 1750 is ready to be used. Similarly, when the regenerative DPU is completed, the dialog end time information is specified for a time. At this time, the caption decoder 173A and the graphics plane (GP) 1750 are reset. 
The best 疋', whether it is continuously reproduced in the text subtitle decoder 1730 buffer (like the bitmap object buffer (BOB) 1742) will also be between the start and end times of the DPU. Was reset. 24 201009820 . 1 ν^^^,-jJ-pif.doc However, when several DPUs are required for continuous reproduction, the text subtitle decoder 1730 and the graphics plane (GP) 1750 are not reset and stored in Each buffer (such as a dialog array buffer (DCB) 1734, a dialog buffer (DB) 1736, and a bitmap object buffer (OBO) 1742) The content will be retained. That is, when the conversation end time information of the currently reproduced Dpu is the same as the conversation start time of the continuously reproduced DPU, then the contents of each buffer are retained without being reset. In particular, there is a fade in/out effect as a continuous reproduction example in which several Dpus are applied. The fade in/out effect can be implemented by changing the color l〇〇k-up table (CLUT) 1760 of the bitmap object, where the bitmap object is converted to a graphics plane (Gp) 175 Hey. That is, the first DPU includes combined information such as color, style, and output time, and subsequent consecutive DPUs have the same combination information as the first DPU, but only update the color palette information. In this case, the fade in/out effect is achieved by gradually changing the transparency (from 0°/. to 100%) in the color information item. In particular, when the data structure of the DPU shown in Fig. 12B is used, the lightening effect can be effectively implemented using the color update flag 1260. That is, if the dialog playback controller 174 checks and confirms that the color update flag 1260 included in the Dpu is set to, 〇,,, that is, if it is generally not necessary to fade in/out the effect, then The color information included in the DSU shown in Fig. 6 will be basically used. 
However, if the presentation controller 1740 checks and confirms that the color update flag 1260 included in the DPU is set to "1", that is, if a fade in/out effect is needed, the fade in/out effect is implemented by using the color information 1270 included in the DPU instead of the color information in the DSU shown in FIG. 6. At this time, the fade in/out effect is implemented simply by adjusting the transparency of the color information 1270 included in the DPU. After displaying the fade in/out effect, it is preferable to update the color look-up table (CLUT) 1760 back to the original color information included in the DSU. This is because, unless the color look-up table (CLUT) 1760 is updated, the once-specified color information may continue to be applied, contrary to the producer's intention.

FIG. 20 is a diagram for explaining a procedure for synchronizing and outputting a text subtitle stream with moving-picture data in an example reproducing apparatus according to an embodiment of the present invention. Referring to FIG. 20, the dialog start time information and dialog end time information of each DPU included in the text subtitle stream 220 should be defined as time points on the global time axis used in the playlist, so as to be synchronized with the output time of the AV data stream of the moving picture. In this way, discontinuity between the system time clock (STC) of the AV data stream and the dialog output time (PTS) of the text subtitle stream 220 can be avoided.

FIG. 21 is a diagram for explaining a procedure for outputting a text subtitle stream to a screen in an example reproducing apparatus according to an embodiment of the present invention. Referring to FIG.
21, the flow is as follows: the rendering information 2102, which includes style-related information, is applied to the text information 2104 to render it into a bitmap image 2106, and the rendered bitmap image is output at the corresponding position on the graphics plane (GP) 1750 according to the output position information (such as region_horizontal_position and region_vertical_position) included in the composition information 2108. The rendering information 2102 includes style information such as the width, height, foreground color, background color, font name, and font size of the region. As described above, the composition information 2108 indicates the presentation start time and end time, the horizontal and vertical position information of the window region in which the subtitle is output on the graphics plane (GP) 1750, and the like.

FIG. 22 is a diagram for explaining the flow of rendering a text subtitle stream 220 in the reproducing apparatus 1700 (shown in FIG. 17) according to an embodiment of the present invention. Referring to FIGS. 22, 21, and 8, the window region specified by region_horizontal_position, region_vertical_position, region_width, and region_height is the area in which the subtitle is displayed on the graphics plane (GP) 1750, where these fields belong to the region information 830 defining the window region for the subtitle in the DSU. The bitmap image of the rendered dialog is displayed from the starting position specified by region_horizontal_position and region_vertical_position, which give the output position 840 of the dialog within the window region.
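The positioning step above amounts to copying a rendered bitmap into the graphics plane at (region_horizontal_position, region_vertical_position). A simplified sketch using nested lists in place of real video memory; the field names follow the terms quoted above, while the function itself is an illustrative assumption.

```python
def composite(graphics_plane, bitmap, region_horizontal_position, region_vertical_position):
    """Copy the rendered subtitle bitmap onto the graphics plane at the
    position given by the composition information."""
    for row, line in enumerate(bitmap):
        for col, pixel in enumerate(line):
            graphics_plane[region_vertical_position + row][region_horizontal_position + col] = pixel
    return graphics_plane

plane = [[0] * 8 for _ in range(4)]  # a tiny 8x4 stand-in for the graphics plane
glyphs = [[1, 1], [1, 1]]            # a 2x2 rendered dialog bitmap
composite(plane, glyphs, 3, 1)
print(plane[1])  # [0, 0, 0, 1, 1, 0, 0, 0]
print(plane[2])  # [0, 0, 0, 1, 1, 0, 0, 0]
```

A real decoder would clip against region_width and region_height and map pixels through the CLUT; this sketch keeps only the coordinate arithmetic.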
Meanwhile, the reproducing apparatus according to the present invention stores the style information (style_id) selected by the user in a system register. FIG. 23 is a diagram showing an example state register, arranged in the example reproducing apparatus 1700 (shown in FIG. 17), for reproducing a text subtitle stream according to an embodiment of the present invention. As shown in FIG. 23, the style information selected by the user is stored in the state register. Therefore, for example, when the user presses a menu call or style change button, the style information stored in the register is changed accordingly.

The storage medium on which the text subtitle stream 220 is recorded and the reproducing apparatus for reproducing the text subtitle stream 220 have been described above. A method of reproducing the text subtitle stream 220 will now be described.

FIG. 24 is a flowchart of a method of reproducing the text subtitle stream 220 according to an embodiment of the present invention. In operation 2410, the text subtitle stream 220 including DSU information and DPU information is read from the storage medium 230 (shown in FIG. 2). In operation 2420, the subtitle text included in the DPU information is converted (rendered) into a bitmap image according to the rendering information included in the DSU information. In operation 2430, the converted bitmap image is output on the screen according to the time information and position information, wherein the time information and position information are the composition information included in the DPU information.

As described above, the present invention provides a storage medium that stores a text subtitle stream separately from image data.
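The three operations of FIG. 24 described above (read in 2410, render in 2420, output in 2430) can be outlined as a short pipeline. This only mirrors the flow of the description; the stream layout, field names, and the string standing in for a rendered bitmap are placeholders, not the patent's actual formats.

```python
def reproduce_text_subtitles(stream, clock):
    # Operation 2410: read the text subtitle stream, containing one DSU
    # followed by DPUs, from the storage medium.
    dsu, dpus = stream["DSU"], stream["DPUs"]
    frames = []
    for dpu in dpus:
        # Operation 2420: render the dialog text into a bitmap using the
        # rendering (style) information in the DSU.
        bitmap = f"render({dpu['text']!r}, style={dsu['region_style']})"
        # Operation 2430: output the bitmap at the time and position given
        # by the composition information.
        if dpu["start_PTS"] <= clock < dpu["end_PTS"]:
            frames.append((bitmap, dsu["position"]))
    return frames

stream = {"DSU": {"region_style": "style_1", "position": (48, 900)},
          "DPUs": [{"text": "Hi", "start_PTS": 0, "end_PTS": 100},
                   {"text": "Bye", "start_PTS": 100, "end_PTS": 200}]}
print(reproduce_text_subtitles(stream, 150))
# [("render('Bye', style=style_1)", (48, 900))]
```

Only the DPU active at the given clock value is output, reflecting the time information of the composition information in operation 2430.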
The present invention also provides a reproducing apparatus and a reproducing method for the text subtitle stream, so that subtitle data can be produced and edited more easily.

Moreover, since the number of subtitle data items is not limited, subtitles for multiple languages can be provided. In addition, since the subtitle data is formed from one style information item and a plurality of presentation information items, an output style applied to the whole of the presentation data can be defined in advance and changed in various ways, and parts of the style can be varied through in-line style information and user-changeable style information.

Furthermore, by using the palette information items defined in adjacent dialog presentation units, seamless presentation of consecutive units becomes possible, and this can be used to implement a fade in/out effect.

The present invention may also be embodied as computer-readable code on a computer-readable recording medium that can be read by a general-purpose computer. Computer-readable recording media include all kinds of recording devices in which computer-readable data can be stored, for example magnetic storage media (such as ROM, floppy disks, and hard disks), optical storage media (such as CD-ROMs and DVDs), and carrier waves (that is, transmission over the Internet). The computer-readable recording medium can also be distributed over computer systems connected through a network, so that the computer-readable code is stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention. For example, the text subtitle data and the AV data may be recorded using any computer-readable medium or data storage device. Furthermore, the text subtitle data may be arranged differently from the configurations shown in FIGS. 3 and 4. Also, the reproducing apparatus of FIG. 17 may be implemented as part of a recording apparatus, or as a single apparatus that performs both recording and reproduction. Similarly, it may be implemented as a chip with firmware, or as a general-purpose or special-purpose computer programmed to perform the method shown in FIG. 24. Accordingly, the scope of protection of the present invention is not limited to the disclosed embodiments but is to be determined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram illustrating the structure of multimedia data recorded on a storage medium according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an example data structure of the clip AV stream and the text subtitle stream of FIG. 1 according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a data structure of a text subtitle stream according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a text subtitle stream having the data structure of FIG. 3 according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the dialog style unit of FIG. 3 according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example data structure of a dialog style unit according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an example data structure of a dialog style unit according to another embodiment of the present invention.
FIG. 8 is a diagram illustrating the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention.
FIGS. 9A and 9B are diagrams illustrating example clip information files including several font sets referenced by font information according to an embodiment of the present invention.
FIG. 10 is a diagram showing the locations of several font files referenced by the font file information shown in FIGS. 9A and 9B.
FIG. 11 is a diagram illustrating an example data structure of the dialog presentation unit of FIG. 3 according to another embodiment of the present invention.
FIGS. 12A and 12B are diagrams illustrating example data structures of the dialog presentation unit of FIG. 3 according to other embodiments of the present invention.
FIG. 13 is a diagram illustrating the dialog presentation units of FIGS. 11 to 12B according to an embodiment of the present invention.
Figure 14 is a schematic diagram for explaining the example data structure of the dialog text information of Figure π_. Figure 15 is a diagram of the present invention. The embodiment shows a schematic diagram of the dialog text information of Fig. 13. Fig. 16 is a schematic diagram for explaining the limitation of continuously reproducing the continuous dialog presentation units (DPUs). The figure is for explaining according to an embodiment of the present invention. FIG. 18 is a schematic diagram showing a preloading procedure for a text subtitle stream in an example reproducing apparatus according to an embodiment of the present invention. FIG. DETAILED DESCRIPTION OF THE INVENTION A schematic diagram of a playback procedure of a dialog presentation unit (DPU) in an example playback device is illustrated. Figure 20 is a diagram for explaining a procedure for synchronizing and outputting a text subtitle stream and an animation data in an exemplary reproducing apparatus according to an embodiment of the present invention. Figure 21 is a diagram for explaining a procedure for outputting a text subtitle stream to a screen in an example reproducing apparatus according to an embodiment of the present invention. Figure 22 is a diagram for explaining a procedure for representing a text subtitle stream in an exemplary reproducing apparatus in accordance with an embodiment of the present invention. 31

FIG. 23 is a diagram illustrating an example state register, arranged in the example reproducing apparatus, for reproducing a text subtitle stream according to an embodiment of the present invention.
FIG. 24 is a flowchart of a method of reproducing a text subtitle stream according to an embodiment of the present invention.

DESCRIPTION OF REFERENCE NUMERALS
100: multimedia data structure
110: clip
112: AV data stream
114: clip information
120: playlist
122: playitem
130: movie object
140: table of contents
202: video stream
204: audio stream
206: presentation graphics stream
208: interactive graphics stream
210: AV data stream
220: text subtitle data
230: storage medium
310: dialog style unit (DSU)
320, 330, 340: dialog presentation units (DPUs)
350: PES packets
362: transport packets (TPs)

410: DSU

420: DPU
610: palette set
620: region style set
622: region information
624: text style information
626: user-changeable style set
710: region style
820: region style
830: region information
840: text style information
850: user-changeable style set
860: palette set
910, 940: clip information files
1110: time information
1120: palette reference information
1130: dialog region information
1132: style reference information
1134: dialog text information
1210: time information
1220: palette set
1230: dialog region information
1232: style reference information


1234: dialog text information
1250: time information
1260: color update flag
1270: color palette set
1280: dialog region information
1282: style reference information
1284: dialog text information
1410: in-line style information
1420: dialog text
1700: reproducing apparatus
1710: subtitle preloading buffer (SPB)
1712: font preloading buffer (FPB)
1730: text subtitle decoder
1732: text subtitle processor
1734: dialog composition buffer (DCB)
1736: dialog buffer (DB)
1738: text subtitle renderer
1740: dialog presentation controller
1742: bitmap object buffer (BOB)
1750: graphics plane (GP)
1760: color look-up table (CLUT)

Claims (1)

Claims:
1. A storage medium comprising:
audio-visual data; and
text-based subtitle data for providing subtitles for the audio-visual data,
wherein the text-based subtitle data comprises a plurality of dialog presentation units and a dialog style unit defining a group of output styles used by the dialog presentation units, and
each dialog presentation unit comprises dialog text information, time information indicating an output time of the dialog text information, palette information defining colors for the dialog text information, and a color update flag indicating whether only the palette information has changed relative to a previous dialog presentation unit.
2. The storage medium for storing image data and text-based subtitles according to claim 1, wherein, if the color update flag is set to "1", the palette information is used for the output of previous dialog text information according to the previous dialog presentation unit.
3. The storage medium for storing image data and text-based subtitles according to claim 1, wherein, if the color update flag is set to "0", the palette information is used for the output of the dialog text information according to the current dialog presentation unit.
4. An apparatus for reproducing data from a storage medium storing image data and text-based subtitle data, the subtitle data being used to display subtitles on an image, the apparatus comprising:
a video decoder which decodes the image data; and
a subtitle decoder which decodes the text-based subtitle data, the text-based subtitle data comprising a plurality of dialog presentation units and a dialog style unit defining a group of output styles used by the dialog presentation units,
wherein the subtitle decoder converts the dialog presentation units into bitmap images according to the dialog style unit and controls output times of the converted dialog presentation units in synchronization with the decoded image data, and
each dialog presentation unit comprises dialog text information, time information indicating an output time of the dialog text information, palette information defining colors for the dialog text information, and a color update flag indicating whether only the palette information has changed relative to a previous dialog presentation unit.
5. The apparatus for reproducing data from a storage medium storing image data and text-based subtitles according to claim 4, wherein, if the color update flag is set to "1", the subtitle decoder uses the palette information for the output of previous dialog text information according to the previous dialog presentation unit.
6. The apparatus for reproducing data from a storage medium storing image data and text-based subtitles according to claim 4, wherein, if the color update flag is set to "0", the subtitle decoder uses the palette information for the output of the dialog text information according to the current dialog presentation unit.
TW098133833A 2004-02-28 2005-02-25 A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data TWI417873B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20040013827 2004-02-28
KR1020040032290A KR100727921B1 (en) 2004-02-28 2004-05-07 Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method thereof

Publications (2)

Publication Number Publication Date
TW201009820A true TW201009820A (en) 2010-03-01
TWI417873B TWI417873B (en) 2013-12-01

Family

ID=36760967

Family Applications (2)

Application Number Title Priority Date Filing Date
TW098133833A TWI417873B (en) 2004-02-28 2005-02-25 A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data
TW094105743A TWI320925B (en) 2004-02-28 2005-02-25 Apparatus for reproducing data from a storge medium storing imige data and text-based subtitle data

Family Applications After (1)

Application Number Title Priority Date Filing Date
TW094105743A TWI320925B (en) 2004-02-28 2005-02-25 Apparatus for reproducing data from a storge medium storing imige data and text-based subtitle data

Country Status (10)

Country Link
JP (2) JP4776614B2 (en)
KR (1) KR100727921B1 (en)
CN (3) CN100479047C (en)
AT (1) ATE504919T1 (en)
DE (1) DE602005027321D1 (en)
ES (1) ES2364644T3 (en)
HK (3) HK1088434A1 (en)
MY (1) MY139164A (en)
RU (1) RU2490730C2 (en)
TW (2) TWI417873B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7787753B2 (en) 2003-04-09 2010-08-31 Lg Electronics Inc. Recording medium having a data structure for managing reproduction of text subtitle data and methods and apparatuses of recording and reproducing
KR20050078907A (en) 2004-02-03 2005-08-08 엘지전자 주식회사 Method for managing and reproducing a subtitle of high density optical disc
WO2005091722A2 (en) 2004-03-26 2005-10-06 Lg Electronics Inc. Recording medium and method and apparatus for reproducing text subtitle stream recorded on the recording medium
ATE479987T1 (en) * 2004-03-26 2010-09-15 Lg Electronics Inc STORAGE MEDIUM, METHOD AND APPARATUS FOR PLAYBACKING SUBTITLE STREAMS
CN1934625B (en) * 2004-03-26 2010-04-14 Lg电子株式会社 Method and apparatus for reproducing and recording text subtitle streams
KR100818926B1 (en) * 2006-10-31 2008-04-04 삼성전자주식회사 Apparatus and method for handling presentation graphic of optical disk
CN101183524B (en) * 2007-11-08 2012-10-10 腾讯科技(深圳)有限公司 Lyric characters display process and system
CN101904169B (en) * 2008-02-14 2013-03-20 松下电器产业株式会社 Reproduction device, integrated circuit, reproduction method
EP3118854B1 (en) * 2014-09-10 2019-01-30 Panasonic Intellectual Property Corporation of America Recording medium, playback device, and playback method

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5294982A (en) * 1991-12-24 1994-03-15 National Captioning Institute, Inc. Method and apparatus for providing dual language captioning of a television program
EP0714582B1 (en) * 1993-08-20 1999-04-21 Thomson Consumer Electronics, Inc. Closed caption system for use with compressed digital video transmission
DK0745307T5 (en) * 1994-12-14 2010-10-11 Koninkl Philips Electronics Nv Subtitle transmission system
US5721720A (en) * 1994-12-28 1998-02-24 Kabushiki Kaisha Toshiba Optical recording medium recording pixel data as a compressed unit data block
JPH08241068A (en) * 1995-03-03 1996-09-17 Matsushita Electric Ind Co Ltd Information recording medium, device and method for decoding bit map data
JPH08275205A (en) * 1995-04-03 1996-10-18 Sony Corp Method and device for data coding/decoding and coded data recording medium
US5848352A (en) * 1995-04-26 1998-12-08 Wink Communications, Inc. Compact graphical interactive information system
JP3484838B2 (en) * 1995-09-22 2004-01-06 ソニー株式会社 Recording method and playback device
US6345147B1 (en) * 1995-11-24 2002-02-05 Kabushiki Kaisha Toshiba Multi-language recording medium and reproducing device for the same
JPH10210504A (en) * 1997-01-17 1998-08-07 Toshiba Corp Sub video image color pallet setting system
JPH10271439A (en) * 1997-03-25 1998-10-09 Toshiba Corp Dynamic image display system and dynamic image data recording method
US6288990B1 (en) * 1997-10-21 2001-09-11 Sony Corporation Reproducing apparatus, recording apparatus, and recording medium
JPH11196386A (en) * 1997-10-30 1999-07-21 Toshiba Corp Computer system and closed caption display method
JP3377176B2 (en) * 1997-11-28 2003-02-17 日本ビクター株式会社 Audio disc and decoding device
KR100327211B1 (en) * 1998-05-29 2002-05-09 윤종용 Sub-picture encoding method and apparatus
JP2000023082A (en) * 1998-06-29 2000-01-21 Toshiba Corp Information recording and reproducing device for multiplex television broadcast
JP2002056650A (en) * 2000-08-15 2002-02-22 Pioneer Electronic Corp Information recorder, information recording method and recording medium with record control program recorded therein
JP4467737B2 (en) * 2000-08-16 2010-05-26 パイオニア株式会社 Information recording apparatus, information recording method, and information recording medium on which recording control program is recorded
JP4021264B2 (en) * 2002-07-11 2007-12-12 株式会社ケンウッド Playback device
KR100939711B1 (en) * 2002-12-12 2010-02-01 엘지전자 주식회사 Apparatus and method for reproducing a text based subtitle
KR100930349B1 (en) * 2003-01-20 2009-12-08 엘지전자 주식회사 Subtitle data management method of high density optical disc
CN100473133C (en) * 2004-02-10 2009-03-25 Lg电子株式会社 Text subtitle reproducing method and decoding system for text subtitle
JP2007522596A (en) * 2004-02-10 2007-08-09 エルジー エレクトロニクス インコーポレーテッド Recording medium and method and apparatus for decoding text subtitle stream
KR100739680B1 (en) * 2004-02-21 2007-07-13 삼성전자주식회사 Storage medium for recording text-based subtitle data including style information, reproducing apparatus, and method therefor
WO2005091722A2 (en) * 2004-03-26 2005-10-06 Lg Electronics Inc. Recording medium and method and apparatus for reproducing text subtitle stream recorded on the recording medium

Also Published As

Publication number Publication date
CN1774759A (en) 2006-05-17
RU2007146766A (en) 2009-06-20
JP2011035922A (en) 2011-02-17
TW200529202A (en) 2005-09-01
ES2364644T3 (en) 2011-09-08
JP5307099B2 (en) 2013-10-02
KR100727921B1 (en) 2007-06-13
MY139164A (en) 2009-08-28
CN101059984B (en) 2010-08-18
DE602005027321D1 (en) 2011-05-19
CN101360251B (en) 2011-02-16
CN101059984A (en) 2007-10-24
TWI417873B (en) 2013-12-01
TWI320925B (en) 2010-02-21
ATE504919T1 (en) 2011-04-15
JP4776614B2 (en) 2011-09-21
CN100479047C (en) 2009-04-15
RU2490730C2 (en) 2013-08-20
KR20050088035A (en) 2005-09-01
HK1116588A1 (en) 2008-12-24
HK1126605A1 (en) 2009-09-04
HK1088434A1 (en) 2006-11-03
CN101360251A (en) 2009-02-04
JP2007525904A (en) 2007-09-06

Similar Documents

Publication Publication Date Title
US8437612B2 (en) Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method for reproducing text-based subtitle stream recorded on the storage medium
TWI298874B (en) Text subtitle decorder and method for decoding text subtitle streams
US8615158B2 (en) Reproduction device, reproduction method, program storage medium, and program
KR100970735B1 (en) Reproducing method for information storage medium recording audio-visual data and recording apparatus therefor
US7756398B2 (en) Recording medium and method and apparatus for reproducing text subtitle stream for updating palette information
TWI417873B (en) A storage medium and an apparatus for reproducing data from a storage medium storing audio-visual data and text-based subtitle data
KR20070028325A (en) Text subtitle decoder and method for decoding text subtitle streams
KR20050031847A (en) Storage medium for recording subtitle information based on text corresponding to audio-visual data including multiple playback route, reproducing apparatus and reproducing method therefor
US20050196146A1 (en) Method for reproducing text subtitle and text subtitle decoding system