EP1552515A4 - INFORMATION STORAGE MEDIUM CONTAINING MULTI-LANGUAGE SUBTITLE DATA USING TEXT DATA AND DOWNLOADABLE FONTS AND APPARATUS THEREFOR - Google Patents

INFORMATION STORAGE MEDIUM CONTAINING MULTI-LANGUAGE SUBTITLE DATA USING TEXT DATA AND DOWNLOADABLE FONTS AND APPARATUS THEREFOR

Info

Publication number
EP1552515A4
EP1552515A4 (application EP03751536A)
Authority
EP
European Patent Office
Prior art keywords
data
subtitle
text
text data
storage medium
Prior art date
Legal status
Ceased
Application number
EP03751536A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP1552515A1 (en)
Inventor
Kil-Soo Jung
Seong-Jin Moon
Jung-Wan Ko
Sung-Wook Park
Hyun-Kwon Chung
Jung-Kwon Heo
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of EP1552515A1 publication Critical patent/EP1552515A1/en
Publication of EP1552515A4 publication Critical patent/EP1552515A4/en
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8233Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a character code signal
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00Record carriers by type
    • G11B2220/20Disc-shaped record carriers
    • G11B2220/25Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537Optical discs
    • G11B2220/2562DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/84Television signal recording using optical recording
    • H04N5/85Television signal recording using optical recording on discs or drums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction

Definitions

  • the present invention relates to an information storage medium on which subtitles for supporting multiple languages using text data and downloadable fonts are recorded and an apparatus therefor.
  • FIG. 1 is a diagram of a data structure for a DVD.
  • The disc space of a DVD, which is a multimedia storage medium, is divided into a VMG area and a plurality of VTS areas.
  • Title information and information on a title menu are stored in the VMG area, and information on the title is stored in the plurality of VTS areas.
  • the VMG area comprises 2 to 3 files and each VTS area comprises 3 to 12 files.
  • FIG. 2 is a detailed diagram of a VMG area.
  • The VMG area includes a VMGI area storing additional information on the VMG, a VOBS area storing video information (video objects) for the menu, and a backup area for the VMGI. Each of these areas exists as one file, and among them the presence of the VOBS area is optional.
  • In the VTS area, information on a title, which is a reproduction unit, and a VOBS, which is video data, are stored. In one VTS, at least one title is recorded.
  • FIG. 3 is a detailed diagram of a VTS area.
  • a VTS area includes video title set information (VTSI), a VOBS that is video data for a menu screen, a VOBS that is video data for a video title set, and backup data of the VTSI.
  • The presence of the VOBS for displaying a menu screen is optional.
  • Each VOBS is again divided into VOBs and cells that are recording units.
  • One VOB comprises a plurality of cells.
  • the lowest recording unit mentioned in the present invention is a cell.
  • FIG. 4 is a detailed diagram of a VOBS that is video data.
  • one VOBS comprises a plurality of VOBs
  • one VOB comprises a plurality of cells.
  • a cell comprises a plurality of VOBUs.
  • A VOBU contains data coded by the Moving Picture Experts Group (MPEG) method for coding moving pictures that is used in a DVD.
  • Since images are spatiotemporally compression-encoded, previous or following images are needed in order to decode an image. Accordingly, in order to support a random access function, by which reproduction can be started from an arbitrary location, intra encoding, which does not need previous or following images, is performed for every predetermined image.
  • MPEG defines system encoding (ISO/IEC 13818-1) for encapsulating video data and audio data into one bitstream.
  • The system encoding defines two multiplexing methods: a program stream (PS) multiplexing method, which is suitable for producing one program and storing it on an information storage medium, and a transport stream (TS) multiplexing method, which is appropriate for producing and transmitting a plurality of programs.
  • the DVD employs the PS encoding method.
  • Video data and audio data are respectively divided into units of packs (PCK) and are multiplexed through time division of the packs.
  • Data other than the video and audio data defined by MPEG is called a private stream; it is also carried in PCKs so that it can be multiplexed together with the audio and video data.
  • a VOBU comprises a plurality of PCKs.
  • the first PCK in the plurality of PCKs is a navigation pack (NV_PCK).
  • the remaining part comprises video packs (V_PCK), audio packs (A_PCK), and sub picture packs (SP_PCK).
  • Video data contained in a video pack comprises a plurality of GOPs.
  • The SP_PCK is for 2-dimensional graphic data and subtitle data. That is, in the DVD, subtitle data that appears overlapping a video picture is coded by the same method as 2-dimensional graphic data: no separate coding method for supporting multiple languages is employed; instead, each set of subtitle data is converted into graphic data, processed by that one coding method, and then recorded.
  • the graphic data for a subtitle is referred to as a sub picture.
  • a sub picture comprises a sub picture unit (SPU).
  • a sub picture unit corresponds to one graphic data sheet.
  • FIG. 5 is a diagram showing the relation between an SPU and SP_PCK.
  • one SPU comprises a sub picture unit header (SPUH), pixel data (PXD), and a sub picture display control sequence table (SP_DCSQT), which are divided and recorded in this order into a plurality of 2048-byte SP_PCKs.
  • PXD data is obtained by encoding a sub picture.
  • Pixel data forming a sub picture can have 4 different types of values: a background, a pattern pixel, an emphasis pixel-1, and an emphasis pixel-2, which are expressed by the 2-bit binary values 00, 01, 10, and 11, respectively. Accordingly, a sub picture can be viewed as a set of data having the four pixel values and formed with a plurality of lines. Encoding is performed for each line. As shown in FIG. 6, the SPU is run-length encoded.
  • No_P: the number of continuous pixels
  • PD: the 2-bit pixel data value
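The per-line run-length coding just described (runs of identical 2-bit pixel values stored as a pixel count No_P plus a pixel value PD) can be sketched as follows; this is an illustrative helper, not the exact PXD bit layout, which packs runs into variable-length bit fields.

```python
def rle_encode_line(pixels):
    """Run-length encode one line of 2-bit sub picture pixels.

    Each pixel value is one of 0b00 (background), 0b01 (pattern pixel),
    0b10 (emphasis pixel-1), or 0b11 (emphasis pixel-2).  Each run is
    emitted as a (No_P, PD) pair: the number of continuous pixels and
    their shared 2-bit value.  Simplified sketch only; the real DVD PXD
    format packs these pairs into variable-length bit fields.
    """
    runs = []
    i = 0
    while i < len(pixels):
        value = pixels[i]
        count = 1
        while i + count < len(pixels) and pixels[i + count] == value:
            count += 1
        runs.append((count, value))  # (No_P, PD)
        i += count
    return runs
```

For example, a line of three background pixels, two pattern pixels, and one emphasis pixel-2 collapses into three (No_P, PD) pairs.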
  • FIG. 7 is a diagram of the data structure of SP_DCSQT.
  • SP_DCSQT contains display control information for outputting the PXD data.
  • the SP_DCSQT comprises a plurality of sub picture display control sequences (SP_DCSQ).
  • An SP_DCSQ is a set of display control commands (SP_DCCMD) performed at one time, and comprises SP_DCSQ_STM, indicating a start time, SP_NXT_DCSQ_SA, containing information on the location of the next SP_DCSQ, and a plurality of SP_DCCMDs.
  • the SP_DCCMD is control information on how the pixel data (PXD) and video pictures are combined and output, and contains pixel data color information, information on contrast with video data, and information on an output time and a finish time.
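The SP_DCSQT layout described in the entries above might be modelled like this; the class and field names follow the patent's terms, but the types and structure are our illustrative assumptions, not the normative DVD-Video byte format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SPDCCmd:
    """One display control command (colour, contrast, output/finish time...)."""
    opcode: int
    args: bytes = b""

@dataclass
class SPDCSQ:
    """A set of display control commands performed at one time."""
    sp_dcsq_stm: int                     # start time of this command sequence
    sp_nxt_dcsq_sa: int                  # location of the next SP_DCSQ
    commands: List[SPDCCmd] = field(default_factory=list)

@dataclass
class SPDCSQT:
    """Display control sequence table: an ordered list of SP_DCSQs."""
    sequences: List[SPDCSQ] = field(default_factory=list)
```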
  • FIG. 8 is a reference diagram showing an output situation considering sub picture data.
  • SP_DCSQT contains information on an SP display area, which is a sub picture display area in which a sub picture is displayed in a video display area that is a video image area, and information on the start time and finish time of output.
  • Sub picture data for subtitle data of a maximum of 32 different languages can be multiplexed with video data and recorded. These languages are distinguished by the stream id provided by the MPEG system encoding and the sub stream id defined in the DVD. Accordingly, if a user selects one language, SPUs are extracted only from the SP_PCKs having the stream id and sub stream id corresponding to the selected language and decoded, and the subtitle data is extracted. Then, output is controlled according to the display control commands.
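The language-selection step described above, keeping only the SP_PCKs whose stream id and sub stream id match the chosen language, can be sketched as a simple filter; the dictionary representation of a pack and the id values in the test are hypothetical stand-ins for fields a real demultiplexer would parse from MPEG PS pack headers.

```python
def select_subtitle_packs(packs, stream_id, sub_stream_id):
    """Keep only the sub picture packs for the selected language.

    `packs` is a list of dicts with 'stream_id' and 'sub_stream_id'
    keys (an illustrative stand-in for parsed pack headers).  A real
    player would then decode SPUs from the surviving packs.
    """
    return [p for p in packs
            if p["stream_id"] == stream_id
            and p["sub_stream_id"] == sub_stream_id]
```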
  • subtitle data are multiplexed together with video data as described above.
  • The amount of bits generated for sub picture data must therefore be considered when video data is coded. Since subtitle data is converted into graphic data and processed, the amount of data generated differs from language to language, and the amounts are huge. Usually, after the moving pictures are encoded once, the sub picture data for each language is multiplexed again with the output of the encoding, so that a DVD appropriate to each region is produced. However, depending on the language, the amount of sub picture data is so large that when it is multiplexed with the video data, the total amount of generated bits can exceed the maximum allowance. In addition, since sub picture data is multiplexed between video data, the start point of each VOBU differs by region; because the start point of a VOBU is managed separately, this information must be updated whenever a multiplexing process begins anew.
  • Furthermore, sub picture data cannot be used for additional purposes, such as outputting subtitles in two languages at the same time, or outputting only subtitle data, for example for language study.
  • the present invention provides an information storage medium on which sub picture data is recorded with a data structure in which when video data are coded, the amount of bits to be generated for sub picture data need not be considered in advance and an apparatus therefor.
  • the present invention also provides an information storage medium on which sub picture data is recorded with a data structure in which sub picture data can be used for purposes other than subtitles and an apparatus therefor.
  • an information storage medium on which video data are recorded including: a plurality of clips that are recording units in which the video data are stored; and text data for subtitles which are recorded separately from the plurality of clips and overlappable with an image according to the video data and then outputtable, the text data including data for providing subtitles in at least one language.
  • The information storage medium may include character font data, which is recorded separately from the plurality of clips and is usable for graphic expression of the text data.
  • the text data may be recorded in separate spaces for each of the multiple languages.
  • the text data may include character data which are convertible into graphic data and output synchronization information for synchronizing the graphic data with the video data.
  • the text data may include character data which are convertible into graphic data and output location information indicating a location in which the graphic data is to be displayed when the graphic data is overlapped with an image according to the video data.
  • the text data may include character data which are convertible into graphic data and information for expressing the output of the graphic data in a plurality of sizes when the graphic data is overlapped with an image.
  • the video data may be divided into units that are continuously reproducible, and a size of all of the text data corresponding to one unit is limited.
  • the video data may be divided into a plurality of units that are continuously reproducible, the text data corresponding to each reproducing unit being divided into a plurality of language sets, and a size of all of the text data forming one language set being limited.
  • The data forming the text data may be expressed and recorded in Unicode for supporting multi-language character sets.
  • When the text data for subtitles is formed only with characters of one of ASCII, which is a basic English character set, and ISO 8859-1, which is a Latin-extended character set, the text data may be coded and recorded using UTF-8, by which one character is coded into a plurality of 8-bit units.
  • When the text data includes a character having a code point value of a 2-byte size in Unicode, the text data may be coded and recorded using UTF-16, by which one character is coded into a plurality of 16-bit units.
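The encoding rule in the preceding entries (UTF-8 when the subtitles use only ASCII/ISO 8859-1 characters, UTF-16 when 2-byte code points appear) can be illustrated with a small helper; the function name and the big-endian byte-order choice are our assumptions.

```python
def pick_subtitle_encoding(text):
    """Choose UTF-8 for ASCII/Latin-only subtitle text, UTF-16 otherwise.

    Mirrors the rule in the text: characters whose Unicode code points
    fit in ISO 8859-1 (<= U+00FF) are stored in UTF-8; text containing
    2-byte code points (e.g. Korean Hangul) is stored in UTF-16.  The
    big-endian variant here is an illustrative choice.
    """
    if all(ord(ch) <= 0xFF for ch in text):
        return "utf-8"
    return "utf-16-be"
```

For pure Latin text, UTF-8 uses at most the same number of bytes as UTF-16; for text dominated by 2-byte code points, UTF-16 is the more compact fixed-width choice.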
  • the information storage medium may be a removable type.
  • the information storage medium may be an optical disc which is readable by an optical apparatus of the reproducing apparatus.
  • a reproducing apparatus which reproduces data from an information storage medium on which video data is recorded, the video data being coded, divided into clips that are recording units, and recorded in a plurality of clips, and on which text data for subtitles, which is formed with data of a plurality of languages and is overlappable as graphic data with an image based on the video data, is recorded separately from the clips
  • the reproducing apparatus including: a data reproducing unit which reads data from the information storage medium; a decoder which decodes the coded video data; a renderer which converts the text data into graphic data; a blender which overlays the graphic data with the video data to generate an image; a first buffer which temporarily stores the video data; and a second buffer which stores the text data.
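The blender described above, which overlays rendered subtitle graphics on decoded video frames, might work roughly as follows; the frame representation (2-D lists, with None marking transparent subtitle pixels) is a deliberate simplification of real YUV/RGB mixing with per-pixel contrast values.

```python
def blend_subtitle(frame, subtitle, x, y):
    """Overlay a rendered subtitle bitmap onto a decoded video frame.

    `frame` and `subtitle` are 2-D lists of pixel values; None in the
    subtitle means transparent, so the video pixel shows through.
    (x, y) is the top-left corner of the subtitle display area.
    The input frame is not modified; a blended copy is returned.
    """
    out = [row[:] for row in frame]
    for dy, row in enumerate(subtitle):
        for dx, pix in enumerate(row):
            if pix is not None:
                out[y + dy][x + dx] = pix
    return out
```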
  • Font data may be stored in a third buffer and are usable in the text data for graphic expression of the text data and are recorded separately from the clips on the information storage medium, and the renderer converts the text data into graphic data using the font data.
  • When the text data is data of multiple languages, the text data may be recorded in separate spaces for each of the languages, wherein text data for a language that is either selected by a user or set as an initial reproducing language is temporarily stored in the second buffer, font data for converting the text data into graphic data may be temporarily stored in the third buffer, and, simultaneously, while reproducing video data, the text data may be converted into graphic data and the graphic data may be output.
  • the apparatus may include a controller which controls an output start time and end time of the text data using synchronization information.
  • On the information storage medium may be recorded the text data which includes the synchronization information, by which the text data are converted into graphic data which are overlapped with an image based on the video data.
  • the apparatus may include a controller which controls a location where the text data is overlapped with an image based on the video data using output location information.
  • the text data includes character data which are convertible into graphic data, and the output location information indicating a location where the graphic data is to be output when the graphic data is overlapped with an image based on the video data.
  • The video data recorded on the information storage medium may be divided into units that are continuously reproducible, and the text data is recorded within a limited size for all of the text data corresponding to the recording unit. All of the size-limited text data may be stored in the second buffer before reproducing the continuously reproducible units, and when a language change occurs during reproduction, subtitle data corresponding to the language, already stored in the buffer, may be output.
  • The video data may be divided into units that are continuously reproducible, the text data corresponding to one unit is divided into a plurality of language sets, and the text data for subtitles forming one language set is recorded so that the size of all of that text data is limited.
  • The text data corresponding to the language set containing the subtitle data that is output simultaneously with the video data may be stored in the buffer before reproducing the continuously reproducible unit. When a language change occurs during reproduction and the text data for the new language is in the buffer, that text data may be output directly; when it is not in the buffer, the text data corresponding to the language set containing it is first stored in the buffer and then output.
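The buffering policy of the preceding entries (preload one language set, and reload the buffer only when the user switches to a language outside the currently buffered set) could be sketched as below; the in-memory dictionary stands in for the separately recorded language-set areas on the disc.

```python
class SubtitleBuffer:
    """Holds the text data for one language set at a time.

    `language_sets` maps a set name to {language: text_data}; this
    layout is illustrative -- on the disc, the sets would be separately
    recorded areas that are read into the buffer as needed.
    """
    def __init__(self, language_sets):
        self.language_sets = language_sets
        self.loaded_set = None
        self.data = {}

    def _set_containing(self, language):
        for name, langs in self.language_sets.items():
            if language in langs:
                return name
        raise KeyError(language)

    def select_language(self, language):
        """Return subtitle text for `language`, reloading the buffer
        only when the language is outside the currently loaded set."""
        needed = self._set_containing(language)
        if needed != self.loaded_set:
            # Simulates reading the whole language set from the disc.
            self.loaded_set = needed
            self.data = self.language_sets[needed]
        return self.data[language]
```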
  • the apparatus may include a subtitle size selector which selects a size of the subtitle data based on a user input.
  • the text data may include character data, which are convertible into graphic data, and information indicating the output of a plurality of graphic data items when the graphic data is overlapped with an image based on the video data may be recorded on the information storage medium.
  • Data forming the text data may be expressed and recorded in Unicode for supporting multi-language sets, and the renderer converts the characters expressed in Unicode into graphic data.
  • When the text data for subtitles is formed only with characters of one of ASCII, which is a basic English character set, and ISO 8859-1, which is a Latin-extended character set, the text data may be coded and recorded using UTF-8, by which one character is coded into a plurality of 8-bit units, and the renderer may convert the characters expressed in UTF-8 into graphic data.
  • When the text data includes a character having a code point value of a 2-byte size in Unicode, the text data may be coded and recorded using UTF-16, by which one character is coded into a plurality of 16-bit units, and the renderer may convert the characters expressed in UTF-16 into graphic data.
  • the information storage medium may be a removable type, and the reproducing apparatus may reproduce data recorded on the removable information storage medium.
  • the information storage medium may be an optical disc which is readable by an optical apparatus of the reproducing apparatus, and the reproducing apparatus may reproduce data recorded on the optical disc.
  • the reproducing apparatus may output the graphic data without reproducing video data recorded on the information storage medium.
  • the subtitle data may include subtitle data for one or more languages and the renderer may convert text data for the one or more languages into graphic data.
  • the subtitle data may be synchronously overlapped with a video image and then output.
  • a recording apparatus which records video data on an information storage medium, including: a data writer which writes data on the information storage medium; an encoder which codes video data; a subtitle generator which generates subtitle data addable to the video data; a central processing unit (CPU); a fixed-type storage; and a buffer.
  • the video data is stored in the fixed-type storage after the encoder divides video images into clips that are recording units and compression encodes the clips.
  • the subtitle generator generates subtitle data for a plurality of languages in the form of a text, the subtitle data being reproducible together with an image based on the video data and stored in the fixed-type storage.
  • the buffer temporarily stores the data stored in the fixed-type storage.
  • the data writer records the coded video data and subtitle data that are temporarily stored in the buffer on the information storage medium.
  • the CPU controls encoding of the video data, recording the coded video data and the subtitle data in respective separate areas on the information storage medium.
  • the apparatus may include a font data generator which generates font data for converting text data for subtitles into graphic data.
  • the font data generator may generate font data needed for converting the subtitle data into graphic data, and may store the font data in the fixed-type storage.
  • the buffer may temporarily store the font data stored in the fixed-type storage
  • the data writer may record the font data temporarily stored in the fixed-type storage on the information storage medium
  • the CPU may control the generating of the font data and recording the font data in separate areas of the information storage medium.
  • the CPU may control the subtitle data so that the subtitle data are recorded in a separate space for each language.
  • The apparatus may include a subtitle generator which generates the subtitle data including character data, which is convertible into graphic data and then output, and output synchronization information for synchronizing with reproduction of the video images.
  • the subtitle generator may generate the subtitle data by including character data which are convertible into graphic data and may output location information indicating a location where the graphic data will be output when the graphic data is overlapped with an image based on the video data.
  • the subtitle generator may generate the text data by including character data which is convertible into graphic data and information for expressing the output of the graphic data with a plurality of sizes when the graphic data is overlapped with an image based on the video data.
  • the coded video data may be divided into recording units that are continuously reproducible, and the subtitle generator may generate the text data so that a size of all of the subtitle data corresponding to the recording unit is limited.
  • The coded video data may be divided into recording units that are continuously reproducible, and after the text data corresponding to the recording unit is divided into a plurality of language sets, the subtitle generator may generate the text data so that the size of the entire subtitle data forming one language set is limited.
  • the subtitle generator may generate data forming the text data in Unicode for supporting multi-language character sets.
  • the encoder may encode by using UTF-8 by which one character is coded into a plurality of 8-bit units when the text data are formed only with characters of one of ASCII, which is a basic English character set, and ISO8859-1 , which is a Latin-extended character set.
  • The encoder encodes using UTF-16, by which one character is coded into a plurality of 16-bit units, when the text data includes a character having a code point value of a 2-byte size in Unicode.
  • the information storage medium may be a removable type.
  • the information storage medium may be an optical disc.
  • a method of reproducing data stored on an information storage medium including: reading audio-visual (AV) data and text data; rendering subtitle image data from the text data; decoding the AV data and outputting decoded AV data; and blending the subtitle image data and the decoded AV data.
  • a reproducing apparatus including: a reading section which reads audio-visual (AV) data, text data, and font data; a decoder section which decodes the AV data and outputs moving picture data; a rendering section which renders subtitle image data from the text data; and a blending section which synthesizes the moving picture data with the subtitle image data.
  • a reproducing apparatus including: a reading section which reads text data and font data; a rendering section which renders subtitle image data from the text data; an outputting section which outputs the subtitle image data; and an input receiving section which receives an input to advance to subtitle data for a next line so as to control the output time of the subtitle data.
  • a data recording and/or reproducing apparatus including: a storage section; an encoder which codes audio-visual (AV) data to yield coded AV data; a subtitle generator which generates renderable text data for subtitles; a data writer which writes the coded AV data and the renderable text data onto the storage section; a reading section which reads the coded AV data and the renderable text data; a decoder section which decodes the coded AV data so as to yield moving picture data; a rendering section which renders subtitle image data from the renderable text data; and a blending section which synthesizes the moving picture data with the subtitle image data so as to yield blended moving picture data.
  • Each subtitle data item is not coded together with, and within, the AV data, but is recorded in the form of separate text data in a separate recording space.
  • separate font data for rendering subtitle data that is in the form of text data is recorded.
  • Synchronization information for synchronizing subtitle data with the decoded AV moving pictures, and output information for screen output, are recorded.
  • The subtitle data corresponds to the sub picture data in a conventional DVD. That is, on the information storage medium according to various embodiments of the present invention, the elements listed above are recorded.
  • FIG. 1 is a diagram of a data structure for a DVD
  • FIG. 2 is a detailed diagram of a VMG area
  • FIG. 3 is a detailed diagram of a VTS area
  • FIG. 4 is a detailed diagram of a VOBS that is video data
  • FIG. 5 is a diagram showing the relation between an SPU and SP_PCK;
  • FIG. 6 is a diagram of the data structure of a sub picture when it is encoded
  • FIG. 7 is a diagram of the data structure of SP_DCSQT;
  • FIG. 8 is a reference diagram showing an output situation with sub picture data considered;
  • FIG. 9 is a block diagram of a reproducing apparatus according to an embodiment of the present invention.
  • FIG. 10 is a diagram of the data structure of text data stored in an information storage medium according to an embodiment of the present invention.
  • FIG. 11 is an embodiment of text data for subtitles according to an embodiment of the present invention.
  • FIG. 12 is a diagram of the data structure of text data for a language other than the language of FIG. 11;
  • FIG. 13 is an example of a text file used in the present invention.
  • FIG. 14 is an example of a subtitle to which a different style is applied.
  • FIG. 15 is an example of a subtitle displayed after changing a line
  • FIG. 16 is an example showing a case where a user executes a language change while subtitles in a language are being reproduced
  • FIG. 17 is an example of a plurality of language sets of subtitle data and font data for multiple languages
  • FIG. 18 is a diagram showing correlations of a PlayList, a PlayItem, clip information, and a clip;
  • FIG. 19 is an example of a directory structure according to the present invention.
  • FIG. 20 is an example showing a case where a reproducing apparatus outputs only subtitle data
  • FIG. 21 is an example showing a case where a reproducing apparatus outputs subtitle data for more than one language at the same time;
  • FIG. 22 is an example showing a case where during reproduction of only subtitle data, normal reproduction of video data begins from video data corresponding to subtitle line data;
  • FIG. 23 is a block diagram of a recording apparatus according to an embodiment of the present invention.
  • FIG. 9 is a block diagram of a reproducing apparatus according to an embodiment of the present invention.
  • the reproducing apparatus includes a reader which reads AV data, text data for subtitles, and downloaded font data stored in an information storage medium, a decoder for decoding AV data, a renderer which renders text files, and a blender which synthesizes moving pictures output from the decoder with subtitle data output from the renderer.
  • the reproducing apparatus further includes a buffer, which buffers data between the reader and the decoder and renderer and stores the read font data, and may further include a storage (not shown) for storing resident font data that is stored in advance as a default.
  • rendering encompasses all activities needed to convert subtitle text data into graphic data that can be displayed on a display apparatus. That is, rendering includes producing the graphic data that forms a subtitle image by repeatedly finding, in the downloaded font data read from the information storage medium or in the resident font data, the glyph matching the character code of each character in the text data, and converting that font data into graphic data. Rendering also includes selecting or converting colors, selecting or converting character sizes, and producing graphic data appropriate to writing in horizontal lines or vertical lines. In particular, when the font data in use is an outline font, the font data defines the shape of each character as curve formulas; in this case, rendering also includes a rasterizing process that generates graphic data by evaluating the curve formulas.
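The glyph-lookup-and-compose loop described above can be sketched with toy bitmap glyphs standing in for real font data. The 3x3 glyphs and the `render` helper below are illustrative assumptions, not the patented implementation:

```python
# Toy 3x3 bitmap glyphs standing in for downloaded font data (illustrative only).
FONT = {
    "H": ["#.#", "###", "#.#"],
    "I": ["###", ".#.", "###"],
}

def render(text, font):
    """Find the glyph matching each character and compose rows of graphic
    data, writing in a horizontal line with a one-column gap between glyphs."""
    return [" ".join(font[ch][row] for ch in text if ch in font)
            for row in range(3)]

for row in render("HI", FONT):
    print(row)
```

A real renderer performs the same lookup against outline-font glyph tables and rasterizes curves instead of copying fixed bitmaps.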
  • FIG. 10 is a diagram of the data structure of text data (i.e., subtitle data) stored in an information storage medium according to an embodiment of the present invention.
  • text data is recorded separately from AV streams.
  • the text data includes synchronization information, display area information, and display style box information.
  • the synchronization information is addable to data to be output with subtitles in a rendering process and is usable for synchronizing the subtitles with video information which is decoded from AV stream data.
  • the display area information designates a location on which rendered subtitle data are displayed on a screen.
  • Display style box information contains information on the size of characters, writing of rendered subtitle data in horizontal lines or in vertical lines, and arrangement, colors, contrast, etc., in a display area.
  • Since text data for each of a plurality of languages may be written, the text data also contains information indicating which of the plurality of languages it represents. This so-called multi-language data may be stored in a separate space for each language, or may be stored in one space after being multiplexed in order of output time.
  • FIG. 11 illustrates text data for subtitles according to an embodiment of the present invention.
  • a markup language is used as text data for subtitles in the present embodiment.
  • the markup language used for subtitles employs a minimal number of tags or elements, and, as described above, tags or attributes for synchronization and screen display may be included.
  • In the example, subtitle, head, meta, body, and p elements are shown.
  • Additional information is conveyed with attributes. The attributes used in the example are as follows:
  • - start A time at which subtitles are displayed, expressed in the form hours (HH): minutes (MM): seconds (SS): frames (FF).
  • the time can be expressed in units of 1/1000 second.
  • If the video data is MPEG video, the time may instead be a presentation time stamp (PTS) value of the video image on which the subtitle is overlaid and displayed.
  • the PTS value is a count value of a clock operating at 27 MHz or 90 kHz. If the PTS value is used, the subtitle data can be accurately synchronized with the video data.
  • - end A time at which a displayed subtitle disappears; it has the same type of attribute value as 'start'.
  • - direction This indicates the direction of subtitle data to be displayed.
  • - size This indicates the width or height of a display area in which subtitle data is to be displayed. If the attribute value of "direction" is "horizontal", a fixed width value of a subtitle data box is indicated, and if it is "vertical", a fixed height value of the subtitle data box is indicated.
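As a rough illustration of the two timing forms mentioned above, the HH:MM:SS:FF timecode can be converted into a 90 kHz PTS count. The helper name and the 30 fps frame rate are assumptions for this sketch:

```python
def timecode_to_pts(tc, fps=30, clock_hz=90_000):
    """Convert an HH:MM:SS:FF timecode into a 90 kHz PTS count.

    fps is an assumed frame rate; the frame field FF is interpreted
    relative to it.
    """
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    seconds = hh * 3600 + mm * 60 + ss
    return seconds * clock_hz + (ff * clock_hz) // fps

print(timecode_to_pts("00:00:01:00"))  # 90000
```

With such a conversion, subtitle start/end times expressed as timecodes can be compared directly against the PTS values of decoded video frames.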
  • a subtitle element is used to indicate the root of text data
  • a head element is used to include a meta element which deals with information needed by all of the text data, or a style element which is not shown in the example of FIG. 11.
  • a meta element is used to express the title of the corresponding text data and the language to be used. That is, when multiple languages are selected, by using meta information in the text data, a desired language text file can be conveniently selected. Also, languages can be distinguished by the names of text files, or by directory names, if a different directory for each language text file is prepared.
  • subtitle data is loaded into the buffer of the reproducing apparatus before video data is reproduced, and with the reproduction of video data, the subtitle data is converted into graphic data by the renderer and made to overlap video images. Accordingly, the subtitle data in, for example, Korean, is displayed in a display area at an exact time.
  • control information may also be written in a predetermined format or syntax. Accordingly, the renderer has a parser function for verifying that a text file is written according to that syntax.
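A parser function of the kind described, verifying syntax and extracting synchronization attributes, might look like the following sketch. The element and attribute names (subtitle, p, start, end) are modeled loosely on the example of FIG. 11 and are not the exact format defined by the patent:

```python
import xml.etree.ElementTree as ET

DOC = """<subtitle>
  <head><meta title="Example" lang="en"/></head>
  <body>
    <p start="00:00:01:00" end="00:00:03:00">Hello</p>
    <p start="00:00:04:00" end="00:00:06:00">World</p>
  </body>
</subtitle>"""

def parse_subtitles(text):
    """Verify the markup is well-formed and extract (start, end, line) tuples."""
    root = ET.fromstring(text)  # raises ParseError on malformed input
    return [(p.get("start"), p.get("end"), p.text) for p in root.iter("p")]

for start, end, line in parse_subtitles(DOC):
    print(start, end, line)
```

Because `ET.fromstring` rejects malformed markup, the same call doubles as the syntax-verification step mentioned above.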
  • FIG. 12 is a diagram of the data structure of text data for a language other than the Korean language of FIG. 11.
  • Since subtitle text data and video data are recorded in different areas, support for multiple languages is achievable by coding the video data separately from the subtitle data and then adding text data of the respective languages to the coded video data. Also, subtitle data and font data that are not stored with the video data on the information storage medium can be downloaded through networks or loaded into the reproducing apparatus from an additional information storage medium; thus, the subtitle data is easily used in other cases as well.
  • Unicode is a character code designed to express the languages of the world with more than 65,000 characters. In Unicode, each character is expressed by a code point. The characters used to express a given language form a set of code points with regularly continuous values, and such a continuous space of code points is referred to as a code chart. Unicode also supports UTF-8, UTF-16, and UTF-32 as coding formats for actually storing or transmitting character data, that is, the code points. These formats express one character by using one or more data units of 8-bit, 16-bit, and 32-bit length, respectively.
  • An ASCII code for expressing English characters and an ISO 8859-1 code for expressing languages of European countries by expanding Latin have code point values from 0x00 to 0xFF in Unicode.
  • Japanese Hiragana characters have code point values from 0x3040 to 0x309F.
  • the 11,172 characters for expressing modern Korean have code point values from 0xAC00 to 0xD7AF.
  • 0x indicates that the code point value is expressed by hexadecimal numbers.
  • When the subtitle data includes only English characters, the coding may be performed by using UTF-8.
  • In UTF-8, one Korean character is expressible using 3 bytes, while each English character is expressible in 1 byte.
  • If UTF-16 is used, one such character is expressible in 2 bytes, but each of the English characters included in the subtitle data is then also expressed in 2 bytes.
  • Each country has its own character code different from Unicode.
  • In such a national character code, a Korean character has a 2-byte code point value and an English character has a 1-byte code point value. If the subtitle data is generated using not Unicode but each nation's own character set, each reproducing apparatus must understand all of these character sets, so the load for implementation increases.
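The byte lengths discussed above are easy to confirm with Python's built-in codecs; a short sketch:

```python
# Compare per-character storage cost in UTF-8 vs UTF-16 (little-endian,
# no byte-order mark) for English, extended-Latin, and Korean characters.
for ch in ("A", "é", "한"):
    print(f"U+{ord(ch):04X}  "
          f"utf-8: {len(ch.encode('utf-8'))} byte(s)  "
          f"utf-16: {len(ch.encode('utf-16-le'))} byte(s)")
```

This reproduces the trade-off in the text: UTF-8 stores English in 1 byte but Korean in 3, while UTF-16 stores every BMP character in 2 bytes.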
  • Font data is needed in order to process subtitle data as text data.
  • the font data supports multiple languages.
  • font data only for the characters used in the subtitle data are recorded on the information storage medium, so that in a reproducing apparatus such font data is loaded into a buffer before reproducing video data and then used. That is, the reproducing apparatus links each piece of subtitle text data with font data and then reproduces the data. Link information of subtitle text data and font data is recorded in the text data for subtitles or in a separate area.
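The idea of recording only the glyphs actually used by the subtitles can be sketched as a simple subsetting step; the dictionary representation of a font (character to glyph) is an assumption for illustration:

```python
def subset_font(full_font, subtitle_texts):
    """Collect glyphs only for the characters actually used in the subtitles,
    yielding the minimal font data to record on the storage medium."""
    used = set()
    for text in subtitle_texts:
        used.update(ch for ch in text if not ch.isspace())
    return {ch: full_font[ch] for ch in sorted(used) if ch in full_font}

# Hypothetical full font: one glyph record per letter.
FULL_FONT = {ch: f"glyph-{ch}" for ch in "abcdefghijklmnopqrstuvwxyz"}
print(sorted(subset_font(FULL_FONT, ["hello", "held"])))
```

The subset is typically far smaller than the full font, which matters for the limited buffer sizes discussed below.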
  • the reproducing apparatus loads, before reproduction, the subtitle data and font data that correspond to video data which is continuously reproducible, and then uses the data.
  • continuous reproduction encompasses reproduction without pause, cessation, or interruption in the video and audio outputs of the video data.
  • a reproducing apparatus reproduces data by first storing a certain amount of data in its video and audio buffers; if underflow in these buffers is prevented, continuous reproduction is possible.
  • FIG. 13 is an example of a text file used in this embodiment of the present invention.
  • a style element is used in a head element in order to use a CSS file format as an application of a style in a markup language for implementing a text file.
  • subtitle data can use a variety of fonts with different sizes and colors.
  • subtitle styles that are set as defaults are not always convenient. For example, a person with poor eyesight may find small subtitle text inconvenient. Accordingly, it is desirable that different styles can be applied to an identical text file so as to satisfy both ordinary users and persons with poor eyesight. Therefore, by allowing users to determine the style, such as the font size, through a menu when an information storage medium is reproduced in a reproducing apparatus, a style sheet which applies a style according to the user's settings and has a plurality of user-selectable options can be used.
  • User type is a set of CSS attributes.
  • Here, finer gradations of user type, that is, of the degree of poor eyesight, are not relevant; therefore, only the following two cases will be explained:
  • - small a style for a user with normal eyesight
  • subtitles which are preset by using an @user rule or to which different styles are applied for users with good eyesight or with bad eyesight can be displayed.
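The user-selectable style mechanism could be sketched as a table of CSS-like presets keyed by user type; the preset names and attribute values below are invented for illustration:

```python
# Hypothetical CSS-like presets keyed by user type (an @user-rule analogue).
USER_STYLES = {
    "small": {"font-size": 16, "color": "white"},   # normal eyesight
    "large": {"font-size": 32, "color": "yellow"},  # reduced eyesight
}

def resolve_style(base, user_type):
    """Overlay the chosen user-type preset onto the subtitle's base style."""
    style = dict(base)
    style.update(USER_STYLES.get(user_type, {}))
    return style

print(resolve_style({"font-size": 16, "font-family": "serif"}, "large"))
```

The renderer would then use the resolved attributes when converting the identical text file into graphic data.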
  • FIG. 15 is an example in which the text data for the Korean subtitles implemented in FIG. 11 are displayed on an actual screen.
  • Since the width value of the subtitle data display area is fixed to 520 by the "size" attribute, subtitle data that cannot be expressed within one line is displayed after a line change.
  • subtitle data is outputtable only within the display area, and by using a line change element (br), a line change can be forced.
  • the third <p> element is an example in which, by the "direction" attribute, the display of subtitle data is performed vertically.
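The automatic line change inside a fixed-width box amounts to greedy word wrapping. The sketch below assumes a uniform character width; a real renderer would use per-glyph advance widths from the font data:

```python
def wrap_line(text, box_width, char_width=20):
    """Greedy word wrap: break a subtitle into lines fitting the box width.

    char_width is an assumed uniform glyph width in the same units as
    box_width (e.g. pixels).
    """
    max_chars = box_width // char_width
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print(wrap_line("subtitle data that cannot fit on one line", 520))
```

An explicit br element would simply force a break at that point regardless of the computed width.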
  • FIG. 16 is an example showing a case where a user executes a language change while subtitles in a language are being reproduced.
  • a reproducing apparatus switches from the subtitle text data being reproduced (Korean, for example) to that of the changed language (English, for example), links the corresponding font data, renders the data, and outputs the subtitles. If the subtitle data and the font data for it are all loaded in the buffer, continuous reproduction of video data can be easily performed. If the text data or font data of the desired language is not loaded in the buffer, the data must first be loaded into the buffer. At this time, a pause, cessation, or interruption can occur in reproduction of video data.
  • the sizes of data for subtitles and font data are limitable to less than the sizes of the respective buffers. In this case, however, the number of supported languages is restricted. Accordingly, in the present embodiment of the present invention, this problem is solved by creating a unit referred to as a language set.
  • FIG. 17 is an example of a plurality of language sets of subtitle data and font data for multiple languages.
  • subtitle data and font data for a plurality of languages added to one video image are divided into a plurality of language sets.
  • Subtitle data and font data that correspond to one language set are limited to a size that is less than the size of the buffer.
  • When a change to a language in a different language set occurs, the data of the existing language set is all deleted from the buffer.
  • a pause, cessation, or interruption may occur.
  • a language change operation is performed again according to the relation between the language and the language set loaded in the buffer.
  • Information on the language sets is recordable on the information storage medium, or the reproducing apparatus may determine the language sets arbitrarily when reproducing data, by considering the data stored on the information storage medium and the size of its buffer.
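A reproducing apparatus that determines language sets arbitrarily, as described, could group languages greedily so that each set's subtitle and font data fit within the buffer. The sizes and the greedy strategy below are illustrative assumptions:

```python
def build_language_sets(sizes, buffer_limit):
    """Greedily group languages so each set's subtitle+font data fits the buffer.

    sizes maps language code -> combined subtitle+font data size
    (arbitrary units); buffer_limit is the buffer capacity in the same units.
    """
    sets, current, total = [], [], 0
    for lang, size in sizes.items():
        if size > buffer_limit:
            raise ValueError(f"{lang} alone exceeds the buffer")
        if total + size > buffer_limit:
            sets.append(current)          # close the full set
            current, total = [], 0
        current.append(lang)
        total += size
    if current:
        sets.append(current)
    return sets

print(build_language_sets({"ko": 4, "en": 3, "ja": 5, "fr": 2}, 8))
```

A change between languages inside one set then needs no reloading; only crossing set boundaries triggers the buffer flush described above.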
  • a clip is a recording unit of video data, and PlayList and PlayItem will be used to indicate reproducing units.
  • AV streams are separated and recorded in units of clips.
  • a clip is recorded in a continuous space.
  • AV streams are compressed and recorded. Accordingly, in order to reproduce the compressed AV streams, attribute information of the compressed video data must be known. Therefore, Clip information is recorded for each clip.
  • Clip information contains audio video attributes of the clip and an Entry Point Map in which information on the location of an Entry Point where random access is available in each interval is recorded.
  • the Entry Point is the location of an I-picture, in which an image is intra-compressed
  • the Entry Point Map is mainly used for a time search used to find a point in a time interval after the starting point of reproduction.
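A time search over an Entry Point Map is essentially a binary search for the last I-picture at or before the requested time; a sketch with invented (time, offset) pairs:

```python
import bisect

# Hypothetical Entry Point Map: (presentation time in seconds,
# byte offset of an I-picture within the clip).
ENTRY_POINTS = [(0, 0), (2, 40_000), (4, 95_000), (6, 150_000)]

def find_entry_point(target_time):
    """Return the last entry point at or before target_time, i.e. the
    nearest preceding I-picture from which decoding can start."""
    times = [t for t, _ in ENTRY_POINTS]
    i = bisect.bisect_right(times, target_time) - 1
    return ENTRY_POINTS[max(i, 0)]

print(find_entry_point(5.0))  # (4, 95000)
```

Decoding then begins at that I-picture and discards frames until the exact requested time is reached.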
  • PlayList is a basic unit of reproduction.
  • a plurality of PlayLists is stored.
  • One PlayList includes a series of a plurality of PlayItems.
  • A PlayItem corresponds to a part of a clip; more specifically, it takes the form of a reproduction start time and end time within the clip. Accordingly, by using the Clip information, the location in the actual clip of the part corresponding to the PlayItem is identified.
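The PlayList/PlayItem/clip relationship can be modeled minimally as follows; the field names are assumptions based on the description, not the on-disc format:

```python
from dataclasses import dataclass

@dataclass
class PlayItem:
    clip_name: str      # which clip this item plays from
    in_time: float      # reproduction start time within the clip (seconds)
    out_time: float     # reproduction end time within the clip (seconds)

@dataclass
class PlayList:
    items: list         # ordered PlayItems forming one reproducing unit

    def duration(self):
        return sum(i.out_time - i.in_time for i in self.items)

pl = PlayList([PlayItem("clip01", 0.0, 60.0), PlayItem("clip02", 10.0, 40.0)])
print(pl.duration())  # 90.0
```

Resolving a PlayItem to byte offsets would additionally consult the clip's Entry Point Map, as described above.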
  • FIG. 18 is a diagram showing correlations of a PlayList, a PlayItem, Clip information, and a clip.
  • a plurality of text data items for subtitles for each clip are recorded in a space separate from the clip.
  • a plurality of data items for subtitles are linked to one clip and this link information is recordable in the Clip information.
  • a plurality of data items for subtitles are linked, but for some clips, no data items or only one data item for subtitles may be linked.
  • font data is generated for each language. Accordingly, font data is recorded in a separate space for each language.
  • FIG. 19 is an example of a directory structure according to an embodiment of the present invention.
  • clip, Clip information, a PlayList, subtitle text data, and font data are stored in the form of files and stored in different directory spaces according to the respective types. As shown, text files for subtitles and font files are storable in directory spaces separate from video data.
  • An information storage medium here is a removable information storage medium (i.e., one which is not fixed to a reproducing apparatus and can be inserted and used only when data is reproduced). Unlike a fixed information storage medium with a high capacity, such as a hard disc, the removable information storage medium has a limited capacity. Also, reproducing apparatuses for this medium often have a buffer of limited size and low-level functions with limited performance. Accordingly, only the subtitle data, and the font data used for that subtitle data, are recorded on the removable information storage medium together with the video data; by using these data when the video data is reproduced, the amount of data that must be prepared in advance can be minimized.
  • a representative example of this removable recording medium is an optical disc.
  • video data is stored in a space separate from subtitle text data. If this subtitle text data is for multiple languages and has font data for outputting the subtitle data, a reproducing apparatus loads only the subtitle data and font data in the buffer and then, while reproducing video data, overlaps the subtitle data with a video image and outputs the subtitle data.
  • FIG. 20 is an example showing a case where a reproducing apparatus outputs only subtitle data.
  • a reproducing apparatus may output only subtitle data. That is, according to one of the many special reproduction functions, video data is not reproduced, and only subtitle data that is to be output overlapping the video data is converted into graphic data and then output.
  • subtitle data may be used, for example, for learning a foreign language.
  • video data is not overlapped and only subtitle data is output.
  • In this case, the synchronization information and location information are ignored or not included; the reproducing apparatus outputs a plurality of line data items of subtitle data on the entire screen and waits for a user input. After reading all of the displayed subtitle data, the user sends the reproducing apparatus a signal to display the subtitle data for the next lines, thereby controlling the output time of the subtitle data.
  • FIG. 21 is an example showing a case where a reproducing apparatus outputs subtitle data for more than one language at the same time.
  • a reproducing apparatus may have a function for outputting subtitle data for two or more languages at the same time when subtitle data includes a plurality of languages. At this time, by using synchronization information of subtitle data for each language, subtitle data to be displayed on the screen is selected. That is, subtitle data is output in order of output start time, and when the output start times are the same, the subtitle data is output according to language.
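Outputting two or more languages at once, ordered by output start time with ties broken by language, can be sketched as a simple merge; the track representation is an assumption:

```python
def merge_subtitles(tracks):
    """Interleave per-language subtitle lists in order of output start time;
    ties are broken by language code so the output order is deterministic."""
    merged = []
    for lang, lines in tracks.items():
        merged.extend((start, lang, text) for start, text in lines)
    return sorted(merged)

tracks = {"en": [(1.0, "Hello"), (4.0, "Bye")],
          "ko": [(1.0, "안녕"), (3.0, "가자")]}
print(merge_subtitles(tracks))
```

Each merged entry would then be rendered into its own screen region using the synchronization information of its language track.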
  • FIG. 22 is an example showing a case where during reproduction of only subtitle data, normal reproduction of video data begins from video data corresponding to subtitle line data. As shown in FIG. 22, when the user selects one subtitle line data item, a reproducing time corresponding to the line data item is selected again, and video data corresponding to the time is normally reproduced.
  • a recording apparatus records video data and subtitle data on an information storage medium.
  • FIG. 23 is a block diagram of a recording apparatus according to an embodiment of the present invention.
  • the recording apparatus includes a central processing unit (CPU), a fixed high-capacity storage, an encoder, a subtitle generator, a font generator, a writer, and a buffer.
  • the encoder, subtitle generator, and font generator may be implemented by software on the CPU.
  • a video input unit for receiving video data in real time is also includable.
  • the storage stores a video image that is the object of encoding, or video data that is coded by the encoder.
  • the storage stores a dialogue attached to the video data and large volume font data.
  • the subtitle generator receives information on the output time of a subtitle line data item from the encoder, receives subtitle line data from the dialogue data, makes subtitle data for the subtitles, and stores the subtitle data in a fixed-type storage apparatus.
  • the font generator generates font data containing characters used in the subtitle data for subtitles from the large volume font data and stores the font data in the fixed-type storage apparatus. That is, the font data stored in the information storage medium is part of the large volume font data stored in the fixed-type storage apparatus. This process for generating data in the form to be stored in an information storage medium is referred to as authoring.
  • coded video data stored in the fixed-type storage apparatus are divided into clips, which are the recording units, and recorded on an information storage medium. Also, subtitle data for subtitles added to video data contained in the clip are recorded in a separate area. Further, font data needed to convert the subtitle data into graphic data is recorded in a separate area.
  • the video data is divided into reproducing units that are continuously reproducible, and usually, this reproducing unit includes a plurality of clips.
  • the size of the subtitle data that is overlappable with a video image included in one reproducing unit and is output is limited to less than a predetermined size, even when the data for a plurality of languages is all added to the subtitle data.
  • subtitle data, which should be overlapped with a video image included in one reproducing unit, is divided into language sets within which a language change can be performed while video data is continuously reproduced.
  • Subtitle data included in one reproducing unit includes a plurality of language sets, and the size of the subtitle data included in one language set, together with its data for a plurality of languages, is limited to less than a predetermined size.
  • the subtitle data includes character codes using Unicode and the data form actually recorded is codable by UTF-8 or UTF-16.
  • Video data, subtitle data for subtitles, and font data recorded in the fixed-type storage apparatus are temporarily stored in the buffer and are recorded on an information storage medium by the writer.
  • the CPU executes a software program controlling each device so that these functions are performed in order.
  • text data for multi-language subtitles is made into a text file and recorded in a space separate from the AV streams, such that more diverse subtitles are providable to users and arrangement of the recording space is conveniently performable.
  • Font data for the subtitles is made to have a minimum size by collecting only the characters needed for the subtitle text, and is stored separately on the information storage medium and used.
  • the present invention is applicable to fields related to recording and reproduction of moving pictures, particularly in fields in which text data of multiple languages must be provided while reproducing moving pictures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
EP03751536A 2002-10-15 2003-10-14 INFORMATION STORAGE MEDIUM CONTAINING MULTI-LANGUAGE SUBTITLE DATA USING TEXT DATA AND DOWNLOADABLE FONTS AND APPARATUS THEREFOR Ceased EP1552515A4 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR2002062632 2002-10-15
KR20020062632 2002-10-15
US45254403P 2003-03-07 2003-03-07
US452544P 2003-03-07
PCT/KR2003/002120 WO2004036574A1 (en) 2002-10-15 2003-10-14 Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor

Publications (2)

Publication Number Publication Date
EP1552515A1 EP1552515A1 (en) 2005-07-13
EP1552515A4 true EP1552515A4 (en) 2007-11-07

Family

ID=32109555

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03751536A Ceased EP1552515A4 (en) 2002-10-15 2003-10-14 INFORMATION STORAGE MEDIUM CONTAINING MULTI-LANGUAGE SUBTITLE DATA USING TEXT DATA AND DOWNLOADABLE FONTS AND APPARATUS THEREFOR

Country Status (7)

Country Link
EP (1) EP1552515A4 (ja)
JP (2) JP2010154546A (ja)
AU (1) AU2003269536A1 (ja)
HK (3) HK1113851A1 (ja)
PL (1) PL375781A1 (ja)
TW (1) TWI246036B (ja)
WO (1) WO2004036574A1 (ja)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101053619B1 (ko) 2003-04-09 2011-08-03 엘지전자 주식회사 텍스트 서브타이틀 데이터의 재생을 관리하기 위한 데이터구조를 갖는 기록 매체, 그에 따른 기록 및 재생 방법 및장치
KR20050012328A (ko) 2003-07-25 2005-02-02 엘지전자 주식회사 고밀도 광디스크의 프레젠테이션 그래픽 데이터 관리 및재생방법과 그에 따른 고밀도 광디스크
RU2358337C2 (ru) 2003-07-24 2009-06-10 Эл Джи Электроникс Инк. Носитель записи, имеющий структуру данных для управления воспроизведением данных текстовых субтитров, записанных на нем, и устройства и способы записи и воспроизведения
KR100667751B1 (ko) 2003-10-01 2007-01-11 삼성전자주식회사 텍스트 기반의 자막 정보를 포함하는 저장 매체, 재생장치 및 그 재생 방법
CN1864220B (zh) * 2003-10-04 2012-08-22 三星电子株式会社 处理基于文本的字幕的设备
KR100739682B1 (ko) 2003-10-04 2007-07-13 삼성전자주식회사 텍스트 기반의 서브 타이틀 정보를 기록한 정보저장매체,그 처리장치 및 방법
KR20050035678A (ko) 2003-10-14 2005-04-19 엘지전자 주식회사 광디스크 장치의 부가 데이터 재생방법 및 장치와, 이를위한 광디스크
KR20050036277A (ko) * 2003-10-15 2005-04-20 엘지전자 주식회사 고밀도 광디스크의 네비게이션 정보 관리방법
KR100788655B1 (ko) 2003-11-10 2007-12-26 삼성전자주식회사 스타일 정보를 포함하는 텍스트 기반의 서브 타이틀데이터가 기록된 저장 매체, 재생 장치 및 그 재생 방법
KR100619053B1 (ko) 2003-11-10 2006-08-31 삼성전자주식회사 서브 타이틀을 기록한 정보저장매체 및 그 처리장치
KR20050072255A (ko) 2004-01-06 2005-07-11 엘지전자 주식회사 고밀도 광디스크의 서브타이틀 구성방법 및 재생방법과기록재생장치
KR20050078907A (ko) 2004-02-03 2005-08-08 엘지전자 주식회사 고밀도 광디스크의 서브타이틀 재생방법과 기록재생장치
WO2005076601A1 (en) 2004-02-10 2005-08-18 Lg Electronic Inc. Text subtitle decoder and method for decoding text subtitle streams
RU2377669C2 (ru) * 2004-02-10 2009-12-27 ЭлДжи ЭЛЕКТРОНИКС ИНК. Носитель записи, имеющий структуру данных для управления различными данными, и способ и устройство записи и воспроизведения
WO2005074399A2 (en) 2004-02-10 2005-08-18 Lg Electronics Inc. Recording medium and method and apparatus for decoding text subtitle streams
KR20060129067A (ko) 2004-02-26 2006-12-14 엘지전자 주식회사 기록매체 및 텍스트 서브타이틀 스트림 기록 재생 방법과장치
KR101102398B1 (ko) 2004-03-18 2012-01-05 엘지전자 주식회사 기록매체 및 기록매체상에 기록된 텍스트 서브타이틀스트림 재생 방법과 장치
PT1733385E (pt) 2004-03-26 2010-01-19 Lg Electronics Inc Meio de gravação e método e aparelho para reprodução e gravação de fluxos de legendas de texto
JP4724710B2 (ja) * 2004-05-03 2011-07-13 エルジー エレクトロニクス インコーポレイティド テキストサブタイトルデータを再生管理するためのデータ構造を有する記録媒体及びこれと関連する方法及び装置
US7778526B2 (en) * 2004-06-01 2010-08-17 Nero Ag System and method for maintaining DVD-subpicture streams upon conversion to higher compressed data format
CN1981342A (zh) 2004-06-18 2007-06-13 松下电器产业株式会社 再现装置、程序、再现方法
KR100599118B1 (ko) * 2004-07-20 2006-07-12 삼성전자주식회사 자막신호표시상태를 변경하는 데이터재생장치 및 그 방법
KR20060025100A (ko) * 2004-09-15 2006-03-20 삼성전자주식회사 다국어를 지원하는 메타 데이터를 기록한 정보저장매체 및메타 데이터 처리방법
US8473475B2 (en) 2004-09-15 2013-06-25 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
JP4715278B2 (ja) 2005-04-11 2011-07-06 ソニー株式会社 情報処理装置および情報処理方法、プログラム格納媒体、プログラム、並びに提供装置
US20110052147A1 (en) * 2006-10-25 2011-03-03 Koninklijke Philips Electronics N.V. Playback of video and corresponding subtitle data
JP5339002B2 (ja) * 2013-05-20 2013-11-13 ソニー株式会社 情報処理装置、再生方法、および記録媒体
US20230134226A1 (en) * 2021-11-03 2023-05-04 Accenture Global Solutions Limited Disability-oriented font generator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0863509A1 (en) * 1995-11-24 1998-09-09 Kabushiki Kaisha Toshiba Multi-language recording medium and reproducing device for the same
US6253221B1 (en) * 1996-06-21 2001-06-26 Lg Electronics Inc. Character display apparatus and method for a digital versatile disc
US6259858B1 (en) * 1998-12-16 2001-07-10 Kabushiki Kaisha Toshiba Optical disc for storing moving pictures with text information and apparatus using the disc
US20020087569A1 (en) * 2000-12-07 2002-07-04 International Business Machines Corporation Method and system for the automatic generation of multi-lingual synchronized sub-titles for audiovisual data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5497241A (en) * 1993-10-29 1996-03-05 Time Warner Entertainment Co., L.P. System and method for controlling display of motion picture subtitles in a selected language during play of a software carrier
JPH08102148A (ja) * 1994-09-30 1996-04-16 Sanyo Electric Co Ltd 記録媒体
JPH10327381A (ja) * 1997-03-21 1998-12-08 Toshiba Corp 映像情報の再生表示方法及び映像情報を記録した記録媒体
JP3948051B2 (ja) * 1997-04-30 2007-07-25 ソニー株式会社 編集装置及びデータ編集方法
US6046778A (en) * 1997-10-29 2000-04-04 Matsushita Electric Industrial Co., Ltd. Apparatus for generating sub-picture units for subtitles and storage medium storing sub-picture unit generation program
DE19950490A1 (de) * 1999-10-20 2001-04-26 Thomson Brandt Gmbh Verfahren zur Kodierung einer Bildsequenz sowie Teilbilddateneinheit zur Verwendung in einem elektronischen Gerät und Datenträger

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0863509A1 (en) * 1995-11-24 1998-09-09 Kabushiki Kaisha Toshiba Multi-language recording medium and reproducing device for the same
US6253221B1 (en) * 1996-06-21 2001-06-26 Lg Electronics Inc. Character display apparatus and method for a digital versatile disc
US6259858B1 (en) * 1998-12-16 2001-07-10 Kabushiki Kaisha Toshiba Optical disc for storing moving pictures with text information and apparatus using the disc
US20020087569A1 (en) * 2000-12-07 2002-07-04 International Business Machines Corporation Method and system for the automatic generation of multi-lingual synchronized sub-titles for audiovisual data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2004036574A1 *

Also Published As

Publication number Publication date
TW200407812A (en) 2004-05-16
JP5620116B2 (ja) 2014-11-05
PL375781A1 (en) 2005-12-12
HK1113851A1 (en) 2008-10-17
JP2010154546A (ja) 2010-07-08
AU2003269536A1 (en) 2004-05-04
JP2010154545A (ja) 2010-07-08
HK1113853A1 (en) 2008-10-17
TWI246036B (en) 2005-12-21
HK1082111A1 (en) 2006-05-26
EP1552515A1 (en) 2005-07-13
WO2004036574A1 (en) 2004-04-29

Similar Documents

Publication Publication Date Title
US20040081434A1 (en) Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor
JP5620116B2 (ja) Reproducing apparatus for reproducing data stored on an information storage medium on which subtitle data for multi-language support using text data and downloadable fonts is recorded, and data recording and/or reproducing apparatus
KR100970735B1 (ko) Reproducing method for reproducing an information storage medium on which moving picture data is recorded, and recording apparatus
EP1730739B1 (en) Recording medium, method, and apparatus for reproducing text subtitle streams
US6393202B1 (en) Optical disc for which a sub-picture can be favorably superimposed on a main image and a disc reproduction apparatus and a disc reproduction method for the disc
US8498515B2 (en) Recording medium and recording and reproducing method and apparatuses
US20060146660A1 (en) Optical disc, reproducing device, program, reproducing method, recording method
KR20050032461A (ko) 텍스트 기반의 자막 정보를 포함하는 저장 매체, 재생장치 및 그 재생 방법
JP4534501B2 (ja) 映像再生装置および記録媒体
TWI330357B (ja)
US7965924B2 (en) Storage medium for recording subtitle information based on text corresponding to AV data having multiple playback routes, reproducing apparatus and method therefor
TWI320174B (ja)
WO2005031739A1 (en) Storage medium for recording subtitle information based on text corresponding to av data having multiple playback routes, reproducing apparatus and method therefor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050404

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

RIN1 Information on inventor provided before grant (corrected)

Inventor name: HEO, JUNG-KWON

Inventor name: CHUNG, HYUN-KWON

Inventor name: PARK, SUNG-WOOK

Inventor name: KO, JUNG-WAN315-401 DAEWOO APT.,-956-2

Inventor name: MOON, SEONG-JIN436-502CHEONGMYUNG

Inventor name: JUNG, KIL-SOO104-1401 NAMSUWON DOOSAN APT. 282

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20071009

17Q First examination report despatched

Effective date: 20101109

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SAMSUNG ELECTRONICS CO., LTD.

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20151120