MXPA06009467A - Storage medium for storing text-based subtitle data including style information, and apparatus and method reproducing thereof - Google Patents

Storage medium for storing text-based subtitle data including style information, and apparatus and method reproducing thereof

Info

Publication number
MXPA06009467A
Authority
MX
Mexico
Prior art keywords
information
style
text
image
dialog
Prior art date
Application number
MXPA/A/2006/009467A
Other languages
Spanish (es)
Inventor
Park Sungwook
Jung Kilsoo
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of MXPA06009467A publication Critical patent/MXPA06009467A/en

Abstract

A storage medium for storing text-based subtitle data including style information, and a reproducing apparatus and methods for reproducing text-based subtitle data including style information separately recorded on the storage medium, are provided. The storage medium includes: multimedia image data; and text-based subtitle data for displaying subtitles on an image based on the multimedia image data, wherein the text-based subtitle data includes dialog information indicating subtitle contents to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information. Accordingly, subtitles can be provided in a plurality of languages without being limited to the number of units of subtitle data. In addition, subtitle data can be easily produced and edited. Likewise, an output style of the subtitle data can be changed in a variety of ways. Also, a special style can be applied in order to emphasize a portion of the subtitles.

Description

STORAGE MEDIUM FOR STORING TEXT-BASED SUBTITLE DATA INCLUDING STYLE INFORMATION, AND APPARATUS AND METHOD FOR REPRODUCING THE SAME

FIELD OF THE INVENTION
The present invention relates to the reproduction of a multimedia image, and more particularly, to a storage medium on which text-based subtitle data including style information is recorded, and to a reproduction apparatus and method for reproducing text-based subtitle data including style information recorded on the storage medium.

BACKGROUND OF THE INVENTION
Recently, a video stream, an audio stream, a presentation graphics stream for providing subtitle data, and an interactive graphics stream for providing buttons or menus for interacting with a user are multiplexed into a main moving-image stream (also known as an audiovisual (AV) data stream) recorded on a storage medium in order to provide a high-definition (HD) multimedia image having high image quality. In particular, the presentation graphics stream for providing subtitle data provides a bitmap-based image in order to display subtitles or captions on an image.

BRIEF DESCRIPTION OF THE INVENTION
Technical Problem
However, bitmap-based subtitle data has a large size and is multiplexed with other data streams. As a result, in order to guarantee a maximum bit rate required by a specific application, the number of subtitle data units that may be included in a multiplexed main stream is limited. In particular, when multilingual subtitles are provided, problems may arise related to this limited number of subtitle data units. Also, because the subtitle image is bitmap-based, producing the subtitle data and editing the produced subtitle data is very difficult. This is because such subtitle data is multiplexed with other data streams such as video, audio, and interactive graphics streams. In addition, an output style of the subtitle data cannot easily be changed in a variety of ways, that is, changed from one output style to another.

Technical Solution
In accordance with aspects of the present invention, there are advantageously provided a storage medium on which text-based subtitle data including style information is recorded, and a reproduction apparatus and a method for reproducing text-based subtitle data including style information recorded on such a storage medium.

Advantageous Effects
The present invention advantageously provides a storage medium on which text-based subtitle data including a plurality of style information units is recorded, and a reproduction apparatus and method for the same, such that subtitles may be provided in a plurality of languages without being limited to the number of subtitle data units. As a result, subtitle data can be easily produced and edited, and an output style of the subtitle data can be changed in any of a variety of ways. In addition, a special style may be applied in order to emphasize a portion of the subtitles.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 illustrates an example data structure of a main stream, in which a multimedia image is encoded, and text-based subtitle data recorded separately on a storage medium in accordance with an embodiment of the present invention; Figure 2 is a block diagram of an example reproduction apparatus in accordance with an embodiment of the present invention; Figure 3 illustrates an example data structure of text-based subtitle data in accordance with an embodiment of the present invention; Figures 4A and 4B are examples of reproduction results of text-based subtitle data having the data structure shown in Figure 3; Figure 5 illustrates a problem which can arise when producing text-based subtitle data having the data structure shown in Figure 3; Figure 6 illustrates an example of in-line style information for incorporation into text-based subtitle data to solve the problem illustrated in Figure 5, in accordance with an embodiment of the present invention; Figure 7 illustrates an example data structure of text-based subtitle data incorporating in-line style information in accordance with an embodiment of the present invention; Figure 8 illustrates an example data structure of text-based subtitle data to which a reproduction apparatus can apply predetermined style information in accordance with another embodiment of the present invention; and Figure 9 is a flowchart illustrating a process of reproducing text-based subtitle data including style information in accordance with another embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
In accordance with an aspect of the present invention, a storage medium comprises: multimedia image data; and text-based subtitle data for displaying subtitles on an image based on the multimedia image data, wherein the text-based subtitle data includes dialog information indicating subtitle content to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information. The dialog information may include text information related to the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image. The style information may include area information indicating a position at which the text information is output on the image, and font information related to the type, size, color, thickness, and style of an output font. The text-based subtitle data may include at least one style sheet information unit, which is an output style group consisting of a plurality of style information units. The partial style information may be output style information for emphasizing and displaying a portion of the text information, and may have relative values with respect to the font size and/or font color included in the style information. The partial style information may be included in the dialog information, or stored separately from the dialog information, in which case reference information for the partial style information is included in the dialog information.
The text-based subtitle data may also include, in addition to the style information, information on whether predetermined style information defined by a manufacturer of the storage medium is included.

In accordance with another aspect of the present invention, an apparatus is provided for reproducing multimedia image data and text-based subtitle data separately recorded on a storage medium, to display subtitles on an image based on the multimedia image data. Such an apparatus comprises: a buffer unit for storing style information indicating an output style of dialog information, which is subtitle content to be displayed on the image, and partial style information indicating an output style applied to a portion of the dialog information; and a text subtitle processing unit for reading the style information and the partial style information from the buffer unit, applying the style information and the partial style information to the dialog information, converting the applied result into a bitmap image, and outputting the converted bitmap image.

In accordance with another aspect of the present invention, there is provided a method of reproducing multimedia image data and text-based subtitle data recorded on a storage medium to display subtitles on an image based on the multimedia image data, the method comprising: reading dialog information indicating subtitle content to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information; converting the dialog information into a bitmap image to which the style and the partial style are applied, based on the style information and the partial style information; and outputting the converted bitmap image according to output time information included in the dialog information.
Additional aspects and/or advantages of the invention will be set forth in part in the following description and, in part, will be obvious from the description, or may be learned by practicing the invention. Hereinafter, the present invention will be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown.

Figure 1 illustrates a data structure of a main stream 110, in which a multimedia image is encoded, and text-based subtitle data 120 recorded separately from the main stream 110 on a storage medium 130, such as a digital versatile disc (DVD), in accordance with an embodiment of the present invention. The main stream 110 and the text-based subtitle data 120 can be obtained, separately or collectively, from one or more data-generating sources. With reference to Figure 1, the text-based subtitle data 120 is provided separately from the main stream 110 recorded on the storage medium 130 in order to solve the problems related to bitmap-based subtitle data. The main stream 110, also known as an audiovisual (AV) data stream, includes a video stream 102, an audio stream 104, a presentation graphics stream 106, and an interactive graphics stream 108, all of which are multiplexed therein in order to be recorded on the storage medium 130. The text-based subtitle data 120 represents data for providing subtitles or captions of a multimedia image to be recorded on the storage medium 130, and can be implemented using a markup language, such as the extensible markup language (XML), or using binary data. The presentation graphics stream 106 for providing subtitle data provides bitmap-based subtitle data in order to display subtitles (or captions) on a screen.
Since the text-based subtitle data 120 is recorded separately from the main stream 110 and is not multiplexed with the main stream 110, the size of the text-based subtitle data 120 is not limited thereby. Similarly, the number of supported languages is not limited. As a result, subtitles or captions can be provided in a plurality of languages. In addition, it is convenient to produce and edit the text-based subtitle data 120.

Referring now to Figure 2, a block diagram of a reproduction apparatus for reproducing text-based subtitle data recorded on a storage medium in accordance with an embodiment of the present invention is illustrated. As shown in Figure 2, the reproduction apparatus 200, also known as a playback device, comprises a presentation graphics decoder 220, which can decode and reproduce both the text-based subtitle data 120 and the bitmap-based subtitle data 216 as an output, via a graphics plane 232 and a color lookup table (CLUT) 234. The presentation graphics decoder 220 includes: a font buffer 221 for storing font data of the text-based subtitle data 120; a coded data buffer 222 for storing data of either the text-based subtitle data 120 or the bitmap-based subtitle data 216 selected by the switch 218; a switch 223; a text subtitle processor 224 for converting dialog information included in the text-based subtitle data 120 into bitmap graphics to be stored in an object buffer 227; a stream graphics processor 225 for decoding the bitmap-based subtitle data 216 and producing a subtitle bitmap image to be stored in the object buffer 227 and control information to be stored in a composition buffer 226; and a graphics controller 228 for controlling and outputting the bitmap image of the subtitles stored in the object buffer 227 based on the control information stored in the composition buffer 226.

In the case of the bitmap-based subtitle data 216, in the presentation graphics decoder 220, the stream graphics processor 225 decodes the bitmap-based subtitle data and transmits a subtitle bitmap image to the object buffer 227 and control information of the subtitles to the composition buffer 226. The graphics controller 228 then controls an output of the bitmap image of the subtitles stored in the object buffer 227 based on the control information stored in the composition buffer 226. The output graphics image of the subtitles is formed on the graphics plane 232 and is output to a screen by applying a color with reference to the color lookup table (CLUT) 234.

In the case of the text-based subtitle data 120, the text subtitle processor 224 converts the text dialog information into bitmap graphics by referring to font data stored in the font buffer 221 and applying style information, which will be described later, and stores the converted bitmap graphics in the object buffer 227. Likewise, the text subtitle processor 224 transmits control information, such as output time information, to the composition buffer 226. The remaining processing of the subtitles converted into bitmaps is the same as in the case of the bitmap-based subtitle data 216.

A detailed structure of the text-based subtitle data 120 to be reproduced will now be described with reference to the example reproduction apparatus shown in Figure 2. Figure 3 illustrates a data structure of text-based subtitle data 120 in accordance with an embodiment of the present invention.
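As a rough, non-authoritative illustration of the text-subtitle path through the decoder just described, the following Python sketch mirrors the roles of the font buffer, object buffer, and composition buffer. The class and method names are assumptions made for illustration, not names from the patent, and the bitmap rendering is reduced to a placeholder string.

```python
# A minimal sketch of the text-subtitle path through a presentation graphics
# decoder; names are assumed and rendering is reduced to a placeholder string.

class TextSubtitleProcessor:
    def __init__(self, font_buffer):
        self.font_buffer = font_buffer          # maps font name -> font data
        self.object_buffer = []                 # holds rendered bitmap objects
        self.composition_buffer = []            # holds output-time control info

    def process_dialog(self, dialog, style):
        # Convert the dialog text to a "bitmap" by applying the referenced
        # style and the font data held in the font buffer.
        font = self.font_buffer.get(style["font"], "default-font")
        bitmap = f"[bitmap of '{dialog['text']}' in {font}, size {style['size']}]"
        self.object_buffer.append(bitmap)
        # Output time information is passed on as control information.
        self.composition_buffer.append((dialog["start"], dialog["end"]))
        return bitmap


processor = TextSubtitleProcessor(font_buffer={"serif": "serif-font-data"})
processor.process_dialog(
    dialog={"text": "Hello, world", "start": "00:00:01", "end": "00:00:03"},
    style={"font": "serif", "size": 24},
)
```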
With reference to Figure 3, the text-based subtitle data 120 includes style sheet information 310 and dialog information 320. A plurality of style sheet information units 310 and/or dialog information units 320 may be included in the text-based subtitle data 120. For example, the style sheet information 310 includes a plurality of style information units 312 that indicate how to output text information to the screen. The style information 312 includes information about an output style, such as area information indicating an output area of the subtitles to be displayed on the screen, position information indicating a subtitle position within the output area, color information indicating a background color, and font information designating a font type and font size to be applied to the subtitle text, etc. The dialog information 320 includes the text information to be displayed on the screen when converted to a bitmap, i.e., rendered, reference style information to be applied when the text information is rendered, and dialog start and end time information designating the times at which the subtitles (or captions) appear on or disappear from the screen, respectively. In particular, the dialog information 320 includes in-line style information for emphasizing a portion of the subtitle text information by applying a new style to it. The in-line style information preferably excludes the area information and position information, among the style information 312 applied to the whole text, and includes the font information and color information required to convert a portion of the text information into a bitmap image.

As shown in Figure 3, the text-based subtitle data 120 comprises a plurality of style sheet information units 310 and a plurality of dialog information units 320. The style sheet information 310 is a set of style information units 312 to be applied to each item of dialog information 320, and at least one unit of style sheet information 310 must exist. The manufacturer can produce additional style sheet information 310, such that a user can change and select a style applied to the text information, and can include the additional style sheet information 310 in the text-based subtitle data 120. The additional style information 312 to be selected by a user preferably includes only a plurality of font information units and color information to be applied to the text information. The dialog information 320 includes the text information containing the subtitle content to be brought to the screen. A plurality of dialog information units 320 may be included in the text-based subtitle data 120 in order to process all the subtitles (captions) over an entire multimedia image having high image quality. For a unit of the dialog information 320, the text information is converted into a bitmap image at the dialog start time by referring to the reference style information and/or the in-line style information, and the converted bitmap image is displayed until the dialog end time.

Figures 4A and 4B are examples of reproduction results of text-based subtitle data having the data structure shown in Figure 3.
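The Figure 3 layout can be pictured with the Python sketch below. The field names paraphrase the description above, while the concrete class layout and types are only assumptions made for illustration.

```python
# A sketch of the Figure 3 data structure, with assumed field names and types.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StyleInfo:                     # one style information unit (312)
    name: str
    area: tuple                      # output area of the subtitles on screen
    position: tuple                  # subtitle position within the area
    background_color: str
    font_type: str
    font_size: int

@dataclass
class InlineStyleInfo:               # emphasis for a portion of the text
    start: int                       # character range the emphasis applies to
    end: int
    font_type: Optional[str] = None  # no area/position: those come from 312
    font_size: Optional[int] = None
    color: Optional[str] = None

@dataclass
class DialogInfo:                    # one dialog information unit (320)
    text: str
    ref_style: str                   # name of the referenced style unit
    start_time: str                  # dialog start time
    end_time: str                    # dialog end time
    inline_styles: list[InlineStyleInfo] = field(default_factory=list)

@dataclass
class StyleSheetInfo:                # one style sheet information unit (310)
    styles: dict[str, StyleInfo]

@dataclass
class TextSubtitleData:              # text-based subtitle data (120)
    style_sheets: list[StyleSheetInfo]
    dialogs: list[DialogInfo]
```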
With reference to Figure 4A, the text subtitle processor 224 of the reproduction apparatus 200, as shown in Figure 2, reads the style information 412 addressed by the reference style information 422, among a plurality of style information units included in the style sheet information 410, based on the reference style information 422 included in the dialog information 420 to be reproduced, in operation (1). The text subtitle processor 224 then converts the text information 424 into a bitmap image by applying the style information 412 to the text information 424, and outputs the converted bitmap image. The resulting reproduction image 430 is shown on the right in Figure 4A. That is, when a multimedia image is output, a bitmap image of the text information 432 for the subtitles, to which the style information 412 addressed by the reference style information 422 is applied, is output together with it and displayed on the screen.

Figure 4B illustrates a reproduction image resulting from a case in which style information and in-line style information, the latter applied to a portion of the text information, are both applied during reproduction. With reference to Figure 4B, the text subtitle processor 224 of the reproduction apparatus 200, as shown in Figure 2, reads the style information 452 addressed by the reference style information 462 in operation (1) and applies the read style information 452 to the text information 464 for the subtitles. Also, the text subtitle processor 224 reads the in-line style information 466 in operation (2) and applies the read in-line style information 466 to a portion of the text information 464 for the subtitles. That is, when the basic style information 452 included in the style sheet information 450 and the in-line style information 466 defined in the dialog information overlap, the in-line style information 466 is reflected in the final output and displayed on the screen. In this way, the text information 464, to which the style information 452 and the in-line style information 466 are applied, is converted into a bitmap image and displayed on the screen. The resulting reproduction image 470 is shown on the right in Figure 4B. A text information image 472 for a subtitle, output together with the multimedia images, is generated by applying the style information 452 and, to a portion thereof, the in-line style information 466.
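As an informal sketch of the behavior described for Figures 4A and 4B — the referenced style applies to the whole text, and an overlapping in-line style wins for the marked portion — consider the following; the function, field names, and character ranges are illustrative assumptions, not part of the patent.

```python
# A toy resolution of which style applies to each character of a dialog:
# the referenced base style covers the whole text, and an in-line style
# overrides it for the marked span (names and ranges are illustrative).

def resolve_styles(text, base_style, inline_styles):
    per_char = []
    for i, ch in enumerate(text):
        style = dict(base_style)                 # start from the base style
        for span in inline_styles:
            if span["start"] <= i < span["end"]:
                # In-line attributes win over the base style where defined.
                style.update({k: v for k, v in span.items()
                              if k not in ("start", "end") and v is not None})
        per_char.append((ch, style))
    return per_char

base = {"font_type": "serif", "font_size": 24, "color": "white"}
inline = [{"start": 7, "end": 12, "color": "yellow", "font_size": 28}]
for ch, style in resolve_styles("Hello, WORLD", base, inline)[5:9]:
    print(ch, style)
```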
That is, when the text-based subtitle data 120 is reproduced together with a multimedia image having high image quality, the reproduction apparatus 200, as shown, for example, in Figure 2, selects the style sheet information to be applied in an initial reproduction of the text-based subtitle data 120 from among a plurality of style sheet information units stored on the storage medium. If additional information indicating the style sheet information to be applied initially is included in the style sheet information, the reproduction apparatus 200, shown in Figure 2, may select the style sheet information to be applied in the initial reproduction of the text-based subtitle data 120 with reference to that additional information. Otherwise, the first-defined style sheet information among the plurality of style sheet information units can be selected. The selected style sheet information is applied to all dialog information unless the user generates a style change request.
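The initial selection rule just described — use a style sheet designated by additional information if one exists, otherwise fall back to the first-defined one — might be sketched as follows, where the `is_default` attribute stands in for that additional information and is purely an assumption.

```python
# A sketch of selecting the style sheet for initial reproduction: prefer a
# style sheet flagged as the default (an assumed attribute), else the first.

def select_initial_style_sheet(style_sheets):
    for sheet in style_sheets:
        if sheet.get("is_default"):          # assumed designation attribute
            return sheet
    return style_sheets[0]                   # first-defined style sheet

sheets = [{"name": "white-serif"}, {"name": "yellow-sans", "is_default": True}]
print(select_initial_style_sheet(sheets)["name"])   # -> "yellow-sans"
```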
However, in a case where the manufacturer has produced a plurality of style sheet information units and the user can select one of a plurality of styles, that is, in a case in which the user generates a style change request, a problem can arise. When the style change request is generated by the user, new style information included in the style sheet information is applied through the reference style information included in the subsequently output dialog information. However, if the newly applied style information is equal to the previous in-line style information, no change appears in the portion of the text information addressed by the in-line style information. As a result, the manufacturer's original intention of emphasizing a portion of the text information using the in-line style information cannot be realized.

Figure 5 illustrates a problem which can arise when producing text-based subtitle data having the data structure shown in Figure 3. With reference to Figure 5, a process is illustrated for a case in which a request to change from first style information 512 to second style information 522 is received from a user. An image 540 at the lower left of Figure 5 shows a result generated by applying the first style information 512 before the style change request is generated. That is, it shows a result in which the in-line style information 536 is applied to a portion of the text information 534 after the first style information 512 is applied, through the reference style information 532, to all of the text information 534. Consequently, the first style information 512 is applied to all of the subtitles, and the portion of the text information 534 is emphasized and displayed due to the in-line style information 536. However, as shown at the lower right of Figure 5, an image 550 displayed by newly applying the second style information 522 after the user generates the style change request shows that the manufacturer's original intention of emphasizing a portion of the text information using the in-line style information 536 is lost. This can occur when the second style information 522 is equal to the in-line style information 536.

An example of in-line style information that can be incorporated into the text-based subtitle data to solve the problem described in connection with Figure 5 will now be described. Figure 6 illustrates in-line style information to be incorporated into text-based subtitle data to solve the problem illustrated in Figure 5, in accordance with an embodiment of the present invention. With reference to Figure 6, the in-line style information 610 of the text-based subtitle data 120 includes a font type, relative font size information, and relative color information. In addition, the in-line style information 610 may also include information such as thickness and italics. As described in relation to Figure 6, since the style sheet information includes only information about the font size and a color, an emphasis effect based on the font type, the thickness, and italics can be kept uniform even if the user changes the style sheet information to new style sheet information. However, in the case of font size and color, the problem described in Figure 5 may arise.
Therefore, it is preferable that the in-line style information 610 include relative font size information and relative color information, such that relative values are applied based on the font size and color values of the currently applied basic style information, rather than using absolute values for the font size and color attribute values included in the in-line style information 610. That is, by using relative attribute values for the font size and color, the emphasis effect due to the in-line style information 610 can be maintained even when the user changes the style sheet information. Here, it is preferable that the reproduction apparatus 200, as shown, for example, in Figure 2, be able to restore the font size to a workable size, and the color to the minimum or maximum value of the workable color range, in a case in which the resulting font size or color falls outside the workable size or color range.

Figure 7 illustrates an example data structure of text-based subtitle data in accordance with another embodiment of the present invention. As shown in Figure 7, the text-based subtitle data 120 comprises a plurality of style sheet information units 710 and a plurality of dialog information units 720.
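The idea of relative font-size and color values clamped to a workable range might look roughly like the sketch below; the numeric limits and the additive treatment of the relative values are assumptions, not values taken from the patent.

```python
# A sketch of applying relative in-line values on top of the base style,
# clamping the result to an assumed workable range (limits are illustrative).

MIN_FONT, MAX_FONT = 8, 72          # assumed workable font-size range
MIN_CHAN, MAX_CHAN = 0, 255         # assumed per-channel color range

def clamp(value, low, high):
    return max(low, min(high, value))

def apply_relative(base_size, base_rgb, rel_size, rel_rgb):
    # Relative values are added to the currently applied base values, so the
    # emphasis survives a user-initiated style sheet change.
    size = clamp(base_size + rel_size, MIN_FONT, MAX_FONT)
    rgb = tuple(clamp(b + r, MIN_CHAN, MAX_CHAN)
                for b, r in zip(base_rgb, rel_rgb))
    return size, rgb

# Base style from the selected style sheet; in-line emphasis adds +6 pt and
# shifts the color toward yellow.
print(apply_relative(24, (255, 255, 255), 6, (0, 0, -255)))
```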
In contrast to Figure 3, in which the text-based subtitle data 120 is arranged such that the in-line style information is included in the dialog information 320 separately from the style sheet information 310, the text-based subtitle data 120 shown in Figure 7 is arranged such that the style sheet information 710 includes basic style information 712 and in-line style information 714 that can be applied to a portion of the text information. In addition, the dialog information 720 includes in-line style reference information referring to an identifier of the in-line style information 714 included in the style sheet information 710, in order to address the in-line style information 714 to be applied to the text information of the current dialog.
The in-line style information 714 included in the style sheet information 710 defines a font size and color to show an emphasis effect relative to the basic style information 712. Consequently, even if the style sheet information 710 is changed by the user, by applying the in-line style information 714 defined separately within the style sheet information, the manufacturer's intention of emphasizing a portion of the text information can be realized to advantage. The in-line style information 714 follows the attributes of the basic style information 712 with respect to the information about the area and position in which the portion of text information is displayed, and may include font type, font size, and color information as rendering information to be used to emphasize the portion of the text information.

As another exemplary embodiment of the present invention, apart from the style information that the manufacturer defines to be applied to the text information, the reproduction apparatus (or playback device), as shown, for example, in Figure 2, for reproducing text-based subtitle data including style information recorded on a storage medium, can set the style information to be applied to the text information at will. That is, a basic attribute follows the style information included in the text-based subtitle data described above, and a portion of the style information, such as the font type, font size, and color, can be changed by the reproduction apparatus. In other words, the reproduction apparatus 200, as shown, for example, in Figure 2, can output the text information by rendering it in a different output style using predetermined style information of its own. Because of these functions of the reproduction apparatus 200, as shown in Figure 2, the output format may differ from the set of formats established by the manufacturer. A method for solving this problem will now be described in detail.

Figure 8 illustrates an example data structure of text-based subtitle data to which a reproduction apparatus can apply predetermined style information in accordance with another embodiment of the present invention. With reference to Figure 8, in order to solve the problem that the reproduction apparatus 200, as shown, for example, in Figure 2, applies style information to the text information at will, independently of the manufacturer's intent, the text-based subtitle data 120 additionally includes information 830 which indicates whether the reproduction apparatus, as shown, for example, in Figure 2, is permitted to apply predetermined style information. Such information 830 represents information indicating whether the manufacturer permits predetermined style information to be applied by a reproduction apparatus 200, shown in Figure 2. When the manufacturer permits it, the text information may be output by applying the predetermined style information supported by the reproduction apparatus 200, shown in Figure 2. When using the information 830, which indicates whether the predetermined style information supported by the reproduction apparatus 200, shown in Figure 2, is permitted, the predetermined style information can be permitted for all of the style information included in the text-based subtitle data 120 by storing the information 830 separately from the style sheet information 810, as shown in operation (1) of Figure 8. Alternatively, it can be determined whether the predetermined style information is permitted only for specific style information, by storing the information 830 for each unit of the style sheet information 810, as shown in operation (2) of Figure 8.
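One way to picture the permission information 830 is as a flag checked before a player substitutes its own predetermined style, as in the sketch below; the flag name, the example styles, and the fallback behavior are assumptions made for illustration.

```python
# A sketch of honoring the manufacturer's permission flag (information 830)
# before a player applies its own predetermined style; names are assumed.

PLAYER_DEFAULT_STYLE = {"font_type": "sans", "font_size": 30, "color": "white"}

def effective_style(disc_style, player_style_allowed, user_prefers_player_style):
    # The player's predetermined style may be used only when the storage
    # medium's flag permits it; otherwise the disc-defined style always wins.
    if player_style_allowed and user_prefers_player_style:
        return PLAYER_DEFAULT_STYLE
    return disc_style

disc_style = {"font_type": "serif", "font_size": 24, "color": "yellow"}
print(effective_style(disc_style, player_style_allowed=False,
                      user_prefers_player_style=True))   # disc style kept
```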
A method of reproducing text-based subtitle data including style information will now be described, based on the example data structure of the text-based subtitle data recorded on a storage medium and the example reproduction apparatus shown in Figure 2. Figure 9 is a flowchart illustrating a process of reproducing text-based subtitle data including style information in accordance with an embodiment of the present invention. With reference to Figure 9, text-based subtitle data 120 including style information, style sheet information, and in-line style information, as shown, for example, in Figure 3 or Figure 7, is read from a storage medium in operation 910. In operation 920, the style information is applied to the subtitle text included in the dialog information, the in-line style information is applied to a portion of the subtitle text, and the subtitle text is converted into a bitmap image. In operation 930, the converted bitmap image is output, for visual presentation, based on the time information indicating when the dialog (or captions) is output to a screen.

As described above, the present invention advantageously provides a storage medium, on which text-based subtitle data including a plurality of style information units is recorded, and a reproduction apparatus and a method for the same, so that subtitles can be provided in a plurality of languages without being limited to the number of units of the subtitle data. As a result, subtitle data can be produced and edited easily, and an output style of the subtitle data can be changed in a variety of ways. In addition, a special style can be applied in order to emphasize a portion of the subtitles.

The exemplary embodiments of the present invention may also be written as computer programs and may be implemented on general-purpose digital computers that run the programs using a computer-readable medium. Examples of the computer-readable medium include magnetic storage media (e.g., ROMs, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs, DVDs, etc.), and storage media such as carrier waves (e.g., transmission over the Internet). The computer-readable medium can also be distributed over networked computer systems such that the computer-readable code is stored and executed in a distributed fashion.

While what is considered to be exemplary embodiments of the present invention has been illustrated and described, those skilled in the art will understand that, as the technology develops, various changes and modifications may be made, and equivalents may be substituted for elements thereof, without departing from the spirit and scope of the present invention. Many modifications can be made to adapt the teachings of the present invention to a particular situation without departing from the scope thereof. For example, many computer-readable media or data storage devices can be used, as long as reference signals reflecting optimum recording conditions are recorded. In addition, the text-based subtitle data can also be configured differently from what is shown in Figure 3 or Figure 7. Similarly, the CPU can be implemented as a set of integrated circuits having firmware, or alternatively, as a general-purpose or special-purpose computer programmed to perform the methods described with reference to Figure 2 and Figure 9.
Consequently, it is intended that the present invention not be limited to the various described example embodiments, but include all embodiments falling within the scope of the appended claims.

INDUSTRIAL APPLICABILITY
The present invention applies to a storage medium on which text-based subtitle data including style information is recorded, and to a reproduction apparatus and method for reproducing text-based subtitle data including style information recorded on such a storage medium.

It is noted that, in relation to this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.

Claims (32)

Having described the invention as above, the content of the following claims is claimed as property:

1. An information storage medium, comprising: multimedia image data; and text-based subtitle data for displaying subtitles on an image based on the multimedia image data, characterized in that the text-based subtitle data includes dialog information indicating subtitle content to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information.

2. The information storage medium according to claim 1, characterized in that the dialog information includes text information indicating the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image.
3. The information storage medium according to claim 2, characterized in that the style information includes area information indicating a position at which the text information is displayed on the image, and font information related to the type, size, color, thickness, and style of an output font.
4. The information storage medium according to claim 3, characterized in that the text-based subtitle data includes at least one style sheet information unit, which is an output style group consisting of a plurality of style information units.
5. The information storage medium according to claim 3, characterized in that the partial style information is output style information for emphasizing and displaying a portion of the text information, and has relative values with respect to the font size and/or font color included in the style information.
6. The information storage medium according to claim 1, characterized in that the partial style information is included in the dialog information, or is stored separately from the dialog information in which reference information of the partial style information is included.
7. The information storage medium according to claim 3, characterized in that the text-based subtitle data includes, in addition to the style information, information on whether predetermined style information defined by a manufacturer of the storage medium is included.
8. An apparatus for reproducing multimedia image data and text-based subtitle data recorded on a storage medium, to display subtitles on an image based on the multimedia image data, characterized in that the apparatus comprises: a buffer unit for storing style information indicating an output style of dialog information representing subtitle content to be displayed on the image, and partial style information indicating an output style applied to a portion of the dialog information, of the text-based subtitle data recorded on the storage medium; and a controller unit arranged to read the style information and the partial style information from the buffer unit, apply the read style information and partial style information to the dialog information, convert the applied result into a bitmap image, and output the converted bitmap image.
9. A method of reproducing multimedia image data and text-based subtitle data recorded on a storage medium to display subtitles on an image based on the multimedia image data, characterized in that the method comprises: reading dialog information indicating subtitle content to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information, from the text-based subtitle data recorded on the storage medium; converting the dialog information into a bitmap image to which the style and the partial style are applied, based on the style information and the partial style information; and outputting the converted bitmap image in accordance with output time information included in the dialog information.
10. The apparatus according to claim 8, characterized in that the dialog information includes text information indicating the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image.
11. The apparatus according to claim 8, characterized in that the style information includes area information indicating a position at which the text information is displayed on the image, and font information related to the type, size, color, thickness, and style of an output font.
12. The apparatus according to claim 8, characterized in that the text-based subtitle data includes at least one style sheet information unit, which is an output style group consisting of a plurality of style information units.
13. The apparatus according to claim 8, characterized in that the partial style information is output style information for emphasizing and displaying a portion of the text information, and has relative values with respect to the font size and/or font color included in the style information.
14. The apparatus according to claim 8, characterized in that the partial style information is included in the dialog information, or alternatively, is stored separately from the dialog information in which reference information of the partial style information is included.
15. The apparatus according to claim 8, characterized in that the text-based subtitle data additionally includes information on whether predetermined style information defined by a manufacturer of the storage medium is included.
16. The method according to claim 9, characterized in that the dialog information includes text information indicating the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image.
17. The method according to claim 9, characterized in that the style information includes area information indicating a position at which the text information is displayed on the image, and font information related to the type, size, color, thickness, and style of an output font.
18. The method according to claim 9, characterized in that the text-based subtitle data includes at least one style sheet information unit, which is an output style group consisting of a plurality of style information units.
19. The method according to claim 9, characterized in that the partial style information is output style information for emphasizing and displaying a portion of the text information, and has relative values with respect to the font size and/or font color included in the style information.
20. The method according to claim 9, characterized in that the partial style information is included in the dialog information, or alternatively, is stored separately from the dialog information in which reference information of the partial style information is included.
21. The method according to claim 9, characterized in that the text-based subtitle data additionally includes information on whether predetermined style information defined by a manufacturer of the storage medium is included.
22. A computer-readable medium characterized in that it comprises instructions that, when executed by a computer system, perform a method comprising: reading text-based subtitle data for displaying subtitles on an image based on multimedia image data, the text-based subtitle data including dialog information indicating subtitle content to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information; converting the dialog information into a bitmap image to which the style and the partial style are applied, based on the style information and the partial style information; and outputting the converted bitmap image in accordance with output time information included in the dialog information.
23. The computer-readable medium according to claim 22, characterized in that the dialog information includes text information indicating the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image.
24. The computer-readable medium according to claim 22, characterized in that the style information includes area information indicating a position at which the text information is displayed on the image, and font information related to the type, size, color, thickness, and style of an output font.
25. The computer-readable medium according to claim 22, characterized in that the text-based subtitle data includes at least one style sheet information unit, which is an output style group consisting of a plurality of style information units.
26. The computer-readable medium according to claim 22, characterized in that the partial style information is output style information for emphasizing and displaying a portion of the text information, and has relative values with respect to the font size and/or font color included in the style information.
27. The computer-readable medium according to claim 22, characterized in that the partial style information is included in the dialog information, or alternatively, is stored separately from the dialog information in which reference information of the partial style information is included.
28. The computer-readable medium according to claim 22, characterized in that the text-based subtitle data additionally includes information on whether predetermined style information defined by a manufacturer of the storage medium is included.
29. A presentation graphics decoder, characterized in that it comprises: a buffer unit for storing text-based subtitle data from a storage medium, including style information indicating an output style of dialog information representing subtitle content to be displayed on an image, and partial style information indicating an output style applied to a portion of the dialog information; and a controller unit arranged to read the style information and the partial style information from the buffer unit, apply the read style information and partial style information to the dialog information, convert the applied result into a bitmap image, and output the converted bitmap image.
30. The presentation graphics decoder according to claim 29, characterized in that the dialog information includes text information indicating the subtitle content to be displayed on the image, and time information related to the time at which the text information is output to a screen and displayed on the image, and wherein the style information includes area information indicating a position at which the text information is displayed on the image and font information related to the type, size, color, thickness, and style of an output font.
31. The presentation graphics decoder according to claim 29, characterized in that the partial style information is output style information for emphasizing and displaying a portion of the text information, has relative values with respect to the font size and/or font color included in the style information, and is included in the dialog information, or is stored separately from the dialog information in which reference information of the partial style information is included.

32. The presentation graphics decoder according to claim 29, characterized in that the text-based subtitle data additionally includes information on whether predetermined style information defined by a manufacturer of the storage medium is included.
MXPA/A/2006/009467A 2004-02-21 2006-08-18 Storage medium for storing text-based subtitle data including style information, and apparatus and method reproducing thereof MXPA06009467A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020040011699 2004-02-21

Publications (1)

Publication Number Publication Date
MXPA06009467A true MXPA06009467A (en) 2007-04-10


Similar Documents

Publication Publication Date Title
RU2316063C1 (en) Data carrier for storing text data of subtitles, including style information, and device and method for its reproduction
TWI246036B (en) Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor
RU2388073C2 (en) Recording medium with data structure for managing playback of graphic data and methods and devices for recording and playback
JP2009016910A (en) Video reproducing device and video reproducing method
CN101072312B (en) Reproducing apparatus and method of information storage medium containing interactive graphic stream
MXPA06009467A (en) Storage medium for storing text-based subtitle data including style information, and apparatus and method reproducing thereof