MXPA06005152A - Information storage medium containing subtitles and processing apparatus therefor - Google Patents

Information storage medium containing subtitles and processing apparatus therefor

Info

Publication number
MXPA06005152A
MXPA06005152A MXPA/A/2006/005152A
Authority
MX
Mexico
Prior art keywords
subtitle
text
information
data
subtitles
Prior art date
Application number
MXPA/A/2006/005152A
Other languages
Spanish (es)
Inventor
Chung Hyunkwon
Moon Seongjin
Kang Manseok
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of MXPA06005152A publication Critical patent/MXPA06005152A/en

Links

Abstract

An information storage medium containing subtitles and a subtitle processing apparatus, where the information storage medium includes: audio-visual (AV) data; and subtitle data in which at least one subtitle text data item and output style information designating an output form of the subtitle texts are stored in a text format. With this arrangement, the output times of subtitle texts included in the text subtitle data can overlap, a subtitle file can be easily produced, and subtitles for an AV stream can be output in various forms.

Description

INFORMATION STORAGE MEDIUM CONTAINING SUBTITLES AND PROCESSING APPARATUS THEREFOR
FIELD OF THE INVENTION
The present invention relates to an information storage medium, and more particularly, to an information storage medium containing a plurality of subtitles that can be separately displayed, and to a processing apparatus therefor.
BACKGROUND OF THE INVENTION
A conventional subtitle is a bitmap image that is included in an audio-visual (AV) stream. Therefore, it is inconvenient to produce such a subtitle, and there is no choice but to reproduce the subtitle in its given form without modification, since a user cannot select the various attributes of the subtitle defined by the subtitle producer. That is, since attributes such as font, character size, and character color are predetermined and included in the AV stream as a bitmap image, the user cannot change them at will. In addition, since the subtitle is compressed and encoded into the AV stream, an output start time and an output end time of the subtitle are designated to correspond exactly to the AV stream, and the output times of subtitles should not overlap. That is, only one subtitle should be output at a given time. However, when the output start time and output end time of a subtitle are designated by a subtitle producer and recorded on an information storage medium separately from the AV stream, the output start times and end times of a plurality of subtitles can overlap each other. In other words, two or more subtitles may have to be output during a certain period of time, and a method of handling this situation is needed.
BRIEF DESCRIPTION OF THE INVENTION
Technical Solution
In one aspect, the present invention provides an information storage medium having recorded thereon a plurality of text subtitles that can be displayed separately even though their output times overlap, and an apparatus for reproducing the storage medium.
Advantageous Effects
According to one embodiment of the present invention, a subtitle file can be easily produced, and subtitles for an AV stream can be output in various forms.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 illustrates a structure of a text subtitle file; Figure 2 is a block diagram of an apparatus for reproducing an information storage medium on which a text subtitle file is recorded; Figure 3 is a detailed block diagram of the text subtitle processing unit of Figure 2; Figure 4 is a reference block diagram illustrating the generation of a bitmap image without a presentation engine; Figure 5 is an example diagram illustrating correlations between structures in which composition information, position information, object information, and color information are recorded; Figures 6A to 6C are diagrams illustrating a process of generating an image for a plurality of subtitles using one composition information data item and one position information data item; Figures 7A to 7C are diagrams illustrating a process of generating an image for a plurality of subtitles using one composition information data item and a plurality of position information data items; and Figures 8A to 8C are diagrams illustrating a process of generating images such that one image object is included in each composition information data item, by allocating a plurality of composition information data items to a plurality of subtitles.
BRIEF DESCRIPTION OF THE INVENTION
In accordance with one aspect of the present invention, an information storage medium is provided that includes: AV data; and subtitle data in which at least one subtitle text data item and output style information designating an output form of the subtitle texts are stored in a text format. In an aspect of the present invention, the output style information contains a plurality of pieces of information so that the output style information can be applied differently to the respective subtitle texts. In an aspect of the present invention, when a plurality of subtitle data exists, the plurality of subtitle data is generated as images, and the generated images compose a plurality of pages, respectively.
In accordance with another aspect of the present invention, a text subtitle processing apparatus is provided which includes: a text subtitle recognizer that separately extracts generation information used to generate the text of text subtitle data and control information used to present the generated text; and a font/text design generator that generates a bitmap image of a subtitle text by generating the subtitle text according to the extracted generation information. In one aspect of the present invention, the font/text design generator generates at least one subtitle text data item by applying different styles to the subtitle text data and composes a plurality of pages with a plurality of generated images.
DETAILED DESCRIPTION OF THE INVENTION
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying figures, in which like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
Figure 1 illustrates a structure of a text subtitle file 100. With reference to Figure 1, the text subtitle file 100 includes dialogue information 110, presentation information 120, and metadata 130a and 130b. The dialogue information 110 includes subtitle texts, output start times of the subtitle texts, output end times of the subtitle texts, style groups or style information used to generate the subtitle texts, text change effect information such as fade-in and fade-out, and formatting codes for the subtitle texts. A formatting code includes one or more of a code for displaying text in bold characters, a code for displaying text in italics, a code indicating underlining, or a code indicating a line break.
The presentation information 120 includes style information used to generate the subtitle texts and comprises a plurality of style groups. A style group is a bundle of styles in which style information is recorded. A style includes information used to generate and display a subtitle text, for example, one or more of a style name, a font, a text color, a background color, a text size, a line height, a text output region, a text output start position, an output direction, or an alignment method.
The metadata 130a and 130b, which are additional information about the moving picture, include information required to perform additional functions other than the subtitle output function. For example, an additional function may be to display a TV parental guideline such as 'TV-MA' on a screen for a program intended for mature audiences.
Figure 2 is a block diagram of an apparatus for reproducing an information storage medium on which a text subtitle file is recorded. It is understood that the apparatus may also record the text subtitle file onto the information storage medium. With reference to Figure 2, a text subtitle processing unit 220 generates a subtitle text in order to process the text subtitle file. The text subtitle processing unit 220 includes a text subtitle recognizer 221, which extracts presentation information and dialogue information from the text subtitle file, and a font/text design generator 222, which generates an output image by generating the subtitle text according to the extracted presentation information.
The text subtitle file 100 illustrated in Figure 1 may be recorded on an information storage medium or in a memory included in a reproduction apparatus. In Figure 2, the information storage medium or memory on which the text subtitle file is recorded is called a subtitle information storage unit 200. The text subtitle file corresponding to the moving picture being reproduced, together with the font data used to generate the subtitle, is read from the subtitle information storage unit 200 and stored in a buffer 210. The text subtitle file stored in the buffer 210 is transmitted to the text subtitle recognizer 221, which recognizes the information required to generate the text subtitle.
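As an illustration of the file structure described above, the following Python sketch models a text subtitle file as plain data. All class and field names (Style, DialogueEntry, TextSubtitleFile, and so on) are hypothetical and chosen only to mirror Figure 1; they are not taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Style:
    # Rendering attributes listed in the presentation information 120
    name: str
    font: str              # e.g. "Arial.ttf"
    text_color: str
    background_color: str
    text_size: str         # e.g. "16pt"
    line_height: str       # e.g. "40px"
    region: tuple          # (left, top, width, height) of the text output region
    start_position: tuple  # (x, y) reference position of the text
    direction: str         # e.g. "left-to-right, top-to-bottom"
    alignment: str         # e.g. "center"

@dataclass
class DialogueEntry:
    # Dialogue information 110: the text plus its output window and style reference
    text: str              # may contain formatting codes such as <b>, <i>, <u>, <br/>
    start_time: str        # output start time, e.g. "00:10:00"
    end_time: str          # output end time,   e.g. "00:15:00"
    style: str             # name of the style (group) used to generate the text
    effect: Optional[str] = None   # text change effect, e.g. fade-in or fade-out

@dataclass
class TextSubtitleFile:
    dialogue: List[DialogueEntry] = field(default_factory=list)  # dialogue information 110
    styles: List[Style] = field(default_factory=list)            # presentation information 120
    metadata: dict = field(default_factory=dict)                 # metadata 130a/130b, e.g. {"rating": "TV-MA"}
```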
A subtitle text, font information, and generation style information are transmitted to the font/text design generator 222, and the subtitle text control information is transmitted to a composition buffer 233 of a presentation engine 230. The control information (i.e., information for composing a screen with the subtitle text) includes an output region and an output start position. The font/text design generator 222 generates a bitmap image by generating the subtitle text using the text generation information transmitted from the text subtitle recognizer 221 and the font data transmitted from the buffer 210, composes a subtitle page by designating an output start time and an output end time of each subtitle text, and transmits the bitmap image and the subtitle page to an object buffer 234 of the presentation engine 230.
A subtitle in bitmap image form read from the subtitle information storage unit 200 is input to a coded data buffer 231 and processed by a graphics processing unit 232 in the presentation engine 230, which generates a bitmap image as a result. The generated bitmap image is transmitted to the object buffer 234, and the control information of the bitmap image is transmitted to the composition buffer 233. The control information designates the time and position at which the bitmap image stored in the object buffer 234 is output to a graphics smoother 240, and designates a color lookup table (CLUT) 250 in which the color information to be applied to the bitmap image output by the graphics smoother 240 is recorded. The composition buffer 233 receives the object composition information transmitted from the text subtitle recognizer 221 and the composition information of the bitmap subtitle data processed by the graphics processing unit 232, and transmits control information for outputting the subtitle on a screen to a graphics controller 235. The graphics controller 235 controls the object buffer 234 to combine the bitmap subtitle data processed by the graphics processing unit 232 with the generated subtitle text object data received from the font/text design generator 222, and controls the graphics smoother 240 to generate a graphics plane from the combined data and to output the graphics plane to a display unit (not shown) with reference to the CLUT 250.
Figure 3 is a detailed block diagram of the text subtitle processing unit 220 of Figure 2. Referring to Figure 3, a subtitle, which is the text subtitle file information, is input to the text subtitle recognizer 221. The text subtitle recognizer 221 transmits the control information recognized from the subtitle to the presentation engine 230 and the text generation information recognized from the subtitle to the font/text design generator 222. The font/text design generator 222 receives the text generation information from the text subtitle recognizer 221 and stores the control information of a subtitle text in an element control data buffer 290, the subtitle text data in a text data buffer 291, and the style information used to generate the subtitle text data in a style data buffer 292. In addition, the font/text design generator 222 stores the font data used to generate the text in a font data buffer 293. The control information stored in the element control data buffer 290 may be a formatting code.
The formatting code includes one or more of a code for displaying text in bold characters, a code for displaying text in italics, a code indicating underlining, or a code indicating a line break. The subtitle text data stored in the text data buffer 291 is the text data to be output as a subtitle. The style data stored in the style data buffer 292 may be one or more of a font, a text color, a background color, a text size, a line height, a text output region, a text output start position, an output direction, or an alignment method. A text generator 294 generates a subtitle image with reference to the information recorded in each buffer and transmits the subtitle image to the presentation engine 230.
Figure 4 is a reference block diagram illustrating the generation of a bitmap image without the presentation engine 230. That is, Figure 4 illustrates another embodiment of an operation of the text subtitle processing unit 220, which includes a text subtitle controller 410 in place of the presentation engine 230. With reference to Figure 4, the font/text design generator 222 generates composition information, position information, object information, and color information, and generates a bitmap image on the basis of the composition information, the position information, the object information, and the color information. The text subtitle controller 410 receives the object composition information from the text subtitle recognizer 221 and controls the font/text design generator 222 to output the generated bitmap image directly to the graphics smoother 240 and the CLUT 250.
Figure 5 is an exemplary diagram illustrating the correlations between structures in which composition information, position information, object information, and color information are recorded. A subtitle output on a screen is composed in units of pages. Each page may also include data used for purposes other than the subtitle. The composition information is a structure containing the information used to compose a page. The composition information includes output time information indicating a page output time, a reference value of the object information indicating the image object to be output, a reference value of the position information indicating the output position of the object, and a reference value of the color information indicating the color information of the object. The correlations between the information structures shown in Figure 5 form part of the composition information, and the correlations between the position information, object information, and color information may also be composed in a form different from that of Figure 5.
With reference to Figure 5, a page may include at least one region for outputting an image on a screen. The at least one region is identified by the reference value of the position information. The position information is a structure in which the information required to compose the at least one region for outputting the image is recorded. The position information includes horizontal and vertical coordinate information of each region, a width of the region, and a height of the region. The object information includes the object data to be displayed on the screen, together with object data type information corresponding to the object data. An operation of the text subtitle processing unit 220 will now be described as an example.
The text subtitle processing unit 220 generates the composition information, position information, object information, and color information of each generated subtitle image to be output on the screen in order to provide a subtitle text. The generated composition information, position information, object information, and color information are transmitted to the presentation engine 230. As described above, when an information storage medium containing subtitles recorded in text form is reproduced, there are several exemplary methods of outputting more than one subtitle at the same time. In a first method, the text subtitle processing unit 220 generates a single new image for a plurality of subtitles whose text output times overlap, and transmits to the presentation engine 230 a subtitle composed of the generated objects, to be output at one position information item within one composition information item. In a second method, the subtitles whose text output times overlap are composed so that they have different position information. That is, the text subtitle processing unit 220 generates an image for the plurality of subtitles whose text output times overlap using different position information data items within one composition information item, and transmits the generated image to the presentation engine 230. In a third method, the subtitles whose text output times overlap are generated using different composition information. That is, the text subtitle processing unit 220 generates different composition information data items for the plurality of subtitles whose text output times overlap, so that only one object is included in each composition information data item. The three methods will be described in detail with reference to Figures 6 to 8.
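The difference between the three methods can be summarized by how many composition information items and how many position (region) items are produced for a group of overlapping subtitles. The sketch below is only a schematic comparison under assumed names; it is not the decoder logic defined by the specification.

```python
from enum import Enum

class OverlapMethod(Enum):
    ONE_COMPOSITION_ONE_REGION = 1    # first method: one combined image in one region
    ONE_COMPOSITION_MANY_REGIONS = 2  # second method: one composition, one region per subtitle
    ONE_COMPOSITION_PER_SUBTITLE = 3  # third method: one composition information item per subtitle

def plan_output(subtitles, method: OverlapMethod):
    """Return (number of composition information items, number of regions)
    for a group of subtitles whose output times overlap."""
    n = len(subtitles)
    if method is OverlapMethod.ONE_COMPOSITION_ONE_REGION:
        return 1, 1   # all texts rendered into a single object with the same style
    if method is OverlapMethod.ONE_COMPOSITION_MANY_REGIONS:
        return 1, n   # one object per subtitle, each with its own position information
    return n, n       # one composition information item (and one object) per subtitle
```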
Figures 6A to 6C are diagrams illustrating a process of generating an image for a plurality of subtitles using one composition information data item and one position information data item. In Figure 6A, a style 'Script' is defined as the style information used to generate the subtitle text. With reference to Figure 6A, the style 'Script' uses an 'Arial.ttf' font, a 'black' text color, a 'white' background color, a '16pt' character size, a text reference position at coordinates (x, y), a 'center' alignment method, a 'left-to-right, top-to-bottom' output direction, a text output region 'left, top, width, height', and a line height of '40px'. In Figure 6B, the subtitle texts 610, 620, and 630 generated using the 'Script' style are defined. With reference to Figure 6B, the subtitle text Hello 610 is output from '00:10:00' to '00:15:00', the subtitle text Subtitle 620 is output from '00:12:00' to '00:17:00', and the subtitle text World 630 is output from '00:14:00' to '00:19:00'. Therefore, two or three subtitle texts are output between '00:12:00' and '00:17:00'. Here, '<br/>' indicates a line break.
Using '<br/>', a plurality of subtitles can be prevented from being superimposed on one region even though a single style is used. Figure 6C shows the output result of the subtitles defined in Figures 6A and 6B. With reference to Figure 6C, the data stored in each buffer of the text subtitle processing unit 220 in each illustrated time window are as follows.
Before '00:10:00', when the output composition information includes an empty subtitle image: element control data buffer: empty; text data buffer: empty; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
From '00:10:00' to '00:12:00', when the output composition information includes an image in which the subtitle text Hello 610 is generated: element control data buffer: control information of the subtitle text Hello 610; text data buffer: 'Hello'; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
From '00:12:00' to '00:14:00', when the output composition information includes an image in which the subtitle texts Hello 610 and Subtitle 620 are generated: element control data buffer: control information of the subtitle texts Hello 610 and Subtitle 620; text data buffer: 'Hello' and '<br/>Subtitle'; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
From '00:14:00' to '00:15:00', when the output composition information includes an image in which the subtitle texts Hello 610, Subtitle 620, and World 630 are generated: element control data buffer: control information of the subtitle texts Hello 610, Subtitle 620, and World 630; text data buffer: 'Hello', '<br/>Subtitle', and '<br/><br/>World'; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
From '00:15:00' to '00:17:00', when the output composition information includes an image in which the subtitle texts Subtitle 620 and World 630 are generated: element control data buffer: control information of the subtitle texts Subtitle 620 and World 630; text data buffer: '<br/>Subtitle' and '<br/><br/>World'; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
From '00:17:00' to '00:19:00', when the output composition information includes an image in which the subtitle text World 630 is generated: element control data buffer: control information of the subtitle text World 630; text data buffer: '<br/><br/>World'; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
After '00:19:00', when the output composition information includes an empty subtitle image: element control data buffer: empty; text data buffer: empty; style data buffer: 'Script' style information; font data buffer: font information of 'Arial.ttf'.
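The buffer states listed above follow directly from the overlap of the three output intervals. A short sketch of that interval arithmetic is given below; the helper names are assumed for illustration only.

```python
# Determine which subtitle texts are active in each time window,
# reproducing the '00:10:00' .. '00:19:00' timeline described above.

def to_seconds(t: str) -> int:
    h, m, s = (int(x) for x in t.split(":"))
    return 3600 * h + 60 * m + s

subtitles = [
    ("Hello",    "00:10:00", "00:15:00"),
    ("Subtitle", "00:12:00", "00:17:00"),
    ("World",    "00:14:00", "00:19:00"),
]

# Collect every interval boundary and walk the resulting time windows.
edges = sorted({to_seconds(t) for _, a, b in subtitles for t in (a, b)})
for lo, hi in zip(edges, edges[1:]):
    active = [name for name, a, b in subtitles
              if to_seconds(a) <= lo and hi <= to_seconds(b)]
    print(lo, hi, active)   # e.g. between 00:12:00 and 00:14:00: ['Hello', 'Subtitle']
```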
As shown in the above subtitle output process, in the first method a single subtitle image is generated by applying the same style to a plurality of subtitle texts having overlapping output times, one composition information data item including that subtitle image is generated, and the generated composition information data is transmitted to the presentation engine 230. Here, page_time_out, which indicates the time at which the transmitted composition information disappears from the screen, refers to the time at which the last subtitle to be output among the plurality of subtitles having overlapping output times disappears, or the time at which a new subtitle is added. The subtitle text processing of the output subtitles must be performed quickly, considering the time taken to decode the subtitles in the text subtitle processing unit 220 and the time taken to output the generated subtitles from the object buffer 234 to the graphics smoother 240. When T_start denotes the time at which a subtitle is output from the text subtitle processing unit 220 of the reproduction apparatus, and T_arrival denotes the time at which the subtitle arrives at the text subtitle processing unit 220, the correlation between these times is given by Equation 1.
Equation 1:
T_start − T_arrival ≥ T_decoding + T_composition
T_decoding = T_generation + T_composition_information_generation
T_generation = Σ T_character(i), summed over all characters i

With reference to Equation 1, it can be seen how quickly the text subtitle must be processed. Here, T_decoding denotes the time taken to generate the subtitle to be output, generate the composition information including the generated object, and transmit them to the object buffer 234. A subtitle whose required output time is T_start must therefore begin to be processed at least T_decoding + T_composition in advance. T_decoding is obtained by adding T_generation, which is the time taken to generate the subtitle text and transmit the generated subtitle text to the object buffer 234, and T_composition_information_generation, which is the time taken to generate the composition information including the generated object and transmit the composition information to the graphics smoother 240. T_character(i) is the time taken to generate one character; accordingly, T_generation is obtained by adding the times taken to generate all the characters.
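Equation 1 can be read as a simple budget check: a subtitle must reach the text subtitle processing unit early enough that character generation, composition information generation, and composition all finish before its output start time. The sketch below expresses that check; the function and parameter names, as well as the example timing values, are assumptions made only for illustration.

```python
def generation_time(text, t_char):
    """T_generation: sum of the per-character generation times T_character(i)."""
    return sum(t_char(c) for c in text)

def meets_deadline(t_start, t_arrival, text, t_char,
                   t_composition_info_generation, t_composition):
    """Equation 1: T_start - T_arrival >= T_decoding + T_composition,
    where T_decoding = T_generation + T_composition_information_generation."""
    t_decoding = generation_time(text, t_char) + t_composition_info_generation
    return (t_start - t_arrival) >= (t_decoding + t_composition)

# Example with assumed timings (seconds): 0.5 ms per character plus fixed overheads.
ok = meets_deadline(t_start=600.0, t_arrival=598.0, text="Hello",
                    t_char=lambda c: 0.0005,
                    t_composition_info_generation=0.010, t_composition=0.020)
```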
The size of the object buffer 234 must be equal to or greater than the size of the object. Here, the size of the object is obtained by adding the sizes of the character data composing the object. Therefore, the number of characters composing a subtitle is limited to the number of characters that can be stored in the object buffer 234. Furthermore, since the object buffer 234 can store a plurality of subtitles, the number of characters composing the plurality of subtitles is likewise limited to the number of characters that can be stored in the object buffer 234.
Figures 7A to 7C are diagrams illustrating a process of generating an image for a plurality of subtitles using one composition information data item and a plurality of position information data items. In Figure 7A, the styles 'Script1', 'Script2', and 'Script3' are defined as the style information used to generate the subtitle text. With reference to Figure 7A, each of the three styles uses an 'Arial.ttf' font, a 'black' text color, a 'white' background color, a '16pt' character size, a 'center' alignment method, a 'left-to-right, top-to-bottom' output direction, and a line height of '40px'. As the subtitle text reference position, 'Script1' has coordinates (x1, y1), 'Script2' has coordinates (x2, y2), and 'Script3' has coordinates (x3, y3). As the text output region, 'Script1' has 'left1, top1, width1, height1', 'Script2' has 'left2, top2, width2, height2', and 'Script3' has 'left3, top3, width3, height3'. In Figure 7B, the subtitle texts 710, 720, and 730 generated using the 'Script1', 'Script2', and 'Script3' styles are defined. With reference to Figure 7B, the subtitle text Hello 710 uses the 'Script1' style and is output from '00:10:00' to '00:15:00', the subtitle text Subtitle 720 uses the 'Script2' style and is output from '00:12:00' to '00:17:00', and the subtitle text World 730 uses the 'Script3' style and is output from '00:14:00' to '00:19:00'. Therefore, two or three subtitle texts are output between '00:12:00' and '00:17:00'. Since different scripts are used, the line break mark '<br/>' is unnecessary.
Figure 7C shows the output result of the subtitles defined in Figures 7A and 7B. With reference to Figure 7C, the data stored in each buffer of the text subtitle processing unit 220 in each illustrated time window are as follows.
Before '00:10:00', when the output composition information includes an empty subtitle image: element control data buffer: empty; text data buffer: empty; style data buffer: empty; font data buffer: font information of 'Arial.ttf'.
From '00:10:00' to '00:12:00', when the output composition information includes an image in which the subtitle text Hello 710 is generated: element control data buffer: control information of the subtitle text Hello 710; text data buffer: 'Hello'; style data buffer: 'Script1' style information; font data buffer: font information of 'Arial.ttf'.
From '00:12:00' to '00:14:00', when the output composition information includes the subtitle texts Hello 710 and Subtitle 720: element control data buffer: control information of the subtitle texts Hello 710 and Subtitle 720; text data buffer: 'Hello' and 'Subtitle'; style data buffer: 'Script1' and 'Script2' style information; font data buffer: font information of 'Arial.ttf'.
From '00:14:00' to '00:15:00', when the output composition information includes the subtitle texts Hello 710, Subtitle 720, and World 730: element control data buffer: control information of the subtitle texts Hello 710, Subtitle 720, and World 730; text data buffer: 'Hello', 'Subtitle', and 'World'; style data buffer: 'Script1', 'Script2', and 'Script3' style information; font data buffer: font information of 'Arial.ttf'.
From '00:15:00' to '00:17:00', when the output composition information includes the subtitle texts Subtitle 720 and World 730: element control data buffer: control information of the subtitle texts Subtitle 720 and World 730; text data buffer: 'Subtitle' and 'World'; style data buffer: 'Script2' and 'Script3' style information; font data buffer: font information of 'Arial.ttf'.
From '00:17:00' to '00:19:00', when the output composition information includes the subtitle text World 730: element control data buffer: control information of the subtitle text World 730; text data buffer: 'World'; style data buffer: 'Script3' style information; font data buffer: font information of 'Arial.ttf'.
After '00:19:00', when the output composition information includes an empty subtitle image: element control data buffer: empty; text data buffer: empty; style data buffer: empty; font data buffer: font information of 'Arial.ttf'.
In the second method described above, subtitle images are generated by applying different styles to a plurality of subtitle texts having overlapping output times, one composition information data item including the subtitle images is generated, and the generated composition information data is transmitted to the presentation engine 230. The text subtitle processing time is the same as in the first method. That is, the subtitle text processing of the output subtitles must be performed quickly, considering the time taken to decode the subtitles in the text subtitle processing unit 220 and the time taken to output the generated subtitles from the object buffer 234 to the graphics smoother 240. However, since a plurality of objects exists in this method, the generation time is obtained by adding the times taken to generate the respective objects, as expressed by Equation 2.
Equation 2:
T_start − T_arrival ≥ T_decoding + T_composition
T_decoding = T_generation + T_composition_information_generation
T_generation = Σ T_object(i), summed over all objects i
T_object = Σ T_character(i), summed over all characters i of the object

The number of characters of subtitle text that can be stored in the object buffer 234 is limited in the second method in the same way as in the first method.
Figures 8A to 8C are diagrams illustrating a process of generating images such that one image object is included in each composition information data item, by allocating a plurality of composition information data items to a plurality of subtitles. In Figure 8A, the styles 'Script1', 'Script2', and 'Script3' are defined as the style information used to generate the subtitle text. With reference to Figure 8A, each of the three styles uses an 'Arial.ttf' font, a 'black' text color, a 'white' background color, a '16pt' character size, a 'center' alignment method, a 'left-to-right, top-to-bottom' output direction, and a line height of '40px'. As the subtitle text reference position, 'Script1' has coordinates (x1, y1), 'Script2' has coordinates (x2, y2), and 'Script3' has coordinates (x3, y3). As the text output region, 'Script1' has 'left1, top1, width1, height1', 'Script2' has 'left2, top2, width2, height2', and 'Script3' has 'left3, top3, width3, height3'. In Figure 8B, the subtitle texts 810, 820, and 830 generated using the 'Script1', 'Script2', and 'Script3' styles are defined. With reference to Figure 8B, the subtitle text Hello 810 uses the 'Script1' style and is output from '00:10:00' to '00:15:00', the subtitle text Subtitle 820 uses the 'Script2' style and is output from '00:12:00' to '00:17:00', and the subtitle text World 830 uses the 'Script3' style and is output from '00:14:00' to '00:19:00'. Therefore, two or three subtitle texts are output between '00:12:00' and '00:17:00'.
Figure 8C shows the output result of the subtitles defined in Figures 8A and 8B. With reference to Figure 8C, the data stored in each buffer of the text subtitle processing unit 220 in each illustrated time window are as follows.
From '00:00:00', when the output composition information includes an empty subtitle image: element control data buffer: empty; text data buffer: empty; style data buffer: empty; font data buffer: font information of 'Arial.ttf'.
From '00:10:00', when composition information including an image in which the subtitle text Hello 810 is generated is output: element control data buffer: loading control information of the subtitle text Hello 810; text data buffer: 'Hello'; style data buffer: 'Script1' style information; font data buffer: font information of 'Arial.ttf'.
From '00:12:00', when the composition information including the subtitle text Hello 810 and the composition information including the subtitle text Subtitle 820 are output: element control data buffer: loading control information of the subtitle text Subtitle 820; text data buffer: 'Subtitle'; style data buffer: 'Script2' style information; font data buffer: font information of 'Arial.ttf'.
From '00:14:00', when the composition information including the subtitle text Hello 810, the composition information including the subtitle text Subtitle 820, and the composition information including the subtitle text World 830 are output: element control data buffer: loading control information of the subtitle text World 830; text data buffer: 'World'; style data buffer: 'Script3' style information; font data buffer: font information of 'Arial.ttf'.
After '00:15:00', the text subtitle processing unit 220 performs no operation other than preparing the output of subsequent subtitle texts to be output after '00:19:00'. Therefore, the changes of the subtitles output between '00:15:00' and '00:19:00' are made by the presentation engine 230 controlling the composition information of the subtitles 'Hello', 'Subtitle', and 'World' received from the text subtitle processing unit 220. That is, at '00:15:00', the presentation engine 230 deletes the composition information and bitmap image object of the subtitle 'Hello' from the composition buffer 233 and the object buffer 234 and outputs only the composition information of the subtitles 'Subtitle' and 'World' to the screen. At '00:17:00', the presentation engine 230 deletes the composition information and bitmap image object of the subtitle 'Subtitle' from the composition buffer 233 and the object buffer 234 and outputs only the composition information of the subtitle 'World' to the screen. Finally, at '00:19:00', the presentation engine 230 deletes the composition information and bitmap image object of the subtitle 'World' from the composition buffer 233 and the object buffer 234 and no longer outputs a subtitle to the screen.
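Because each subtitle owns its own composition information in this third method, appearance and disappearance can be handled per subtitle simply by comparing the current time against each entry's start and end times. The following minimal scheduling sketch uses assumed names and simplified times in seconds; it is not the actual presentation engine.

```python
def visible_compositions(compositions, now):
    """Third method: each subtitle carries its own composition information item.
    An item stays on screen only while start <= now < end; afterwards its
    composition information and bitmap object can be deleted from the buffers."""
    on_screen, expired = [], []
    for comp in compositions:
        if comp["start"] <= now < comp["end"]:
            on_screen.append(comp)
        elif now >= comp["end"]:
            expired.append(comp)
    return on_screen, expired

compositions = [
    {"text": "Hello",    "start": 600, "end": 900},   # 00:10:00 - 00:15:00 in seconds
    {"text": "Subtitle", "start": 720, "end": 1020},  # 00:12:00 - 00:17:00
    {"text": "World",    "start": 840, "end": 1140},  # 00:14:00 - 00:19:00
]
on_screen, expired = visible_compositions(compositions, now=960)  # at 00:16:00
# on_screen -> 'Subtitle' and 'World'; 'Hello' has already been removed from the buffers
```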
In the third method described above, a separate subtitle image is generated for each subtitle text by applying different styles to a plurality of subtitle texts having overlapping output times, one composition information data item is generated for each subtitle image, and the plurality of generated composition information data items is transmitted to the presentation engine 230. The text subtitle processing time is the same as in the first method. Whereas in the first and second methods only the processing time of a single composition information data item needs to be considered, since the composition information for a plurality of subtitle texts having overlapping output times is composed and output as one item, in the third method a plurality of composition information data items is generated and output, since each subtitle text composes a separate composition information data item. Therefore, for the subtitle text processing start time of the third method, the worst case must be considered, that is, the case in which a plurality of composition information data items for a plurality of subtitles having the same output start time are generated and output simultaneously. This is described by Equation 3.
Equation 3:
T_start − T_arrival ≥ T_decoding + T_composition
T_decoding = T_generation + T_composition_information_generation
T_composition_information_generation = Σ T_composition_information(i), summed over all composition information items i
T_generation = Σ T_object(i), summed over all objects i
T_object = Σ T_character(i), summed over all characters i of the object

The time T_composition_information_generation taken to generate a plurality of composition information data items is obtained by adding together each T_composition_information(i), which is the time taken to generate the composition information of one subtitle. The time T_generation taken to generate a plurality of objects by generating a plurality of subtitles is obtained by adding together each T_object(i), which is the generation time of one subtitle. The time T_object taken to generate one subtitle is obtained by adding together each T_character(i), which is the generation time of each character included in the corresponding subtitle. With reference to Equation 3, in order to simultaneously output a plurality of subtitles including a plurality of characters, the sum of the times taken to generate all the characters included in the subtitles, compose the plurality of composition information data items, and output the plurality of composition information data items must be less than the difference between the subtitle output time and the subtitle processing start time of the text subtitle processing unit 220. The number of characters of subtitle text that can be stored in the object buffer 234 is limited in the third method in the same way as in the first and second methods.
As described for the third method, in an information storage medium and a reproduction apparatus constructed with a structure that supports the simultaneous output of a plurality of composition information data items, a text subtitle and another bitmap image can be output simultaneously on a screen. The data compressed and encoded in an AV stream include video data, audio data, bitmap-based subtitles, and other non-subtitle bitmap images. A 'TV-14' image displayed at the top right of a screen to indicate a TV program intended for viewers over 14 years of age is an example of a non-subtitle bitmap image. In a conventional method, since only one composition information data item can be output on a screen at a time, a region for outputting a bitmap subtitle and a region for outputting a non-subtitle bitmap image are defined separately in the composition information in order to output the bitmap subtitle and the non-subtitle bitmap image simultaneously. Consequently, when a user turns subtitle output off because the user does not want subtitles, the decoder stops only the decoding of the subtitles. Since the subtitle data are then no longer transmitted to the object buffer, the subtitles disappear from the screen, and only the non-subtitle bitmap image continues to be output on the screen. However, when the text subtitle processing unit 220 generates an image for a subtitle using one composition information data item and transmits that composition information data to the presentation engine 230 to output the subtitle, turning subtitle output off would also prevent a non-subtitle bitmap image recorded in the AV stream from being output.
Therefore, in a case where a plurality of composition information data items can be output continuously on a screen, as described in the third method of the present invention, when text subtitles are selected instead of bitmap subtitles, the images other than the bitmap subtitles in the composition information included in the AV stream can continue to be output, and the text subtitles can be output using the composition information generated by the text subtitle processing unit 220. That is, text subtitles and other non-subtitle bitmap images can be output simultaneously on the screen.
The present invention may also be embodied as a computer program executed by a general-purpose computer from a computer-readable medium, including but not limited to storage media such as magnetic storage media (ROMs, RAMs, floppy disks, magnetic tapes, etc.), optically readable media (CD-ROMs, DVDs, etc.), and carrier waves (transmission over the Internet). The present invention may also be embodied as a computer-readable medium having a computer-readable program code unit recorded thereon for causing a number of computer systems connected via a network to perform distributed processing. The functional programs, codes, and code segments for embodying the present invention can be easily construed by programmers skilled in the art to which the present invention pertains.
Although a few embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
It is noted that, in relation to this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.

Claims (23)

CLAIMS Having described the invention as above, the contents of the following claims are claimed as property:
1. Information storage medium, characterized in that it comprises: audio-visual (AV) data; and subtitle data in which at least one subtitle text data item and production style information designating a production form of the at least one subtitle text data item are stored in a text format.
Information storage medium according to claim 1, characterized in that the production style information contains a plurality of pieces of information so that the production style information is applied differently to each subtitle text.
Information storage medium according to claim 1, characterized in that each of the subtitle text data is generated by a reproduction apparatus that applies the same production style according to the production style information and generates a page composed of an image corresponding to the subtitle data.
Information storage medium according to claim 1, characterized in that each of the subtitle text data is generated by a reproduction apparatus that applies different production styles and generates pages, each page comprising a corresponding one of the generated subtitle text data.
Information storage medium according to claim 1, characterized in that, when a plurality of subtitle data exists, the plurality of subtitle data is generated as images, and the generated images compose a plurality of pages, respectively.
Information storage medium according to claim 1, characterized in that the subtitle data additionally comprises time information indicating when the at least one subtitle text is produced on a screen.
7. Text subtitle processing apparatus, characterized in that it comprises: a text subtitle recognizer that separately extracts, from subtitle data, generation information used to generate the text of text subtitle data and control information used to present the generated text; and a font/text design generator that generates a bitmap image of the subtitle text extracted by the text subtitle recognizer by generating the subtitle text according to the extracted generation information.
Apparatus according to claim 7, characterized in that the text subtitle recognizer extracts the control information so that the control information conforms to a predetermined information structure format and transmits the control information to a presentation engine.
9. Apparatus according to claim 7, characterized in that it additionally comprises: a text subtitle controller that controls the bitmap image generated by the font/text design generator to be produced directly on a screen using the control information, separately from a presentation engine that processes bitmap subtitle data.
Apparatus according to claim 7, characterized in that the subtitle data is a plurality of subtitle data having overlapping production times.
Apparatus according to claim 7, characterized in that the font/text design generator generates the bitmap image by generating one composition information data item, one position information data item, and one object information data item corresponding to a plurality of subtitle data having overlapping production times, and produces the bitmap image.
Apparatus according to claim 7, characterized in that the font/text design generator generates the bitmap image by generating one composition information data item, a plurality of position information data items, and a plurality of object information data items corresponding to a plurality of the subtitle data having overlapping production times, and produces the bitmap image.
Apparatus according to claim 7, characterized in that the font/text design generator generates the bitmap image by generating a plurality of composition information data items, together with position information data and object information data corresponding to each of the composition information data items, corresponding to a plurality of the subtitle data having overlapping production times, and produces the bitmap image.
Apparatus according to claim 7, characterized in that the font / text design generator generates an image of a plurality of the text subtitle data by applying the same production style to the plurality of text subtitle data, and generates a page that includes an image.
Apparatus according to claim 7, characterized in that the font / text design generator generates the images of a plurality of text subtitle data by applying different production styles to each of the plurality of subtitle data of text, and generates a page comprising a plurality of the generated images.
Apparatus according to claim 7, characterized in that the font/text design generator generates the images of a plurality of the text subtitle data by applying different production styles to each of the plurality of text subtitle data, and generates a plurality of pages comprising a plurality of the generated images.
17. Reproduction apparatus using an information storage medium, characterized in that it comprises: a reader which reads a plurality of subtitles from the storage medium, each subtitle comprising subtitle texts, control information, and style information; a buffer memory which stores the subtitles, the subtitle texts, the control information, and the style information; and a player which decodes the subtitles based on the control information and style information and displays the subtitle texts according to the control information and style information, wherein any number of the plurality of subtitles can be displayed simultaneously.
18. Apparatus in accordance with claim 17, characterized in that the player comprises: a subtitle processor which extracts the subtitle texts, control information, and style information from each of the subtitles, and generates a production image by generating the extracted subtitle texts according to the extracted control information and style information; and a presentation engine which displays the production image.
19. Apparatus in accordance with claim 18, characterized in that the reader reads a bitmap image without subtitle and the presentation engine displays the bitmap image without subtitle simultaneously with the production image.
20. Apparatus according to claim 17, characterized in that the player comprises: a subtitle processor which extracts subtitle texts, control information, and style information from each of the subtitles, and generates a production image generating the subtitle texts extracted according to the extracted control information and style information; and a text subtitle controller that controls the production image generated by the subtitle processor to be displayed on a screen according to the extracted control information and style information.
21. Method of reproducing text subtitle files, characterized in that it comprises: selecting subtitles that have overlapping playback times for reproduction; generating a single set of composition information data, position information data, and object information data for the subtitles; and generating an image corresponding to the subtitles in accordance with the single set of composition information data, position information data, and object information data.
22. Method of reproducing text subtitle files, characterized in that it comprises: selecting subtitles that have overlapping reproduction times for reproduction; generating a single set of composition information data having different position information data for each of the selected subtitles and different object information data for each of the selected subtitles; and generating an image corresponding to the subtitles according to the single set of composition information data, the different position information data for each of the subtitles, and the different object information data for each of the subtitles.
23. Method of reproducing text subtitle files, characterized in that it comprises: selecting subtitles that have overlapping reproduction times for reproduction; generating a different set of composition information data, position information data, and object information data for each of the subtitles; and generating an image corresponding to the subtitles according to the different sets of composition information data, position information data, and object information data for each of the subtitles.
MXPA/A/2006/005152A 2003-11-10 2006-05-08 Information storage medium containing subtitles and processing apparatus therefor MXPA06005152A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2003-0079181 2003-11-10
KR1020040083517 2004-10-19

Publications (1)

Publication Number Publication Date
MXPA06005152A true MXPA06005152A (en) 2006-10-17

Family


Similar Documents

Publication Publication Date Title
CA2764722C (en) Information storage medium containing subtitles and processing apparatus therefor
US8195036B2 (en) Storage medium for storing text-based subtitle data including style information, and reproducing apparatus and method for reproducing text-based subtitle data including style information
MXPA06005152A (en) Information storage medium containing subtitles and processing apparatus therefor
KR100644719B1 (en) Method of reproducing subtitle files