WO2004047430A1 - Method and apparatus for composition of subtitles - Google Patents

Method and apparatus for composition of subtitles

Info

Publication number
WO2004047430A1
WO2004047430A1 (PCT/EP2003/012261)
Authority
WO
WIPO (PCT)
Prior art keywords
subtitles
data
subtitle
mixer
video
Prior art date
Application number
PCT/EP2003/012261
Other languages
French (fr)
Inventor
Dirk Adolph
Jobst Hörentrup
Ralf Ostermann
Hartmut Peters
Harald Schiller
Original Assignee
Thomson Licensing S.A.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to CA2506521A priority Critical patent/CA2506521C/en
Priority to BR122013022769A priority patent/BR122013022769B1/en
Priority to BRPI0316174A priority patent/BRPI0316174B1/en
Priority to EP03772298A priority patent/EP1576809B1/en
Priority to DE60314544T priority patent/DE60314544T2/en
Priority to AU2003279350A priority patent/AU2003279350B2/en
Priority to US10/535,106 priority patent/US7852411B2/en
Priority to JP2004552519A priority patent/JP4553248B2/en
Priority to MXPA05005133A priority patent/MXPA05005133A/en
Application filed by Thomson Licensing S.A. filed Critical Thomson Licensing S.A.
Publication of WO2004047430A1 publication Critical patent/WO2004047430A1/en
Priority to US12/800,418 priority patent/US8737810B2/en
Priority to US13/462,382 priority patent/US8363163B2/en
Priority to US13/462,364 priority patent/US8537282B2/en
Priority to US13/462,372 priority patent/US8432493B2/en
Priority to US14/166,940 priority patent/US9503678B2/en
Priority to US14/198,928 priority patent/US9595293B2/en
Priority to US14/222,126 priority patent/US9635306B2/en
Priority to US14/224,197 priority patent/US9749576B2/en
Priority to US15/658,949 priority patent/US20170324925A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • H04N7/087Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only
    • H04N7/088Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital
    • H04N7/0882Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of character code signals, e.g. for teletext
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278Subtitling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Circuits (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
  • Television Systems (AREA)
  • Adornments (AREA)
  • Manufacture, Treatment Of Glass Fibers (AREA)

Abstract

The gist of the invention is a subtitling format encompassing elements of enhanced syntax and semantics to provide improved animation capabilities. The disclosed elements improve subtitle performance without stressing the available subtitle bitrate. This will become essential for authoring high-end HDTV subtitle content in pre-recorded format, which can be broadcast or stored on high-capacity optical media, e.g. the Blu-ray Disc. The invention provides improved authoring possibilities for animating subtitles during content production. For subtitles that are separate from AV material, the method includes using one or more superimposed subtitle layers, and displaying only a selected part of the transferred subtitles at a time. Further, the colors of a selected part of the displayed subtitles may be modified, e.g. highlighted.

Description

Method and Apparatus for composition of subtitles
The invention relates to a method and an apparatus for composition of subtitles for audio/video presentations, which can be used e.g. for HDTV subtitles in pre-recorded formats like the Blu-ray Disc.
Background
The technique of subtitling Audio-Visual (AV) material has been in use from the first celluloid cinema movies up to the recent digital media. The main target of subtitling has been to support handicapped people or small ethnographic language groups. Subtitling therefore often aims at the presentation of text information, even when that information is encoded as graphic data such as pixel maps. Consequently, pre-produced AV material for broadcasting (Closed Caption, Teletext, DVB-Subtitle, etc.) and movie discs (DVD Sub-Picture, etc.) is primarily optimized for subtitles representing simple, static textual information. However, progress in PC software for the presentation and animation of textual information induces a corresponding demand for comparable possibilities and features in the digital subtitling techniques used for pre-recording and broadcasting. Using straightforward approaches without special precautions, these increased requirements would consume too large a portion of the limited overall bandwidth. The conflicting requirements for a 'full feature' subtitle format, covering everything from karaoke to genuine animations, are coding efficiency on the one hand and full control for the subtitle author on the other.
For today's state of the art in digitally subtitling AV material with separate subtitling information, two main approaches exist: subtitling can be based either on pixel data or on character data. In both cases, subtitling schemes comprise a general framework which, for instance, deals with the synchronization of subtitling elements along the AV time axis.
Character data based subtitling:
In the character-based subtitling approach, e.g. in the teletext system ETS 300 706 of European analog or digital TV, strings are described by sequences of letter codes, e.g. ASCII or UNICODE, which intrinsically allows for a very efficient encoding. But from character strings alone, subtitling cannot be converted into a graphical representation to be overlaid over video. For this, the intended character set, font and some font parameters, most notably the font size, must either be coded explicitly within the subtitling bitstream or an implicit assumption must be made about them within a suitably defined subtitling context. Also, any subtitling in this approach is confined to what can be expressed with the letters and symbols of the specific font(s) in use. The DVB Subtitling specification ETS 300 743, in its mode of "character objects", constitutes another state-of-the-art example of character-based subtitling.
Pixel data based subtitling: In the pixel-based subtitling approach, subtitling frames are conveyed directly in the form of graphical representations, by describing them as (typically rectangular) regions of pixel values on the AV screen. Whenever anything is meant to be visible in the subtitling plane superimposed onto the video, its pixel values must be encoded and provided in the subtitling bitstream, together with appropriate synchronization info; hence, for full feature animation of subtitles, all pixel changes must be transported. Obviously, while removing the limitations inherent in full feature animations of teletext, the pixel-based approach carries the penalty of a considerably increased bandwidth for the subtitling data. Examples of pixel-based subtitling schemes can be found in DVD's sub-picture concept, "DVD Specification for Read-Only disc", Part 3: Video, as well as in the "pixel object" concept of DVB Subtitling, specified in ETS 300 743.
Invention
The gist of the invention is a subtitling format encompassing elements of enhanced syntax and semantics to provide improved animation capabilities. The disclosed elements improve subtitle performance without stressing the available subtitle bitrate. This will become essential for authoring high-end HDTV subtitle content in pre-recorded format, which can be broadcast or pressed on high-capacity optical media, e.g. the Blu-ray Disc. The invention provides improved authoring possibilities for animating subtitles during content production.
Introduced by the disclosure are elements of syntax and semantics describing the color change for parts of the graphics to display. This can be used for highlight effects in applications such as karaoke, avoiding the repeated transfer of pixel data.
Other disclosed elements of syntax and semantics facilitate the cropping of parts of the subtitles before displaying them. By subsequently transferring cropping parameters for an object to display, a bit-saving animation of subtitles becomes available. Such cropping parameters can be used, for example, to generate text changes by wiping boxes, blinds, scrolling, wipes, checker boxes, etc.
Furthermore, the disclosed elements can be used to provide interactivity on textual and graphical information. In particular, the positioning and/or color settings of subtitles can be manipulated based upon user request.
Drawings
Exemplary embodiments of the invention are described with reference to the accompanying drawings and tables, which show:
Fig.1: segment_type values for enhanced PCS and RCS;
Fig.2: Enhanced page composition segment;
Fig.3: Enhanced region composition segment;
Fig.4: Example for the definition of a subtitle region and its location within a page;
Fig.5: Example for definition of a region sub-CLUT and region cropping;
Fig.6: Resulting display example;
Fig.7: Interactive usage of subtitles;
Fig.8: Video and graphics planes;
Fig.9: Video and graphics mixing and switching.
Exemplary embodiments
The invention can preferably be embodied based on the syntax and semantics of the DVB subtitle specification (DVB-ST). To provide improved capabilities for the manipulation of graphic subtitle elements, the semantics of DVB-ST's page composition segment (PCS) and region composition segment (RCS) are expanded.
DVB-ST uses page composition segments (PCS) to describe the positions of one or more rectangular regions on the display screen. The region composition segments (RCS) are used to define the size of any such rectangular area and identify the color-look-up-table (CLUT) used within.
The proposed invention keeps backward compatibility with DVB-ST by using different segment_types for the enhanced PCS and RCS elements, as listed in Fig.1, which shows the segment_type values according to DVB-ST together with additional values for enhanced PCS and enhanced RCS. It would also be possible to choose other values instead. Another approach for keeping backward compatibility would be to keep the existing segment_types and increase the version_number of the specification, e.g. by incrementing the subtitle_stream_id in the PES_data_field structure.
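As a rough illustration of the first, segment_type based approach, the following Python sketch shows how a decoder might dispatch on segment_type. The values 0x10 and 0x11 are the standard DVB-ST (ETS 300 743) values; the numeric values used here for the enhanced segments are hypothetical placeholders, since the actual values appear only in Fig.1.

```python
# Hypothetical dispatch sketch. 0x10/0x11 are standard DVB-ST
# (ETS 300 743) segment_type values; 0x48/0x49 are placeholder
# values for the enhanced segments, NOT the actual values of Fig.1.
PAGE_COMPOSITION_SEGMENT = 0x10
REGION_COMPOSITION_SEGMENT = 0x11
ENHANCED_PCS = 0x48  # placeholder
ENHANCED_RCS = 0x49  # placeholder

def handle_segment(segment_type: int) -> str:
    if segment_type == PAGE_COMPOSITION_SEGMENT:
        return "decode standard PCS"
    if segment_type == REGION_COMPOSITION_SEGMENT:
        return "decode standard RCS"
    if segment_type == ENHANCED_PCS:
        return "decode enhanced PCS (adds cropping and sub-CLUT info)"
    if segment_type == ENHANCED_RCS:
        return "decode enhanced RCS (adds sub_CLUT_id)"
    # A decoder that predates the enhanced types has no branches for
    # them and falls through here, simply skipping the segment; that
    # is what keeps the new format backward compatible.
    return "skip unknown segment"

print(handle_segment(ENHANCED_PCS))
```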
Fig.2 shows the data structure of an enhanced page composition segment (PCS), containing a region_cropping section and a region_sub_CLUT section. Fig.3 shows the data structure of an enhanced region composition segment (RCS), containing an identifier sub_CLUT_id for a sub-color-look-up-table. With respect to original DVB-ST, all structures shown are expanded. In the tables, the additional entries are lines 15-28 in Fig.2 and line 16 in Fig.3.
The enhanced PCS shown in Fig.2 carries optional information about region cropping and optional information about the region sub-CLUT for every region listed. The two values region_cropping and region_sub_CLUT indicate whether such optional information is available for the region currently being processed; cropping and sub-CLUT may therefore be defined separately for every region. While region_cropping is used as a flag, as indicated by "if region_cropping==0x01", the region_sub_CLUT value indicates how many sub-CLUT positions are described. This provides different alternatives within the stream: alternative sub-CLUT positions can be used to define different menu button positions on the display screen. Only one of them is active at a time, the first one by default, and the user can change the position to navigate through the different predefined positions, e.g. by pressing keys on a remote control.
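The following Python sketch models these per-region semantics: an optional cropping block gated by the region_cropping flag, and a list of alternative sub-CLUT positions whose length is given by region_sub_CLUT, with the first entry active by default. The field names follow the text above; the class layout is an illustrative assumption, not the exact bitstream syntax of Fig.2.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Cropping:
    rhc: int  # region_horizontal_cropping
    rvc: int  # region_vertical_cropping
    rcw: int  # region_cropping_width
    rch: int  # region_cropping_height

@dataclass
class SubClutPosition:
    scha: int  # sub_CLUT_horizontal_address
    scva: int  # sub_CLUT_vertical_address
    scw: int   # sub_CLUT_width
    sch: int   # sub_CLUT_height

@dataclass
class EnhancedPcsRegion:
    region_id: int
    cropping: Optional[Cropping] = None  # present iff region_cropping == 0x01
    sub_cluts: List[SubClutPosition] = field(default_factory=list)
    active_sub_clut: int = 0             # first position is the default

    def next_sub_clut(self) -> None:
        """Cycle through the predefined sub-CLUT positions, e.g. on a
        remote-control key press, to move a menu highlight."""
        if self.sub_cluts:
            self.active_sub_clut = (self.active_sub_clut + 1) % len(self.sub_cluts)

region = EnhancedPcsRegion(
    region_id=1,
    cropping=Cropping(rhc=0, rvc=0, rcw=320, rch=48),
    sub_cluts=[SubClutPosition(16, 0, 64, 48), SubClutPosition(96, 0, 64, 48)],
)
region.next_sub_clut()         # e.g. triggered by a remote-control key
print(region.active_sub_clut)  # 1
```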
The enhanced RCS shown in Fig.3 carries the sub_CLUT_id identifying the family of CLUTs that applies to this region. This is done to re-use CLUTs for different regions and different region sub_CLUTs as well.
The enhanced PCS and enhanced RCS elements allow subtitles to be manipulated independently of the encoding method, i.e. independently of whether they are encoded as character data or pixel data.
The enhanced PCS and RCS can be used to perform many different animation effects for subtitles, such as wiping boxes, blinds, scrolling, wipes, checker boxes, etc. The following figures show an application example for karaoke. Fig.4 shows the definition of a region R containing the lyrics of a song displayed for karaoke. The letters of the subtitle may be encoded either as pixel data or as character data. The region_vertical_address RVA and the region_horizontal_address RHA define the location of the subtitle within the frame, or page PG, to display.
Fig.5 depicts region cropping in the upper part, and the location of the region sub-CLUT in the lower part. Region cropping defines which part of the region is effectively displayed. This is achieved by four parameters RHC, RVC, RCW, RCH indicating the start coordinates and the size of the fragment to display: region_horizontal_cropping RHC specifies the horizontal address of the top left pixel of the cropping, region_vertical_cropping RVC specifies the vertical address of its top line, region_cropping_width RCW specifies its horizontal length, and region_cropping_height RCH specifies its vertical length, wherein the cropping is that part of the subtitles that is visible on the display.
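As a worked illustration (not the patent's decoder), cropping with these four parameters amounts to selecting a rectangular window from the region's pixel buffer. Re-sending only updated cropping parameters, rather than pixels, is what makes effects such as the karaoke reveal cheap in bitrate:

```python
def crop_region(pixels, rhc, rvc, rcw, rch):
    """Return the fragment of a region's pixel buffer selected by the
    cropping parameters: top-left corner (RHC, RVC), size RCW x RCH."""
    return [row[rhc:rhc + rcw] for row in pixels[rvc:rvc + rch]]

# A 4-line, 8-pixel region; only a 2x3 window starting at column 2,
# line 1 is effectively displayed.
region = [[(y, x) for x in range(8)] for y in range(4)]
fragment = crop_region(region, rhc=2, rvc=1, rcw=3, rch=2)
assert len(fragment) == 2 and len(fragment[0]) == 3
```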
The region sub-CLUT location shown in the lower part of Fig.5 defines which part of the region has to be displayed using a color-look-up-table (CLUT) different from the region CLUT. This is achieved by four parameters SCHA, SCVA, SCW, SCH indicating the start coordinates and the size of the sub-region used by the sub-CLUT. All coordinate parameters are to be understood relative to the region the sub-CLUT belongs to: sub_CLUT_horizontal_address SCHA specifies the horizontal address of the top left pixel of the sub-CLUT, sub_CLUT_vertical_address SCVA specifies the vertical address of its top line, sub_CLUT_width SCW specifies its horizontal length, and sub_CLUT_height SCH specifies its vertical length.
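A minimal sketch of these semantics, under the assumption that region pixels are stored as CLUT indices: pixels inside the sub-CLUT rectangle are resolved through a different look-up table, which is how a highlight can move across static text without retransmitting any pixel data.

```python
def apply_cluts(region_pixels, region_clut, sub_clut, scha, scva, scw, sch):
    """Map pixel codes to colors. Pixels inside the sub-CLUT rectangle
    (coordinates relative to the region) use sub_clut; all others use
    the region CLUT."""
    out = []
    for y, row in enumerate(region_pixels):
        out_row = []
        for x, code in enumerate(row):
            inside = scva <= y < scva + sch and scha <= x < scha + scw
            out_row.append((sub_clut if inside else region_clut)[code])
        out.append(out_row)
    return out

region = [[1, 1, 1, 1] for _ in range(2)]
region_clut = {1: "white"}
highlight_clut = {1: "yellow"}
print(apply_cluts(region, region_clut, highlight_clut, scha=1, scva=0, scw=2, sch=2))
# [['white', 'yellow', 'yellow', 'white'], ['white', 'yellow', 'yellow', 'white']]
```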
Combining all the parameters defined in the previous figures results in the displayed subtitle depicted in Fig.6. The subtitle is not depicted in whole on the display, but only its cropped part. Furthermore, the sub-CLUT is used to provide a highlight HT, so that the user knows what to sing at the moment.
As the enhanced PCS are sent within MPEG packetized elementary stream (PES) packets labeled by presentation time stamps (PTS), any effect can be synchronized to the AV content.
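A toy sketch of this synchronization, assuming a 90 kHz PTS clock as in MPEG systems: composition updates are queued by PTS and applied once the playback clock reaches each stamp.

```python
import heapq

def play(updates, clock_ticks):
    """Apply every queued composition update whose PTS has been reached.
    `updates` is a heap of (pts, description) pairs; 90 kHz clock assumed."""
    applied = []
    while updates and updates[0][0] <= clock_ticks:
        applied.append(heapq.heappop(updates))
    return applied

# Three cropping updates revealing a karaoke line, one second apart.
queue = [(90000 * 2, "crop RCW=200"), (90000 * 1, "crop RCW=100"),
         (90000 * 3, "crop RCW=300")]
heapq.heapify(queue)
print(play(queue, clock_ticks=90000 * 2))
# [(90000, 'crop RCW=100'), (180000, 'crop RCW=200')]
```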
Another idea of the invention is the superseding of subtitle animation parameters by the user, which offers a way to realize interactive subtitles. The enhanced PCS parameters are transferred as defaults, and the user may change them, e.g. via a remote control. Thus the user is able to move, crop or highlight the subtitle.
This can be an advantage for user-defined repositioning of subtitle text, so that the user can subjectively minimize the annoyance caused by the placement of the subtitle text on top of the motion video. Also, the color of the subtitles could be set according to the user's preferences. Fig.7 shows a block diagram for interactive subtitle modifications. The default parameters DD read from a disc D are superseded by superseding data SD generated upon user action UA and processed by a processor P. Another application for overriding subtitle animation parameters like position, cropping rectangle, CLUTs and sub-CLUTs is the realization of a very basic sort of interactive gaming. The subtitle may carry pixel data of an animated character. This character is subsequently moved on the display screen, driven by user interaction, programmatic control, or both.
The overriding of subtitle animation parameters can be implemented in at least two ways. The first option is that the overriding parameters SD replace the parameters DD sent in the bitstream. The second option is that the overriding parameters SD are used as an offset that is added to or subtracted from the subtitle animation parameters DD sent in the bitstream.
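Both options can be pictured with a small helper, hypothetical and not from the patent, that combines the bitstream defaults DD with the superseding data SD either by replacement or as a signed offset:

```python
def apply_override(dd, sd, mode="replace"):
    """Combine default parameters DD from the bitstream with superseding
    data SD from user action: 'replace' swaps values outright, 'offset'
    adds SD as a signed delta to the corresponding DD value."""
    if mode == "replace":
        return {**dd, **sd}
    if mode == "offset":
        return {key: value + sd.get(key, 0) for key, value in dd.items()}
    raise ValueError(f"unknown mode: {mode}")

dd = {"region_horizontal_address": 100, "region_vertical_address": 500}
# A negative offset moves the subtitle up; subtraction is just a negative delta.
print(apply_override(dd, {"region_vertical_address": -60}, mode="offset"))
# {'region_horizontal_address': 100, 'region_vertical_address': 440}
```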
The enhanced PCS and RCS provide many more animation capabilities than those explained here. A non-exhaustive list of examples includes wiping boxes, blinds, scrolling, wipes, and checker boxes.
Exemplary video and graphics planes are shown schematically in Fig.8. A background is provided by either an MPEG-2 video layer MVL or a still picture layer SPL. These are mutually exclusive, meaning that not both of them need to be held in a buffer at a time. The next two layers comprise a subtitle layer SL and an AV sync type graphics layer AVSGL. These two layers are interchangeable in this example, meaning that either the subtitle layer SL or the AV sync type graphics layer AVSGL may have priority over the other. The front layer is a non-AV sync graphics layer NAVSGL, containing graphics that need not be synchronized with the AV content, e.g. menus or other on-screen displays. The inventive method can preferably be used for the subtitle layer SL, the AV sync graphics layer AVSGL and/or the non-AV sync graphics layer NAVSGL.
Fig.9 shows the relevant components of an apparatus for video and graphics mixing and switching. Data comprising either still picture data or MPEG-2 video data, together with data for subtitles, data for animations and data for non-AV sync graphics such as menu buttons, are retrieved from a disc D. Additionally or alternatively, data for subtitles, animations and/or non-AV sync graphics can be received from a network NW, e.g. the Internet. A processing unit CPU processes the non-AV sync graphics data and sends the resulting data to a rendering device for non-AV sync graphics RNAVG.
The apparatus contains a still picture decoder SPDec and an MPEG-2 video decoder MVDec, but since only one of them is used at a time, a switch s1 can select which data shall be used for further processing. Moreover, two identical decoders AVSGDec1, AVSGDec2 are used for decoding subtitle and animation data. The outputs of these two decoders AVSGDec1, AVSGDec2 may be switched by independent switches s2, s3 either to a mixer MX, or, for preprocessing, to a mixer and scaler MXS, which outputs its resulting data to said mixer MX. These two units MX, MXS are used to superimpose their various input data, thus controlling the display order of the layers. The mixer MX has inputs for a front layer f2, a middle front layer mf, a middle back layer mb and a background layer b2. The front layer f2 may be unused if the corresponding switch s3 is in a position that connects the second AV sync graphics decoder AVSGDec2 to the mixer and scaler MXS. The unit MXS has inputs for a front layer f1, a middle layer m and a background layer b1. It superimposes these data correspondingly and sends the resulting picture data to the background input b2 of the mixer MX. Thus, these data represent e.g. a frame comprising up to three layers of picture and subtitles, which can be scaled and moved together within the final picture. The background input b1 of the mixer and scaler MXS is connected to the switch s1 mentioned above, so that the background can be generated from a still picture or from MPEG-2 video. The output of the first AV sync graphics decoder AVSGDec1 is connected to a second switch s2, which may switch it to the middle layer input m of the mixer and scaler MXS or to the middle back layer input mb of the mixer MX. The output of the second AV sync graphics decoder AVSGDec2 is connected to a third switch s3, which may switch it to the front layer input f1 of the mixer and scaler MXS or to the middle front layer input mf of the mixer MX.
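The two-stage composition (MXS first, then MX on top of its output) can be sketched as a back-to-front painter's merge. Layer contents and pixel coordinates here are purely illustrative:

```python
def superimpose(layers):
    """Back-to-front compositing: each layer maps pixel coordinates to
    content, a missing key meaning transparent, so later (more frontal)
    layers overwrite earlier ones wherever they are opaque."""
    out = {}
    for layer in layers:
        out.update(layer)
    return out

# Stage 1: the mixer and scaler MXS combines background b1, middle m
# and front f1, and feeds the result to input b2 of the mixer MX.
b1 = {(0, 0): "video", (0, 1): "video", (1, 0): "video"}
m = {(0, 0): "subtitle"}
f1 = {(0, 1): "animation"}
b2 = superimpose([b1, m, f1])

# Stage 2: the mixer MX stacks b2, middle back mb, middle front mf
# and front f2 (here only a menu pixel on the front layer).
final = superimpose([b2, {}, {}, {(1, 0): "menu"}])
print(final)
# {(0, 0): 'subtitle', (0, 1): 'animation', (1, 0): 'menu'}
```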
Depending on the positions of the second and third switches s2, s3, either the output of the first or of the second AV sync graphics decoder AVSGDec1, AVSGDec2 may have priority over the other, as described above. For having the data from the first decoder AVSGDec1 in the foreground, the second switch s2 may route the subtitle data to the middle back input mb of the mixer MX, while the third switch s3 routes the animation graphics data to the front input f1 of the mixer and scaler MXS, so that it ends up at the background input b2 of the mixer MX. Otherwise, for having the data from the second decoder AVSGDec2 in the foreground, the switches s2, s3 may route their outputs to the same unit, either the mixer and scaler MXS or the mixer MX, as shown in Fig.9.

Claims

1. Method for composition of subtitles for audio/video presentations, wherein subtitle information is separate from audio/video material, and subtitle information is transferred from a network or a storage medium, such as a disc, characterized in
- using one or more subtitle layers; and
- cropping parts of the subtitles of a layer or layers before displaying them, so that only a selected (RHC, RVC, RCH, RCW) part of the transferred subtitles is displayed at a time.
2. Method according to claim 1, wherein the colors of a specified (SCHA, SCVA, SCH, SCW) part of the subtitles may be modified.
3. Method according to claim 1 or 2, wherein subtitles may be interactively moved, cropped or highlighted, or the colors of subtitles be interactively modified by a user.
4. Method according to any of the previous claims, wherein the subtitles may contain graphics.
5. Method according to any of the previous claims, wherein the AV material and the subtitles comply with the DVB-ST standard.
6. Apparatus for composition of subtitles, the apparatus mixing and switching video and graphics data, the data being read from a storage medium or received from a network and comprising still picture data or MPEG video data, data for at least two layers of subtitles or animations, and optionally data for non-synchronized graphics, the apparatus comprising
- a mixer (MX) that may superimpose video data of a back layer, at least two middle layers and a front layer;
- a mixer and scaler (MXS) that may superimpose video data of a back layer, a middle layer and a front layer, the mixer and scaler (MXS) providing its output data to the mixer (MX);
- a video decoder (MVDec) and/or a still picture decoder (SPDec), wherein the output data of either the video decoder or the still picture decoder may be switched (s1) to the mixer and scaler (MXS);
- at least two simultaneously working decoders (AVSGDec1, AVSGDec2) for synchronized graphics or subtitles, wherein the output of each of the decoders may be switched (s2, s3) to either the mixer (MX) or the mixer and scaler (MXS), and wherein a decoder (AVSGDec1, AVSGDec2) may select a part (RHC, RVC, RCH, RCW) of its input data to be output for display;
- a renderer for the non-synchronized graphics, providing data to the mixer (MX).
7. Apparatus according to claim 6, wherein a decoder (AVSGDec1, AVSGDec2) may apply a different color-look-up table to a specified (SCHA, SCVA, SCH, SCW) part of a subtitle layer.
8. Apparatus according to claim 6 or 7, comprising a subtitle decoder (ST-DEC) that is capable of superseding default subtitle parameters (DD) with other subtitle parameters (SD) generated upon user action, for interactively modifying or highlighting subtitles.
9. Apparatus according to any of claims 6-8, wherein the data comply with the DVB-ST standard.
PCT/EP2003/012261 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles WO2004047430A1 (en)

Priority Applications (18)

Application Number Priority Date Filing Date Title
CA2506521A CA2506521C (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles
BR122013022769A BR122013022769B1 (en) 2002-11-15 2003-11-03 subtitle maker
BRPI0316174A BRPI0316174B1 (en) 2002-11-15 2003-11-03 method and apparatus for subtitling audio / video presentations and related optical storage media
EP03772298A EP1576809B1 (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles
DE60314544T DE60314544T2 (en) 2002-11-15 2003-11-03 METHOD AND DEVICE FOR PRODUCING SUBTITLES
AU2003279350A AU2003279350B2 (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles
US10/535,106 US7852411B2 (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles
JP2004552519A JP4553248B2 (en) 2002-11-15 2003-11-03 Method and apparatus for creating subtitles
MXPA05005133A MXPA05005133A (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles.
US12/800,418 US8737810B2 (en) 2002-11-15 2010-05-14 Method and apparatus for cropping of subtitle elements
US13/462,372 US8432493B2 (en) 2002-11-15 2012-05-02 Method and apparatus for composition of subtitles
US13/462,382 US8363163B2 (en) 2002-11-15 2012-05-02 Method and apparatus for composition of subtitles
US13/462,364 US8537282B2 (en) 2002-11-15 2012-05-02 Method and apparatus for composition of subtitles
US14/166,940 US9503678B2 (en) 2002-11-15 2014-01-29 Method and apparatus for composition of subtitles
US14/198,928 US9595293B2 (en) 2002-11-15 2014-03-06 Method and apparatus for composition of subtitles
US14/222,126 US9635306B2 (en) 2002-11-15 2014-03-21 Method and apparatus for composition of subtitles
US14/224,197 US9749576B2 (en) 2002-11-15 2014-03-25 Method and apparatus for composition of subtitles
US15/658,949 US20170324925A1 (en) 2002-11-15 2017-07-25 Method and apparatus for composition subtitles

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02025474.4 2002-11-15
EP02025474 2002-11-15

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US10/535,106 A-371-Of-International US7852411B2 (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles
US10535106 A-371-Of-International 2003-11-03
US12/800,418 Continuation-In-Part US8737810B2 (en) 2002-11-15 2010-05-14 Method and apparatus for cropping of subtitle elements

Publications (1)

Publication Number Publication Date
WO2004047430A1 true WO2004047430A1 (en) 2004-06-03

Family

ID=32319536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2003/012261 WO2004047430A1 (en) 2002-11-15 2003-11-03 Method and apparatus for composition of subtitles

Country Status (14)

Country Link
US (3) US7852411B2 (en)
EP (1) EP1576809B1 (en)
JP (10) JP4553248B2 (en)
KR (2) KR101034969B1 (en)
CN (1) CN100377582C (en)
AT (1) ATE365423T1 (en)
AU (1) AU2003279350B2 (en)
BR (2) BR122013022769B1 (en)
CA (1) CA2506521C (en)
DE (1) DE60314544T2 (en)
ES (1) ES2289339T3 (en)
MX (1) MXPA05005133A (en)
WO (1) WO2004047430A1 (en)
ZA (1) ZA200503868B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006018786A1 (en) * 2004-08-20 2006-02-23 Koninklijke Philips Electronics N.V. Method of storing and transferring image signals
EP1652184A1 (en) * 2003-07-24 2006-05-03 Lg Electronics Inc. Recording medium having a data structure for managing reproduction of text subtitle data recorded thereon and recording and reproducing methods and apparatuses
WO2006051433A1 (en) * 2004-11-09 2006-05-18 Nokia Corporation Auxiliary content handling over digital communication systems
US20070077031A1 (en) * 2004-03-26 2007-04-05 Yoo Jea Y Recording medium and method and apparatus for reproducing and recording text subtitle streams
CN1328905C (en) * 2004-06-29 2007-07-25 乐金电子(沈阳)有限公司 Device and its method for correcting caption errors of TV set
CN101616270A (en) * 2008-06-27 2009-12-30 新奥特(北京)视频技术有限公司 A kind of method for generating captions that uses filter
CN101616268A (en) * 2008-06-27 2009-12-30 新奥特(北京)视频技术有限公司 A kind of method of utilizing texture coordinate to generate shadow captions
CN102724417A (en) * 2011-05-09 2012-10-10 新奥特(北京)视频技术有限公司 Method and system for realizing caption special effect in louver mode
US9621862B2 (en) 2013-12-18 2017-04-11 Seiko Epson Corporation Projector and method of controlling projector

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8737810B2 (en) * 2002-11-15 2014-05-27 Thomson Licensing Method and apparatus for cropping of subtitle elements
MXPA05005133A (en) 2002-11-15 2005-07-22 Thomson Licensing Sa Method and apparatus for composition of subtitles.
KR100939711B1 (en) * 2002-12-12 2010-02-01 엘지전자 주식회사 Apparatus and method for reproducing a text based subtitle
ATE517413T1 (en) * 2003-04-09 2011-08-15 Lg Electronics Inc RECORDING MEDIUM HAVING A DATA STRUCTURE FOR MANAGING THE PLAYBACK OF TEXT CAPTION DATA AND METHOD AND APPARATUS FOR RECORDING AND REPLAYING
WO2004097824A1 (en) * 2003-04-29 2004-11-11 Lg Electronics Inc. Recording medium having a data structure for managing reproduction of graphic data and methods and apparatuses of recording and reproducing
JP2007518205A (en) * 2004-01-06 2007-07-05 エルジー エレクトロニクス インコーポレーテッド RECORDING MEDIUM, METHOD AND DEVICE FOR REPRODUCING / RECORDING TEXT / SUBTITLE STREAM
KR20050072255A (en) * 2004-01-06 2005-07-11 엘지전자 주식회사 Method for managing and reproducing a subtitle of high density optical disc
KR20050078907A (en) * 2004-02-03 2005-08-08 엘지전자 주식회사 Method for managing and reproducing a subtitle of high density optical disc
BRPI0418524A (en) * 2004-02-10 2007-05-15 Lg Electronics Inc physical recording medium, method and apparatus for recording and reproducing a data structure
BRPI0507596A (en) * 2004-02-10 2007-07-03 Lg Electronics Inc physical recording medium, method and apparatus for decoding a text subtitle stream
WO2005074400A2 (en) 2004-02-10 2005-08-18 Lg Electronics Inc. Recording medium and method and apparatus for decoding text subtitle streams
RU2377669C2 (en) * 2004-02-10 2009-12-27 ЭлДжи ЭЛЕКТРОНИКС ИНК. Recording medium with data structure for managing different data, and method and device for recording and playing back
EP1716566A1 (en) * 2004-02-10 2006-11-02 LG Electronic Inc. Recording medium having a data structure for managing font information for text subtitles and recording and reproducing methods and apparatuses
EP1716701A1 (en) * 2004-02-10 2006-11-02 LG Electronic Inc. Text subtitle decoder and method for decoding text subtitle streams
KR20070028323A (en) * 2004-02-10 2007-03-12 엘지전자 주식회사 Recording medium having a data structure for managing data streams associated with different languages and recording and reproducing methods and apparatuses
US20050196146A1 (en) * 2004-02-10 2005-09-08 Yoo Jea Y. Method for reproducing text subtitle and text subtitle decoding system
WO2005081643A2 (en) * 2004-02-26 2005-09-09 Lg Electronics Inc. Recording medium and method and apparatus for reproducing and recording text subtitle streams
EP1728251A1 (en) * 2004-03-17 2006-12-06 LG Electronics, Inc. Recording medium, method, and apparatus for reproducing text subtitle streams
EP1730739B1 (en) * 2004-03-26 2010-09-01 LG Electronics, Inc. Recording medium, method, and apparatus for reproducing text subtitle streams
IN266747B (en) * 2004-03-26 2015-05-29 Lg Electronics Inc
KR20060047266A (en) * 2004-04-26 2006-05-18 엘지전자 주식회사 Recording medium, method and apparatus for the data recorded on the recording medium
JP4724710B2 (en) * 2004-05-03 2011-07-13 エルジー エレクトロニクス インコーポレイティド RECORDING MEDIUM HAVING DATA STRUCTURE FOR REPRODUCING MANAGEMENT OF TEXT SUBTITLE DATA
US20080238938A1 (en) * 2005-08-29 2008-10-02 Eklund Don Effects for interactive graphic data in disc authoring
US8015200B2 (en) * 2005-12-24 2011-09-06 Phil Seiflein Multimedia platform synchronizer
CN101005609B (en) * 2006-01-21 2010-11-03 腾讯科技(深圳)有限公司 Method and system for forming interaction video frequency image
WO2014160533A1 (en) * 2013-03-16 2014-10-02 Juan Garcia Universal barrier system panels
CN100471237C (en) * 2007-11-19 2009-03-18 新奥特(北京)视频技术有限公司 A video and audio and image separation playing system
CN101616274B (en) * 2008-06-27 2013-09-18 新奥特(北京)视频技术有限公司 Device for generating shadow captions by texture coordinates
CN101616269B (en) * 2008-06-27 2012-11-28 新奥特(北京)视频技术有限公司 Method for generating shadow caption based on structure of characters
CN101635804B (en) * 2008-07-23 2011-07-06 晨星软件研发(深圳)有限公司 Subtitle window output method and related device applied to television system
US8803948B2 (en) * 2009-02-12 2014-08-12 Lg Electronics Inc. Broadcast receiver and 3D subtitle data processing method thereof
US9056249B2 (en) * 2009-04-01 2015-06-16 Activision Publishing, Inc. Device and method for a streaming video game
JP4985807B2 (en) 2009-04-15 2012-07-25 ソニー株式会社 Playback apparatus and playback method
KR20110018261A (en) * 2009-08-17 2011-02-23 삼성전자주식회사 Method and apparatus for processing text subtitle data
CN102082923A (en) * 2009-11-30 2011-06-01 新奥特(北京)视频技术有限公司 Subtitle replacing method and device adopting subtitle templates
TWI400943B (en) * 2010-03-01 2013-07-01 Matsushita Electric Tw Co Ltd A method and a system for adjusting the control of the border / subtitle color of the media screen
KR101460464B1 (en) 2010-10-08 2014-11-12 미쓰이 가가쿠 가부시키가이샤 Solar cell sealing material, and solar cell module
CN102739976B (en) * 2011-05-17 2017-06-13 新奥特(北京)视频技术有限公司 A kind of method and system of the realization of the dynamic two-dimensional caption of shade
JP2013032483A (en) 2011-06-28 2013-02-14 Nitto Denko Corp Optical double-sided adhesive sheet, optical member, touch panel, image display and delamination method
CN103945140B (en) * 2013-01-17 2017-11-28 联想(北京)有限公司 The generation method and system of video caption
CN103301881B (en) * 2013-06-06 2014-11-05 山东科技大学 Preparation method of crosslinked polystyrene-immobilized benzothiazole catalyst used for formose reaction
US20150255121A1 (en) * 2014-03-06 2015-09-10 Thomson Licensing Method and apparatus for composition of subtitles
EP3376772B1 (en) * 2015-11-12 2023-01-25 Panasonic Intellectual Property Corporation of America Display method, program and display device
US10230812B1 (en) * 2016-01-29 2019-03-12 Amazon Technologies, Inc. Dynamic allocation of subtitle packaging
KR101961750B1 (en) 2017-10-11 2019-03-25 (주)아이디어콘서트 System for editing caption data of single screen
CN108989876B (en) * 2018-07-27 2021-07-30 青岛海信传媒网络技术有限公司 Subtitle display method and device
US10489496B1 (en) 2018-09-04 2019-11-26 Rovi Guides, Inc. Systems and methods for advertising within a subtitle of a media asset

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08317301A (en) * 1995-05-22 1996-11-29 Hitachi Ltd Video output device
JPH10108129A (en) * 1996-09-26 1998-04-24 Kenwood Corp Video disk reproduction device
US20020063681A1 (en) * 2000-06-04 2002-05-30 Lan Hsin Ting Networked system for producing multimedia files and the method thereof

Family Cites Families (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3891792A (en) 1974-06-25 1975-06-24 Asahi Broadcasting Television character crawl display method and apparatus
JPS6016777A (en) * 1983-07-08 1985-01-28 Victor Co Of Japan Ltd Transmission system of information signal
JPS63175583A (en) 1987-01-14 1988-07-19 Nec Corp Plural input picture edition recording system
DE3702220A1 (en) 1987-01-26 1988-08-04 Pietzsch Ibp Gmbh METHOD AND DEVICE FOR DISPLAYING A TOTAL IMAGE ON A SCREEN OF A DISPLAY DEVICE
US4961153A (en) 1987-08-18 1990-10-02 Hewlett Packard Company Graphics frame buffer with strip Z buffering and programmable Z buffer location
JPS6480181A (en) * 1987-09-21 1989-03-27 Nippon Denki Home Electronics Teletext receiver
US4853784A (en) 1988-02-19 1989-08-01 The Grass Valley Group, Inc. Video switcher with independent processing of selected video signals
JPH0251489A (en) 1988-08-12 1990-02-21 Sumitomo Electric Ind Ltd Molecular ray crystal growing device
JPH0251502A (en) 1988-08-15 1990-02-21 Idemitsu Petrochem Co Ltd Production of petroleum resin
US5260695A (en) 1990-03-14 1993-11-09 Hewlett-Packard Company Color map image fader for graphics window subsystem
JPH0484708A (en) 1990-07-27 1992-03-18 Mitsumi Electric Co Ltd Coordinate-position detecting method
US5214512A (en) 1991-02-11 1993-05-25 Ampex Systems Corporation Keyed, true-transparency image information combine
US5351067A (en) 1991-07-22 1994-09-27 International Business Machines Corporation Multi-source image real time mixing and anti-aliasing
JPH0537873A (en) 1991-07-31 1993-02-12 Sanyo Electric Co Ltd Picture division display system
JP2512250B2 (en) 1991-09-13 1996-07-03 松下電器産業株式会社 Video display workstation
US5530797A (en) 1992-04-09 1996-06-25 Matsushita Electric Industrial Co., Ltd. Workstation for simultaneously displaying overlapped windows using a priority control register
EP0790739B1 (en) 1993-09-16 2001-03-14 Kabushiki Kaisha Toshiba Digital video signal
JPH07203396A (en) 1993-12-28 1995-08-04 Sony Corp Subtitle data decoding device
JPH07226920A (en) * 1994-02-10 1995-08-22 Matsushita Electric Ind Co Ltd Recorded accompaniment equipment
JPH07250279A (en) * 1994-03-08 1995-09-26 Sony Corp Subtitle data decoding device
CA2168641C (en) 1995-02-03 2000-03-28 Tetsuya Kitamura Image information encoding/decoding system
JPH08234775A (en) * 1995-02-24 1996-09-13 Victor Co Of Japan Ltd Music reproducing device
US5930450A (en) 1995-02-28 1999-07-27 Kabushiki Kaisha Toshiba Recording medium, apparatus and method of recording data on the same, and apparatus and method of reproducing data from the recording medium
JPH08275205A (en) * 1995-04-03 1996-10-18 Sony Corp Method and device for data coding/decoding and coded data recording medium
AU698969B2 (en) 1995-04-14 1998-11-12 Kabushiki Kaisha Toshiba Recording medium, device and method for recording data on the medium, and device and method for reproducing data from the medium
JP3326670B2 (en) * 1995-08-02 2002-09-24 ソニー株式会社 Data encoding / decoding method and apparatus, and encoded data recording medium
JPH0951489A (en) * 1995-08-04 1997-02-18 Sony Corp Data coding/decoding method and device
JP3446473B2 (en) * 1996-04-12 2003-09-16 ソニー株式会社 Character information transmission system and transmission method, character information transmission and reception device, and recording medium
JP3962114B2 (en) * 1996-09-17 2007-08-22 株式会社エクシング Karaoke equipment
JP3966571B2 (en) 1997-04-02 2007-08-29 エルエスアイ ロジック コーポレーション High speed reproduction system and method for sub-picture unit in digital video disc
JPH10304308A (en) * 1997-04-23 1998-11-13 Sony Corp Sub picture data generating method and device, and computer readable recording medium for recording sub picture data generating program
JP2000023061A (en) 1998-07-02 2000-01-21 Sony Corp Television receiver
US6415437B1 (en) 1998-07-23 2002-07-02 Diva Systems Corporation Method and apparatus for combining video sequences with an interactive program guide
US6570579B1 (en) 1998-11-09 2003-05-27 Broadcom Corporation Graphics display system
US6741794B1 (en) 1999-01-29 2004-05-25 Sony Corporation System and method for flexibly blending multiple image planes in a video device
US7623140B1 (en) 1999-03-05 2009-11-24 Zoran Corporation Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics
US6466220B1 (en) 1999-03-05 2002-10-15 Teralogic, Inc. Graphics engine architecture
JP2001078149A (en) * 1999-09-08 2001-03-23 Toshiba Corp Device and method for reproducing media
DE19950490A1 (en) 1999-10-20 2001-04-26 Thomson Brandt Gmbh Method for coding an image sequence, and partial image data unit for use in an electronic device, and data carrier
US6493036B1 (en) 1999-11-17 2002-12-10 Teralogic, Inc. System and method for scaling real time video
GB2356999B (en) * 1999-12-02 2004-05-05 Sony Uk Ltd Video signal processing
DE10001369A1 (en) * 2000-01-14 2001-07-19 Infineon Technologies Ag Method and circuit arrangement for graphic display, in particular in a digital television set
WO2001054400A1 (en) * 2000-01-24 2001-07-26 Matsushita Electric Industrial Co., Ltd. Image synthesizing device, recorded medium, and program
JP4541482B2 (en) 2000-02-29 2010-09-08 Canon Inc Image processing apparatus and image processing method
TW522379B (en) * 2000-05-26 2003-03-01 Cyberlink Corp DVD playback system and playback method for displaying two types of captions
JP2002016885A (en) * 2000-06-30 2002-01-18 Pioneer Electronic Corp Apparatus and method for reproducing picture
JP4978760B2 (en) 2000-08-23 2012-07-18 Sony Corp Image processing method and image processing apparatus
US20020075403A1 (en) 2000-09-01 2002-06-20 Barone Samuel T. System and method for displaying closed captions in an interactive TV environment
JP2002091409A (en) * 2000-09-19 2002-03-27 Toshiba Corp Reproducing unit provided with subsidiary video processing function
JP2002216585A (en) 2001-01-18 2002-08-02 Minebea Co Ltd Touch panel for display device
US7050109B2 (en) 2001-03-02 2006-05-23 General Instrument Corporation Methods and apparatus for the provision of user selected advanced closed captions
DE10126790A1 (en) 2001-06-01 2003-01-02 Micronas Munich Gmbh Method and device for displaying at least two images in an overall image
EP1454226A4 (en) 2001-10-23 2004-12-29 Samsung Electronics Co Ltd Information storage medium including markup document and av data, recording method, reproducing method, and reproducing apparatus therefor
US7676142B1 (en) 2002-06-07 2010-03-09 Corel Inc. Systems and methods for multimedia time stretching
JP3794355B2 (en) 2002-07-25 2006-07-05 Casio Computer Co Ltd Reproduction control device and reproduction control processing program
MXPA05005133A (en) 2002-11-15 2005-07-22 Thomson Licensing Sa Method and apparatus for composition of subtitles.
US8737810B2 (en) * 2002-11-15 2014-05-27 Thomson Licensing Method and apparatus for cropping of subtitle elements
JP5037873B2 (en) 2006-08-07 2012-10-03 Ricoh Co Ltd Positioning control device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08317301A (en) * 1995-05-22 1996-11-29 Hitachi Ltd Video output device
JPH10108129A (en) * 1996-09-26 1998-04-24 Kenwood Corp Video disk reproduction device
US20020063681A1 (en) * 2000-06-04 2002-05-30 Lan Hsin Ting Networked system for producing multimedia files and the method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 1997, no. 03, 31 March 1997 (1997-03-31) *
PATENT ABSTRACTS OF JAPAN vol. 1998, no. 09, 31 July 1998 (1998-07-31) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1652184A1 (en) * 2003-07-24 2006-05-03 Lg Electronics Inc. Recording medium having a data structure for managing reproduction of text subtitle data recorded thereon and recording and reproducing methods and apparatuses
EP1652184A4 (en) * 2003-07-24 2007-05-23 Lg Electronics Inc Recording medium having a data structure for managing reproduction of text subtitle data recorded thereon and recording and reproducing methods and apparatuses
US20070077031A1 (en) * 2004-03-26 2007-04-05 Yoo Jea Y Recording medium and method and apparatus for reproducing and recording text subtitle streams
US8326118B2 (en) * 2004-03-26 2012-12-04 Lg Electronics, Inc. Recording medium storing a text subtitle stream including a style segment and a plurality of presentation segments, method and apparatus for reproducing a text subtitle stream including a style segment and a plurality of presentation segments
US8554053B2 (en) 2004-03-26 2013-10-08 Lg Electronics, Inc. Recording medium storing a text subtitle stream including a style segment and a plurality of presentation segments, method and apparatus for reproducing a text subtitle stream including a style segment and a plurality of presentation segments
CN1328905C (en) * 2004-06-29 2007-07-25 LG Electronics (Shenyang) Co Ltd Device and method for correcting caption errors of a TV set
WO2006018786A1 (en) * 2004-08-20 2006-02-23 Koninklijke Philips Electronics N.V. Method of storing and transferring image signals
WO2006051433A1 (en) * 2004-11-09 2006-05-18 Nokia Corporation Auxiliary content handling over digital communication systems
CN101616270A (en) * 2008-06-27 2009-12-30 China Digital Video (Beijing) Ltd Method for generating captions using a filter
CN101616268A (en) * 2008-06-27 2009-12-30 China Digital Video (Beijing) Ltd Method of using texture coordinates to generate shadow captions
CN102724417A (en) * 2011-05-09 2012-10-10 China Digital Video (Beijing) Ltd Method and system for realizing a caption special effect in louver mode
US9621862B2 (en) 2013-12-18 2017-04-11 Seiko Epson Corporation Projector and method of controlling projector

Also Published As

Publication number Publication date
EP1576809B1 (en) 2007-06-20
JP2011097592A (en) 2011-05-12
JP2006506868A (en) 2006-02-23
US20140328572A1 (en) 2014-11-06
US20160323538A9 (en) 2016-11-03
KR101034969B1 (en) 2011-05-17
JP2011097593A (en) 2011-05-12
US7852411B2 (en) 2010-12-14
US9462221B2 (en) 2016-10-04
JP5263842B2 (en) 2013-08-14
CA2506521A1 (en) 2004-06-03
JP2011015409A (en) 2011-01-20
BR0316174A (en) 2005-09-27
DE60314544T2 (en) 2008-02-21
CN100377582C (en) 2008-03-26
CN1711756A (en) 2005-12-21
AU2003279350A1 (en) 2004-06-15
JP2011142649A (en) 2011-07-21
ZA200503868B (en) 2006-07-26
AU2003279350B2 (en) 2008-08-07
US20060013563A1 (en) 2006-01-19
ES2289339T3 (en) 2008-02-01
JP4553248B2 (en) 2010-09-29
JP2013146099A (en) 2013-07-25
ATE365423T1 (en) 2007-07-15
MXPA05005133A (en) 2005-07-22
EP1576809A1 (en) 2005-09-21
BR122013022769B1 (en) 2016-07-19
JP4707124B2 (en) 2011-06-22
US20140327824A1 (en) 2014-11-06
KR20050089005A (en) 2005-09-07
JP5517318B2 (en) 2014-06-11
JP5201698B2 (en) 2013-06-05
US9503678B2 (en) 2016-11-22
CA2506521C (en) 2010-04-27
JP4587338B1 (en) 2010-11-24
KR100989503B1 (en) 2010-10-22
JP2010252371A (en) 2010-11-04
JP5108125B2 (en) 2012-12-26
JP2011139496A (en) 2011-07-14
JP2011172221A (en) 2011-09-01
JP4707125B2 (en) 2011-06-22
KR20100066591A (en) 2010-06-17
JP2011091811A (en) 2011-05-06
BRPI0316174B1 (en) 2016-12-06
DE60314544D1 (en) 2007-08-02
JP5110715B2 (en) 2012-12-26
JP5108126B2 (en) 2012-12-26

Similar Documents

Publication Publication Date Title
US7852411B2 (en) Method and apparatus for composition of subtitles
US9595293B2 (en) Method and apparatus for composition of subtitles
US20150255121A1 (en) Method and apparatus for composition of subtitles

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 1200500808

Country of ref document: VN

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1591/DELNP/2005

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 1-2005-500774

Country of ref document: PH

WWE Wipo information: entry into national phase

Ref document number: 2003772298

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2506521

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2005/03868

Country of ref document: ZA

Ref document number: PA/a/2005/005133

Country of ref document: MX

Ref document number: 200503868

Country of ref document: ZA

Ref document number: 20038A32599

Country of ref document: CN

Ref document number: 1020057008660

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2006013563

Country of ref document: US

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2003279350

Country of ref document: AU

Ref document number: 10535106

Country of ref document: US

Ref document number: 2004552519

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 1020057008660

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003772298

Country of ref document: EP

ENP Entry into the national phase

Ref document number: PI0316174

Country of ref document: BR

WWP Wipo information: published in national office

Ref document number: 10535106

Country of ref document: US

WWG Wipo information: grant in national office

Ref document number: 2003772298

Country of ref document: EP