WO2006092993A1 - Subtitle Display Device (字幕表示装置) - Google Patents
Subtitle Display Device
- Publication number
- WO2006092993A1 (PCT/JP2006/303132)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- display
- subtitle
- unit
- video
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/162—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
- H04N7/163—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing by receiver means only
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4314—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4782—Web browsing, e.g. WebTV
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4858—End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Definitions
- the present invention relates to a caption display device that displays video together with subtitles and character super (superimposed text); more specifically, to a caption display device whose display settings, such as the allocation of the subtitle/character super display area, can be changed with a consistent operation scheme regardless of the type of terminal.
- the present invention also relates to a caption display device in which settings can be changed and subtitles remain viewable even while a menu or dialog is displayed.
- in digital television broadcasting, the closed caption method has been adopted as a standard, in addition to the open caption method common in conventional analog television broadcasting, in which subtitles and character super are superimposed on the video at the transmitting station.
- in the closed caption method, data related to subtitles and character super (hereinafter referred to as subtitle/character super data) is transmitted from the transmitting station independently of the video, and the receiver superimposes it on the video for presentation to the user.
- as a result, the receiver can control the display of the subtitle/character super; for example, by attaching a language identifier to the subtitle/character super, the user can browse subtitles in the language he or she wants to see.
- below, we describe the closed caption method for displaying subtitles and character super.
- the subtitle/character super data includes character data representing the character string itself (the set of characters to be displayed) and additional information. By using the additional information, the receiver can go beyond simply displaying a character string on the screen and improve the expressive power of the subtitle/character super; that is, it can present displays and emphasis that are easy for the user to follow.
- the additional information of the subtitle/character super includes the following types of data:
- the subtitle/character super display timing data is data representing the time at which the subtitle/character super is to be displayed.
- the receiver uses the subtitle/character super display timing data to synchronize the subtitles with the TV program.
- the character size data is data that specifies the size at which a character string is displayed as subtitle/character super.
- the color data is data that designates the color of the character string itself displayed as subtitle/character super, and its background color.
- the repetition data is data that specifies the number of repetitions of a character string, so that the amount of subtitle/character super data can be reduced when the same character string is displayed repeatedly.
- the receiver built-in sound playback data is data that designates, by an identifier, sound data stored in the receiver in advance, to be played back in time with the display of a character or character string.
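As a sketch, the caption data and the kinds of additional information listed above might be modeled as follows. All field names are illustrative assumptions; the specification only names the categories of data, not a concrete structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptionAdditionalInfo:
    # Field names are illustrative; the spec only names the data kinds.
    display_timing: int            # presentation time of the subtitle/character super
    char_size: str                 # character size data, e.g. "normal" or "small"
    fg_color: str                  # color of the character string itself
    bg_color: str                  # background color
    repeat_count: int = 1          # repetitions of the same character string
    rom_sound_id: Optional[int] = None  # identifier of receiver built-in sound data

@dataclass
class CaptionData:
    text: str                      # the character string to display
    info: CaptionAdditionalInfo
```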
- a receiver that is assumed to remain in one place during use is called a fixed receiver.
- the feature of the fixed receiver is that the display screen of the receiver is generally more than a dozen inches.
- there are also terminals that add a digital broadcast reception function to devices assumed to be carried and moved by users, such as mobile phones, PDAs (Personal Digital Assistants), and digital cameras.
- such a terminal is called a portable receiver.
- the feature of the portable receiver is that the display screen of the receiver is a few inches or less (in many cases 3 inches or less).
- in a fixed receiver, the video data and subtitle/character super data transmitted from the transmitting station are generally converted into video and subtitle/character super images, respectively.
- the video and the subtitle/character super image are then superimposed, combined, and displayed on the display screen.
- portable receivers that receive digital broadcasts, in contrast, generally convert the video data and subtitle/character super data transmitted from the transmitting station into video and subtitle/character super images, respectively, and display the subtitle/character super image in an area of the display screen separate from the video.
- the difference between the fixed receiver and the portable receiver in the display method of the subtitle/character super is mainly due to the difference in the size of their display screens.
- in a fixed receiver, the display screen is as large as a dozen or more inches, so even if the video, subtitles, and character super are superimposed and displayed together, the user can recognize the subtitle/character super separately from the video. Displaying the video as large as possible also increases its expressive power and impact on the user. With fixed receivers, it is therefore preferable to superimpose the video and the subtitle/character super and display them together over the entire display area of the screen.
- since the display screen of a portable receiver is as small as a few inches or less, if the video and the subtitle/character super are superimposed and combined, the user may be unable to recognize the subtitle/character super, or may find it hard to read. Accordingly, it is preferable to reduce the display size of the video and display the video and the subtitle/character super in separate areas.
- when the video is reduced, empty display areas appear above and below it on the display screen. By displaying the subtitle/character super in these empty areas, a dedicated subtitle/character super display area can be secured without sacrificing the size of the video display area.
- digital broadcasting also transmits data broadcasting as content from the transmitting station to the receiver, in addition to video, audio, program information, and subtitle/character super.
- the content of a data broadcast is transmitted, for example, as a BML document expressed in BML (Broadcast Markup Language), or as a still image or video.
- the content is presented to the user by displaying it on the same screen as the video and subtitles.
- in a portable receiver, the data broadcast is preferably displayed in an area independent of the video, for the same reason as the subtitle/character super.
- when a caption display device such as a portable receiver displays video, a data broadcast, and subtitle/character super on the display screen at the same time, the video display area, the data broadcast display area, and the subtitle/character super display area are preferably separate regions.
- FIG. 13 is a diagram showing an example of the display layout of a data broadcast and subtitle/character super on a portable receiver.
- FIG. 13(a) shows the layout when video and a data broadcast are displayed.
- FIG. 13(b) shows the layout when video, subtitle/character super, and a data broadcast are displayed.
- since a portable receiver generally has a vertically long display screen, it is common to place the data broadcast display area below the video display area.
- as shown in FIG. 13(b), when the portable receiver also displays subtitle/character super, the data broadcast display area is shared with the subtitle/character super display area.
- that is, the display area for the data broadcast is divided into a subtitle/character super area and a data broadcast area. It is preferable that the user can set the allocation ratio between the subtitle/character super display area and the data broadcast display area according to his or her preference.
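The specification does not prescribe how the user-set allocation ratio is applied; as an illustrative sketch, the shared lower area could be split into pixel heights like this (function and parameter names are assumptions):

```python
def split_area(total_height_px: int, caption_ratio: float):
    """Split the shared lower area between the subtitle/character super
    area and the data broadcast area according to a user-set ratio
    (0.0 .. 1.0 = fraction given to the caption area)."""
    if not 0.0 <= caption_ratio <= 1.0:
        raise ValueError("ratio must be between 0 and 1")
    caption_h = round(total_height_px * caption_ratio)
    # remaining pixels go to the data broadcast area
    return caption_h, total_height_px - caption_h
```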
- another difference between the fixed receiver and the portable receiver concerns the definition (resolution) of the display screen.
- for a fixed receiver, a standard model for the resolution and aspect ratio of the display screen can be provided.
- for portable receivers, however, the aspect ratio and resolution of the display screen differ greatly depending on the function, application, and shape of the underlying terminal. For this reason, it is difficult to provide a standard display model for portable receivers; if a single standard display model were provided, a display model optimized for each type of terminal could not be offered. Therefore, in conventional portable receivers, no standard display model is specified, and the display method for video, subtitles, and character super is left to the implementation of each terminal. Below, this display method, left to each terminal's implementation, is referred to as conventional display method 1.
- a display method that displays subtitle/character super using a WWW browser has also been disclosed (for example, Patent Document 1).
- teletext data is converted into HTML (Hyper Text Markup Language) data, and the teletext is displayed on the display screen using the WWW browser.
- WWW browsers have a function that optimizes the content display layout for the display screen resolution and display area size.
- displaying subtitle/character super with a WWW browser can therefore take advantage of this layout optimization function, which is effective for terminals with diverse screens. Below, this method of displaying subtitle/character super using a WWW browser is referred to as conventional display method 2.
- Patent Document 1: Japanese Patent Laid-Open No. 11-18060
- in conventional display method 1, that is, the display method entrusted to the implementation of each terminal, the operation scheme for screen display, such as setting the allocation between the subtitle/character super display area and the data broadcast display area, was also left to each terminal's implementation. Therefore, a caption display device using conventional display method 1 had the problem that, to avoid confusing users, a terminal-specific operation scheme had to be provided and maintained consistently for screen display on each terminal.
- in conventional display method 2, only the content of the data broadcast could be viewed on the WWW browser; additional information of the subtitle/character super, such as display timing data, character size data, color data, repetition data, and receiver built-in sound playback data, could not be reflected in the display on the WWW browser.
- in other words, a caption display device using conventional display method 2 had the problem of poor expressive power.
- furthermore, when the caption display device displays a menu, or a dialog panel for presenting information about the television function or a warning, the subtitle/character super display area is hidden by the menu or dialog display (see FIG. 14). For this reason, caption display devices using conventional display method 1 or display method 2 had the problem that the user could not view the subtitle/character super while a menu or dialog was displayed.
- the object of the present invention is therefore to provide a caption display device in which display settings, such as the allocation of the subtitle/character super display area, can be changed with a consistent operation scheme regardless of the type of terminal, and in which subtitles can be viewed even while a menu or dialog is displayed.
- the present invention is directed to a caption display device that acquires, as content data, stream data including at least subtitle/character super stream data and section data, and displays the acquired content data on a screen.
- the caption display device of the present invention includes a stream analysis unit, a document data conversion unit, a section analysis unit, and a display data generation unit.
- the stream analysis unit analyzes the subtitle/character super stream data included in the stream data and outputs the subtitle/character super data to be displayed.
- the section analysis unit analyzes the section data included in the stream data and converts it into the first document data.
- the document data conversion unit converts the subtitle/character super data output from the stream analysis unit into second document data in the same format as the first document data, and outputs it together with layout data that specifies the display area for the second document data.
- the display data generation unit generates display data for the subtitle/character super based on the first document data output from the section analysis unit, and on the second document data and layout data output from the document data conversion unit.
- the stream analysis unit further outputs subtitle time information indicating the presentation time of the subtitle/character super data.
- preferably, the caption display device includes a display control unit that requests the display data generation unit to update the display data at the timing indicated by the subtitle time information. When the display data generation unit receives an update request from the display control unit, it updates the display data for the subtitle/character super.
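As a minimal sketch of this timing behavior, a display control unit could queue captions by their subtitle time information and issue update requests when the clock reaches each presentation time. The class and method names below are illustrative, not taken from the specification:

```python
import heapq

class DisplayController:
    """Sketch of the display control unit: queues captions by their
    subtitle time information and requests display updates when due."""

    def __init__(self, display_data_generator):
        self.generator = display_data_generator
        self.pending = []  # min-heap of (presentation_time, caption_data)

    def schedule(self, presentation_time, caption_data):
        heapq.heappush(self.pending, (presentation_time, caption_data))

    def tick(self, current_time):
        # Issue an update request for every caption whose time has arrived.
        while self.pending and self.pending[0][0] <= current_time:
            _, caption = heapq.heappop(self.pending)
            self.generator.update(caption)
```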
- the caption display device further includes a presentation data conversion unit, a video data analysis unit, a display switching unit, and a display data synthesis unit.
- the presentation data conversion unit converts the subtitle/character super data output from the stream analysis unit into image data and outputs it as subtitle/character super image data.
- the video data analysis unit analyzes the video stream data included in the stream data and outputs it as video data.
- the display switching unit determines whether or not to superimpose the subtitle/character super image data on the video data.
- according to the determination of the display switching unit, the display data synthesis unit outputs either the video data, or composite video display data obtained by superimposing the subtitle/character super image data on the video data.
- the video output unit displays on the screen the video data or composite video display data output from the display data synthesis unit, together with the display data for the subtitle/character super generated by the display data generation unit.
- when the display switching unit determines not to superimpose the subtitle/character super image data on the video data, the display data synthesis unit outputs the video data as-is.
- when the display switching unit determines to superimpose the subtitle/character super image data on the video data, the display data synthesis unit superimposes and combines the subtitle/character super image data with the video data, and outputs the result as composite video display data.
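The switching and synthesis behavior above can be sketched as a single decision: either pass the video frame through, or overlay the caption image onto it. Frames are modeled here as flat lists of pixels, which is an illustrative simplification:

```python
def compose_output(video_frame, caption_image, superimpose: bool):
    """Sketch of the display switching / synthesis decision: either pass
    the video through unchanged, or overlay the caption image onto it.
    Frames are lists of (r, g, b) pixels; None marks a transparent
    caption pixel."""
    if not superimpose:
        return video_frame
    out = list(video_frame)
    for i, px in enumerate(caption_image):
        if px is not None:       # non-transparent caption pixel wins
            out[i] = px
    return out
```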
- the display data generation unit outputs, in addition to the display data for the subtitle/character super, a bitmap image representing the area in which the second document data is displayed, as mask data.
- the caption display device further includes a video data analysis unit, a display switching unit, a display data synthesis unit, and a video output unit.
- the video data analysis unit analyzes the video stream data included in the stream data and outputs it as video data.
- the display switching unit determines whether or not to superimpose the display data for the subtitle/character super on the video data.
- according to the determination of the display switching unit, the display data synthesis unit outputs either the video data, or composite video display data obtained by superimposing the display data for the subtitle/character super on the video data using the bitmap image as a mask.
- the video output unit displays the video data or composite video display data output by the display data synthesis unit, together with the display data for the subtitle/character super generated by the display data generation unit.
- when the display switching unit determines not to superimpose the display data for the subtitle/character super on the video data, the display data synthesis unit outputs the video data and the display data for the subtitle/character super as they are.
- when the display switching unit determines to superimpose the display data for the subtitle/character super on the video data, the display data synthesis unit outputs composite video display data in which the display data for the subtitle/character super is combined with the video data.
- the caption display device may further include a data receiving unit that receives content data whose subtitle/character super data includes document data.
- in that case, the document data conversion unit extracts the document data from the received subtitle/character super data and outputs it to the display data generation unit.
- the present invention is also directed to a caption display method for realizing the above-described screen display, a program for executing the caption display method, a storage medium storing the program, and an integrated circuit.
- the subtitle display method of the present invention comprises: a stream analysis step of analyzing the subtitle/character super stream data included in the stream data and outputting the subtitle/character super data to be displayed; a section analysis step of analyzing the section data included in the stream data and converting it into first document data; a document data conversion step of converting the subtitle/character super data output in the stream analysis step into second document data in the same format as the first document data and outputting it together with layout data that specifies the display area of the second document data; and a display data generation step of generating display data for the subtitle/character super based on the first document data output in the section analysis step, and on the second document data and layout data output in the document data conversion step.
- according to the present invention, the document data conversion unit converts the 8-unit code characters representing the subtitle/character super text, input via the stream analysis unit, into BML document data, and outputs it to the display data generation unit together with layout data designating the frame scheme.
- the display data generation unit displays the BML document data output from the document data conversion unit and the data broadcast content output from the section analysis unit using the HTML/BML interpretation and display functions of a WWW browser, and generates display data for the subtitle/character super from the BML document data. As a result, the caption display device can realize subtitle/character super display using the WWW browser.
- accordingly, the allocation of the data broadcast display area and the subtitle/character super display area can be set with the same UI as the frame area allocation settings of the WWW browser.
- in other words, the subtitle display device can realize the same operation scheme as the WWW browser for screen display.
- furthermore, the subtitle display device can use the additional information included in the subtitle/character super, such as display timing data, character size data, color data, repetition data, and receiver built-in sound playback data, and can thus display subtitle/character super with excellent expressiveness.
- the display switching unit receives flag data representing the display/non-display state of menus and the like from the UI display control unit; while a menu is displayed, it instructs the display data synthesis unit to combine the subtitle/character super with the video data, and while no menu is displayed, it instructs it not to combine the subtitle/character super with the video data.
- thus, even when the subtitle/character super display area of the caption display device is hidden by a menu, the subtitle/character super can be displayed in the video display area. In other words, the user can view the subtitle/character super while a menu or dialog is displayed.
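The menu-flag behavior can be sketched as follows: the switching unit tracks the flag from the UI display control unit and decides whether the caption should be superimposed on the video (names are illustrative, not from the specification):

```python
class DisplaySwitchingUnit:
    """Sketch of the menu-flag behavior: while a menu/dialog is shown
    (hiding the caption area), the caption is superimposed on the video
    so it stays visible; otherwise it is rendered in its own area."""

    def __init__(self):
        self.menu_visible = False

    def on_menu_flag(self, visible: bool):
        # Flag data received from the UI display control unit.
        self.menu_visible = visible

    def should_superimpose(self) -> bool:
        return self.menu_visible
```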
- FIG. 1 is a block diagram showing an example of a configuration of a caption display device 101 according to a first embodiment of the present invention.
- FIG. 2 is a diagram showing an example of layout data output by a document data conversion unit 105.
- FIG. 3 is a diagram showing an example of 8-unit code characters input as subtitle/character super data.
- FIG. 4 is a diagram showing an example of BML document data converted from 8-unit code characters.
- FIG. 5 is a diagram showing an example of BML document data including a playRomSound() function.
- FIG. 6 is a diagram showing an example of a display layout realized by the video output unit 113.
- FIG. 7 is a block diagram showing an example of the configuration of a caption display device 201 according to the second embodiment of the present invention.
- FIG. 8 is a block diagram showing an example of the configuration of a caption display device 301 according to the third embodiment of the present invention.
- FIG. 9 is a block diagram showing an example of the configuration of a caption display device 401 according to the fourth embodiment of the present invention.
- FIG. 10 is a block diagram showing an example of a configuration of a caption display device 501 according to a fifth embodiment of the present invention.
- FIG. 11 is a block diagram showing an example of a configuration of a caption display device 601 according to a sixth embodiment of the present invention.
- FIG. 12 is a block diagram showing an exemplary configuration of a caption display device 701 according to a seventh embodiment of the present invention.
- FIG. 13 is a diagram showing an example of the display layout of a data broadcast and subtitle/character super on a portable receiver.
- FIG. 14 is a diagram for explaining problems of a conventional display method.
- FIG. 1 is a block diagram showing an example of the configuration of the caption display device 101 according to the first embodiment of the present invention.
- the caption display device 101 includes a user operation input unit 102, a subtitle/character super stream analysis unit 103 (hereinafter simply the stream analysis unit 103), a section analysis unit 104, a subtitle/character super document data conversion unit 105 (hereinafter simply the document data conversion unit 105), a display data generation unit 106, a subtitle/character super display control unit 107 (hereinafter simply the display control unit 107), a video data analysis unit 108, a subtitle/character super presentation data conversion unit 109 (hereinafter simply the presentation data conversion unit 109), a video/subtitle/character super display data synthesis unit 110 (hereinafter simply the display data synthesis unit 110), a UI display control unit 111, a subtitle/character super display switching unit 112 (hereinafter simply the display switching unit 112), a video output unit 113, an audio data analysis unit 114, and an audio output unit 115.
- the input to the caption display device 101 is MPEG2-TS (MPEG2 System Transport Stream) data, that is, data in the MPEG2 System transport stream format.
- the user operation input unit 102 is realized by, for example, the keypad device of a mobile phone combined with software that monitors the state of the keypad device.
- the user operation input unit 102 detects pressing of the keypad device and outputs information input by the user as a key event.
- the stream analysis unit 103 analyzes the PES (Packetized Elementary Stream) included in the MPEG2-TS, input as subtitle/character super stream data, as the data in which the subtitle/character super data is stored.
- the stream analysis unit 103 is realized by software, for example.
- the stream analysis unit 103 analyzes the PES in which the subtitle/character super data is stored, and outputs the display start time as subtitle time information and the 8-unit code character data of the caption text as the subtitle/character super data.
- the caption time information is represented by a 36-bit numerical value, for example.
- the stream analysis unit 103 can use PTS (Presentation Time Stamp) in PES as caption time information.
- alternatively, the stream analysis unit 103 can set the current time as the subtitle time information when the time control mode in the PES indicates immediate playback.
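The two ways of obtaining subtitle time information described above can be sketched as a simple selection (the string value "immediate" and the function name are illustrative assumptions):

```python
def caption_time_info(pes_pts, time_control_mode, current_time):
    """Sketch: pick the subtitle time information for a caption PES.
    Immediate-playback mode uses the current time; otherwise the PTS
    carried in the PES is used."""
    if time_control_mode == "immediate":
        return current_time
    if pes_pts is None:
        raise ValueError("PES carries no PTS in non-immediate mode")
    return pes_pts
```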
- the section analysis unit 104 converts the data stored in section format included in the MPEG2-TS, input as section data, into BML document data representing data broadcast content.
- the section analysis unit 104 is realized by software, for example.
- the BML document data is stored and transmitted using the DSM-CC (Digital Storage Media Command and Control) scheme.
- the section analysis unit 104 analyzes the DDB (Download Data Block) messages and DII (Download Info Indication) messages transmitted in DSM-CC format in the sections, and extracts the BML document data, which is a resource expressed by the DDB and DII.
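In DSM-CC data carousels, a DII message announces a module and DDB messages each carry one numbered block of it; concatenating the blocks in order yields the resource (here, a BML document). The dict-of-blocks interface below is an illustrative simplification of that reassembly:

```python
def reassemble_module(ddb_blocks, total_blocks: int) -> bytes:
    """Sketch of DSM-CC module reassembly: ddb_blocks maps block number
    -> payload bytes from DDB messages; total_blocks comes from the DII
    announcement. Returns the reassembled resource."""
    if set(ddb_blocks) != set(range(total_blocks)):
        raise ValueError("module incomplete: missing DDB blocks")
    return b"".join(ddb_blocks[i] for i in range(total_blocks))
```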
- FIG. 2 is a diagram showing an example of layout data output from the document data conversion unit 105.
- the layout data indicates that, in the initial state, the data broadcast display area is divided into upper and lower halves, with the subtitle/character super displayed in the upper half and the data broadcast in the lower half.
- the SRC attribute value "x-cc:default" of the first FRAME element specifies the BML document data output from the document data conversion unit 105.
- the SRC attribute value "x-dc:default" of the second FRAME element specifies the BML document data output from the section analysis unit 104.
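FIG. 2 itself is not reproduced here, but layout data of the kind described (a FRAMESET with two FRAME elements, captions above and data broadcast below) might look like the following sketch. The exact markup, including the "50%,50%" split, is an assumption based on the description, not a reproduction of the figure:

```python
# Hypothetical reconstruction of the FIG. 2 layout data.
LAYOUT_DATA = """\
<frameset rows="50%,50%">
  <frame src="x-cc:default"/>  <!-- subtitle/character super BML document -->
  <frame src="x-dc:default"/>  <!-- data broadcast BML document -->
</frameset>
"""
```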
- the document data conversion unit 105 realizes conversion from 8-unit code characters to BML document data by a predetermined method.
- FIG. 3 is a diagram showing an example of 8-unit code characters input as subtitle/character super data.
- FIG. 4 is a diagram showing an example of BML document data converted from 8-unit code characters.
- in FIG. 3, a character string enclosed in "[" and "]" represents a control character, and the number written immediately after the control character type represents its parameter.
- since the 8-unit code characters (see FIG. 3) include control characters representing built-in sound playback data and character size data, the document data conversion unit 105 outputs BML document data (see FIG. 4) that includes a playRomSound() function call and is styled for character size. The specific conversion method from 8-unit code characters to BML document data is described in detail later.
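A toy sketch of this kind of conversion is shown below. The bracketed control-character notation, the names [PRA] and [SSZ], and the emitted markup are all illustrative assumptions; the real 8-unit code control set and its BML mapping are defined elsewhere (ARIB STD-B24) and differ from this sketch:

```python
import re

def convert_to_bml(caption_text: str) -> str:
    """Convert a bracketed control-character notation like '[PRA 3]'
    (play built-in sound 3) or '[SSZ]abc' (small character size) into a
    minimal BML/XHTML-style body."""
    html = caption_text
    # built-in sound playback data -> a playRomSound() call
    html = re.sub(r"\[PRA (\d+)\]",
                  r'<script>browser.playRomSound("romsound://\1");</script>',
                  html)
    # character size data -> a styled paragraph around the remaining text
    html = re.sub(r"\[SSZ\](.*)", r'<p style="font-size:small">\1</p>', html)
    return "<body>" + html + "</body>"
```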
- the display data generation unit 106 receives BML document data representing content for data broadcasting from the section analysis unit 104, and receives BML document data and layout data representing subtitle character superscript from the document data conversion unit 105. Is done.
- the display data generation unit 106 is, for example, WWW browser software that can interpret and display HTML and BML.
- the display data generation unit 106 generates display data related to subtitles / characters according to the contents of tags and function declarations specified in HTML and BML included in the input BML document data and layout data.
- the SRC attribute specified by each FRAME element in the input layout data stores a URI indicating BML document data.
- the display data generation unit 106 determines a layout method for a plurality of BML document data based on the rows attribute or cols attribute of the FRAMESET element included in the layout data.
- the display data generation unit 106 divides the display area into two parts vertically; the BML document data indicated by the URI "x-cc:default" is displayed in the upper half, and the BML document data indicated by the URI "x-dc:default" is displayed in the lower half.
- the display data generation unit 106 recognizes "x-cc:default" as the URI representing the BML document data output from the document data conversion unit 105, and "x-dc:default" as the URI representing the BML document data output from the section analysis unit 104.
- the display data generation unit 106 uses its HTML and BML analysis/display functions to convert the BML document data specified by the FRAME elements, laid out as specified by the FRAMESET element, into bitmap data as display data. If the BML document data includes a playromsound() function and that function is executed while the BML document data is being interpreted, the display data generation unit 106 outputs, as voice presentation data, the built-in sound data stored in advance in ROM or RAM and specified by the argument of the playromsound() function. At this time, the time stamp indicating the audio time information can be the current time.
- the built-in sound data is expressed in PCM (Pulse Code Modulation) format, for example.
- FIG. 5 is a diagram illustrating an example of BML document data including a playromsound() function.
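The execution path described above can be sketched as follows. The ROM sound table, the URI scheme parsing, and the function name `playromsound` as a Python helper are illustrative assumptions; only the behavior (look up built-in PCM data by specifier, emit it with the current time as its stamp) comes from the text:

```python
import time

# Hypothetical ROM table mapping built-in sound specifiers to PCM data;
# a real receiver holds these waveforms in ROM/RAM.
ROM_SOUNDS = {0: b"\x00\x00\x7f\x7f", 1: b"\x10\x10\x20\x20"}

def playromsound(uri: str):
    """Sketch of executing playromsound("romsound://N"): look up the
    built-in PCM data and emit it with the current time as audio time
    information, as described for the display data generation unit."""
    sound_id = int(uri.rsplit("/", 1)[-1])
    pcm = ROM_SOUNDS[sound_id]
    return pcm, time.time()  # voice presentation data + time stamp

pcm, ts = playromsound("romsound://1")
```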
- the display data generation unit 106 provides external software with an interface for accepting display data update requests, as an update request function. When external software calls the display data update request function, the display data generation unit 106 re-inputs and re-interprets the BML document data, and outputs display data and voice presentation data. Since the display data generation unit 106 displays multiple document data using the layout data, a parameter for specifying document data is provided in the update request function so that the display data can be updated for one specified document data.
- the display data generation unit 106 responds to WWW browser input, such as changing the frame area, scrolling, or following a link, through key events input from the user operation input unit 102, and updates the display data as necessary.
- Subtitle time information is input from the stream analysis unit 103 to the display control unit 107.
- the display control unit 107 calls an update request function of the WWW browser software that is the display data generation unit 106 when the time indicated by the caption time information comes.
- the display data generation unit 106 can update the subtitle display at the timing synchronized with the television program.
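The interaction between the display control unit and the browser's update request function can be sketched as below. The class and method names (`Browser.update`, `DisplayControl`, `tick`) are illustrative, not from the patent; the point shown is that updates fire only once the caption time arrives, and only for the subtitle frame:

```python
# Sketch: the display control unit calls the browser's update request
# function at the time given by the subtitle time information.
class Browser:
    def __init__(self):
        self.updated = []

    def update(self, doc_uri):            # the "update request function"
        self.updated.append(doc_uri)      # re-interpret that document

class DisplayControl:
    def __init__(self, browser):
        self.browser = browser
        self.pending = []                 # (timestamp, doc_uri) pairs

    def on_caption_time(self, ts, uri="x-cc:default"):
        self.pending.append((ts, uri))

    def tick(self, now):
        due = [p for p in self.pending if p[0] <= now]
        self.pending = [p for p in self.pending if p[0] > now]
        for ts, uri in due:
            self.browser.update(uri)      # refresh only the subtitle frame

b = Browser(); dc = DisplayControl(b)
dc.on_caption_time(10.0)
dc.tick(9.0)                              # too early: nothing happens
dc.tick(10.5)                             # caption time reached
print(b.updated)  # ['x-cc:default']
```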
- the video data analysis unit 108 analyzes, as data storing video, the PES included in the MPEG2-TS input as video stream data.
- the video data analysis unit 108 is realized by software such as a decoder, for example.
- the video stream data stored in the PES is, for example, MPEG4 AVC video ES (Elementary Stream) format data.
- the video data analyzer 108 analyzes the video stream data and outputs it as YUV format video data. At this time, the time stamp indicating the video time information can be the PTS in the PES.
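Taking the PTS from the PES as the time stamp works as follows. This sketch decodes the standard 5-byte PTS field of an MPEG-2 Systems PES header (33 bits spread across five bytes with marker bits); the encoder is included only so the round trip can be checked, and the surrounding PES parsing is omitted:

```python
# Sketch of extracting the 33-bit PTS from the 5-byte field in a PES
# header (MPEG-2 Systems bit layout).
def decode_pts(b: bytes) -> int:
    assert len(b) == 5
    return (((b[0] >> 1) & 0x07) << 30 |   # PTS[32..30]
            b[1] << 22 |                   # PTS[29..22]
            ((b[2] >> 1) & 0x7F) << 15 |   # PTS[21..15]
            b[3] << 7 |                    # PTS[14..7]
            (b[4] >> 1) & 0x7F)            # PTS[6..0]

def encode_pts(pts: int, prefix: int = 0b0010) -> bytes:
    """Inverse of decode_pts, for testing the round trip."""
    return bytes([
        (prefix << 4) | ((pts >> 29) & 0x0E) | 1,
        (pts >> 22) & 0xFF,
        ((pts >> 14) & 0xFE) | 1,
        (pts >> 7) & 0xFF,
        ((pts << 1) & 0xFE) | 1,
    ])

assert decode_pts(encode_pts(90000)) == 90000  # 1 s at the 90 kHz clock
```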
- the presentation data conversion unit 109 receives 8-unit code characters and subtitle time information as subtitle character super data from the stream analysis unit 103.
- the presentation data conversion unit 109 converts the subtitle/character super data into a bitmap format image according to the display timing indicated in the subtitle time information, and outputs it as subtitle/character super image data.
- the presentation data conversion unit 109 is realized by software, for example. Specifically, the presentation data conversion unit 109 analyzes the 8-unit code characters input as subtitle/character super data, and generates bitmap-format image data expressing the subtitle/character super using the character fonts stored in ROM or RAM.
- when converting the subtitle/character super into a bitmap image, the presentation data conversion unit 109 takes into account the character size, character color, background color, and number of character repetitions according to the control data included in the 8-unit code characters.
- the presentation data conversion unit 109 outputs, as subtitle/character super image data, a bitmap image representing the character fonts of the subtitle/character super, together with a bitmap mask image for alpha compositing that distinguishes the area representing the subtitle/character super from the other areas.
- when a control character representing built-in sound reproduction data appears, the presentation data conversion unit 109 outputs, as voice presentation data, PCM built-in sound data stored in advance in ROM or RAM, based on the parameter specified by the control character. At this time, the time stamp representing the audio time information is the time stamp input as the caption time information.
- the display data synthesizing unit 110 receives YUV format video data from the video data analysis unit 108, and bitmap format subtitle/character super image data from the presentation data conversion unit 109. When instructed by the display switching unit 112, the display data synthesizing unit 110 superimposes the subtitle/character super image data on the video data.
- the display data synthesis unit 110 is realized by, for example, video processing software.
- the display data synthesizing unit 110 provides external software with an interface for designating whether or not to perform superimposition, as an overlay designation function.
- the overlay designation function has a boolean parameter indicating whether or not to perform superimposition.
- when designated to superimpose by external software, the display data synthesizing unit 110 alpha-composites the input bitmap image onto each frame of the YUV format video data based on the bitmap mask image, and converts the result back into YUV format video data consisting of frames. When the overlay designation function designates that superimposition not be performed, the display data synthesizing unit 110 outputs the input video data as is. Further, the display data synthesizing unit 110 outputs the time stamp of the input video time information as the video time information as is.
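The per-frame composition step can be sketched at the pixel level. This is a deliberate simplification: one 8-bit channel stands in for a frame, and the mask is binary (glyph vs. background), whereas a real implementation would blend each YUV plane, possibly with fractional alpha:

```python
# Simplified composition sketch: where the bitmap mask marks a
# subtitle/character super pixel, take the subtitle bitmap; elsewhere
# keep the video frame pixel.
def alpha_composite(frame, bitmap, mask):
    return [b if m else f for f, b, m in zip(frame, bitmap, mask)]

frame  = [10, 10, 10, 10]          # dark video pixels
bitmap = [255, 255, 255, 255]      # white glyph pixels
mask   = [0, 1, 1, 0]              # glyph area vs. background
print(alpha_composite(frame, bitmap, mask))  # [10, 255, 255, 10]
```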
- a key event is input from the user operation input unit 102 to the UI display control unit 111.
- the UI display control unit 111 displays and deletes the menu and dialog based on the contents of the input key event.
- the UI display control unit 111 is realized by software, for example.
- when the menu key on the keypad is pressed, the UI display control unit 111 generates a bitmap image and a bitmap mask image representing the UI menu (hereinafter, the generated images are referred to as a menu image).
- the UI display control unit 111 performs menu display/non-display processing.
- at that time, the UI display control unit 111 outputs flag data as UI display data.
- the flag data is set to “true” when performing menu display processing, and “false” when performing menu non-display processing.
- Flag data is input to the display switching unit 112 from the UI display control unit 111.
- based on the true/false value of the input flag data, the display switching unit 112 calls the overlay designation function of the display data synthesizing unit 110 to designate whether or not the subtitle/character super image data is superimposed on the video data.
- the display switching unit 112 is realized by software, for example.
- when the flag data is true, the display switching unit 112 calls the overlay designation function of the display data synthesizing unit 110 with the parameter set to true (that is, instructs it to perform superimposition).
- when the flag data is false, the display switching unit 112 calls the overlay designation function of the display data synthesizing unit 110 with the parameter set to false (that is, instructs it not to perform superimposition).
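The switching logic above amounts to forwarding the flag to the overlay designation function. In this sketch the class and method names (`set_overlay`, `on_flag_data`) are illustrative stand-ins for the interfaces the patent describes:

```python
# Sketch: flag data from the UI display control unit selects whether the
# synthesizing unit overlays the subtitle/character super onto the video.
class DisplayDataSynthesis:
    def __init__(self):
        self.overlay = False

    def set_overlay(self, flag: bool):    # the overlay designation function
        self.overlay = flag

class DisplaySwitching:
    def __init__(self, synth):
        self.synth = synth

    def on_flag_data(self, menu_shown: bool):
        # Menu shown  -> overlay subtitles into the video display area.
        # Menu hidden -> subtitles stay in their own display area.
        self.synth.set_overlay(menu_shown)

synth = DisplayDataSynthesis()
sw = DisplaySwitching(synth)
sw.on_flag_data(True)
assert synth.overlay is True
sw.on_flag_data(False)
assert synth.overlay is False
```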
- thus, while the menu is displayed, the subtitle/character super is superimposed and combined in the video display area.
- while the menu is not displayed, the subtitle/character super is not superimposed on the video display area.
- in this way, the display switching unit 112 makes it possible to display the subtitle/character super in the video display area even when the subtitle/character super broadcast display area is hidden by the menu display.
- Video data is input to the video output unit 113 from the display data synthesizing unit 110, and display data related to the subtitle character is input from the display data generating unit 106.
- a menu image is input to the video output unit 113 via the display switching unit 112.
- the video output unit 113 displays the input video data, subtitle/character super, menu screen, and the like on the display screen.
- the video output unit 113 is realized by, for example, a combination of a display screen and software that controls layout display on the display screen.
- FIG. 6 is a diagram showing an example of a display layout realized by the video output unit 113.
- FIG. 6 shows the case where a liquid crystal display with QVGA resolution (320 pixels high by 240 pixels wide) is used as the display screen.
- the video output unit 113 displays the video data in a rectangular area 180 pixels high by 240 pixels wide at the top of the liquid crystal display (hereinafter referred to as the video display area), and displays the display data related to the subtitle/character super in a rectangular area 140 pixels high by 240 pixels wide at the bottom of the display (hereinafter referred to as the subtitle/character super broadcast display area) (see Fig. 6(a)).
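The screen geometry described here is small enough to state as constants. The rectangle convention (x, y, width, height) is an illustrative choice; the 180/140 split of the 320-pixel-high QVGA screen is from the text:

```python
# QVGA portrait screen: 240 pixels wide, 320 pixels high, split into a
# video display area (top) and a subtitle/character super broadcast
# display area (bottom). Rectangles are (x, y, w, h).
SCREEN_W, SCREEN_H = 240, 320
VIDEO_AREA    = (0, 0,   SCREEN_W, 180)
SUBTITLE_AREA = (0, 180, SCREEN_W, SCREEN_H - 180)

assert VIDEO_AREA[3] + SUBTITLE_AREA[3] == SCREEN_H  # 180 + 140 = 320
```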
- when a menu image is input, the video output unit 113 receives video data combined with the subtitle/character super from the display data synthesizing unit 110. Therefore, the video output unit 113 displays the combined video data in the video display area, and displays the input menu image in front of the display data (that is, in the subtitle/character super broadcast display area) (see Fig. 6(b)).
- the audio data analysis unit 114 analyzes the PES included in the MPEG2-TS input as the audio stream data as data storing audio, and outputs the audio presentation data to the audio output unit 115.
- the audio data analysis unit 114 is realized by software such as a decoder, for example.
- the audio stream data stored in the PES is, for example, data in AAC (Advanced Audio Coding) ES format.
- the audio data analysis unit 114 analyzes the AAC ES format audio stream data and outputs the PCM format audio presentation data.
- the PTS in the PES can be used for the time stamp representing the audio time information.
- Voice presentation data is input to the voice output unit 115 from the voice data analysis unit 114, the presentation data conversion unit 109, and the display data generation unit 106.
- the voice output unit 115 mixes the voice input as voice presentation data and presents it to the user.
- the audio output unit 115 is realized by a combination of hardware such as a speaker and software, for example.
- the audio output unit 115 outputs the audio input as the audio presentation data in accordance with the times described in the corresponding time stamps.
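The mixing step can be sketched as a sample-wise sum with clipping to the 16-bit PCM range. This is a minimal sketch: aligning each stream to its stamped time before summing is omitted:

```python
# Minimal PCM mixing sketch for the audio output unit: sum the voice
# presentation data streams sample by sample and clip to 16-bit range.
def mix(*streams):
    length = max(map(len, streams))
    out = []
    for i in range(length):
        s = sum(st[i] for st in streams if i < len(st))
        out.append(max(-32768, min(32767, s)))  # clip to int16
    return out

print(mix([100, 200], [50, -300, 10]))  # [150, -100, 10]
```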
- the document data conversion unit 105 sequentially analyzes the byte sequence represented by the 8-unit code character to generate body data and header data.
- the initial value of the body data is the character string "<body>".
- the initial value of the header data is the character string "<bml><head><title>subtitle</title><script><![CDATA[function playsound(){".
- the document data conversion unit 105 has, as internal state, the type of the character set table currently in use and character font information that stores the display specification of the current character font. The character font information has size, foreground color, and background color as attributes; the initial value of the size attribute is "normal", the initial value of the foreground color attribute is "#000000", and the initial value of the background color attribute is "#FFFFFF".
- character size data, which is additional information in subtitle/character super data, is represented by control characters such as SSZ, MSZ, and NSZ in 8-unit code characters. Color data is represented by control characters such as BKF, RDF, and CSI.
- when a control character designating a character set appears, the character set table currently used as the internal state of the document data conversion unit 105 is switched, and nothing is added to the body data. When a control character representing character size data appears, the character string "</span>" is added to the body data (however, this applies only if a "<span" character string has already been added to the body data; otherwise, "</span>" is not added).
- when the control character representing the character size data is SSZ, "x-small" is stored in the size attribute of the character font information; "small" is stored for MSZ; and "normal" is stored for NSZ.
- next, the character string "<span style="font-size:" is added to the body data.
- then the character string stored in the size attribute of the character font information is added to the body data.
- the character string ";color:" is added to the body data.
- then the character string stored in the foreground color attribute of the character font information is added to the body data.
- the character string ";background-color:" is added to the body data.
- then the character string stored in the background color attribute of the character font information is added to the body data, followed by "">".
- when a control character representing color data appears, RGB designations corresponding to the foreground color and background color specified by the control character are stored in the foreground color attribute and background color attribute of the character font information.
- then, as with the character size data, the character strings "<span style="font-size:", the size attribute, ";color:", the foreground color attribute, ";background-color:", and the background color attribute are added to the body data, followed by "">".
- when a control character representing repetition data, such as an RPC control character, appears, the character that appears immediately after the control character is added to the body data the number of times specified by the RPC control character's parameter.
- when a control character representing the sound reproduction data built into the receiver, for example a PRA control character, appears, the character string "playromsound("romsound://" is added to the header data.
- a decimal number representing the internal sound specifier specified by the PRA control character parameter is added to the header data as a character string.
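A small subset of the conversion procedure above can be sketched as a state machine. Control characters are modeled here as tuples such as `("SSZ",)`, `("RPC", n)`, and `("PRA", n)` rather than as the real 8-unit code byte stream; color handling, character set switching, span closing, and many other details are omitted, and the `playsound` header prefix follows the reconstructed initial value given earlier:

```python
# Sketch of a subset of the 8-unit-code -> BML conversion: size control
# characters open styled <span>s, RPC repeats the following character,
# and PRA appends a playromsound() call to the header data.
SIZES = {"SSZ": "x-small", "MSZ": "small", "NSZ": "normal"}

def convert(tokens):
    body = "<body>"
    header = ('<bml><head><title>subtitle</title>'
              '<script><![CDATA[function playsound(){')
    span_open = False
    toks = iter(tokens)
    for t in toks:
        if isinstance(t, str):                      # ordinary character
            body += t
        elif t[0] in SIZES:                         # character size data
            if span_open:
                body += "</span>"
            body += f'<span style="font-size:{SIZES[t[0]]}">'
            span_open = True
        elif t[0] == "RPC":                         # repetition data
            body += next(toks) * t[1]
        elif t[0] == "PRA":                         # built-in sound data
            header += f'playromsound("romsound://{t[1]}")'
    return header, body

h, b = convert([("SSZ",), "A", ("RPC", 3), "B", ("PRA", 2)])
print(b)  # <body><span style="font-size:x-small">ABBB
```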
- as described above, the document data conversion unit 105 converts the 8-unit code characters representing the subtitle/character super, input via the stream analysis unit 103, into BML document data, and outputs it to the display data generation unit 106 together with the layout data for designating the frames.
- the display data generation unit 106 uses the HTML and BML interpretation/display functions of the WWW browser to generate display data related to the subtitle/character super from the BML document data output from the document data conversion unit 105 and the BML document data indicating the data broadcast content output from the section analysis unit 104.
- the caption display device 101 can realize the display of the caption “character super” using the WWW browser.
- accordingly, the allocation of the data broadcast display area and the subtitle/character super broadcast display area can be set with the same UI as the frame area allocation setting in the WWW browser.
- the caption display device 101 can realize the same operation system as the WWW browser with respect to the screen display.
- in addition, the caption display device 101 can realize expressive subtitle/character super display by using the additional information included in the subtitle/character super data, such as display timing data, character size data, color data, repetition data, and receiver built-in sound reproduction data.
- the display switching unit 112 receives flag data representing display/non-display processing of a menu or the like from the UI display control unit 111, and instructs the display data synthesizing unit 110 to combine the subtitle/character super with the video data when the menu is displayed, and not to combine it when the menu is hidden. Accordingly, the caption display device 101 can display the subtitle/character super in the video display area when the subtitle/character super broadcast display area is hidden by the menu. In other words, according to the caption display device 101, the user can view the subtitle/character super while a menu or dialog is displayed.
- FIG. 7 is a block diagram showing an example of the configuration of the caption display device 201 according to the second embodiment of the present invention.
- the same components as those in the first embodiment are denoted by the same reference numerals, and description thereof is omitted.
- the caption display device 201 includes a user operation input unit 102, a stream analysis unit 103, a section analysis unit 104, a document data conversion unit 105, a display data generation unit 106, a display control unit 107, a video data analysis unit 108, A video output unit 213, an audio data analysis unit 114, and an audio output unit 215 are provided.
- the caption display device 201 according to the second embodiment omits, from the caption display device 101 according to the first embodiment, the configuration for synthesizing menus and dialogs with video data (that is, the presentation data conversion unit 109, the display data synthesizing unit 110, the UI display control unit 111, and the display switching unit 112).
- video data is input to the video output unit 213 from the video data analysis unit 108, and display data related to subtitles / character super is input from the display data generation unit 106.
- the video output unit 213 displays the input video data and subtitle/character super on the display screen.
- Voice presentation data is input to the voice output unit 215 from the voice data analysis unit 114 and the display data generation unit 106.
- the voice output unit 215 mixes the voice input as voice presentation data and presents it to the user.
- as described above, the caption display device 201 according to the second embodiment of the present invention realizes subtitle/character super display using the WWW browser, as in the first embodiment. Therefore, the allocation of the data broadcast display area and the subtitle/character super broadcast display area can be set with the same UI as the frame area allocation setting in the WWW browser. Thereby, the caption display device 201 can realize the same operation system as the WWW browser with respect to the screen display.
- in addition, the caption display device 201 can realize expressive subtitle/character super display by using the additional information included in the subtitle/character super data, such as display timing data, character size data, color data, repetition data, and receiver built-in sound reproduction data.
- FIG. 8 is a block diagram showing an example of the configuration of the caption display device 301 according to the third embodiment of the present invention.
- the caption display device 301 includes a user operation input unit 102, a stream analysis unit 103, a video data analysis unit 108, a presentation data conversion unit 109, a display data synthesizing unit 110, a UI display control unit 111, a display switching unit 112, a video output unit 313, an audio data analysis unit 114, and an audio output unit 315.
- the caption display device 301 according to the third embodiment differs from the caption display device 101 according to the first embodiment in that the section analysis unit 104, the document data conversion unit 105, the display data generation unit 106, and the display control unit 107 are omitted.
- video data is input to the video output unit 313 from the display data synthesizing unit 110, and subtitle/character super image data is input from the presentation data conversion unit 109.
- the video output unit 313 displays the video data in the video display area and the subtitle/character super image data in the subtitle/character super broadcast display area (see Fig. 6(a)). When a menu image is input, the video output unit 313 receives video data combined with the subtitle/character super from the display data synthesizing unit 110. For this reason, the video output unit 313 displays the combined video data in the video display area, and displays the input menu image in the subtitle/character super broadcast display area (see Fig. 6(b)).
- the voice output unit 315 receives voice presentation data from the voice data analysis unit 114 and the presentation data conversion unit 109.
- the voice output unit 315 mixes the voice input as voice presentation data and presents it to the user.
- as in the first embodiment, the caption display device 301 makes it possible to display the subtitle/character super in the video display area when the subtitle/character super broadcast display area is hidden by the menu. In other words, according to the caption display device 301, the user can view the subtitle/character super while a menu or dialog is displayed.
- FIG. 9 is a block diagram showing an example of the configuration of a caption display device 401 according to the fourth embodiment of the present invention.
- in FIG. 9, the same components as those in the first to third embodiments are denoted by the same reference numerals, and the description thereof is omitted.
- the caption display device 401 includes a user operation input unit 102, a stream analysis unit 403, a section analysis unit 404, a document data conversion unit 105, a display data generation unit 106, a display control unit 107, a video data analysis unit 408, Presentation data conversion unit 109, display data synthesis unit 110, UI display control unit 111, display switching unit 112, video output unit 113, audio data analysis unit 414, audio output unit 115, tuner demodulation unit 416, and TS analysis unit 417.
- the tuner demodulation unit 416 receives, for example, an OFDM (Orthogonal Frequency Division Multiplexing) carrier wave and demodulates it into transport stream data storing digital TV broadcast content; it is realized by a combination of demodulator hardware and demodulation software.
- the transport stream data is, for example, data in the MPEG2 System transport stream format.
- Tuner demodulation section 416 outputs demodulated transport stream data.
- the TS analysis unit 417 is software that inputs and analyzes transport stream data, and outputs the PES format audio stream data, video stream data, and subtitle/character super stream data multiplexed in the transport stream data, as well as section data in section format.
- the stream analysis unit 403 is the same as the stream analysis unit 103 according to the first embodiment except that the subtitle/character super stream data is input from the TS analysis unit 417.
- the section analysis unit 404 is the same as the section analysis unit 104 according to the first embodiment except that section data is input from the TS analysis unit 417.
- the video data analysis unit 408 is the same as the video data analysis unit 108 according to the first embodiment except that video stream data is input from the TS analysis unit 417.
- the audio data analysis unit 414 is the same as the audio data analysis unit 114 according to the first embodiment except that audio stream data is input from the TS analysis unit 417.
- the caption display device 401 includes the tuner demodulation unit 416 and the TS analysis unit 417, so that the television broadcast from the transmission station can be directly received.
- FIG. 10 is a block diagram showing an example of the configuration of a caption display device 501 according to the fifth embodiment of the present invention.
- the same components as those in the first to fourth embodiments are denoted by the same reference numerals, and the description thereof is omitted.
- the caption display device 501 according to the fifth embodiment includes a TS storage unit 516 instead of the tuner demodulation unit 416, as compared with the caption display device 401 according to the fourth embodiment.
- the TS storage unit 516 is realized by, for example, a combination of storage device hardware and control software.
- the storage device hardware includes, for example, a fixed hard disk, USB-connected memory, RAM, or ROM, or a medium such as a DVD (Digital Versatile Disc), BD (Blu-ray Disc), HD DVD (High Definition DVD), or SD (Secure Digital) memory card together with its reading device.
- the TS storage unit 516 outputs transport stream data stored in the storage device hardware under the control of the control software.
- the caption display device 501 according to the fifth embodiment of the present invention includes the TS storage unit 516 and the TS analysis unit 417, thereby enabling subtitle display for the TV broadcast content stored in the device.
- FIG. 11 is a block diagram showing an example of the configuration of a caption display device 601 according to the sixth embodiment of the present invention.
- the caption display device 601 according to the sixth embodiment includes a user operation input unit 102, a stream analysis unit 103, a section analysis unit 104, a document data conversion unit 605, a display data generation unit 606, and a display control unit 107.
- the display data generation unit 606 outputs, as mask data, a bitmap mask image for the area displaying the document data indicated by the URI "x-cc:default", in addition to the display data related to the subtitle/character super described above. Further, the display data generation unit 606 may output the height of the document data indicated by the URI "x-cc:default" as the caption display length.
- the display data generation unit 606 is the same as the display data generation unit 106 according to the first embodiment except for the points described above.
- when the document data conversion unit 605 receives a BML document instead of 8-unit code characters as subtitle/character super data, it does not perform the conversion described in the first embodiment, and outputs the received subtitle/character super data as-is as BML document data indicating the display data.
- the document data conversion unit 605 is the same as the document data conversion unit 105 according to the first embodiment except for the points described above.
- the display data synthesizing unit 610 receives video data from the video data analysis unit 108, and receives display data related to the subtitle/character super, mask data, and the caption display length from the display data generation unit 606.
- the display data synthesizing unit 610 is, for example, software that outputs display data with QVGA resolution (320 pixels high by 240 pixels wide).
- the display data synthesizing unit 610 arranges the video data in a rectangular area 180 pixels high by 240 pixels wide at the top of the display, and arranges the display data related to the subtitle/character super in a rectangular area 140 pixels high by 240 pixels wide at the bottom of the display. Furthermore, when external software instructs, via the overlay designation function, that the video data and the display data related to the subtitle/character super be superimposed, the display data synthesizing unit 610 arranges both the video data and the display data related to the subtitle/character super in the 180-pixel-high by 240-pixel-wide rectangular area at the top of the display.
- in this case, the display data synthesizing unit 610 alpha-composites the display data related to the subtitle/character super onto the video data based on the mask data and the caption display length, and outputs the result as composite video display data.
- the display data composition unit 610 is the same as the display data composition unit 110 according to the first embodiment except for the points described above.
- the video output unit 613 receives, from the display data synthesizing unit 610, a time stamp as video time information, video data or composite video display data, and display data related to the subtitle/character super.
- the video output unit 613 displays the input video data or composite video display data and the display data related to the subtitle/character super on the display screen.
- the video output unit 613 is the same as the video output unit 113 according to the first embodiment except for the points described above.
- when the user instructs, via the user operation input unit 102, that the video data be enlarged, the video output unit 613 may display the above-described composite video display data on the entire display screen. As a result, the user can view the video data on which the subtitle/character super data is superimposed using the entire display screen.
- Voice output unit 615 receives voice data in PCM format as voice presentation data from voice data analysis unit 114, and a time stamp as voice time information from display data generation unit 606.
- the audio output unit 615 is the same as the audio output unit 115 according to the first embodiment except for the points described above.
- FIG. 12 is a block diagram showing an example of the configuration of a caption display device 701 according to the seventh embodiment of the present invention.
- a caption display device 701 includes a user operation input unit 102, a stream analysis unit 403, a section analysis unit 404, a document data conversion unit.
- the caption transmission device includes a transmission TS storage unit 722, a transmission caption 'character super document data conversion unit 723 (hereinafter referred to as a transmission document data conversion unit 723), and a modulation transmission unit 724.
- the transmission TS storage unit 722 is the same as the TS storage unit 516 according to the fifth embodiment.
- Data in the MPEG2 System transport stream format is input to the transmission document data conversion unit 723 as transport stream data.
- the transmission document data conversion unit 723 converts the subtitle/character super data represented by 8-unit code characters in the data unit data included in the transport stream into BML document data, and outputs the result as a transport stream.
- the transmission document data conversion unit 723 is realized by software, for example.
- the method for converting subtitle/character-super data represented by 8-unit-code characters into BML document data is the same as in the first embodiment.
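The document conversion described above can be sketched as follows. Real ARIB 8-unit-code decoding and the full BML (C-profile) syntax are far richer than shown here; this sketch assumes the caption text has already been decoded to a string, and the element layout, attribute names, and default position are illustrative assumptions only.

```python
from xml.sax.saxutils import escape


def caption_to_bml(text, x=0, y=400, fg="white"):
    """Wrap decoded caption text in a minimal BML-like document.
    (Real 8-unit-code decoding and full BML syntax are omitted;
    this only illustrates wrapping caption text as a document.)"""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<bml><head><title>caption</title></head>\n'
        '<body><div style="position:absolute; '
        f'left:{x}px; top:{y}px; color:{fg}">'
        f'{escape(text)}</div></body></bml>'
    )
```

Escaping the text matters because caption strings may contain characters such as `&` or `<` that are significant in the XML-based BML document.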
- MPEG2 System transport stream format data is input to modulation transmission section 724.
- the modulation transmission unit 724 is realized by a combination of software that modulates the input data onto an OFDM carrier wave for transmission and hardware including a transmitter.
- the subtitle display device 701 can thus have the subtitle/character-super data in the transport stream converted into a BML document on the transmitting-station side before transmission.
- Each processing procedure performed by the caption display devices described in the first to seventh embodiments may be realized by having a CPU interpret and execute predetermined program data, stored in a storage device (ROM, RAM, hard disk, etc.), that is capable of executing the above-described processing procedures.
- the program data may be introduced into the storage device via a storage medium, or may be executed directly from the storage medium.
- the storage medium is a semiconductor memory such as a ROM, RAM, or flash memory, or a magnetic disk such as a flexible disk or a hard disk.
- the storage medium here is a concept that also includes communication media such as a telephone line or a transmission path.
- the configurations included in the caption display devices described in the first to seventh embodiments may be realized as an LSI, which is an integrated circuit. These configurations may each be made into a single chip, or may be made into a single chip that includes some or all of them. The name IC, system LSI, super LSI, or ultra LSI is sometimes used depending on the degree of integration.
- the method of circuit integration is not limited to LSI; implementation using dedicated circuitry or general-purpose processors is also possible.
- an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- furthermore, if integrated-circuit technology that replaces LSI emerges as a result of advances in semiconductor technology or other derivative technologies, the functional blocks may naturally be integrated using that technology. Application of biotechnology is one possibility.
- the caption display device according to the present invention has the effect of improving the user's operability and the readability of captions when viewing captions, and is useful as a television receiver, a content reproduction device with captions, and the like.
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2006800063752A CN101129070B (zh) | 2005-02-28 | 2006-02-22 | 字幕显示设备 |
EP06714272A EP1855479A4 (en) | 2005-02-28 | 2006-02-22 | CAPTION DISPLAY |
JP2007505861A JP4792458B2 (ja) | 2005-02-28 | 2006-02-22 | 字幕表示装置 |
US11/884,784 US20090207305A1 (en) | 2005-02-28 | 2006-02-22 | Caption Display Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005-054671 | 2005-02-28 | ||
JP2005054671 | 2005-02-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006092993A1 true WO2006092993A1 (ja) | 2006-09-08 |
Family
ID=36941027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/303132 WO2006092993A1 (ja) | 2005-02-28 | 2006-02-22 | 字幕表示装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20090207305A1 (ja) |
EP (1) | EP1855479A4 (ja) |
JP (1) | JP4792458B2 (ja) |
CN (1) | CN101129070B (ja) |
WO (1) | WO2006092993A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008047648A1 (fr) * | 2006-10-13 | 2008-04-24 | Sharp Kabushiki Kaisha | Dispositif terminal d'informations mobile |
JP2008131328A (ja) * | 2006-11-21 | 2008-06-05 | Sharp Corp | コンテンツ表示装置、およびコンテンツ表示装置の制御方法 |
JP2009159483A (ja) * | 2007-12-27 | 2009-07-16 | Kyocera Corp | 放送受信装置 |
JP2011030224A (ja) * | 2009-07-27 | 2011-02-10 | Ipeer Multimedia Internatl Ltd | マルチメディア字幕表示システム及びマルチメディア字幕表示方法 |
JP2015159366A (ja) * | 2014-02-21 | 2015-09-03 | 日本放送協会 | 受信機 |
JP2015173444A (ja) * | 2014-02-21 | 2015-10-01 | 日本放送協会 | 受信機 |
JP2016129296A (ja) * | 2015-01-09 | 2016-07-14 | 株式会社アステム | 番組出力装置、サーバ、番組と絵文字の出力方法、およびプログラム |
JP2017500770A (ja) * | 2013-10-24 | 2017-01-05 | ▲華▼▲為▼▲終▼端有限公司 | サブタイトル表示方法およびサブタイトル表示デバイス |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7966560B2 (en) * | 2006-10-24 | 2011-06-21 | International Business Machines Corporation | Laying out web components using mounting and pooling functions |
KR20080057847A (ko) * | 2006-12-21 | 2008-06-25 | 삼성전자주식회사 | 방송수신장치 및 이의 오픈 캡션 정보 저장 방법 |
US20090129749A1 (en) * | 2007-11-06 | 2009-05-21 | Masayuki Oyamatsu | Video recorder and video reproduction method |
US8621505B2 (en) * | 2008-03-31 | 2013-12-31 | At&T Intellectual Property I, L.P. | Method and system for closed caption processing |
KR101479079B1 (ko) * | 2008-09-10 | 2015-01-08 | 삼성전자주식회사 | 디지털 캡션에 포함된 용어의 설명을 표시해주는 방송수신장치 및 이에 적용되는 디지털 캡션 처리방법 |
JP4482051B1 (ja) * | 2008-12-23 | 2010-06-16 | 株式会社東芝 | 装置制御システム |
US8817072B2 (en) | 2010-03-12 | 2014-08-26 | Sony Corporation | Disparity data transport and signaling |
US9565466B2 (en) * | 2010-03-26 | 2017-02-07 | Mediatek Inc. | Video processing method and video processing system |
CN102566952B (zh) * | 2010-12-20 | 2014-11-26 | 福建星网视易信息系统有限公司 | 应用于嵌入式数字娱乐点播系统的显示系统和方法 |
KR101830656B1 (ko) * | 2011-12-02 | 2018-02-21 | 엘지전자 주식회사 | 이동 단말기 및 이의 제어방법 |
CN102883213B (zh) * | 2012-09-13 | 2018-02-13 | 中兴通讯股份有限公司 | 字幕提取方法及装置 |
JP5509284B2 (ja) * | 2012-09-14 | 2014-06-04 | 株式会社東芝 | マルチフォーマット出力装置、マルチフォーマット出力装置の制御方法 |
US10582255B2 (en) * | 2014-06-30 | 2020-03-03 | Lg Electronics Inc. | Broadcast receiving device, method of operating broadcast receiving device, linking device for linking to broadcast receiving device, and method of operating linking device |
CN104093063B (zh) * | 2014-07-18 | 2017-06-27 | 三星电子(中国)研发中心 | 还原字幕属性的方法和装置 |
JP6340994B2 (ja) * | 2014-08-22 | 2018-06-13 | スター精密株式会社 | プリンタ、印刷システムおよび印刷制御方法 |
CN104994312A (zh) * | 2015-07-15 | 2015-10-21 | 北京金山安全软件有限公司 | 一种视频生成方法及装置 |
CN104978161A (zh) * | 2015-07-30 | 2015-10-14 | 张阳 | Mv全屏显示方法及系统 |
US10511882B2 (en) * | 2016-01-26 | 2019-12-17 | Sony Corporation | Reception apparatus, reception method, and transmission apparatus |
CN110602566B (zh) * | 2019-09-06 | 2021-10-01 | Oppo广东移动通信有限公司 | 匹配方法、终端和可读存储介质 |
CN115689860A (zh) * | 2021-07-23 | 2023-02-03 | 北京字跳网络技术有限公司 | 视频蒙层显示方法、装置、设备及介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1118060A (ja) * | 1997-06-27 | 1999-01-22 | Matsushita Electric Ind Co Ltd | テレビジョン受信機 |
JP2005027053A (ja) * | 2003-07-02 | 2005-01-27 | Toshiba Corp | コンテンツ処理装置 |
JP2005033764A (ja) * | 2003-06-19 | 2005-02-03 | Ekitan & Co Ltd | 受信機及び受信方法 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08275205A (ja) * | 1995-04-03 | 1996-10-18 | Sony Corp | データ符号化/復号化方法および装置、および符号化データ記録媒体 |
JPH09102940A (ja) * | 1995-08-02 | 1997-04-15 | Sony Corp | 動画像信号の符号化方法、符号化装置、復号化装置、記録媒体及び伝送方法 |
KR100218474B1 (ko) * | 1997-06-10 | 1999-09-01 | 구자홍 | 에치티엠엘 데이터 송신 및 수신 장치 |
GB2327837B (en) * | 1997-07-29 | 1999-09-15 | Microsoft Corp | Providing enhanced content with broadcast video |
DE19757046A1 (de) * | 1997-12-20 | 1999-06-24 | Thomson Brandt Gmbh | Vorrichtung zur Erzeugung der digitalen Daten für die Bilder einer Animations-/Informations-Sequenz für ein elektronisches Gerät |
KR100631499B1 (ko) * | 2000-01-24 | 2006-10-09 | 엘지전자 주식회사 | 디지털 티브이의 캡션 표시 방법 |
JP2002016885A (ja) * | 2000-06-30 | 2002-01-18 | Pioneer Electronic Corp | 映像再生装置及び映像再生方法 |
US6704024B2 (en) * | 2000-08-07 | 2004-03-09 | Zframe, Inc. | Visual content browsing using rasterized representations |
JP4672856B2 (ja) * | 2000-12-01 | 2011-04-20 | キヤノン株式会社 | マルチ画面表示装置及びマルチ画面表示方法 |
JP2002232802A (ja) * | 2001-01-31 | 2002-08-16 | Mitsubishi Electric Corp | 映像表示装置 |
US7050109B2 (en) * | 2001-03-02 | 2006-05-23 | General Instrument Corporation | Methods and apparatus for the provision of user selected advanced close captions |
US7546527B2 (en) * | 2001-03-06 | 2009-06-09 | International Business Machines Corporation | Method and apparatus for repurposing formatted content |
US20020188959A1 (en) * | 2001-06-12 | 2002-12-12 | Koninklijke Philips Electronics N.V. | Parallel and synchronized display of augmented multimedia information |
US6952236B2 (en) * | 2001-08-20 | 2005-10-04 | Ati Technologies, Inc. | System and method for conversion of text embedded in a video stream |
JP3945687B2 (ja) * | 2001-12-26 | 2007-07-18 | シャープ株式会社 | 映像表示装置 |
JP4192476B2 (ja) * | 2002-02-27 | 2008-12-10 | 株式会社日立製作所 | 映像変換装置及び映像変換方法 |
US8522267B2 (en) * | 2002-03-08 | 2013-08-27 | Caption Colorado Llc | Method and apparatus for control of closed captioning |
WO2003081917A1 (en) * | 2002-03-21 | 2003-10-02 | Koninklijke Philips Electronics N.V. | Multi-lingual closed-captioning |
US20030189669A1 (en) * | 2002-04-05 | 2003-10-09 | Bowser Todd S. | Method for off-image data display |
JP2005523555A (ja) * | 2002-04-16 | 2005-08-04 | サムスン エレクトロニクス カンパニー リミテッド | インタラクティブコンテンツバージョン情報が記録された情報保存媒体、その記録方法及び再生方法 |
EP1420580A1 (en) * | 2002-11-18 | 2004-05-19 | Deutsche Thomson-Brandt GmbH | Method and apparatus for coding/decoding items of subtitling data |
US7555199B2 (en) * | 2003-01-16 | 2009-06-30 | Panasonic Corporation | Recording apparatus, OSD controlling method, program, and recording medium |
US7106381B2 (en) * | 2003-03-24 | 2006-09-12 | Sony Corporation | Position and time sensitive closed captioning |
JP3830913B2 (ja) * | 2003-04-14 | 2006-10-11 | パイオニア株式会社 | 情報表示装置及び情報表示方法等 |
KR100532997B1 (ko) * | 2003-05-23 | 2005-12-02 | 엘지전자 주식회사 | 디지털 티브이의 클로즈 캡션 운용 장치 |
WO2005001614A2 (en) * | 2003-06-02 | 2005-01-06 | Disney Enterprises, Inc. | System and method of dynamic interface placement based on aspect ratio |
JP4449359B2 (ja) * | 2003-07-23 | 2010-04-14 | ソニー株式会社 | 電子機器及び情報視聴方法、並びに情報視聴システム |
KR100828354B1 (ko) * | 2003-08-20 | 2008-05-08 | 삼성전자주식회사 | 자막 위치 제어 장치 및 방법 |
KR20050078907A (ko) * | 2004-02-03 | 2005-08-08 | 엘지전자 주식회사 | 고밀도 광디스크의 서브타이틀 재생방법과 기록재생장치 |
US20060015649A1 (en) * | 2004-05-06 | 2006-01-19 | Brad Zutaut | Systems and methods for managing, creating, modifying, and distributing media content |
- 2006
- 2006-02-22 JP JP2007505861A patent/JP4792458B2/ja not_active Expired - Fee Related
- 2006-02-22 US US11/884,784 patent/US20090207305A1/en not_active Abandoned
- 2006-02-22 CN CN2006800063752A patent/CN101129070B/zh not_active Expired - Fee Related
- 2006-02-22 EP EP06714272A patent/EP1855479A4/en not_active Withdrawn
- 2006-02-22 WO PCT/JP2006/303132 patent/WO2006092993A1/ja active Application Filing
Non-Patent Citations (2)
Title |
---|
OTSU T. ET AL.: "Digital Jushinki Muke Kumikomigata Data Hoso Software 'BML Browser'", MATSUSHITA TECHNICAL JOURNAL, vol. 46, no. 6, December 2000 (2000-12-01), pages 653 - 660, XP003006772 * |
See also references of EP1855479A4 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008047648A1 (fr) * | 2006-10-13 | 2008-04-24 | Sharp Kabushiki Kaisha | Dispositif terminal d'informations mobile |
JP2008099053A (ja) * | 2006-10-13 | 2008-04-24 | Sharp Corp | 携帯情報端末装置 |
JP2008131328A (ja) * | 2006-11-21 | 2008-06-05 | Sharp Corp | コンテンツ表示装置、およびコンテンツ表示装置の制御方法 |
JP2009159483A (ja) * | 2007-12-27 | 2009-07-16 | Kyocera Corp | 放送受信装置 |
JP2011030224A (ja) * | 2009-07-27 | 2011-02-10 | Ipeer Multimedia Internatl Ltd | マルチメディア字幕表示システム及びマルチメディア字幕表示方法 |
JP2017500770A (ja) * | 2013-10-24 | 2017-01-05 | ▲華▼▲為▼▲終▼端有限公司 | サブタイトル表示方法およびサブタイトル表示デバイス |
US9813773B2 (en) | 2013-10-24 | 2017-11-07 | Huawei Device Co., Ltd. | Subtitle display method and subtitle display device |
JP2015159366A (ja) * | 2014-02-21 | 2015-09-03 | 日本放送協会 | 受信機 |
JP2015173444A (ja) * | 2014-02-21 | 2015-10-01 | 日本放送協会 | 受信機 |
JP2016129296A (ja) * | 2015-01-09 | 2016-07-14 | 株式会社アステム | 番組出力装置、サーバ、番組と絵文字の出力方法、およびプログラム |
Also Published As
Publication number | Publication date |
---|---|
CN101129070A (zh) | 2008-02-20 |
EP1855479A1 (en) | 2007-11-14 |
JP4792458B2 (ja) | 2011-10-12 |
EP1855479A4 (en) | 2009-10-14 |
JPWO2006092993A1 (ja) | 2008-08-07 |
CN101129070B (zh) | 2010-09-01 |
US20090207305A1 (en) | 2009-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4792458B2 (ja) | 字幕表示装置 | |
US8752120B2 (en) | Digital broadcasting receiving apparatus and method for controlling the same | |
JP6529493B2 (ja) | メディアデータの送信及び受信のための方法及び装置 | |
JP6077200B2 (ja) | 受信装置、表示制御方法、放送システム、並びにコンピューター・プログラム | |
JP6399725B1 (ja) | テキストコンテンツ生成装置、送信装置、受信装置、およびプログラム | |
JP5345224B2 (ja) | デジタル放送受信機 | |
JP4287621B2 (ja) | テレビジョン受信機およびこれに対する情報提供方法 | |
CN111601142B (zh) | 一种字幕的显示方法及显示设备 | |
JP2005124163A (ja) | 受信装置、番組連携表示方法および印刷制御方法 | |
JP4829443B2 (ja) | 受信装置、受信方法および記録媒体 | |
JP2003219372A (ja) | データ放送受信再生装置、その制御方法、データ放送システム、データ放送装置、データ放送ショッピングにおける商品表示方法、及び制御プログラム | |
US20100251294A1 (en) | Moving image processor and moving image processing method | |
JP2015037264A (ja) | 受信装置、送出装置、及びプログラム | |
JP2001211401A (ja) | デジタル放送受信機およびメール端末装置 | |
JP5501359B2 (ja) | デジタル放送受信装置及びデジタル放送受信方法 | |
JP2004336179A (ja) | 放送受信端末、並びに、放送受信端末の操作キー制御方法及び操作キー制御プログラム | |
JP2009260685A (ja) | 放送受信装置 | |
JP2008085940A (ja) | テレビジョン受像機 | |
JP2003224783A (ja) | データ放送受信再生装置、その制御方法、データ放送システム、放送データ送信装置、データ放送ショッピングにおける商品表示方法、及び制御プログラム | |
KR100721561B1 (ko) | 콘텐츠 변환 장치 및 그 방법 | |
JP4785543B2 (ja) | 携帯通信端末及びその制御方法 | |
JP6307182B2 (ja) | 受信装置、表示制御方法、並びに放送システム | |
JP2016116032A (ja) | 受信装置、放送システム、受信方法及びプログラム | |
JP2016001918A (ja) | 表示装置、受信装置、表示方法、テレビジョン受像機、表示システム、プログラムおよび記録媒体 | |
JP5010102B2 (ja) | 放送受信方式 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007505861 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006714272 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11884784 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200680006375.2 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2006714272 Country of ref document: EP |