WO2017038332A1 - Broadcast receiver and subtitle presenting method - Google Patents
Broadcast receiver and subtitle presenting method
- Publication number
- WO2017038332A1 (PCT/JP2016/072227)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subtitle
- information
- processing unit
- caption
- program
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H40/00—Arrangements specially adapted for receiving broadcast information
- H04H40/18—Arrangements characterised by circuits or components specially adapted for receiving
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/68—Systems specially adapted for using specific information, e.g. geographical or meteorological information
- H04H60/73—Systems specially adapted for using specific information, e.g. geographical or meteorological information using meta-information
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
Definitions
- The present invention relates to a broadcast receiver that presents subtitles and to a subtitle presentation method using such a broadcast receiver.
- In Hybridcast, a recently launched broadcasting/communication cooperation service, an extended API (Application Programming Interface) that can be controlled from an HTML (Hyper Text Markup Language) application is defined for accessing broadcast resources such as broadcast video and program information.
- In addition, broadcasting for 8K (and 4K), which is currently under development, adopts a new subtitle and superimposed-text technology that uses an encoding method extended from TTML (see Non-Patent Document 3).
- The present invention has been made in view of the above problems; its main purpose is to realize a broadcast receiver in which an upper-layer program can present subtitles on an HTML application in the presentation method desired by the program distributor (or the provider of the HTML application).
- To that end, a broadcast receiver according to one aspect of the present invention is a broadcast receiver in which a lower-layer program and an upper-layer program operate.
- It comprises a subtitle presentation unit, realized as a function of the upper-layer program, that presents the subtitles of a program on an HTML application, and an acquisition processing unit, realized as a function of the lower-layer program, that acquires meta information indicating the subtitle presentation method from outside the broadcast receiver.
- The subtitle presentation unit acquires the meta information acquired by the acquisition processing unit and presents the subtitles with reference to that meta information.
- Such a broadcast receiver has the effect that the upper-layer program can present subtitles on the HTML application in the presentation method desired by the program distributor (or the provider of the HTML application).
- FIG. 1 is a block diagram showing an example of the configuration of a system according to Embodiments 1 to 6 of the present invention and the main configurations of a transmission device and a reception device included in the system.
- FIG. 2 is a functional block diagram illustrating the main configuration of an application-related processing unit included in a receiving apparatus according to Embodiments 1 to 5.
- FIG. 3 is a diagram illustrating the source (structured document) of an HTML5 application executed by a receiving apparatus according to Embodiments 1 to 6.
- FIG. 4 is a diagram illustrating an example of the definition of the method "tuneTo" called by a receiving apparatus according to Embodiment 1.
- FIG. 5 is a diagram illustrating an example of the data structure of a broadcast signal transmitted by a transmission apparatus according to Embodiments 1 to 3.
- FIG. 6 is a diagram illustrating the data structure of the caption presentation control information referred to by a receiving apparatus according to Embodiments 1 to 3 for presenting captions.
- FIG. 7 is a diagram for explaining an example of the data format of the captiondata referred to by the receiving apparatuses according to Embodiments 1 to 3.
- FIG. 8 is a diagram schematically illustrating the details of the script tag in the source of the HTML5 application executed by the receiving apparatus according to Embodiment 1.
- FIG. 9 is a diagram illustrating data (a structured document) in the TTML format.
- FIG. 10 is a diagram for explaining another example of the data format of the captiondata referred to by the receiving apparatuses according to Embodiments 1 to 3.
- FIG. 11 is a diagram for explaining yet another example of the data format of the captiondata referred to by the receiving apparatuses according to Embodiments 1 to 3.
- FIG. 12 is a diagram illustrating an example of the definition of the method "getCurrentSubtitleInformation" called by a receiving apparatus according to Embodiment 2.
- FIG. 13 is a diagram schematically illustrating the details of the script tag in the source of the HTML5 application according to Embodiment 2.
- FIG. 20 is a functional block diagram illustrating the main configuration of an application-related processing unit included in a receiving device according to Embodiment 6.
- FIG. 22 is a diagram for explaining an example of the data format of the captiondata referred to by the receiving apparatus according to Embodiment 6.
- FIG. 23 is a diagram for explaining another example of the data format of the captiondata referred to by the receiving apparatus according to Embodiment 6.
- In this specification, MMT (MPEG Media Transport) is adopted as the transmission standard for programs (content). MMT is a media transport method used over IP (Internet Protocol) and enables media transmission over a variety of transmission paths.
- Specifically, by specifying a packet ID, IP address, or URL in the control information, MMT can reference the components that constitute a program, such as video, audio, subtitles, and data, over different transmission paths.
- This specification also assumes 8K (or 4K) broadcasting realized by MMT.
- Embodiment 1: Hereinafter, a system including a receiving apparatus according to an embodiment of the present invention will be described with reference to the drawings.
- FIG. 1 is a block diagram illustrating an example of the configuration of the system and the main configurations of the transmission device and the reception device included in it.
- FIG. 2 is a functional block diagram illustrating the functional configuration of the application-related processing unit included in the receiving device.
- As shown in FIG. 1, the system 1 includes a transmission device 2, a reception device 3 (broadcast receiver), a content server 5, and a content server 6.
- The transmission device 2 is a device that transmits content in which a plurality of components are assigned to one channel, and is managed by a broadcaster or the like.
- The receiving device 3 is a device that acquires an HTML application (Hybridcast application, a structured document) via broadcast or communication and reproduces the content by executing the acquired HTML application.
- The receiving device 3 is, for example, a television receiver, a mobile phone (smartphone), a tablet terminal, a PC, or a portable game machine.
- The transmission device 2 and the reception device 3 are connected via the Internet.
- The transmission device 2 can transmit content and various other data to the reception device 3 via broadcast waves and the Internet.
- The content server 5 is a server that stores the content transmitted by the transmission device 2.
- Although the content server 5 is separate from the transmission device 2, the transmission device 2 may include the content server 5.
- The content server 6 is a server managed either by the business entity that manages the transmission device 2 or by a different business operator (for example, one trusted by the broadcaster that manages the transmission device 2).
- The receiving device 3 can acquire content and various other data not only from the transmitting device 2 but also from the content server 6 via the Internet.
- Although the system 1 shown here includes one transmission device 2, one reception device 3, one content server 5, and one content server 6, the present invention is not limited to this.
- The system 1 may include a plurality of transmission apparatuses 2, reception apparatuses 3, content servers 5, and content servers 6.
- As shown in FIG. 1, the transmission device 2 includes a component multiplexing unit 21, a broadcast transmission unit 22, and a communication transmission/reception unit 23.
- The component multiplexing unit 21 packetizes components such as video, audio, subtitles, and data, multiplexes them into one stream, and transmits the stream to the receiving device 3.
- Specifically, the component multiplexing unit 21 acquires program content from the content server 5, then packetizes the control information of the acquired program content (SI information, described later), the video, audio, and subtitles constituting it, and the HTML application for presenting it, and multiplexes them into one stream. The component multiplexing unit 21 transmits the generated stream to the reception device 3 via the broadcast transmission unit 22 or the communication transmission/reception unit 23.
- The component multiplexing unit 21 may also generate a plurality of streams (broadcast signals, described later) from the components constituting the program content, such as video, audio, and subtitles, and from the control information of the program content.
- In that case, some streams may be transmitted to the receiver 3 via the broadcast transmitter 22 and the remaining streams via the communication transceiver 23.
- The component multiplexing unit 21 may also packetize storage-destination information indicating where the HTML application is stored instead of the HTML application itself.
- The broadcast transmission unit 22 transmits data (content, etc.) via broadcast waves.
- The communication transmitting/receiving unit 23 communicates with other devices such as the receiving device 3 or the content server 6 by wireless or wired communication means and exchanges data with them.
- As shown in FIG. 1, the reception device 3 includes a broadcast reception unit 31, an operation unit 32, a communication transmission/reception unit 33, a component demultiplexing unit 34, a video decoding unit 35, an audio decoding unit 36, a caption decoding unit 37, an application-related processing unit 38, a display 39, a speaker 40, and a storage unit 41.
- The operation unit 32 receives an operation from the user and outputs an operation signal indicating the received operation.
- The operation unit 32 may be configured from input devices such as a keyboard, a mouse, a keypad, and operation buttons; a touch panel in which the operation unit 32 and the display 39 are integrated may also be used.
- The operation unit 32 may also be a remote control device, such as a remote controller, that is separate from the receiving device 3.
- The broadcast receiving unit 31 receives data via broadcast waves and is, for example, a broadcast receiving tuner.
- The communication transmitting/receiving unit 33 communicates with other devices such as the transmission device 2 or the content server 6 by wireless or wired communication means and exchanges data with them; it is, for example, a LAN terminal or a wireless LAN interface.
- The component demultiplexer 34 demultiplexes the stream received via the broadcast receiver 31 and, based on the control information, outputs the demultiplexed video components to the video decoder 35, the audio components to the audio decoder 36, the caption components to the caption decoding unit 37, and the SI information and HTML application (described later) to the application-related processing unit 38. The component demultiplexing unit 34 may also pass the operation signal output from the operation unit 32 to the application-related processing unit 38.
- The video decoding unit 35 acquires the encoded video component from the component demultiplexing unit 34 and decodes it.
- The audio decoding unit 36 acquires the encoded audio component from the component demultiplexing unit 34 and decodes it.
- The subtitle decoding unit 37 acquires the encoded subtitle component from the component demultiplexing unit 34 and decodes it.
- The application-related processing unit 38 acquires the SI information and the HTML application from the component demultiplexing unit 34.
- The application-related processing unit 38 refers to the HTML application and specifies the components to be presented.
- FIG. 3 is a diagram illustrating the source (structured document) of an HTML5 application. Specifically, the application-related processing unit 38 specifies the components to be presented by referring to the param elements that constitute the content of the object element 211 of the structured document, as sketched below.
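- The sketch below is illustrative only: the element structure, type value, and param names are assumptions, not the actual source defined in FIG. 3.

```html
<!-- Hedged sketch of an HTML5 application source selecting broadcast
     components; the type value and param names are illustrative only. -->
<object id="bv" type="video/x-broadcast">
  <!-- Each param element identifies one component of the selected program. -->
  <param name="videoComponentTag"    value="0x00" />
  <param name="audioComponentTag"    value="0x10" />
  <param name="subtitleComponentTag" value="0x30" />
</object>
```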
- The application-related processing unit 38 then acquires the video, audio, and subtitle components to be presented from the video decoding unit 35, the audio decoding unit 36, and the subtitle decoding unit 37, respectively, and presents the video and audio components.
- The caption component is presented using the presentation method determined on the basis of the SI information.
- This SI information includes meta information (the Subtitle_info information described later) indicating the caption presentation method.
- The display 39 displays the video indicated by the video signal from the application-related processing unit 38.
- As the display 39, for example, an LCD (liquid crystal display), an organic EL display, or a plasma display can be used.
- The speaker 40 outputs the sound indicated by the sound signal from the application-related processing unit 38.
- FIG. 2 is a block diagram showing the main configuration of the application-related processing unit 38 in the present embodiment.
- As shown in FIG. 2, the application-related processing unit 38 includes a middleware unit and a synthesis unit.
- The middleware unit functions as an SI information acquisition processing unit 381, an application data acquisition processing unit 382, and a caption acquisition processing unit 383, as shown in FIG. 2. That is, these three units are realized as functions of the middleware (the lower-layer program).
- The SI information acquisition processing unit 381 acquires the above-described SI information and stores it in the storage unit 41.
- The application data acquisition processing unit 382 acquires the operation signal and the HTML application.
- The subtitle acquisition processing unit 383 acquires the decoded subtitle component.
- The synthesis unit is a program in a layer above the middleware (for example, an HTML5 browser) that has the function of executing HTML applications.
- The HTML application executed by the synthesis unit is an application that controls the presentation of video, audio, and subtitles according to its HTML description.
- The synthesis unit executing the HTML application functions as a video/audio presentation processing unit 384 and a caption presentation processing unit 385, as shown in FIG. 2.
- The video/audio presentation processing unit 384 identifies the video and audio components to be presented with reference to the HTML application.
- The video/audio presentation processing unit 384 then acquires the specified video and audio components from the video decoding unit 35 and the audio decoding unit 36, respectively, and presents them.
- The caption presentation processing unit 385 (caption presentation unit) identifies the caption component to be presented with reference to the HTML application.
- The caption presentation processing unit 385 acquires the caption component to be presented from the caption acquisition processing unit 383 and presents it.
- Specifically, when the user selects a program, the caption presentation processing unit 385 acquires the SI information related to that program (specifically, the Subtitle_info information) from the SI information acquisition processing unit 381, and thereafter presents the caption component in the presentation method indicated by that Subtitle_info information.
- In other words, the video/audio presentation processing unit 384 and the caption presentation processing unit 385 acquire the video, audio, and caption components to be presented in accordance with the SI information about the program, and present each component in accordance with the presentation information included in the SI information and the caption component.
- FIG. 4 is a diagram showing two examples of the tuneTo method called by the HTML application.
- Each tuneTo method shown in FIG. 4 is an extension of the tuneTo method disclosed in Non-Patent Document 1: the Subtitle_info information is the return value of the tuneTo method. Therefore, whichever tuneTo method is called, the HTML application obtains the Subtitle_info information current at that time as the return value.
- The tuneTo method is called, for example, when the user performs an operation of selecting a program; it may also be called by describing a channel-selection operation in the HTML application, as in the sketch below.
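- A minimal sketch of this call flow, assuming the extended method shape described above (the object id and the service-selector argument are invented for illustration; the actual signatures are those of FIG. 4 and Non-Patent Document 1):

```html
<script>
  // Obtain the broadcast video/audio object (id assumed from the sketch above).
  var bv = document.getElementById("bv");

  // Tune to a program; per this embodiment, the extended tuneTo returns
  // the Subtitle_info of the selected program as its return value.
  var subtitleInfo = bv.tuneTo({ serviceId: 1024 }); // argument shape assumed

  // subtitleInfo now carries the caption presentation control information,
  // e.g. the language code of the subtitle component.
  console.log(subtitleInfo.ISO_639_language_code);
</script>
```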
- FIG. 5 is a diagram illustrating the data structure of a broadcast signal.
- The broadcast signal 100 shown in FIG. 5 is the unit of content transmission according to the present embodiment. As shown in FIG. 5, the broadcast signal 100 includes each component of the program (a caption asset 151, a video asset 152, an audio asset 153, and a data asset 154), a PA message 110, and an M2 section message.
- The PA message 110 and the M2 section message are SI information (MMT-SI).
- The caption asset 151 includes a TTML document file in which the caption text is described.
- The data asset 154 includes the HTML application.
- Instead of the data asset 154 (the HTML application) itself, the MPT 111 may include information indicating its storage location (for example, URL information pointing to the content server 6).
- In MMT, components such as video and audio are defined as assets: a video asset is a video component, an audio asset is an audio component, and a caption asset is a caption component. Each asset has a component tag that identifies it among the other assets.
- The M2 section message is information used to transmit the section extension format of MPEG-2 Systems.
- The M2 section message includes an MH-AIT.
- The MH-AIT is an AIT (Application Information Table) for MMT.
- The PA message 110 is control information indicating the asset configuration and the like.
- A message includes a table whose elements and attributes indicate specific information, and the table includes descriptors indicating more detailed information.
- The PA message 110 includes an MPT (MMT Package Table) 111, as shown in FIG. 5.
- The MPT 111 indicates the information constituting a package, such as the list of assets, the packet IDs specifying the MMT packets that contain each asset, and their positions within the broadcast signal. By analyzing the MPT 111, the assets constituting the program can be identified.
- The MPT 111 includes Data_Component_Descriptor information 130.
- The Data_Component_Descriptor information 130 indicates the list of components constituting the program and includes the component tag of each component. As shown in FIG. 5, the aforementioned Subtitle_info information (Subtitle_info information 131) is included in the Data_Component_Descriptor information 130.
- FIG. 6 is a diagram illustrating the data structure of the Subtitle_info information 131 (caption presentation control information).
- As shown in FIG. 6, the Subtitle_info information 131 includes ISO_639_language_code (language code), type (subtitle type), OPM (operation mode), TMD (time control mode), DMF (display mode), and resolution (display resolution).
- When N subtitle components make up a program (for example, two subtitle components: a Japanese one and an English one), one piece of Subtitle_info information is linked to each subtitle component, so N pieces of Subtitle_info information 131 are included in the MPT 111; a sketch follows.
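- A hypothetical JavaScript rendering of one Subtitle_info entry might look as follows; the field names follow FIG. 6 as listed above, but the concrete values are invented:

```js
// One Subtitle_info entry for a Japanese subtitle component (values invented).
var subtitleInfoJa = {
  ISO_639_language_code: "jpn", // language code
  type: 0,                      // subtitle type
  OPM: 0,                       // operation mode
  TMD: 1,                       // time control mode
  DMF: 0,                       // display mode
  resolution: "1920x1080"       // display resolution
};
// With two subtitle components (Japanese and English), the MPT would carry
// two such entries, one linked to each subtitle component.
```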
- FIG. 7 is a diagram for explaining an example of the data format of the captiondata that the caption presentation processing unit 385 according to the present embodiment acquires from the caption acquisition processing unit 383.
- (a) of FIG. 7 shows the definition of the data format of captiondata, and (b) of FIG. 7 shows an example of captiondata.
- As shown in FIG. 7, the caption text data is converted into JSON format, and binary data (PNG data, AIFF data, etc.) may be Base64-encoded before being converted into JSON format.
- The subtitle stream acquired by the subtitle acquisition processing unit 383 includes the meta information that constitutes the captiondata (such as information for acquiring the subsample data), and the captiondata is generated from the subtitle stream by the subtitle acquisition processing unit 383.
- (b) of FIG. 7 shows a caption component acquired from the caption acquisition processing unit 383: a TTML document describing the captions to be presented by the broadcast receiver, together with subsample data files such as caption images and caption audio.
- The captiondata does not have to be described in JSON format; it may have any data structure that includes, for each piece of information constituting it, the type of the information, the length of the field in which it is stored, and the value stored in that field.
- The information for acquiring the subsample data included in the captiondata may be only the file name of the TTML document, or it may include all or any part of the file names of the subtitle images, the subtitle audio, and the external fonts.
- Although the TTML document here is converted into JSON format and sent, the TTML document may instead be analyzed and only the corresponding text data extracted. A hedged sketch of such a JSON message follows.
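- For orientation only: the key names and Base64 payloads below are assumptions, since the authoritative format is the one defined in FIG. 7.

```js
// Hedged sketch of captiondata in JSON form (keys and values illustrative).
var captiondata = {
  "mpu_time_stamp": "2016-08-01T12:00:00Z", // MPU Time Stamp (format assumed)
  "subsamples": [
    // The TTML document carrying the caption text, sent as text data.
    { "name": "0.ttml", "type": "ttml",  "data": "<tt xmlns=\"...\">...</tt>" },
    // Binary subsamples (subtitle image, subtitle audio), Base64-encoded.
    { "name": "1.png",  "type": "image", "data": "iVBORw0KGgoAAAANSUhEU..." },
    { "name": "2.aif",  "type": "audio", "data": "Rk9STQAAAAhBSUZG..." }
  ]
};
```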
- FIG. 8 is a description example of the script tag 212 for presenting captions based on the Subtitle_info information 131 obtained with the tuneTo method.
- FIG. 9 is a diagram illustrating data in the TTML format.
- As shown in FIG. 8, the script tag 212 is included in the source of the HTML application.
- When the user selects a program, the synthesis unit executing the HTML application that includes the script tag 212 shown in FIG. 8 executes the tuneTo method.
- The synthesis unit (video/audio presentation processing unit 384) presents the video and audio of the selected program, and the synthesis unit (caption presentation processing unit 385) presents the captions of the selected program according to the Subtitle_info information 131 obtained from the return value of the tuneTo method (that is, the Subtitle_info information included in the SI information acquired by the SI information acquisition processing unit 381).
- In other words, the caption presentation processing unit 385 performs caption presentation control according to the Subtitle_info information 131 and the HTML description.
- Specifically, immediately after the program is selected, the caption presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can refer to, and presents each piece of subtitle data in accordance with the recorded Subtitle_info information 131 and the HTML description.
- The caption presentation processing unit 385 presents each piece of caption data on the HTML application (in the HTML application screen).
- The script tag 212 included in the source of the HTML application according to this embodiment is also shown in FIG. 8: the script tag 212 includes not only a description of the tuneTo method but also a description of the event listener addCaptionListener.
- The synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of the HTML application.
- When the subtitle acquisition processing unit 383 acquires the subtitle stream, the event registered by addCaptionListener is fired, and the acquired subtitle stream, converted into the captiondata format, is notified to the subtitle presentation processing unit 385.
- The caption presentation processing unit 385 presents each piece of caption data in the captiondata according to the Subtitle_info information 131 included in the SI information acquired from the return value of the tuneTo method and the HTML description.
- For example, the caption presentation processing unit 385 obtains the 0.ttml file (the TTML document file illustrated in FIG. 9), the 1.png file (a subtitle image file), and the 2.aif file (a subtitle audio file) included in the captiondata, and presents the subtitle text, subtitle image, and subtitle audio described in the TTML document in accordance with the Subtitle_info information 131 and the HTML description. The overall flow might look like the sketch below.
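- The following is a hedged sketch only: the registration syntax, argument shapes, and the presentCaptions helper are assumptions, and the authoritative script tag 212 is the one in FIG. 8.

```html
<script>
  var bv = document.getElementById("bv"); // broadcast video/audio object (id assumed)
  var subtitleInfo = null;

  // 1. Program selection: keep the Subtitle_info returned by tuneTo.
  subtitleInfo = bv.tuneTo({ serviceId: 1024 }); // argument shape assumed

  // 2. Register the caption listener; it fires each time the middleware has
  //    converted a received subtitle stream into captiondata.
  bv.addCaptionListener(function (captiondata) {
    // 3. Present each subsample (TTML text, images, audio) on the HTML
    //    application according to subtitleInfo and the page's HTML/CSS.
    presentCaptions(captiondata, subtitleInfo); // hypothetical helper
  });
</script>
```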
- In this way, the receiving device 3 can present each piece of caption data of a program according to the Subtitle_info information 131 and the HTML description.
- That is, the receiving device 3 can present each piece of caption data on the HTML application in the manner desired by the program distributor (or the provider of the HTML application).
- FIGS. 10 and 11 are diagrams for explaining other examples of the data format of captiondata.
- (a) of FIG. 10 shows the definition of the data format of captiondata according to another example, and (b) of FIG. 10 shows an example of that captiondata.
- The captiondata in FIG. 10 differs from the captiondata in FIG. 7 in that reference_start_time is included as caption sub-information instead of MPU Time Stamp.
- (a) of FIG. 11 shows the definition of the data format of captiondata according to still another example, and (b) and (c) of FIG. 11 show examples of such captiondata.
- The captiondata in FIG. 11 differs from the captiondata in FIG. 10 in that the information for acquiring each piece of subsample data is described not as the data itself but as a URL indicating the storage location of the data.
- The above URL may be described, for example, in the format "http://localhost/<service_id>/<asset_id>/<mpu_sequence_number>/<subsample_no>". That is, the URL may indicate the acquisition destination of each piece of subsample data.
- Alternatively, the URL may indicate the acquisition destination of the TTML document file only.
- In that case, the subtitle presentation processing unit 385 acquires and analyzes the TTML document file on the basis of the captiondata of (c) of FIG. 11, and can then acquire the other subsample data by referring to the URLs indicated in the TTML document file as the information for acquiring them.
- The caption presentation processing unit 385 may also acquire the data of FIG. 10 or FIG. 11 in the form of a JSON message using WebSocket or the like instead of acquiring it as the captiondata return value of the event listener addCaptionListener. A sketch of the URL-based form follows.
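- A hedged sketch of the URL-referencing variant of FIG. 11; the key names and the example values substituted into the URL template are invented:

```js
// captiondata whose subsamples are referenced by URL rather than carried inline.
var captiondata = {
  "reference_start_time": "2016-08-01T12:00:00Z", // replaces MPU Time Stamp
  "subsamples": [
    // Each URL instantiates
    // http://localhost/<service_id>/<asset_id>/<mpu_sequence_number>/<subsample_no>
    { "name": "0.ttml", "url": "http://localhost/1024/48/5/0" },
    { "name": "1.png",  "url": "http://localhost/1024/48/5/1" },
    { "name": "2.aif",  "url": "http://localhost/1024/48/5/2" }
  ]
};
```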
- Embodiment 2: Another embodiment of the present invention will be described with reference to FIGS. 12 and 13 as follows.
- For convenience of explanation, members having the same functions as those described in the above embodiment are given the same reference numerals, and their descriptions are omitted.
- FIG. 12 is a diagram illustrating the getCurrentSubtitleInformation method, which the HTML application calls when a predetermined event is fired.
- FIG. 13 is a diagram schematically illustrating the details of the script tag 212 included in the source of the HTML application according to the present embodiment.
- The system according to the present embodiment is the same as the system according to Embodiment 1, except that the trigger for acquiring the Subtitle_info information is different.
- FIG. 12 shows an example of the definition of the getCurrentSubtitleInformation method.
- When a predetermined event is fired, the HTML application can obtain the Subtitle_info information current at that moment as the return value of the getCurrentSubtitleInformation method.
- The predetermined event may be, for example, "reception of SI information including Subtitle_info information", "update of the Subtitle_info information (reception of SI information whose Subtitle_info information differs in content from that included in previously received SI information)", or "switching of the subtitle language selection by the user". Of these, the occurrence of an event caused by the reception of SI information is notified from the middleware unit (SI information acquisition processing unit 381) to the synthesis unit.
- FIG. 13 is a description example of the script tag 212 for presenting captions based on the Subtitle_info information 131 obtained with the getCurrentSubtitleInformation method.
- When a predetermined event is fired, the synthesis unit executing the HTML application that includes the script tag 212 executes the getCurrentSubtitleInformation method and acquires the Subtitle_info information 131.
- The synthesis unit then presents captions according to the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381.
- In other words, the caption presentation processing unit 385 performs caption presentation control according to the Subtitle_info information 131 and the HTML description.
- Specifically, immediately after the occurrence of a predetermined event is notified by the middleware or triggered by a user operation, the caption presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can refer to, and presents each piece of subtitle data in accordance with the recorded Subtitle_info information 131 and the HTML description.
- The receiving device 3 according to the present embodiment has the same effects as the receiving device 3 according to Embodiment 1. In addition, even when the distributed Subtitle_info information 131 is updated during the broadcast period of the program, or when the language of the subtitles to be presented is switched by a user operation, each piece of subtitle data can be presented in the manner desired by the program distributor (or the provider of the HTML application).
- The getCurrentSubtitleInformation method need not be a method that the HTML application calls only when an event is fired; the timing at which it is called may instead follow the implementation of the HTML application source.
- For example, the source of the HTML application may be implemented so that the getCurrentSubtitleInformation method is called when a certain button on the application screen (for example, a button for switching subtitles) is operated, as in the sketch below.
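- Under that reading, such a source might be sketched as follows; the button id, handler wiring, and the applySubtitleSettings helper are assumptions:

```html
<script>
  var bv = document.getElementById("bv"); // broadcast video/audio object (id assumed)

  // Re-read the Subtitle_info whenever the subtitle-switching button is pressed;
  // the returned value reflects any update or language switch since tuning.
  document.getElementById("subtitleButton").addEventListener("click", function () {
    var subtitleInfo = bv.getCurrentSubtitleInformation();
    applySubtitleSettings(subtitleInfo); // hypothetical helper
  });
</script>
```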
- Embodiment 3: Still another embodiment of the present invention will be described with reference to FIGS. 14 and 15 as follows.
- For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference numerals, and their descriptions are omitted.
- FIG. 14 is a diagram schematically illustrating the definition of the broadcast video/audio object according to the present embodiment.
- FIG. 15 is a diagram schematically illustrating the details of the script tag 212 included in the source of the HTML application according to the present embodiment.
- The system according to the present embodiment is the same as the system according to Embodiment 2, except that the HTML application acquires the Subtitle_info information via the event listener when the middleware receives the subtitle stream.
- The broadcast video/audio object shown in FIG. 14 is an extension of the broadcast video/audio object of Non-Patent Document 1.
- Specifically, the broadcast video/audio object according to the present embodiment includes the Subtitle_info information in the return value of the event listener addCaptionListener.
- FIG. 15 is an example of a script tag for registering the event listener addCaptionListener in the broadcast video/audio object.
- The synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of the HTML application that includes the script tag 212 shown in FIG. 15.
- When the caption acquisition processing unit 383 receives the caption stream, the event registered by addCaptionListener is fired, and the Subtitle_info information 131 and the caption data read from the storage unit 41 are notified to the caption presentation processing unit 385.
- The caption presentation processing unit 385 presents each piece of caption data in the caption stream according to the notified Subtitle_info information 131 and the HTML description.
- Specifically, immediately after the middleware receives the subtitle stream, the subtitle presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can refer to, and presents the subtitle data in the subtitle stream in accordance with the Subtitle_info information 131 and the HTML description; a sketch follows.
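- A minimal sketch of this Embodiment 3 variant, assuming the extended listener delivers both pieces of data to one callback (the callback signature is an inference from the text, not a confirmed API):

```js
// The Embodiment 3 listener: Subtitle_info arrives with the captiondata,
// so no separate tuneTo return value or polling call is needed.
bv.addCaptionListener(function (subtitleInfo, captiondata) {
  presentCaptions(captiondata, subtitleInfo); // hypothetical helper
});
```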
- The receiving device 3 according to the present embodiment has the same effects as the receiving devices 3 according to Embodiments 1 and 2.
- Embodiment 4: (a) of FIG. 16 schematically illustrates the definition of the broadcast video/audio object according to the present embodiment, and (b) of FIG. 16 schematically illustrates the definition of the caption list information according to the present embodiment.
- FIG. 17 is a diagram schematically illustrating the details of the script tag 212 included in the source of the HTML application according to the present embodiment.
- The system according to the present embodiment differs from the system according to Embodiment 1 in that the HTML application of the receiving apparatus acquires Subtitle_list information (caption list information) instead of Subtitle_info information, at a timing that follows the implementation of the source.
- The broadcast video/audio object shown in (a) of FIG. 16 is an extension of the broadcast video/audio object of Non-Patent Document 1.
- Specifically, the broadcast video/audio object according to the present embodiment has a getSubtitleList method.
- The getSubtitleList method is a method for acquiring the Subtitle_list information.
- The Subtitle_list information is generated from the Subtitle_info information 131 included in the MPT and from the SI information.
- As shown in (b) of FIG. 16, the Subtitle_list information includes a language code (ISO_639_language_code) indicating the language of each subtitle stream and a URL indicating the storage location where that subtitle stream is stored; with N subtitle components, the Subtitle_list information includes N such combinations.
- The synthesis unit executes the corresponding processing when the getSubtitleList method is called in an HTML application that includes the script tag 212 shown in FIG. 17.
- That is, the synthesis unit presents the Subtitle_list information included in the SI information acquired by the SI information acquisition processing unit 381 while presenting the video, audio, and captions of the program.
- To do so, the receiving device 3 defines an object that stores the information contained in the Subtitle_list and stores the acquired information in memory.
- The subtitle presented by the subtitle presentation processing unit 385 is the subtitle in the language specified by the user among a plurality of different languages.
- By looking at the Subtitle_list information presented by the synthesis unit, the user can grasp which other languages can be selected as the subtitle language; see the sketch below.
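- A sketch of how an application might obtain and display the Subtitle_list; the entry shape follows the description of FIG. 16(b), while the menu markup and ids are invented:

```html
<script>
  var bv = document.getElementById("bv"); // broadcast video/audio object (id assumed)

  // Obtain the caption list: N entries pairing a language code with a URL.
  var list = bv.getSubtitleList();
  // e.g. [{ ISO_639_language_code: "jpn", url: "..." },
  //       { ISO_639_language_code: "eng", url: "..." }]

  // Show the user which subtitle languages can be selected.
  var menu = document.getElementById("languageMenu"); // hypothetical element
  list.forEach(function (entry) {
    var item = document.createElement("li");
    item.textContent = entry.ISO_639_language_code;
    menu.appendChild(item);
  });
</script>
```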
- A receiving device 3 that combines the features of Embodiment 1 with the features of Embodiment 4 is also included in the scope of the broadcast receiver according to the present invention.
- The caption presentation processing unit 385 of such a receiving device 3 acquires both the Subtitle_info information and the Subtitle_list information, and presents the Subtitle_list information while presenting the captions in the presentation method indicated by the Subtitle_info information.
- Embodiment 5: FIG. 18 is a diagram schematically illustrating the definition of the broadcast video/audio object according to the present embodiment.
- FIG. 19 is a diagram schematically illustrating the details of the script tag 212 in the source of the HTML5 application according to the present embodiment.
- The system according to the present embodiment is the same as the system according to Embodiment 4, except that the HTML application acquires the caption list information via the event listener when the middleware receives a subtitle stream.
- The broadcast video/audio object shown in FIG. 18 is an extension of the broadcast video/audio object of Non-Patent Document 1.
- Specifically, the broadcast video/audio object according to the present embodiment includes the caption list information in the return value of the event listener addCaptionListener.
- The synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of the HTML application that includes the script tag 212 shown in FIG. 19.
- When the caption acquisition processing unit 383 receives the caption stream, the event registered by addCaptionListener fires, and the Subtitle_info information 131, the caption list information, and the caption data read from the storage unit 41 are notified to the caption presentation processing unit 385.
- The caption presentation processing unit 385 presents the caption list information included in the SI information acquired by the SI information acquisition processing unit 381.
- Specifically, immediately after the middleware unit receives the subtitle stream, the subtitle presentation processing unit 385 records the caption list information included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can refer to. Thereafter, the caption presentation processing unit 385 presents the caption list information recorded in that memory area.
- The receiving device 3 according to the present embodiment has the same effects as the receiving device 3 according to Embodiment 4.
- Embodiment 6: Still another embodiment of the present invention will be described with reference to FIGS. 20 to 23 as follows.
- For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference numerals, and their descriptions are omitted.
- FIG. 20 is a functional block diagram showing the main configuration of the application-related processing unit 38′ included in the receiving device 3′ of the system 1′ according to the present embodiment.
- FIG. 21 is a diagram schematically illustrating the details of the script tag 212 in the source of the HTML5 application executed by the receiving device 3′.
- FIGS. 22 and 23 are diagrams for explaining examples of the data format of the captiondata referred to by the receiving device 3′.
- The system according to the present embodiment differs from the systems according to Embodiments 1 to 3 in that the HTML application presents the captions not on the basis of the Subtitle_info information but with reference to captiondata that includes information about the presentation time of the captions.
- The system 1′ includes a transmission device 2, a reception device 3′ (broadcast receiver), a content server 5, and a content server 6.
- The reception device 3′ includes a broadcast reception unit 31, an operation unit 32, a communication transmission/reception unit 33, a component demultiplexing unit 34, a video decoding unit 35, an audio decoding unit 36, a caption decoding unit 37, an application-related processing unit 38′, a display 39, a speaker 40, and a storage unit 41.
- The application-related processing unit 38′ acquires the SI information and the HTML application from the component demultiplexing unit 34.
- The application-related processing unit 38′ refers to the HTML application and specifies the components to be presented.
- The application-related processing unit 38′ acquires the video, audio, and caption components to be presented from the video decoding unit 35, the audio decoding unit 36, and the caption decoding unit 37, respectively, and presents them.
- The application-related processing unit 38′ will be described with reference to FIG. 20. As shown in FIG. 20, the application-related processing unit 38′ includes a middleware unit and a synthesis unit.
- The synthesis unit has the function of executing an HTML application.
- The synthesis unit executing the HTML application functions as a video/audio presentation processing unit 384 and a caption presentation processing unit 385′, as shown in FIG. 20.
- The subtitle presentation processing unit 385′ specifies the subtitle component to be presented with reference to the HTML application.
- The caption presentation processing unit 385′ acquires the caption component to be presented from the caption acquisition processing unit 383 and presents it.
- The synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of the HTML application that includes the script tag 212 shown in FIG. 21.
- When the caption acquisition processing unit 383 receives the caption stream of the program, the event registered by addCaptionListener is fired, and captiondata obtained by converting the received caption stream is notified to the caption presentation processing unit 385′.
- The data format of the captiondata is as shown in FIG. 22.
- That is, the captiondata is composed of, for the captions and each piece of subsample data, the presentation start time, the presentation period, an ID, a style element, the subtitle text data, and the subsample data (binary data).
- The presentation start time information is calculated from the Subtitle_info information 131 acquired by the SI information acquisition processing unit 381, from information extracted by the subtitle acquisition processing unit 383 from the MMTP packets of the subtitle component, and from data extracted by the subtitle acquisition processing unit 383 from the TTML document file; it may, however, be information calculated from any one of these sources.
- The caption presentation processing unit 385′ that has acquired the captiondata presents the captions and the other subsample data in the presentation method specified by the captiondata. That is, the subtitle text data and each piece of subsample data are presented from the presentation start time designated by the captiondata for the length of time designated by the captiondata.
- Here, the presentation start time and the length of the presentation period are sent as the time information; however, any information that can control the presentation time of the captions, such as a presentation start time and a presentation end time, may be used.
- Also, although the actual data of each caption is included in the captiondata and sent, storage location information indicating where the data is stored may be included and sent as a URL instead.
- (About another example of captiondata)
- The data format of the captiondata is not limited to that shown in FIG. 22; it may instead be as shown in FIG. 23.
- That is, the captiondata may include, for the captions and each piece of subsample data, meta information consisting of a presentation start time, a presentation period, an ID, and a style element.
- Here too, the presentation start time and the length of the presentation period are sent as the time information; any information that can control the presentation time of the captions, such as a presentation start time and a presentation end time, may be used.
- Also, although the actual data of each caption is included in the captiondata and sent, storage location information indicating where the data is stored may be included and sent as a URL instead. A hedged sketch of such timed captiondata follows.
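- The key names and units below are assumptions modeled on the descriptions of FIGS. 22 and 23, not the figures themselves:

```js
// captiondata carrying presentation timing per subsample (values invented).
var captiondata = {
  "subsamples": [
    {
      "id": 1,
      "presentation_start_time": "12:00:05.000", // when to start presenting
      "presentation_period": 4000,               // how long to present (ms assumed)
      "style": "bottom-center",                  // style element (value invented)
      "text": "Hello, world.",                   // subtitle text data
      "data": null                               // or Base64 data / URL for image, audio
    }
  ]
};
// A presentation routine would show each entry from its start time for the
// given period; a start/end-time pair could equally serve, as noted above.
```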
- The receiving device 3′ according to the present embodiment has the same effects as the receiving devices 3 according to Embodiments 1 to 3.
- In the embodiments described above, the synthesis unit presents the Subtitle_info information as it is, according to the description of the script.
- However, a function for converting the Subtitle_info information into wording that general viewers can understand may be incorporated into the scripts of FIGS. 11 and 13, and the synthesis unit may present that wording according to the script into which such a function has been incorporated.
- Likewise, although the synthesis unit presents the Subtitle_list information as it is according to the description of the script, a function for converting the Subtitle_list information into wording that general viewers can understand may be included in the scripts shown in FIGS. 11 and 13, and the synthesis unit may present that wording accordingly. A sketch of such a conversion function follows.
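- Such a conversion function might be sketched as follows; the mapping table and wording are invented, since the text only requires that raw Subtitle_info fields be turned into words general viewers understand:

```js
// Convert a raw Subtitle_info entry into viewer-friendly wording.
function describeSubtitleInfo(info) {
  var languageNames = { jpn: "Japanese", eng: "English" }; // table invented
  var language = languageNames[info.ISO_639_language_code] ||
                 info.ISO_639_language_code;
  return "Subtitles: " + language + " (" + info.resolution + ")";
}
```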
- The control blocks of the receiving devices 3 and 3′ (in particular, the SI information acquisition processing unit 381, the application data acquisition processing unit 382, the caption acquisition processing unit 383, the video/audio presentation processing unit 384, and the caption presentation processing units 385 and 385′) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
- In the latter case, the receiving devices 3 and 3′ include a CPU that executes the instructions of the program (software) realizing each function, a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU), and a RAM (Random Access Memory) into which the program is expanded.
- The computer (or CPU) reads the program from the recording medium and executes it, thereby achieving the object of one embodiment of the present invention.
- As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- The program may also be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) capable of transmitting it.
- One embodiment of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
- A broadcast receiver (receiving devices 3 and 3′) according to aspect 1 of the present invention is a broadcast receiver in which a lower-layer program (middleware) and an upper-layer program (synthesis unit) operate.
- It comprises a subtitle presentation unit (for example, the caption presentation processing unit 385), realized as a function of the upper-layer program, that presents the subtitles of a program on an HTML application, and an acquisition processing unit (for example, the SI information acquisition processing unit 381), realized as a function of the lower-layer program, that acquires meta information (for example, Subtitle_info information) indicating the subtitle presentation method from outside the broadcast receiver (from the transmission device 2 or 2′).
- The caption presentation unit acquires the meta information acquired by the acquisition processing unit and presents the captions with reference to that meta information.
- According to the above configuration, the broadcast receiver can present subtitles in the presentation method desired by the program distributor (or the provider of the HTML application).
- In the broadcast receiver according to aspect 2 of the present invention, in aspect 1, the caption presentation unit may acquire the meta information acquired by the acquisition processing unit when a predetermined event is fired.
- The broadcast receiver according to aspect 3 of the present invention is the broadcast receiver according to aspect 1 or 2, in which the caption presentation unit may acquire the meta information acquired by the acquisition processing unit when the program is selected.
- According to the above configuration, the broadcast receiver has the further effect that the user can see subtitles presented in the presentation method desired by the distributor (or the provider of the HTML application) immediately after starting to view the program.
- In the broadcast receiver according to aspect 4 of the present invention, the meta information acquired by the acquisition processing unit (information included in the captiondata contained in the caption stream) may include all or any part of the following: information for acquiring the subsample data of the TTML document (the file name of the TTML document), information for acquiring the image subsample data (image file names), information for acquiring the audio subsample data (audio file names), and information for acquiring the subsample data of external character fonts.
- According to the above configuration, the broadcast receiver has the further effect that the upper-layer program can present subtitles using images, sounds, and/or external character fonts.
- The broadcast receiver according to aspect 5 of the present invention is the broadcast receiver according to any one of aspects 1 to 4, in which the caption presentation unit presents, as the subtitles of the program, subtitles in the language specified by the user among a plurality of different languages; the acquisition processing unit acquires a list of the plurality of languages (Subtitle_list information) from outside the broadcast receiver; and the list may be presented together with the subtitles in the specified language.
- According to the above configuration, the broadcast receiver has the further effect that the user can check the list of languages selectable as the subtitle language.
- In the broadcast receiver according to aspect 6 of the present invention, the meta information acquired by the acquisition processing unit may be information related to the caption presentation time (for example, reference_start_time).
- According to the above configuration, the broadcast receiver has the further effect that the upper-layer program can present subtitles at the timing desired by the distributor (or the provider of the HTML application).
- The subtitle presentation method according to aspect 7 of the present invention is a subtitle presentation method for a broadcast receiver in which a lower-layer program and an upper-layer program operate: the upper-layer program presents the subtitles of a program on an HTML application, meta information indicating the subtitle presentation method is acquired from outside the broadcast receiver as a function of the lower-layer program, and the upper-layer program acquires that meta information and presents the subtitles with reference to it.
- The caption presentation method according to aspect 7 has the same operational effects as the broadcast receiver according to aspect 1.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
On this receiving device (3), middleware operates alongside an application (Hybridcast application) run by a synthesis unit. The middleware acquires, from a transmission device (2), meta information indicating the presentation method of the subtitles of a program. The Hybridcast application (the application run by the synthesis unit) acquires the meta information and presents the subtitles using the presentation method indicated by it.
Description
The present invention relates to a broadcast receiver that presents subtitles and to a subtitle presentation method using such a broadcast receiver.
In Hybridcast, a recently launched broadcasting/communication cooperation service, an extended API (Application Programming Interface) that can be controlled from an HTML (Hyper Text Markup Language) application is defined for accessing broadcast resources such as broadcast video and program information (see Non-Patent Documents 1 and 2).
In addition, broadcasting for 8K (and 4K), which is currently under development, adopts a new subtitle and superimposed-text technology that uses an encoding method extended from TTML (see Non-Patent Document 3).
However, in the conventional configuration described above, the functions provided by the extended API controllable from the HTML application (Hybridcast application) are not sufficient, and there is the problem that subtitles cannot be presented on the HTML application in the presentation method (presentation timing, presentation layout, etc.) desired by the program distributor (or the provider of the HTML application).
The present invention has been made in view of the above problems, and its main purpose is to realize a broadcast receiver in which an upper-layer program can present subtitles on an HTML application in the presentation method desired by the program distributor (or the provider of the HTML application).
In order to solve the above problems, a broadcast receiver according to one aspect of the present invention is a broadcast receiver in which a lower-layer program and an upper-layer program operate, comprising: a subtitle presentation unit, realized as a function of the upper-layer program, that presents the subtitles of a program on an HTML application; and an acquisition processing unit, realized as a function of the lower-layer program, that acquires meta information indicating the subtitle presentation method from outside the broadcast receiver. The subtitle presentation unit acquires the meta information acquired by the acquisition processing unit and presents the subtitles with reference to that meta information.
A broadcast receiver according to one aspect of the present invention has the effect that the upper-layer program can present subtitles on the HTML application using the presentation method desired by the program distributor (or the provider of the HTML application).
In this specification, MMT (MPEG Media Transport) is adopted as the transmission standard for programs (content). MMT is a media transport scheme used over IP (Internet Protocol) that enables media transmission over a variety of transmission paths. Specifically, by specifying a packet ID, an IP address, or a URL in control information, MMT can reference the components, such as video, audio, subtitles, and data, that constitute a program via different transmission paths. This specification also assumes 8K (or 4K) broadcasting realized by MMT.
[Embodiment 1]
A system including a reception device according to an embodiment of the present invention is described below with reference to the drawings.
[System overview and configuration]
The overview and configuration of the system are described with reference to FIG. 1 and FIG. 2.
FIG. 1 is a block diagram illustrating an example of the configuration of the system and of the main components of the transmission device and the reception device included in the system. FIG. 2 is a functional block diagram illustrating the functional configuration of an application-related processing unit provided in the reception device.
As shown in FIG. 1, the system 1 includes a transmission device 2, a reception device 3 (broadcast receiver), a content server 5, and a content server 6.
The transmission device 2 is a device that transmits content in which a plurality of components are assigned to one channel, and is managed by a broadcaster or the like.
The reception device 3 is a device that acquires an HTML application (Hybridcast application, structured document) via broadcast or communication and plays back the content by executing the acquired HTML application. The reception device 3 is, for example, a television receiver, a mobile phone (smartphone), a tablet terminal, a PC, or a portable game machine. The transmission device 2 and the reception device 3 are connected via the Internet. The transmission device 2 can transmit content and various other data to the reception device 3 via broadcast waves and the Internet.
The content server 5 is a server that stores the content transmitted by the transmission device 2. In the example illustrated in FIG. 1, the content server 5 is separate from the transmission device 2, but the transmission device 2 may include the content server 5.
The content server 6 is a server managed by the operator that manages the transmission device 2, or by an operator different from that operator (for example, an operator entrusted by the broadcaster that manages the transmission device 2). The reception device 3 can acquire content and various other data via the Internet not only from the transmission device 2 but also from the content server 6.
In the example shown in FIG. 1, the system 1 includes one each of the transmission device 2, the reception device 3, the content server 5, and the content server 6, but the system is not limited to this. The system 1 may include a plurality of each of the transmission device 2, the reception device 3, the content server 5, and the content server 6.
The configurations of the transmission device 2 and the reception device 3 constituting the system 1 are described below.
(Configuration of the transmission device 2)
As illustrated in FIG. 1, the transmission device 2 includes a component multiplexing unit 21, a broadcast transmission unit 22, and a communication transmission/reception unit 23.
The component multiplexing unit 21 packetizes components such as video, audio, subtitles, and data, multiplexes them into one stream, and transmits the stream to the reception device 3.
Specifically, the component multiplexing unit 21 acquires program content from the content server 5. The component multiplexing unit 21 then packetizes the control information of the acquired program content (SI information and the like, described later), the video, audio, and subtitles constituting the acquired program content, and the HTML application for presenting the program content, and multiplexes them into one stream. The component multiplexing unit 21 transmits the generated stream to the reception device 3 via the broadcast transmission unit 22 or the communication transmission/reception unit 23.
Note that the component multiplexing unit 21 may generate a plurality of streams (broadcast signals, described later) from the components, such as video, audio, and subtitles, constituting the program content and from the control information of the program content, transmit some of the streams to the reception device 3 via the broadcast transmission unit 22, and transmit the remaining streams to the reception device 3 via the communication transmission/reception unit 23. In addition, instead of the HTML application itself, the component multiplexing unit 21 may packetize storage destination information indicating where the HTML application is stored.
The broadcast transmission unit 22 transmits data (content and the like) via broadcast waves.
The communication transmission/reception unit 23 communicates with other devices, such as the reception device 3 or the content server 6, by wireless or wired communication means to exchange data.
(Configuration of the reception device 3)
As illustrated in FIG. 1, the reception device 3 includes a broadcast reception unit 31, an operation unit 32, a communication transmission/reception unit 33, a component demultiplexing unit 34, a video decoding unit 35, an audio decoding unit 36, a subtitle decoding unit 37, an application-related processing unit 38, a display 39, a speaker 40, and a storage unit 41.
The operation unit 32 receives operations from the user and outputs operation signals indicating the received operations. The operation unit 32 may be composed of input devices such as a keyboard, a mouse, a keypad, and operation buttons. It may also be a touch panel in which the operation unit 32 and the display 39 are integrated. Alternatively, the operation unit 32 may be a remote control device, such as a remote controller, separate from the reception device 3.
The broadcast reception unit 31 receives data via broadcast waves. The broadcast reception unit 31 is, for example, a broadcast reception tuner.
The communication transmission/reception unit 33 communicates with other devices, such as the transmission device 2 or the content server 6, by wireless or wired communication means to exchange data. The communication transmission/reception unit 33 is, for example, a LAN terminal or a wireless LAN interface.
The component demultiplexing unit 34 demultiplexes the stream received via the broadcast reception unit 31 and, based on the control information, outputs the demultiplexed video component to the video decoding unit 35, the demultiplexed audio component to the audio decoding unit 36, the demultiplexed subtitle component to the subtitle decoding unit 37, and the SI information and HTML application, described later, to the application-related processing unit 38. The component demultiplexing unit 34 may also output the operation signals output from the operation unit 32 to the application-related processing unit 38.
The video decoding unit 35 acquires the encoded video component from the component demultiplexing unit 34 and decodes it.
The audio decoding unit 36 acquires the encoded audio component from the component demultiplexing unit 34 and decodes it.
The subtitle decoding unit 37 acquires the encoded subtitle component from the component demultiplexing unit 34 and decodes it.
The application-related processing unit 38 acquires the SI information and the HTML application from the component demultiplexing unit 34. The application-related processing unit 38 refers to the HTML application and identifies the components to be presented.
FIG. 3 is a diagram illustrating the source (structured document) of an HTML5 application. Specifically, the application-related processing unit 38 identifies the components to be presented by referring to each param element constituting the content of the object element 211 of the structured document.
The application-related processing unit 38 then acquires the video component, audio component, and subtitle component to be presented from the video decoding unit 35, the audio decoding unit 36, and the subtitle decoding unit 37, respectively, presents the video and audio components, and presents the subtitle component using the presentation method determined based on the SI information. This SI information includes meta information indicating the subtitle presentation method (Subtitle_info information and the like, described later).
Detailed processing of the application-related processing unit 38 is described later with reference to other drawings.
The display 39 displays the video indicated by the video signal from the application-related processing unit 38. As the display 39, for example, an LCD (liquid crystal display), an organic EL display, or a plasma display can be used.
The speaker 40 outputs the sound indicated by the audio signal from the application-related processing unit 38.
(Application-related processing unit 38)
The application-related processing unit 38 is described using FIG. 2. FIG. 2 is a block diagram showing the main configuration of the application-related processing unit 38 in this embodiment. As shown in FIG. 2, the application-related processing unit 38 includes a middleware unit and a synthesis unit.
As shown in FIG. 2, the middleware unit functions as an SI information acquisition processing unit 381, an application data acquisition processing unit 382, and a subtitle acquisition processing unit 383. That is, the SI information acquisition processing unit 381, the application data acquisition processing unit 382, and the subtitle acquisition processing unit 383 are realized as functions of the middleware (the lower-layer program).
The SI information acquisition processing unit 381 (acquisition processing section) acquires the above-described SI information and stores it in the storage unit 41.
The application data acquisition processing unit 382 acquires operation signals and the HTML application.
The subtitle acquisition processing unit 383 acquires the decoded subtitle component.
The synthesis unit is a program in a layer above the middleware (for example, an HTML5 browser) that has the function of executing HTML applications. In this embodiment, the HTML application executed by the synthesis unit is an application that controls the presentation of video, audio, and subtitles according to its HTML description.
As shown in FIG. 2, the synthesis unit executing this HTML application functions as a video/audio presentation processing unit 384 and a subtitle presentation processing unit 385.
The video/audio presentation processing unit 384 refers to the HTML application and identifies the video and audio components to be presented. The video/audio presentation processing unit 384 acquires the identified video and audio components from the video decoding unit 35 and the audio decoding unit 36, respectively, and presents them.
The subtitle presentation processing unit 385 (subtitle presenting section) refers to the HTML application and identifies the subtitle component to be presented. The subtitle presentation processing unit 385 acquires the subtitle component to be presented from the subtitle acquisition processing unit 383 and presents it.
Specifically, when the user selects a program, the subtitle presentation processing unit 385 acquires the SI information related to that program (specifically, the Subtitle_info information) from the SI information acquisition processing unit 381, and thereafter presents the subtitle component using the presentation method indicated by that Subtitle_info information. When the broadcast signal does not include an HTML application, the video/audio presentation processing unit 384 and the subtitle presentation processing unit 385 acquire the video, audio, and subtitle components to be presented according to the SI information related to the program, and present each component according to the SI information and the presentation information included in the subtitle component.
(The tuneTo method)
When the user performs an operation on the HTML application to select a program, the HTML application calls the tuneTo method. FIG. 4 is a diagram showing two examples of the tuneTo method called by the HTML application.
As can be seen from FIG. 4, each tuneTo method shown in FIG. 4 is an extension of the tuneTo method disclosed in Non-Patent Document 1. That is, the Subtitle_info information is included in the return value of the tuneTo method. Therefore, whichever tuneTo method it calls, the HTML application can obtain the Subtitle_info information at that point in time as the return value. Here, the user performs the channel-selection operation, but the tuneTo method may instead be called by describing the channel-selection operation in the HTML application so that it runs when the HTML application is loaded.
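As a minimal sketch of the flow just described (FIG. 4 itself is not reproduced here), the following JavaScript shows a channel selection that reads Subtitle_info from the return value of the extended tuneTo method. The element id "bv", the parameter list, and the field names of the returned object are illustrative assumptions, not the normative API.

```javascript
// Illustrative sketch: calling the extended tuneTo method and reading the
// Subtitle_info information returned at that point in time.
// "bv" is assumed to be the broadcast video/audio object in the HTML app.
var broadcastObject = document.getElementById("bv");

function selectChannel(originalNetworkId, transportStreamId, serviceId) {
  // Hypothetical parameter list; the extended tuneTo is assumed to return
  // the current Subtitle_info information as described above.
  var subtitleInfo = broadcastObject.tuneTo(originalNetworkId, transportStreamId, serviceId);
  if (subtitleInfo) {
    console.log("subtitle language:", subtitleInfo.ISO_639_language_code);
    console.log("time control mode (TMD):", subtitleInfo.TMD);
  }
}
```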
(Example data structure of the broadcast signal)
A broadcast signal including the components presented by the HTML application and the SI information is described with reference to FIG. 5. FIG. 5 is a diagram illustrating the data structure of a broadcast signal.
The broadcast signal 100 shown in FIG. 5 is the content transmission unit according to this embodiment. As shown in FIG. 5, the broadcast signal 100 includes the components of the program (a subtitle asset 151, a video asset 152, an audio asset 153, and a data asset 154), a PA message 110, and an M2 section message. The PA message 110 and the M2 section message are SI information (MMT-SI).
The subtitle asset 151 includes a TTML document file in which the subtitle text is described.
The data asset 154 includes the HTML application. For example, when the broadcast signal 100 does not include the data asset 154, the MPT 111 may include information indicating the storage location of the data asset 154 (HTML application), for example, URL information pointing to the content server 6.
In MMT, components such as video and audio are defined as assets. That is, a video asset is a video component, an audio asset is an audio component, and a subtitle asset is a subtitle component. Each asset is given a component tag for distinguishing it from other assets.
The M2 section message is information used to transmit the section extension format of MPEG-2 Systems. In this embodiment, the M2 section message includes the MH-AIT. The MH-AIT is an AIT (Application Information Table) for MMT.
The PA message 110 is control information indicating the configuration of the assets and the like. In MMT, a message includes tables having elements and attributes indicating specific information, and a table includes descriptors indicating more detailed information. Specifically, the PA message 110 includes an MPT (MMT Package Table) 111, as shown in FIG. 5.
The MPT 111 indicates the information constituting a package, such as the list of assets, the packet IDs identifying the MMT packets containing those assets, and the locations of the broadcast signals. By analyzing the MPT 111, the assets constituting the program can be identified.
As shown in FIG. 5, the MPT 111 includes Data_Component_Descriptor information 130. The Data_Component_Descriptor information 130 indicates the list of components constituting the program and includes the component tag of each component. As shown in FIG. 5, the aforementioned Subtitle_info information (Subtitle_info information 131) is included in this Data_Component_Descriptor information 130.
FIG. 6 is a diagram illustrating the data structure of the Subtitle_info information 131 (subtitle presentation control information). As shown in FIG. 6, the Subtitle_info information 131 includes ISO_639_language_code (language code), type (subtitle type), OPM (operation mode), TMD (time control mode), DMF (display mode), and resolution (display resolution mode), and, when the value of the time control mode is 0010, further includes reference_start_time (the reference start time, that is, the UTC time serving as the origin of the time codes in the TTML document).
Those skilled in the technical field of the present invention who are familiar with the ARIB standards can clearly understand the meaning of these items, so they are not described in detail here.
When N subtitle components constitute a program (for example, two subtitle components consisting of a Japanese subtitle component and an English subtitle component), N pieces of Subtitle_info information 131, each tied to one of the subtitle components, are included in the MPT 111.
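For orientation only, the sketch below renders the fields of FIG. 6 as a JavaScript object such as the HTML application might receive. The field names follow FIG. 6, but every value shown, and the choice to encode TMD as a bit string, is an invented assumption for illustration.

```javascript
// Hypothetical Subtitle_info information 131 as a JavaScript object.
// Field names follow FIG. 6; all values are invented for illustration.
var subtitleInfoExample = {
  ISO_639_language_code: "jpn", // language code
  type: 0,                      // subtitle type
  OPM: 0,                       // operation mode
  TMD: "0010",                  // time control mode
  DMF: 0,                       // display mode
  resolution: 0,                // display resolution mode
  // Present only when the time control mode is 0010:
  reference_start_time: "2016-07-01T12:00:00Z" // UTC origin of TTML time codes
};
```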
(An example data format of the subtitle data (captiondata))
Next, an example of the data format of captiondata is described with reference to FIG. 7.
FIG. 7 is a diagram for explaining an example of the data format of the captiondata that the subtitle presentation processing unit 385 according to this embodiment acquires from the subtitle acquisition processing unit 383. FIG. 7(a) shows the definition of the captiondata data format, and FIG. 7(b) shows an example of captiondata.
For the values indicated by data in FIG. 7(b) (0.ttml, 1.png, 2.aif), the corresponding subtitle data (text data) is converted to JSON format; binary data (png data, aif data, and the like) may be encoded in Base64 or the like and then converted to JSON format.
The subtitle stream acquired by the subtitle acquisition processing unit 383 includes the meta information constituting the captiondata (information for acquiring the subsample data and the like), and the captiondata is generated from the subtitle stream by the subtitle acquisition processing unit 383.
As can be seen from FIG. 7, in this embodiment the captiondata is described in JSON format. FIG. 7(b) shows an example of captiondata obtained by converting the subtitle component acquired from the subtitle acquisition processing unit 383 (the files of each subsample data item, such as the TTML document describing the subtitles to be presented by the broadcast receiver, subtitle images, and subtitle audio) into JSON format in accordance with the captiondata data format definition of FIG. 7(a).
Note that the captiondata need not be described in JSON format. That is, the captiondata may have any data structure that includes, for each piece of information constituting the captiondata, the type of that information, the length of the field in which it is stored, and the value stored in that field.
The information for acquiring the subsample data included in the captiondata may be only the file name of the TTML document. Alternatively, as the information for acquiring the subsample data, all or any part of the file names of subtitle images, subtitle audio, and external character fonts may be included in the captiondata in addition to the file name of the TTML document.
In this embodiment, the TTML document is converted into JSON format and sent, but a version obtained by analyzing the TTML document and extracting only the relevant text data may be used instead.
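Since FIG. 7 itself is not reproduced here, the following sketch shows one plausible shape of a JSON-format captiondata object along the lines of the description above. The key names ("subsamples", "no", "name", "type", "data") and the truncated Base64 payloads are assumptions for illustration, not the format defined in FIG. 7(a).

```javascript
// Hypothetical captiondata in JSON format, modeled on the description of
// FIG. 7(b): one TTML subsample plus binary subsamples encoded in Base64.
var captiondataExample = {
  "subsample_count": 3,
  "subsamples": [
    { "no": 0, "name": "0.ttml", "type": "ttml",
      "data": "<tt xmlns=\"http://www.w3.org/ns/ttml\">...</tt>" },
    { "no": 1, "name": "1.png", "type": "png",
      "data": "iVBORw0KGgo..." },  // Base64-encoded subtitle image
    { "no": 2, "name": "2.aif", "type": "aif",
      "data": "Rk9STQAA..." }      // Base64-encoded subtitle audio
  ]
};
```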
(Operation of the HTML application at channel selection)
The operation of the HTML application at channel selection is described with reference to FIG. 3 and FIGS. 7 to 9. FIG. 8 is an example description of the script tag 212 for presenting the Subtitle_info information 131 using the tuneTo method. FIG. 9 is a diagram illustrating data in TTML format.
As can be seen from FIG. 3, the source of the HTML application contains a script tag 212.
When a program-selection operation is performed on the HTML application, the synthesis unit executing the HTML application whose source contains the script tag 212 shown in FIG. 8 executes the tuneTo method.
The synthesis unit (video/audio presentation processing unit 384) presents the video and audio of the selected program, and the synthesis unit (subtitle presentation processing unit 385) presents the Subtitle_info information 131 included in the SI information of the selected program obtained from the return value of the tuneTo method (the SI information acquired by the SI information acquisition processing unit 381).
Although not shown in FIG. 8, the subtitle presentation processing unit 385 controls the presentation of subtitles according to this Subtitle_info information 131 and the HTML description.
Specifically, immediately after a program is selected, the subtitle presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can reference, and presents each piece of subtitle data according to the Subtitle_info information 131 and the HTML description.
That is, the subtitle presentation processing unit 385 presents each piece of subtitle data on the HTML application (within the screen of the HTML application).
(Operation of the HTML application when a subtitle stream is received)
The operation when the HTML application presents each subtitle, triggered by the middleware receiving a subtitle stream, is described with reference to FIG. 7, FIG. 9, and FIG. 21.
The script tag 212 included in the source of the HTML application according to this embodiment is also shown in FIG. 21. That is, the script tag 212 contains not only a description related to the tuneTo method but also a description related to the event listener addCaptionListener.
The synthesis unit registers the event listener addCaptionListener with the broadcast video/audio object when execution of the HTML application starts. As a result, when the subtitle acquisition processing unit 383 acquires a subtitle stream, the event registered with addCaptionListener fires, and the acquired subtitle stream is converted into the captiondata data format and notified to the subtitle presentation processing unit 385. The subtitle presentation processing unit 385 presents each piece of subtitle data in the captiondata according to the Subtitle_info information 131 included in the SI information obtained from the return value of the tuneTo method and according to the HTML description.
For example, when the subtitle data to be presented is indicated by the captiondata of FIG. 7(b), the subtitle presentation processing unit 385 acquires the 0.ttml file (the TTML document file illustrated in FIG. 9), the 1.png file (subtitle image file), and the 2.aif file (subtitle audio file) included in the captiondata, and presents the subtitle character strings described in the TTML document, the subtitle image, and the subtitle audio according to the Subtitle_info information 131 and the HTML description.
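A minimal sketch of this event-driven flow follows, assuming the addCaptionListener interface described above and a captiondata shape like the hypothetical example given earlier. The element ids "bv" and "captionLayer" and the single-argument callback signature are assumptions for illustration.

```javascript
// Illustrative sketch: registering addCaptionListener and presenting the
// subsamples carried in captiondata (TTML text and a Base64 PNG image).
var broadcastObject = document.getElementById("bv");
var captionLayer = document.getElementById("captionLayer"); // a <div> in the app

broadcastObject.addCaptionListener(function (captiondata) {
  captiondata.subsamples.forEach(function (s) {
    if (s.type === "ttml") {
      // Parse the TTML document and show each <p> as subtitle text.
      var doc = new DOMParser().parseFromString(s.data, "application/xml");
      var paragraphs = doc.getElementsByTagName("p");
      for (var i = 0; i < paragraphs.length; i++) {
        captionLayer.textContent = paragraphs[i].textContent;
      }
    } else if (s.type === "png") {
      // Display the Base64-encoded subtitle image via a data URL.
      var img = document.createElement("img");
      img.src = "data:image/png;base64," + s.data;
      captionLayer.appendChild(img);
    }
  });
});
```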
(Advantages of the reception device 3)
As described above, the reception device 3 can present each piece of subtitle data of a program according to the Subtitle_info information 131 and the HTML description.
Accordingly, it can be said that the reception device 3 can present each piece of subtitle data on the HTML application in the manner desired by the program distributor (or the provider of the HTML application).
(Another example of the captiondata data format)
Another example of the data format of captiondata is described with reference to FIG. 10 and FIG. 11.
FIG. 10 and FIG. 11 are diagrams for explaining other examples of the captiondata data format. FIG. 10(a) shows the definition of the captiondata data format according to another example, and FIG. 10(b) shows an example of that captiondata.
The captiondata of FIG. 10 differs from the captiondata of FIG. 7 in the following respect: reference_start_time, rather than the MPU Time Stamp, is included in the captiondata as subtitle meta information.
Yet another example of the captiondata data format is described with reference to FIG. 11. FIG. 11(a) shows the definition of the captiondata data format according to yet another example, and FIGS. 11(b) and 11(c) show examples of such captiondata.
The captiondata of FIG. 11 differs from the captiondata of FIG. 10 in the following respect: the information for acquiring each subsample data item is described not as the data itself but as a URL indicating the storage location where that data is stored.
The URL may be described, for example, in the format "http://localhost/<service_id>/<asset_id>/<mpu_sequence_number>/<subsample_no>", as shown in FIG. 11(b). That is, the URL may be a URL indicating where each subsample data item can be acquired.
Alternatively, it may be described, for example, in the format "http://localhost/<service_id>/<asset_id>/<mpu_sequence_number>/", as shown in FIG. 11(c). That is, the URL may be a URL indicating where the TTML document file can be acquired.
That is, based on the captiondata of FIG. 11(c), the subtitle presentation processing unit 385 can acquire and analyze the TTML document file, and can then acquire the other subsample data by referring to the URL and to the information, indicated in the TTML document file, for acquiring the other subsample data.
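The sketch below illustrates the URL-based variant just described, building a subsample URL in the FIG. 11(b) format and fetching it. The numeric identifiers and the helper name subsampleUrl are invented for illustration; only the URL format itself comes from the description above.

```javascript
// Illustrative sketch: composing a subsample URL in the format
// "http://localhost/<service_id>/<asset_id>/<mpu_sequence_number>/<subsample_no>"
// and fetching the referenced subsample data.
function subsampleUrl(serviceId, assetId, mpuSequenceNumber, subsampleNo) {
  return "http://localhost/" + serviceId + "/" + assetId + "/" +
         mpuSequenceNumber + "/" + subsampleNo;
}

// e.g. fetch the TTML document (subsample 0) of a hypothetical MPU
var url = subsampleUrl(1024, 2, 57, 0);
var xhr = new XMLHttpRequest();
xhr.open("GET", url);
xhr.onload = function () {
  var ttmlText = xhr.responseText; // TTML document to be parsed and presented
};
xhr.send();
```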
Note that the subtitle presentation processing unit 385 may acquire the data of FIG. 10 and the data of FIG. 11 not in the form of the captiondata return value of the event listener addCaptionListener, but in the form of a JSON message using WebSocket or the like.
[Embodiment 2]
Another embodiment of the present invention is described below with reference to FIG. 12 and FIG. 13. For convenience of explanation, members having the same functions as members described in the preceding embodiment are given the same reference numerals, and their description is omitted.
FIG. 12 is a diagram illustrating the getCurrentSubtitleInformation method, which the HTML application calls when a predetermined event fires. FIG. 13 is a diagram schematically illustrating details of the script tag 212 included in the source of the HTML application according to this embodiment.
[System overview and configuration]
The system according to this embodiment is the same as the system according to Embodiment 1, except that the trigger for acquiring the Subtitle_info information is different.
Since the system according to this embodiment is configured in the same manner as the system according to Embodiment 1, a description of its configuration is omitted.
(The getCurrentSubtitleInformation method)
FIG. 12 shows an example definition of the getCurrentSubtitleInformation method. As can be seen from FIG. 12, when a predetermined event fires, the HTML application can obtain the Subtitle_info information at that point in time as the return value of the getCurrentSubtitleInformation method.
The predetermined event may be, for example, an event such as "reception of SI information including Subtitle_info information", "update of the Subtitle_info information (reception of SI information containing Subtitle_info information whose content differs from the Subtitle_info information included in the previously received SI information)", or "switching of the selected subtitle language by the user". Of these, the occurrence of a predetermined event due to reception of SI information is notified from the middleware unit (SI information acquisition processing unit 381) to the synthesis unit.
(Operation of the HTML application at channel selection)
The operation that the HTML application executes, triggered by a predetermined event firing during program playback, is described with reference to FIG. 13. FIG. 13 is an example description of the script tag 212 for presenting the Subtitle_info information 131 using the getCurrentSubtitleInformation method.
When a predetermined event fires, the synthesis unit executing the HTML application whose source contains the script tag 212 shown in FIG. 13 executes the getCurrentSubtitleInformation method and acquires the Subtitle_info information 131.
The synthesis unit (subtitle presentation processing unit 385) presents the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381.
Although not shown in FIG. 13, the subtitle presentation processing unit 385 controls the presentation of subtitles according to this Subtitle_info information 131 and the HTML description.
Specifically, immediately after being notified of the occurrence of a predetermined event by the middleware or by a user operation, the subtitle presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can reference, and presents each piece of subtitle data according to the Subtitle_info information 131 and the HTML description.
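As a sketch of this embodiment's flow, the following shows Subtitle_info being re-read when a predetermined event fires. The event name "SubtitleInfoChanged", the addEventListener wiring, and the helper applySubtitlePresentation are all hypothetical; the source only states that the event occurrence is notified to the synthesis unit.

```javascript
// Illustrative sketch: re-reading Subtitle_info when a predetermined event
// (e.g. an update of the Subtitle_info information) is notified to the app.
var broadcastObject = document.getElementById("bv");

broadcastObject.addEventListener("SubtitleInfoChanged", function () {
  // Returns the Subtitle_info information at this point in time.
  var subtitleInfo = broadcastObject.getCurrentSubtitleInformation();
  applySubtitlePresentation(subtitleInfo);
});

function applySubtitlePresentation(info) {
  // Application-defined presentation control based on the updated meta
  // information (language, display mode, and so on).
  console.log("subtitle language is now:", info.ISO_639_language_code);
}
```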
(Advantages of the reception device 3)
The reception device 3 according to this embodiment provides the same effects as the reception device 3 according to Embodiment 1. Furthermore, even when the delivered Subtitle_info information 131 is updated during the broadcast period of a program, or when the selected language of the presented subtitles is switched by a user operation, each piece of subtitle data can be presented in the manner desired by the program distributor (or the provider of the HTML application).
(Additional notes on Embodiment 2)
The getCurrentSubtitleInformation method need not be a method that the HTML application calls when an event fires. That is, the timing at which the getCurrentSubtitleInformation method is called may depend on the implementation of the HTML application source.
For example, the source of the HTML application may be implemented so that the getCurrentSubtitleInformation method is called when a certain button on the application screen (for example, a button for switching subtitles) is operated.
[Embodiment 3]
Yet another embodiment of the present invention is described below with reference to FIG. 14 and FIG. 15. For convenience of explanation, members having the same functions as members described in the preceding embodiments are given the same reference numerals, and their description is omitted.
FIG. 14 is a diagram schematically illustrating the definition of the broadcast video/audio object according to this embodiment. FIG. 15 is a diagram schematically illustrating details of the script tag 212 included in the source of the HTML application according to this embodiment.
[System overview and configuration]
The system according to this embodiment is the same as the system according to Embodiment 2, except that the HTML application acquires the Subtitle_info information via the event listener when the middleware receives a subtitle stream.
Since the system according to this embodiment is configured in the same manner as the system according to Embodiment 1, a description of its configuration is omitted.
(The broadcast video/audio object)
The broadcast video/audio object shown in FIG. 14 is an extension of the broadcast video/audio object of Non-Patent Document 1. As can be seen from FIG. 14, in the broadcast video/audio object according to this embodiment, the Subtitle_info information is included in the return value of the event listener addCaptionListener.
(Operation of the HTML application when a subtitle stream is received)
The operation that the HTML application executes, triggered by the middleware receiving a subtitle stream (subtitle component), is described with reference to FIG. 15. FIG. 15 is an example of a script tag that registers the event listener addCaptionListener with the broadcast video/audio object.
The synthesis unit registers the event listener addCaptionListener with the broadcast video/audio object when it starts executing the HTML application whose source contains the script tag 212 shown in FIG. 15.
As a result, when the subtitle acquisition processing unit 383 receives a subtitle stream, the event registered with addCaptionListener fires, and the Subtitle_info information 131 read from the storage unit 41 and the captiondata are notified to the subtitle presentation processing unit 385. The subtitle presentation processing unit 385 presents each piece of subtitle data in the subtitle stream according to the notified Subtitle_info information 131 and the HTML description.
Specifically, immediately after the middleware receives a subtitle stream, the subtitle presentation processing unit 385 records the Subtitle_info information 131 included in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can reference, and presents each piece of subtitle data in the subtitle stream according to the Subtitle_info information 131 and the HTML description.
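A sketch of this embodiment's listener follows. Since FIG. 14 states that Subtitle_info is included in the return value of addCaptionListener, the callback here is assumed to receive the meta information together with the captiondata; this two-argument shape is one plausible reading, not the normative interface.

```javascript
// Illustrative sketch: an addCaptionListener callback that is assumed to
// receive the Subtitle_info information along with the captiondata, so
// presentation can follow the latest meta information per subtitle stream.
var broadcastObject = document.getElementById("bv");

broadcastObject.addCaptionListener(function (subtitleInfo, captiondata) {
  // Choose presentation parameters from the notified meta information.
  var language = subtitleInfo.ISO_639_language_code;
  captiondata.subsamples.forEach(function (s) {
    if (s.type === "ttml") {
      console.log("presenting " + language + " subtitles from " + s.name);
    }
  });
});
```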
(Advantages of the reception device 3)
The reception device 3 according to this embodiment provides the same effects as the reception devices 3 according to Embodiments 1 and 2.
[Embodiment 4]
Yet another embodiment of the present invention is described below with reference to FIG. 16 and FIG. 17. For convenience of explanation, members having the same functions as members described in the preceding embodiments are given the same reference numerals, and their description is omitted.
FIG. 16(a) schematically illustrates the definition of the broadcast video/audio object according to this embodiment, and FIG. 16(b) schematically illustrates the definition of the subtitle list information according to this embodiment. FIG. 17 is a diagram schematically illustrating details of the script tag 212 included in the source of the HTML application according to this embodiment.
[System overview and configuration]
The system according to this embodiment differs from the system according to Embodiment 1 in that the HTML application of the reception device acquires Subtitle_list information (subtitle list information), rather than Subtitle_info information, at a timing that depends on the implementation of its source.
Since the system according to the present embodiment is configured in the same manner as the system according to Embodiment 1, a description of its configuration is omitted.
(About the broadcast video/audio object)
The broadcast video/audio object shown in FIG. 16(a) is an extension of the broadcast video/audio object of Non-Patent Document 1. As can be seen from FIG. 16(a), the broadcast video/audio object according to the present embodiment has a getSubtitleList method.
The getSubtitleList method is a method for acquiring Subtitle_list information. The Subtitle_list information is generated from the Subtitle_info information 131 and the SI information included in the MPT.
As shown in FIG. 16(b), the Subtitle_list information consists of a language code (ISO_639_language_code) indicating the language of a subtitle stream and a URL indicating the location where that subtitle stream is stored.
If there are N subtitle streams (subtitle components) constituting the program, the Subtitle_list information contains N such combinations.
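As a sketch only, the Subtitle_list information handed to the HTML application could therefore be modeled as an array like the following; the property names and URLs are assumptions drawn from FIG. 16(b), not a normative definition.

```js
// Hypothetical shape of Subtitle_list information for a program carrying
// two subtitle components; each entry pairs a language code with the URL
// where the corresponding subtitle stream is stored.
var subtitleList = [
  { ISO_639_language_code: "jpn", url: "http://example.com/subtitles/jpn" },
  { ISO_639_language_code: "eng", url: "http://example.com/subtitles/eng" }
];
// For a program with N subtitle components the array would hold N entries.
```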
(Operation when the HTML application starts)
The operation that the HTML application executes at startup is described with reference to FIG. 17.
The synthesis unit executes processing when the getSubtitleList method is called in an HTML application whose source includes the script tag 212 shown in FIG. 17.
Thereby, the synthesis unit (caption presentation processing unit 385) presents the Subtitle_list information contained in the SI information acquired by the SI information acquisition processing unit 381 while presenting the video, audio, and captions of the program. Although the above describes an example in which the information contained in Subtitle_list is displayed as text, the receiving device 3 may instead define an object that stores the information contained in Subtitle_list and store the acquired information in the storage unit 41 as that object.
Note that when there are N subtitle streams constituting the program, the captions presented by the caption presentation processing unit 385 are those in the language specified by the user among the plurality of different languages.
By viewing the Subtitle_list information presented by the synthesis unit, the user can see which other languages can be selected as the caption language.
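Since FIG. 17 itself is not reproduced here, the following is only a sketch of what a startup script calling getSubtitleList and presenting the result as text might look like; bcObject and showText are hypothetical names.

```js
// Acquire the Subtitle_list information at application startup and present
// the selectable caption languages as text alongside the program.
var list = bcObject.getSubtitleList();
var langs = list.map(function (entry) {
  return entry.ISO_639_language_code; // e.g. "jpn", "eng"
});
showText("Selectable subtitle languages: " + langs.join(", "));
```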
(Additional notes on Embodiment 4)
A receiving device 3 that combines the features of Embodiment 1 with those of Embodiment 4 also falls within the scope of the broadcast receiver according to the present invention. The caption presentation processing unit 385 of such a receiving device 3 acquires both the Subtitle_info information and the Subtitle_list information, and presents the Subtitle_list information while presenting captions by the presentation method indicated by the Subtitle_info information.
[Embodiment 5]
Still another embodiment of the present invention is described below with reference to FIGS. 18 and 19. For convenience of explanation, members having the same functions as those described in the preceding embodiments are given the same reference numerals, and their descriptions are omitted.
FIG. 18 schematically illustrates the definition of the broadcast video/audio object according to the present embodiment. FIG. 19 schematically illustrates the details of the script tag 212 in the source of the HTML5 application according to the present embodiment.
[System overview and configuration]
The system according to the present embodiment is the same as the system according to Embodiment 4, except that when the middleware receives a subtitle stream, the HTML application acquires the caption list information via an event listener.
Since the system according to the present embodiment is configured in the same manner as the system according to Embodiment 1, a description of its configuration is omitted.
(About the broadcast video/audio object)
The broadcast video/audio object shown in FIG. 18 is an extension of the broadcast video/audio object of Non-Patent Document 1. As can be seen from FIG. 18, in the broadcast video/audio object according to the present embodiment, the return value of the event listener addCaptionListener includes the caption list information.
(Operation of the HTML application when a subtitle stream is received)
The operation that the HTML application executes, triggered by the middleware receiving a subtitle stream, is described with reference to FIG. 19.
The synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of an HTML application whose source includes the script tag 212 shown in FIG. 19.
Thereby, when the caption acquisition processing unit 383 receives the caption stream, the event registered with addCaptionListener fires; the Subtitle_info information 131, the caption list information, and the captiondata read from the storage unit 41 are passed to the caption presentation processing unit 385; and the caption presentation processing unit 385 presents the caption list information contained in the SI information acquired by the SI information acquisition processing unit 381.
Specifically, immediately after the middleware unit receives the caption stream, the caption presentation processing unit 385 records the caption list information contained in the SI information acquired by the SI information acquisition processing unit 381 in a memory area of the storage unit 41 that the HTML application can reference. The caption presentation processing unit 385 then presents the caption list information recorded in that memory area.
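As with FIG. 19, only a sketch can be given here; assuming the extended addCaptionListener of FIG. 18 additionally delivers the caption list information to the handler (the argument order and names are assumptions), the script might look like this.

```js
// Event-driven variant: the caption list information arrives together with
// the Subtitle_info information and captiondata when the middleware
// receives a subtitle stream.
bcObject.addCaptionListener(function (subtitleInfo, subtitleList, captiondata) {
  // The middleware has already recorded subtitleList in a memory area that
  // the HTML application can reference; here it is simply presented.
  var langs = subtitleList.map(function (e) { return e.ISO_639_language_code; });
  showText("Subtitle languages: " + langs.join(", "));
});
```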
(Advantages of the receiving device 3)
The receiving device 3 according to the present embodiment achieves the same effects as the receiving device 3 according to Embodiment 4.
[Embodiment 6]
Still another embodiment of the present invention is described below with reference to FIGS. 20 to 23. For convenience of explanation, members having the same functions as those described in the preceding embodiments are given the same reference numerals, and their descriptions are omitted.
FIG. 20 is a functional block diagram showing the main configuration of the application-related processing unit 38' included in the receiving device 3' of the system 1' according to the present embodiment. FIG. 21 schematically illustrates the details of the script tag 212 in the source of the HTML5 application executed by the receiving device 3'. FIGS. 22 and 23 are diagrams for explaining examples of the data format of the captiondata referred to by the receiving device 3'.
[System overview and configuration]
The system according to the present embodiment differs from the systems according to Embodiments 1 to 3 in that the HTML application presents captions not on the basis of Subtitle_info information but by referring to captiondata, which includes information on the presentation time of the captions.
As shown in FIG. 1, the system 1' includes a transmission device 2, a reception device 3' (broadcast receiver), a content server 5, and a content server 6.
(Configuration of the receiving device 3')
As shown in FIG. 1, the receiving device 3' includes a broadcast reception unit 31, an operation unit 32, a communication transmission/reception unit 33, a component demultiplexing unit 34, a video decoding unit 35, an audio decoding unit 36, a caption decoding unit 37, an application-related processing unit 38', a display 39, a speaker 40, and a storage unit 41.
The application-related processing unit 38' acquires the SI information and the HTML application from the component demultiplexing unit 34, and refers to the HTML application to identify the components to be presented.
The application-related processing unit 38' acquires the video component, audio component, and caption component to be presented from the video decoding unit 35, the audio decoding unit 36, and the caption decoding unit 37, respectively, and presents them.
Detailed processing of the application-related processing unit 38' is described later with reference to other drawings.
(Application-related processing unit 38')
The application-related processing unit 38' is described with reference to FIG. 20. As shown in FIG. 20, the application-related processing unit 38' includes a middleware unit and a synthesis unit.
The synthesis unit has a function of executing an HTML application.
As shown in FIG. 20, the synthesis unit executing this HTML application functions as a video/audio presentation processing unit 384 and a caption presentation processing unit 385'.
The caption presentation processing unit 385' refers to the HTML application to identify the caption component to be presented, acquires that caption component from the caption acquisition processing unit 383, and presents it.
Specifically, as can be seen from FIG. 21, the synthesis unit registers the event listener addCaptionListener in the broadcast video/audio object at the start of execution of an HTML application whose source includes the script tag 212 shown in FIG. 21. Thereby, when the caption acquisition processing unit 383 receives the caption stream of the program, the event registered with addCaptionListener fires, and captiondata obtained by converting the received caption stream is passed to the caption presentation processing unit 385'. The data format of this captiondata is as shown in FIG. 22.
That is, as shown in FIG. 22, the captiondata consists of the presentation start time, presentation period, ID, and style element of the caption and of each piece of subsample data, together with the caption body text and the respective subsample data (binary data).
The presentation start time information is calculated from the Subtitle_info information 131 acquired by the SI information acquisition processing unit 381, from information that the caption acquisition processing unit 383 extracts from the MMTP packets of the caption component, and from information calculated from data that the caption acquisition processing unit 383 extracts from the TTML document file; it may, however, be information calculated from any one of these.
Having acquired this captiondata, the caption presentation processing unit 385' presents the caption and the other subsample data by the presentation method the captiondata specifies. That is, it presents the caption body text and each piece of subsample data from the presentation start time specified by the captiondata, for the length of time specified by the captiondata.
Although in this example the presentation start time and the length of the presentation period are sent as the time information, a presentation start time and a presentation end time may be used instead; any information that can control the presentation time of the captions suffices. Also, although in this example the actual data of each caption is included in the captiondata, the storage location of that data may instead be included and sent as a URL.
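A minimal sketch of such timing-controlled presentation is given below; the field names (startTime, duration, text, style), the millisecond units, and the helpers showCaption/hideCaption are all assumptions, since the patent only specifies that a presentation start time and a presentation period (or an end time) are carried.

```js
// Present the caption body from the designated start time for the
// designated presentation period, then remove it again.
function presentByCaptiondata(cd, nowMs) {
  var delay = Math.max(cd.startTime - nowMs, 0); // wait until the start time
  setTimeout(function () {
    showCaption(cd.text, cd.style);              // hypothetical renderer
    setTimeout(hideCaption, cd.duration);        // hide after the period
  }, delay);
}
```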
(Another example of captiondata)
The data format of captiondata is not limited to that shown in FIG. 22; it may instead be as shown in FIG. 23.
That is, as shown in FIG. 23, the captiondata may include, for the caption and each piece of subsample data, the meta information of a presentation start time, a presentation period, an ID, and a style element.
Again, although in this example the presentation start time and the length of the presentation period are sent as the time information, a presentation start time and a presentation end time may be used instead; any information that can control the presentation time of the captions suffices. Also, although in this example the actual data of each caption is included in the captiondata, the storage location of that data may instead be included and sent as a URL.
(Advantages of the receiving device 3')
The receiving device 3' according to the present embodiment achieves the same effects as the receiving devices 3 according to Embodiments 1 to 3.
(Other supplementary notes)
In the examples of FIGS. 11 and 13, the synthesis unit presents the Subtitle_info information as-is, in accordance with the description of the script.
In this regard, the scripts of FIGS. 11 and 13 may incorporate a function that converts the Subtitle_info information into wording that general consumers can understand, and the synthesis unit may present that wording in accordance with a script incorporating such a function.
In the example of FIG. 17, the synthesis unit presents the Subtitle_list information as-is, in accordance with the description of the script.
Likewise, the script of FIG. 17 may incorporate a function that converts the Subtitle_list information into wording that general consumers can understand, and the synthesis unit may present that wording in accordance with a script incorporating such a function.
[Example of software implementation]
The control blocks of the receiving devices 3 and 3' (in particular, the SI information acquisition processing unit 381, the application data acquisition processing unit 382, the caption acquisition processing unit 383, the video/audio presentation processing unit 384, and the caption presentation processing units 385 and 385') may be realized by logic circuits (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
In the latter case, the receiving devices 3 and 3' include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (referred to as a "recording medium") on which the program and various data are recorded so as to be readable by a computer (or CPU); and a RAM (Random Access Memory) into which the program is loaded. The object of one aspect of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium capable of transmitting it (such as a communication network or a broadcast wave). Note that one aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
[Summary]
A broadcast receiver (receiving device 3, 3') according to Aspect 1 of the present invention is a broadcast receiver in which a lower-layer program (middleware) and an upper-layer program (synthesis unit) operate, the broadcast receiver including: a caption presentation unit (for example, the caption presentation processing unit 385), realized as a function of the upper-layer program, that presents captions of a program on an HTML application; and an acquisition processing unit (for example, the SI information acquisition processing unit 381), realized as a function of the lower-layer program, that acquires meta information indicating the caption presentation method (for example, Subtitle_info information) from outside the broadcast receiver (transmission device 2, 2'). The caption presentation unit acquires the meta information acquired by the acquisition processing unit and presents the captions with reference to the meta information.
According to the above configuration, if the meta information is given the content desired by the distributor of the program (or the provider of the HTML application), the broadcast receiver has the effect that the upper-layer program can present captions on the HTML application by the presentation method desired by the distributor (or the HTML application provider).
In the broadcast receiver according to Aspect 2 of the present invention, in Aspect 1, the caption presentation unit may acquire the meta information acquired by the acquisition processing unit when a predetermined event fires.
In the broadcast receiver according to Aspect 3 of the present invention, in Aspect 1 or 2, the caption presentation unit may acquire the meta information acquired by the acquisition processing unit when the program is selected.
According to the above configuration, the broadcast receiver has the further effect that, immediately after the user starts viewing the program, it can show the user captions presented by the presentation method desired by the distributor (or the HTML application provider).
In the broadcast receiver according to Aspect 4 of the present invention, in any one of Aspects 1 to 3, the meta information acquired by the acquisition processing unit (the information constituting the captiondata included in the caption stream) may include, in addition to information for acquiring subsample data of a TTML document (the file name of the TTML document), all or part of: information for acquiring subsample data of an image (the file name of the image), information for acquiring subsample data of audio (the file name of the audio), and information for acquiring subsample data of an external character font (the file name of the external character font).
According to the above configuration, the broadcast receiver has the further effect that the upper-layer program can present captions using images, audio, and/or external character fonts.
In the broadcast receiver according to Aspect 5 of the present invention, in any one of Aspects 1 to 4, the caption presentation unit may present, as the captions of the program, captions in a language specified by the user among a plurality of different languages; the acquisition processing unit may acquire a list of the plurality of languages (Subtitle_list information) from outside the broadcast receiver; and the caption presentation unit may present the list together with the captions in the language specified by the user.
According to the above configuration, the broadcast receiver has the further effect that, through the upper-layer program, it can let the user check the list of the plurality of languages that the user can specify as the caption language.
In the broadcast receiver according to Aspect 6 of the present invention, in Aspect 1, the meta information acquired by the acquisition processing unit may be information on the presentation time of the captions (for example, reference_start_time).
According to the above configuration, the broadcast receiver has the further effect that the upper-layer program can present captions at the timing desired by the distributor (or the HTML application provider).
A caption presentation method according to Aspect 7 of the present invention is a caption presentation method for a broadcast receiver in which a lower-layer program and an upper-layer program operate, the method including: a caption presentation step in which the upper-layer program presents captions of a program on an HTML application; and an acquisition step in which the lower-layer program acquires meta information indicating the caption presentation method from outside the broadcast receiver; wherein, in the caption presentation step, the meta information acquired in the acquisition step is acquired and the captions are presented with reference to the meta information.
According to the above configuration, the caption presentation method according to Aspect 7 achieves the same effects as the broadcast receiver according to Aspect 1.
An embodiment of the present invention is not limited to the embodiments described above; various modifications are possible within the scope shown in the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in each embodiment.
This application claims the benefit of priority from Japanese Patent Application No. 2015-174158 filed on September 3, 2015, the entire contents of which are incorporated herein by reference.
1 System
2 Transmission device
3, 3' Receiving device
5, 6 Content server
34 Component demultiplexing unit
35 Video decoding unit
36 Audio decoding unit
37 Caption decoding unit
38 Application-related processing unit
38' Application-related processing unit
381 SI information acquisition processing unit (acquisition processing unit)
383 Caption acquisition processing unit (acquisition processing unit)
385 Caption presentation processing unit (caption presentation unit)
Claims (7)
- A broadcast receiver in which a lower-layer program and an upper-layer program operate, comprising: a caption presentation unit, realized as a function of the upper-layer program, that presents captions of a program on an HTML application; and an acquisition processing unit, realized as a function of the lower-layer program, that acquires meta information indicating the caption presentation method from outside the broadcast receiver, wherein the caption presentation unit acquires the meta information acquired by the acquisition processing unit and presents the captions with reference to the meta information.
- The broadcast receiver according to claim 1, wherein the caption presentation unit acquires the meta information acquired by the acquisition processing unit when a predetermined event fires.
- The broadcast receiver according to claim 1 or 2, wherein the caption presentation unit acquires the meta information acquired by the acquisition processing unit when the program is selected.
- The broadcast receiver according to any one of claims 1 to 3, wherein the meta information acquired by the acquisition processing unit includes, in addition to information for acquiring subsample data of a TTML document, all or part of: information for acquiring subsample data of an image, information for acquiring subsample data of audio, and information for acquiring subsample data of an external character font.
- The broadcast receiver according to any one of claims 1 to 4, wherein the caption presentation unit presents, as the captions of the program, captions in a language specified by the user among a plurality of different languages; the acquisition processing unit acquires a list of the plurality of languages from outside the broadcast receiver; and the caption presentation unit presents the list together with the captions in the language specified by the user.
- The broadcast receiver according to claim 1, wherein the meta information acquired by the acquisition processing unit is information on the presentation time of the captions.
- A caption presentation method for a broadcast receiver in which a lower-layer program and an upper-layer program operate, the method comprising: a caption presentation step in which the upper-layer program presents captions of a program on an HTML application; and an acquisition step in which the lower-layer program acquires meta information indicating the caption presentation method from outside the broadcast receiver, wherein, in the caption presentation step, the meta information acquired in the acquisition step is acquired and the captions are presented with reference to the meta information.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015174158A JP2017050788A (en) | 2015-09-03 | 2015-09-03 | Broadcast receiving apparatus and caption presentation method |
JP2015-174158 | 2015-09-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017038332A1 (en) | 2017-03-09 |
Family
ID=58187320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/072227 (WO2017038332A1) | Broadcast receiver and subtitle presenting method | 2015-09-03 | 2016-07-28 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2017050788A (en) |
WO (1) | WO2017038332A1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015159364A (en) * | 2014-02-21 | 2015-09-03 | 日本放送協会 | receiver and broadcasting system |
JP5713142B1 (en) * | 2014-12-05 | 2015-05-07 | ソニー株式会社 | Receiving apparatus and data processing method |
Non-Patent Citations (1)
Title |
---|
AKITSUGU BABA: "New Closed Captioning And Character Superimposition System And Service Examples For Super Hi-vision Satellite Broadcasting", THE JOURNAL OF THE INSTITUTE OF IMAGE INFORMATION AND TELEVISION ENGINEERS, vol. 69, no. 7, 1 September 2015 (2015-09-01), pages 693 - 696 * |
Also Published As
Publication number | Publication date |
---|---|
JP2017050788A (en) | 2017-03-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16841364; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 16841364; Country of ref document: EP; Kind code of ref document: A1 |