WO2017152935A1 - Image display device with synchronous audio and subtitle content generation function - Google Patents

Image display device with synchronous audio and subtitle content generation function Download PDF

Info

Publication number
WO2017152935A1
Authority
WO
WIPO (PCT)
Prior art keywords
language
image display
word
display device
subtitles
Prior art date
Application number
PCT/EP2016/054771
Other languages
French (fr)
Inventor
Gizem MISIRLI
Ozgur OZEN
Dogan BAHAROZU
Emre PEKER
Ozgur CEYLAN
Original Assignee
Arcelik Anonim Sirketi
Priority date
Filing date
Publication date
Application filed by Arcelik Anonim Sirketi filed Critical Arcelik Anonim Sirketi
Priority to PCT/EP2016/054771 priority Critical patent/WO2017152935A1/en
Publication of WO2017152935A1 publication Critical patent/WO2017152935A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention relates to an image display device having an automatic and dynamic subtitle generation feature such that the users are automatically delivered a real-time processed subtitle set according to their predefined interest settings. The present invention more particularly relates to an image display device with an electronic control unit configured to transform audio signal content of a broadcast program content in the form to be transcribed as a time stamped text file, to play a sound marker before timewise location of a word in the language of the broadcast program content and to simultaneously display subtitles containing a word in a second language of subtitles and corresponding to the word in the language of the broadcast program content, the word in a second language of subtitles being displayed with a font different than a regular subtitle font or with an image.

Description

IMAGE DISPLAY DEVICE WITH SYNCHRONOUS AUDIO AND SUBTITLE CONTENT GENERATION FUNCTION
The present invention relates to an image display device having an automatic and dynamic subtitle generation feature such that the users are automatically delivered a real-time processed subtitle set according to their predefined interest settings.
Digital and interactive TV systems provide such a vast number of TV channels and programs that conventional methods for setting personal viewing preferences, including subtitle management, can become impractical to perform efficiently.
Given that image display devices today connect to the Internet and offer a plurality of services and applications for retrieving information from various online sources, it is desirable to provide a viewing experience that allows more interaction in line with a given user's interests in a non-intrusive manner, i.e. with minimal interference with the actual program content.
Accordingly, users can be offered subtitles that carry information in addition to the actual spoken content, with that information emphasized in a non-intrusive manner and with minimal interference with the program content actually displayed on the screen.
Among others, one prior art disclosure in the technical field of the present invention is US2013209058, which discloses an apparatus and a method for changing an attribute of a subtitle in an image display device having a touchscreen. The method includes reproducing moving picture contents; displaying a subtitle corresponding to the moving picture contents being reproduced, when such a subtitle exists; and, when a touch occurs on the region where the subtitle is displayed, changing an attribute of the subtitle depending on the touch information.
The present invention provides a system and method by which an image display device is operated such that users are automatically delivered dynamic subtitles by associating the audio content and the subtitle content. The present invention therefore solves the technical problem of connecting a textual content in a subtitle block with a corresponding audio content in the actual audio stream without interrupting the actual TV viewing experience or undermining the quality of viewing for the users.
The present invention provides a system and method by which an image display device is operable to automatically associate predetermined expressions in the textual information with audio content during a program broadcast, as provided by the characterizing features defined in Claim 1.
The primary object of the present invention is to provide a system and method for operating an image display device by which textual content in a subtitle block is connected with the corresponding audio content in the actual audio stream such that users can easily match corresponding written and audio contents.
The present invention proposes an image display device transforming the audio signal content of a broadcast program content into a form to be transcribed as a time stamped text file by way of speech-to-text conversion techniques, for instance by employing the methods disclosed in WO2014186662.
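For illustration, the time stamped text file referred to above may be pictured as a list of (start time, word) pairs. The following minimal Python sketch assumes a hypothetical line format of "<seconds> <word>"; neither this format nor the function name is prescribed by the disclosure.

```python
# Hypothetical sketch: a time-stamped text file produced by speech-to-text
# transcription. Each line pairs a word with its start time (in seconds)
# in the audio track. The format is an assumption for illustration.

def parse_time_stamped_text(raw: str) -> list[tuple[float, str]]:
    """Parse lines of the form '<seconds> <word>' into (time, word) pairs."""
    entries = []
    for line in raw.strip().splitlines():
        stamp, word = line.split(maxsplit=1)
        entries.append((float(stamp), word.lower()))
    return entries

raw = """\
12.40 the
12.55 red
12.80 car
13.10 stopped
"""
transcript = parse_time_stamped_text(raw)
print(transcript[2])  # (12.8, 'car')
```

Such a representation keeps the transcript trivially searchable by word while preserving the timewise location needed later for the sound marker.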
The image display device stores the time stamped text file and performs keyword searches to locate words in a lookup table containing a list of words in the language of the broadcast program content associated with words in a second language of subtitles. Further, another database contains the words in the second language of subtitles, each associated with a font different from the regular subtitle font or with an image.
The electronic control unit then compares the words of interest provided in the lookup table with those in the time stamped text file and determines their timewise locations. The image display device plays a soft sound each time before a word in the language of the broadcast program content that appears in the lookup table occurs in the course of the audio content. Simultaneously, subtitles with a distinctive representation of the respective word are displayed.
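The comparison step described above can be sketched as a scan of the time stamped transcript against the lookup table, emitting for each match the instant at which the soft sound should be played and the subtitle word to be rendered distinctively. The 0.3-second lead time and all names below are illustrative assumptions, not features of the disclosure.

```python
# Illustrative sketch of the comparison step: scan the time-stamped
# transcript for words present in the lookup table and emit, for each hit,
# when to play the soft sound marker and which subtitle word to highlight.

MARKER_LEAD_S = 0.3  # assumed: play the marker slightly before the word

def find_marked_words(transcript, lookup_table):
    """transcript: [(start_time, word)]; lookup_table: {source: target}."""
    events = []
    for start, word in transcript:
        if word in lookup_table:
            events.append({
                "marker_at": round(start - MARKER_LEAD_S, 2),
                "source_word": word,
                "subtitle_word": lookup_table[word],
            })
    return events

transcript = [(12.4, "the"), (12.8, "car"), (13.1, "stopped")]
lookup = {"car": "araba"}  # hypothetical English-to-Turkish interest list
print(find_marked_words(transcript, lookup))
# [{'marker_at': 12.5, 'source_word': 'car', 'subtitle_word': 'araba'}]
```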
The accompanying drawings are given solely to exemplify a system and method for operating an image display device, the advantages of which over the prior art were outlined above and will be explained briefly hereinafter.
The drawings are not meant to delimit the scope of protection identified in the claims, nor should they be relied on alone to interpret that scope without recourse to the technical disclosure in the description of the present invention.
Fig. 1 shows a general flow diagram according to which the image display device of the invention executes the method of improved association between audio content and subtitle content.
The present invention proposes an image display device enabling a user to set a list of words and expressions of interest in the form of a lookup table. Image display devices having Internet connectivity may retrieve visually distinctive fonts or images to be fitted into the actual subtitle line, as explained hereinafter. The image display device of the present invention is fitted with an internal memory for processing and storing subtitles and associated fonts and visual formats such as images.
According to the invention, users are provided with visually more distinctive subtitles presenting certain predefined words in more distinctive fonts or visual formats such as images. This is especially advantageous for viewers intending to improve their skills in non-native languages. The image display device may connect to a remote database to obtain distinctive fonts and visual formats such as images and process them to create subtitles in which certain words stand out within the respective subtitle line or subtitle block due to their visual appearance. For instance, the image display device provides that an image of a car is embedded in the subtitle, sized to occupy a respective area in the subtitle line, the size corresponding to that of the actual word “car”, which would otherwise be present in the subtitle line.
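The sizing rule of the car example can be sketched as follows; the pixel metrics are assumed values for a hypothetical monospaced subtitle font and are not part of the disclosure.

```python
# Minimal sketch of the sizing rule: the embedded image occupies the area
# the replaced word would have taken in the subtitle line. The pixel
# metrics below are assumptions for a hypothetical subtitle font.

CHAR_WIDTH_PX = 18    # assumed average glyph width of the subtitle font
LINE_HEIGHT_PX = 32   # assumed subtitle line height

def image_slot_for_word(word: str) -> tuple[int, int]:
    """Return (width, height) in pixels of the area the word would occupy."""
    return (len(word) * CHAR_WIDTH_PX, LINE_HEIGHT_PX)

print(image_slot_for_word("car"))  # (54, 32)
```

Keeping the image within the word's own footprint is what preserves the subtitle line's layout while replacing the text.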
The user can manually configure the list of words of interest in different word categories, each category associated with a specific presentation format. The subtitles are typically in the form of a horizontal band on the lower part of the screen at a predetermined location.
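Such a category-based configuration could, purely as an assumption for illustration, take a form like the following, with each category bound to one presentation format; the category names and format labels are invented.

```python
# Hypothetical configuration sketch: the user's interest list grouped by
# word category, each category bound to a presentation format. All keys
# and format names here are invented for illustration.

interest_list = {
    "vehicles": {"words": ["car", "bus", "train"], "format": "image"},
    "food":     {"words": ["bread", "milk"],       "format": "bold_font"},
}

def format_for(word: str):
    """Look up which presentation format applies to a word, if any."""
    for category in interest_list.values():
        if word in category["words"]:
            return category["format"]
    return None

print(format_for("bus"))   # image
print(format_for("sofa"))  # None
```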
Beyond presenting predetermined words in a visually distinctive manner, the present invention essentially provides the user with a locational sound marker to associate the visually distinctive word in a given subtitle block with the actual word in the audio content, as will be delineated hereinafter.
According to the invention, an electronic control unit transforms the audio data transmitted through and stored in the transport stream by processing the same. The electronic control unit decodes the received programs, and the audio content of the program being viewed, as extracted from its transport stream, is transcribed as a time stamped text file. To this end, the electronic control unit determines the timewise locations of any predefined words that are to be associated with fonts and visual formats such as images. In other words, by processing the audio content of a broadcast program, the electronic control unit performs speech recognition and transcription of the spoken language into text and, based on the words in a lookup table prepared by the user or retrieved from a remote network source, associates each word in the lookup table with the corresponding original word in the original audio content.
Therefore, while the translation of a conversation in the audio track is delivered as a subtitle block and a respective word in the conversation is visually marked with a special font or image at its dedicated location in the subtitle, the word in the source language of the original audio content is also marked by a sound marker, such as a soft and non-intrusive musical note, immediately before the respective word occurs in the audio track. More specifically, while the sound marker draws the user's attention to the spoken word that is about to occur, the visually distinctive font or image in the subtitle block supports the natural learning process with a visual indication.
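The playback-side behaviour described above can be sketched as a check run as the audio clock advances, firing the notification sound just before each marked word's timestamp. The 0.3-second lead, the tick handling, and all names below are illustrative assumptions, not features of the disclosure.

```python
# Self-contained sketch: as the audio playback clock advances, fire the
# soft notification sound just before each marked word's timestamp,
# making sure each marker fires only once.

LEAD_S = 0.3  # assumed: fire the marker this many seconds before the word

def due_markers(now: float, word_times: list[float], fired: set[float]):
    """Return word timestamps whose marker should fire at playback time `now`."""
    due = [t for t in word_times
           if t not in fired and t - LEAD_S <= now < t]
    fired.update(due)
    return due

fired: set[float] = set()
word_times = [12.8, 20.5]
print(due_markers(12.6, word_times, fired))  # [12.8]
print(due_markers(12.7, word_times, fired))  # [] (already fired)
```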
In summary, the present invention proposes an image display device receiving a plurality of broadcast program contents with audio signal contents from broadcasting service providers, the image display device comprising an electronic control unit.
In one embodiment of the invention, the electronic control unit is configured to transform audio signal content of a broadcast program content in the form to be transcribed as a time stamped text file.
In a further embodiment of the present invention, the image display device is fitted with an internal memory on which the electronic control unit stores the time stamped text file, a lookup table containing a list of words in the language of the broadcast program content associated with words in a second language of subtitles, each word in the second language of subtitles being associated with a font different than a regular subtitle font or with an image.
In a further embodiment of the present invention, the electronic control unit is configured to perform a word search in the time stamped text file to timewise locate any word in the language of the broadcast program content.
In a further embodiment of the present invention, the image display device is configured to play a sound marker before timewise location of a word in the language of the broadcast program content and to simultaneously display subtitles containing a word in a second language of subtitles and corresponding to the word in the language of the broadcast program content, the word in a second language of subtitles being displayed with a font different than a regular subtitle font or with an image.
In a further embodiment of the present invention, transformed audio signal content associated with the broadcast program content is processed as searchable text data.
In a further embodiment of the present invention, the image display device has access to a remote network source to retrieve a list of words in the language of a broadcast program content associated with words in a second language of subtitles, as well as fonts or images for the words in the second language of subtitles.
In a further embodiment of the present invention, a word in a different font than the font of the subtitles or an image associated with the word is fitted into a subtitle line in the manner to be sized to occupy a respective area in the subtitle line, the area corresponding to the size of the actual word.
In a further embodiment of the present invention, the sound marker is a notification sound playable immediately before a respective word in the language of the broadcast program content occurs in the audio signal content.
In a further embodiment of the present invention, the subtitles are presented in the form of a horizontal band.
In a further embodiment of the present invention, the image display device has access to a remote network source to retrieve a time stamped text file of a broadcast program content from a remote source or database.
In this regard, the system of the present invention is particularly advantageous in that language learners can improve their skills in non-native languages more practically: while the translation of a conversation in the audio track is offered in the form of distinctive subtitle content, the word in the source language of the original audio content is also marked by a sound marker immediately before it occurs.

Claims (7)

  1. An image display device receiving a plurality of broadcast program contents with audio signal contents from broadcasting service providers, the image display device comprising an electronic control unit, characterized in that
    the electronic control unit is configured to transform audio signal content of a broadcast program content in the form to be transcribed as a time stamped text file,
    the image display device is fitted with an internal memory on which the electronic control unit stores the time stamped text file, a lookup table containing a list of words in the language of the broadcast program content associated with words in a second language of subtitles, each word in the second language of subtitles being associated with a font different than a regular subtitle font or with an image,
    the electronic control unit is configured to perform a word search in the time stamped text file to timewise locate any word in the language of the broadcast program content and,
    the image display device is configured to play a sound marker before timewise location of a word in the language of the broadcast program content and to simultaneously display subtitles containing a word in a second language of subtitles and corresponding to the word in the language of the broadcast program content, the word in a second language of subtitles being displayed with a font different than a regular subtitle font or with an image.
  2. An image display device as in Claim 1, characterized in that transformed audio signal content associated with the broadcast program content is processed as searchable text data.
  3. An image display device as in Claim 1 or 2, characterized in that the image display device has access to a remote network source to retrieve a list of words in the language of a broadcast program content associated with words in a second language of subtitles as well as fonts or images for words in the second language of subtitles.
  4. An image display device as in Claim 2 or 3, characterized in that a word in a different font than the font of the subtitles or an image associated with the word is fitted into a subtitle line in the manner to be sized to occupy a respective area in the subtitle line, the area corresponding to the size of the actual word.
  5. An image display device as in Claim 2, 3 or 4, characterized in that the sound marker is a notification sound playable immediately before a respective word in the language of the broadcast program content occurs in the audio signal content.
  6. An image display device as in Claim 2, 3 or 4, characterized in that the subtitles are presented in the form of a horizontal band.
  7. An image display device as in any preceding Claim, characterized in that the image display device has access to a remote network source to retrieve a time stamped text file of a broadcast program content from a remote source or database.
PCT/EP2016/054771 2016-03-07 2016-03-07 Image display device with synchronous audio and subtitle content generation function WO2017152935A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/054771 WO2017152935A1 (en) 2016-03-07 2016-03-07 Image display device with synchronous audio and subtitle content generation function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/054771 WO2017152935A1 (en) 2016-03-07 2016-03-07 Image display device with synchronous audio and subtitle content generation function

Publications (1)

Publication Number Publication Date
WO2017152935A1 true WO2017152935A1 (en) 2017-09-14

Family

ID=55542636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/054771 WO2017152935A1 (en) 2016-03-07 2016-03-07 Image display device with synchronous audio and subtitle content generation function

Country Status (1)

Country Link
WO (1) WO2017152935A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019164535A1 (en) * 2018-02-26 2019-08-29 Google Llc Automated voice translation dubbing for prerecorded videos

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332214A1 (en) * 2009-06-30 2010-12-30 Shpalter Shahar System and method for network transmision of subtitles
US20110153330A1 (en) * 2009-11-27 2011-06-23 i-SCROLL System and method for rendering text synchronized audio
US20130209058A1 (en) 2012-02-15 2013-08-15 Samsung Electronics Co. Ltd. Apparatus and method for changing attribute of subtitle in image display device
WO2014186662A1 (en) 2013-05-17 2014-11-20 Aereo, Inc. Method and system for displaying speech to text converted audio with streaming video content data
US20160066055A1 (en) * 2013-03-24 2016-03-03 Igal NIR Method and system for automatically adding subtitles to streaming media content

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332214A1 (en) * 2009-06-30 2010-12-30 Shpalter Shahar System and method for network transmision of subtitles
US20110153330A1 (en) * 2009-11-27 2011-06-23 i-SCROLL System and method for rendering text synchronized audio
US20130209058A1 (en) 2012-02-15 2013-08-15 Samsung Electronics Co. Ltd. Apparatus and method for changing attribute of subtitle in image display device
US20160066055A1 (en) * 2013-03-24 2016-03-03 Igal NIR Method and system for automatically adding subtitles to streaming media content
WO2014186662A1 (en) 2013-05-17 2014-11-20 Aereo, Inc. Method and system for displaying speech to text converted audio with streaming video content data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019164535A1 (en) * 2018-02-26 2019-08-29 Google Llc Automated voice translation dubbing for prerecorded videos
US11582527B2 (en) 2018-02-26 2023-02-14 Google Llc Automated voice translation dubbing for prerecorded video

Similar Documents

Publication Publication Date Title
US10567834B2 (en) Using an audio stream to identify metadata associated with a currently playing television program
US9826270B2 (en) Content receiver system and method for providing supplemental content in translated and/or audio form
JP6150405B2 (en) System and method for captioning media
US9576581B2 (en) Metatagging of captions
CN104246750A (en) Transcription of speech
WO2017152935A1 (en) Image display device with synchronous audio and subtitle content generation function
KR20180128656A (en) English Teaching and Learning through the Application of Native Speakers Video Subtitles Recognition and Interpretation Systems
Fresno Closed captioning quality in the information society: the case of the American newscasts reshown online
CN111090977A (en) Intelligent writing system and intelligent writing method
KR20140077730A (en) Method of displaying caption based on user preference, and apparatus for perfoming the same
KR20140122807A (en) Apparatus and method of providing language learning data
Lambourne Ameliorating the quality issues in live subtitling
US20210158723A1 (en) Method and System for Teaching Language via Multimedia Content
Tamayo Masero Formal Aspects in SDH for Children in Spanish Television: A Descriptive Study
US11700430B2 (en) Systems and methods to implement preferred subtitle constructs
WO2015144248A1 (en) Image display device with automatic subtitle generation function
KR101522649B1 (en) Apparatus and method for classifying watching grade of digital broadcasting by closed-cation analysis real time
US11665392B2 (en) Methods and systems for selective playback and attenuation of audio based on user preference
US20230345082A1 (en) Interactive pronunciation learning system
Saerens et al. How to implement live subtitling in TV settings: guidelines on making television broadcasts accessible to hard of hearing and deaf people as well as foreigners
GB2589102A (en) Video comment button (VIDCOM)

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16710409

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16710409

Country of ref document: EP

Kind code of ref document: A1