WO2015061620A1 - Customizing mobile media captioning based on mobile media rendering - Google Patents


Info

Publication number
WO2015061620A1
Authority
WO
WIPO (PCT)
Prior art keywords
captioning
rendering mode
media item
user interface
mobile device
Prior art date
Application number
PCT/US2014/062054
Other languages
French (fr)
Inventor
Justin LEWIS
Ruxandra Georgiana PAUN
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc. filed Critical Google Inc.
Priority to EP14855223.5A priority Critical patent/EP3061238A4/en
Priority to CN201480058324.9A priority patent/CN105659584A/en
Publication of WO2015061620A1 publication Critical patent/WO2015061620A1/en

Classifications

    • H04N21/47 — End-user applications (selective content distribution, e.g. interactive television or video on demand [VOD]; client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB])
    • H04N21/41407 — Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/42202 — Input-only peripherals connected to specially adapted client devices: environmental sensors, e.g. for detecting temperature, luminosity, pressure, earthquakes
    • H04N21/4884 — Data services, e.g. news ticker, for displaying subtitles
    • G06F1/1694 — Constructional details or arrangements related to integrated I/O peripherals of portable computers, the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • G06F2200/1614 — Image rotation following screen orientation, e.g. switching from landscape to portrait mode
    • G06F2200/1637 — Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of a handheld computer

Definitions

  • the present disclosure relates to media captioning and, more particularly, to a technique of customizing mobile media captioning based on mobile media rendering.
  • closed captioning subtitles are provided in videos by encoding the subtitles in the video itself.
  • Conventional solutions may include the subtitles in the pixel scheme of the video, which generally results in the subtitles having static characteristics (e.g., position, font, font size) during the entire time that the video is being played.
  • some traditional solutions draw the subtitles on top of the video using a third party API (application program interface).
  • the third party API is independent of the media application that is rendering the video, and thus, unaware of how the video is being rendered.
  • the third party API generally places the subtitles in one location on top of the video.
  • the result is subtitles that are generally immobile in position and have fixed font characteristics.
  • Conventional subtitle solutions may impede a user's experience when the user is watching a video. For example, when a user watches a video on a mobile device in a horizontal orientation, the subtitles may be a large font size and may appear on the lower portion of the video. When the user changes the mobile device to a vertical orientation, the mobile device may change the size of the video, such that the video is much smaller. The subtitles, however, may retain their large font size and fixed position, and may therefore obscure a significant part of the now-smaller video.
  • a method and apparatus to provide custom mobile media captioning based on mobile media rendering are disclosed. The method includes determining a rendering mode for a media item being presented in a user interface on a mobile device.
  • the rendering mode is one of multiple rendering modes.
  • the method includes determining a set of captioning parameters that corresponds to the rendering mode of the media item and providing captioning for the media item in the user interface based on the set of captioning parameters that corresponds to the rendering mode.
  • the captioning parameters include font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, and/or animation of the captioning.
  • the determining of the rendering mode for the media item includes determining an orientation of the mobile device and determining one or more elements of the user interface on the mobile device.
  • the one or more elements of the user interface includes a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, and/or data pertaining to the media item.
  • the method further includes determining that a change is made to the rendering mode to create a changed rendering mode, determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode, and adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode.
  • the determining of the change includes determining that the mobile device is changed from being in a first orientation to a second orientation and/or receiving user input changing one of the elements of the user interface.
  • the rendering mode includes the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
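The summary above can be made concrete with a minimal Python sketch. The data model below (`RenderingMode`, `CaptionParams`, and the mapping between them) is an illustrative assumption, not a structure prescribed by the disclosure: a rendering mode is determined for the media item, the corresponding set of captioning parameters is looked up, and that set drives how the captioning is provided.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderingMode:
    orientation: str      # "portrait" or "landscape"
    video_portion: float  # fraction of the UI presenting the media item
    location: str         # where that portion sits in the UI

@dataclass(frozen=True)
class CaptionParams:
    font: str
    font_size: int
    position: str   # e.g., overlay on the video, or below it
    animated: bool  # e.g., scrolling captioning

# One possible mapping of rendering modes to captioning parameter sets.
CAPTION_PARAMS = {
    RenderingMode("portrait", 0.30, "top"): CaptionParams("Sans", 12, "overlay", True),
    RenderingMode("landscape", 0.95, "full"): CaptionParams("Sans", 24, "bottom", False),
}

def captioning_for(mode: RenderingMode) -> CaptionParams:
    """Determine the set of captioning parameters corresponding to a rendering mode."""
    return CAPTION_PARAMS[mode]
```

Because `RenderingMode` is a frozen dataclass, it is hashable and can serve directly as a lookup key; any of the other elements of a rendering mode described below (dimensions, location of video information, etc.) could be added as further fields.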
  • a method includes detecting a change in a rendering mode of a media item presented in a user interface on a mobile device.
  • the rendering mode is one of a plurality of rendering modes.
  • the method further includes modifying, in the user interface, captioning for the media item in accordance with the changed rendering mode.
  • the apparatus includes means for determining a rendering mode for a media item being presented in a user interface on a mobile device.
  • the rendering mode is one of multiple rendering modes.
  • the apparatus includes means for determining a set of captioning parameters that corresponds to the rendering mode of the media item and means for providing captioning for the media item in the user interface based on the set of captioning parameters that corresponds to the rendering mode.
  • the captioning parameters include font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, and/or animation of the captioning.
  • means for determining the rendering mode for the media item includes means for determining an orientation of the mobile device and means for determining one or more elements of the user interface on the mobile device.
  • the one or more elements of the user interface includes a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, and/or data pertaining to the media item.
  • apparatus includes means for detecting a change in a rendering mode of a media item presented in a user interface on a mobile device.
  • the rendering mode is one of a plurality of rendering modes.
  • the apparatus further includes means for modifying, in the user interface, captioning for the media item in accordance with the changed rendering mode.
  • the apparatus further includes means for determining that a change is made to the rendering mode to create a changed rendering mode, means for determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode, and means for adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode.
  • means for determining the change includes means for determining that the mobile device is changed from being in a first orientation to a second orientation and/or means for receiving user input changing one of the elements of the user interface.
  • the rendering mode includes the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
  • computing devices for performing the operations of the above described implementations are also disclosed.
  • a computer readable storage media may store instructions for performing the operations of the implementations described herein.
  • Figure 1 is a diagram illustrating example media user interfaces providing custom captioning for a media item in accordance with one or more implementations.
  • Figure 2 is a flow diagram of an implementation of a method providing custom captioning for a media item in a media user interface based on the rendering of the media item.
  • Figure 3 illustrates an exemplary system architecture in which implementations can be implemented.
  • FIG. 4 is a block diagram of an example computer system that may perform one or more of the operations described herein, in accordance with various implementations.
  • a media consumption document hereinafter refers to a document (e.g., webpage, mobile application document) that is rendered to provide a media user interface (UI) that is presented to a user for presenting (e.g., playing) a media item (e.g., video).
  • the media consumption document can be rendered to provide the media UI, which plays the video.
  • Examples of a media item can include, and are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, real simple syndication (RSS) feeds, electronic comic books, software applications, etc.
  • a media item can be a media item consumed via the Internet and/or via an application.
  • “media,” “media item,” “online media item,” “digital media,” and “digital media item” can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity.
  • an online video (also hereinafter referred to as a video) is used as an example of a media item throughout this document.
  • Captioning hereinafter refers to closed captioning, subtitling, or any other form of displaying data (e.g., text, characters, symbols, punctuation, pictorial representations, etc.) on a visual display to provide additional information or interpretive information for a media item.
  • captioning can provide a transcription (either verbatim or in edited form) of an audio portion of a video as the corresponding portion of the video has been presented, is being presented, or is to be presented.
  • captioning can provide user-based comments that relate to a portion of a video that has been presented (e.g., played), is being presented, or is to be presented.
  • Implementations of the present disclosure customize the captioning for the media item that is being presented in the media UI based on how the media item is being rendered. For example, if the video is playing while the mobile device is in a portrait (vertical) orientation and is playing using the top third of the display of the mobile device, the captioning may have a small font size and the captioning may be animated. For example, the captioning may be scrolling across the horizontal length of the video. In another example, if the video is playing while the mobile device is in a landscape (horizontal) orientation and is playing using 90% of the display of the mobile device, the captioning may have a large font size and the captioning may not be animated.
  • implementations of the present disclosure dynamically adjust the characteristics of the captioning based on the how the video is being rendered to provide users more comprehensible captioning.
  • the mobile device may initially present a video in a landscape mode using a large (e.g., 95%) portion of the mobile device display.
  • the captioning may be displayed using a large font size.
  • the video may be scaled down according to the layout that is available in the portrait orientation.
  • the captioning characteristics (e.g., font size, spacing between characters, animation, position, etc.) can be adjusted accordingly.
  • FIG. 1 is a diagram illustrating example media user interfaces (UIs) 117A-B providing custom captioning for a media item in accordance with one implementation of the present disclosure.
  • the mobile device 100 can include a media application 107 to present (e.g., play) videos 103A-D.
  • the media application 107 can be for example, a mobile application, a web application, etc.
  • the media application 107 may be a media player embedded in a webpage that is accessible via a web browser.
  • the media application is a mobile device application for presenting (e.g., playing) media items (e.g., videos).
  • the media application 107 can include a captioning module 109 to provide custom captioning in the media user interfaces 117A,B based on how the video 103A-D that is being presented is rendered.
  • the captioning module 109 can take into account the orientation 101,115 of the mobile device 100, the portion of the mobile device display that is presenting the video 103A-D, the size of the video 103A-D, etc.
  • a mobile device 100 can be in a portrait orientation 101 or in a landscape orientation 115.
  • the mobile device 100 may be in a portrait orientation 101 and can provide a media UI 117A in the portrait orientation 101.
  • the mobile device 100 may be in a landscape orientation 115 and can provide a media UI 117B in the landscape orientation 115.
  • the media application 107 can present a media item (e.g., video 103A-D) using one of multiple rendering modes.
  • the media UI 117A can render the video 103A-D using one of multiple rendering modes.
  • Rendering modes 131,137 illustrate various example rendering modes for when the mobile device 100 is in a portrait orientation 101.
  • the media UI 117B can render a video using one of multiple rendering modes.
  • the media application 107 can use the entire media UI 117B or a large percentage (e.g., 95%) of the media UI 117B to play the video.
  • Each of the rendering modes can include one or more elements, such as, and not limited to, the portion (e.g., portion 141,153) of the media UI that is presenting the media item (e.g., video 103A-D), a location in the media UI of the portion, dimensions of the media item that is presented, data (e.g., video info 105A-D) pertaining to the media item, etc.
  • rendering mode 131 can include a portion 141 (e.g., 30% of the media UI) of the media UI 117A and a corresponding location (e.g., top) in the media UI 117A for the portion 141 that can be used to present video 103A in the portrait orientation 101.
  • the video 103A can have dimensions to appropriately fill the portion 141.
  • the rendering mode 131 can include another portion 143 (e.g., 70% of the media UI) of the media UI 117A and a corresponding location (e.g., bottom) in the media UI 117A for the portion 143 that can be used to provide video information 105A in the portrait orientation 101.
  • Examples of video information 105A-D can include, and are not limited to, a title of the video, a number of views of the video, the time elapsed since the video was uploaded, the time the video was uploaded, the number of likes of the video, the number of recommendations made for the video, the length of the video, a rating of the video, a description of the video, comments, related videos, user input UI elements, etc.
  • rendering mode 137 can include a portion 153 (e.g., 15% of the media UI) of the media UI 117A and a corresponding location (e.g., bottom right) in the media UI 117A for the portion 153 that can be used to present video 103D in the portrait orientation 101.
  • the video 103D may have small dimensions to appropriately fill the portion 153.
  • the rendering mode 137 can include another portion 151 (e.g., 70% of the media UI) of the media UI 117A and a corresponding location (e.g., top) in the media UI 117A for the portion 151 that can be used to provide video information 105D in the portrait orientation 101.
  • the video 103D may be scaled down to smaller dimensions to fit a smaller portion of the mobile device 100 display to allow a user to use a larger portion (e.g., 70%) of the display for scrolling through the video information 105D.
  • the captioning module 109 determines how the video 103A-D is being rendered and can provide custom captioning (e.g., captioning 121A-D) in the media UI 117A-B depending on how the video 103A-D is being rendered.
  • the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 121 A in the media UI 117 A, such that the captioning 121 A is a layer on top of the video 103 A and in a horizontal orientation.
  • the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 12 IB in the media UI 117A, such that the video information 105B is moved down and the captioning 121B is provided below the video 103B.
  • the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 121C in the media UI 117 A, such that the captioning 121C is a layer on top of the video 103 A and in a vertical orientation. In another example, the captioning module 109 may determine that rendering mode 137 is being used and may provide captioning 12 ID in the media UI 117A, such that the captioning 12 ID is a layer on top of the video 103D and in a horizontal orientation.
  • the rendering modes 131,137 can be assigned a corresponding set of captioning parameters.
  • the captioning parameters specify the characteristics of the captioning 121A-D.
  • Examples of captioning parameters can include, and are not limited to, font of the captioning, font size of the captioning, kerning (spacing between characters) of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, animation of the captioning, etc.
  • captioning 121D may have a font size that is smaller than captioning 121A,B.
  • captioning 121C may be scrolling in a vertical direction.
  • captioning 121A may be scrolling in a horizontal direction.
  • captioning 121B and captioning 121D may not be scrolling.
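The parameter differences among the example captionings 121A-D can be captured in a small table; the concrete values below are illustrative assumptions, chosen only to match the behavior described above (121D smaller than 121A,B; 121A scrolls horizontally, 121C vertically, 121B and 121D not at all):

```python
# Illustrative parameter sets for captionings 121A-D; the field names and
# values are assumptions, not taken from the disclosure.
CAPTION_STYLES = {
    "121A": {"font_size": 16, "scroll": "horizontal", "layer": "on_video"},
    "121B": {"font_size": 16, "scroll": None, "layer": "below_video"},
    "121C": {"font_size": 16, "scroll": "vertical", "layer": "on_video"},
    "121D": {"font_size": 10, "scroll": "horizontal", "layer": "on_video"},
}

def is_animated(caption_id: str) -> bool:
    """A captioning is animated here if a scroll direction is set."""
    return CAPTION_STYLES[caption_id]["scroll"] is not None
```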
  • FIG. 2 is a flow diagram of an implementation of a method 200 for providing custom captioning for a media item in a media user interface based on the rendering of the media item.
  • the method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • the method 200 may be performed by the captioning module 109 on a mobile device 100 in Figure 1.
  • one or more portions of method 200 may be performed by a server computer system coupled to the mobile device over one or more networks.
  • the captioning module determines the rendering mode for a media item being presented (e.g., played) in a media user interface (UI) on a mobile device.
  • the captioning module can determine the rendering mode by determining the orientation (e.g., portrait, landscape) of the mobile device and one or more elements of the media UI on the mobile device.
  • elements of the media UI can include, and are not limited to, the portion of the media UI that is presenting the media item, a location in the media UI of the portion, dimensions of the media item that is presented, data pertaining to the media item, location in the media UI of the data pertaining to the media item, etc.
  • the mobile device can include an application platform that queries the operating system of the mobile device in the background for the orientation of the mobile device and receives the orientation from the operating system.
  • the application platform can store an orientation state, which indicates whether the mobile device is in portrait mode or landscape mode, in a data store.
  • the captioning module can query the application platform for the orientation state and receive a result.
  • the captioning module accesses the stored orientation state in the data store.
  • the captioning module queries the operating system for the orientation of the mobile device and receives a result.
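The orientation lookup described above, in which a platform component polls the operating system and caches an orientation state that the captioning module then queries, might be sketched as follows. `ApplicationPlatform` and the injected OS query are hypothetical names standing in for the platform and operating-system interfaces:

```python
class ApplicationPlatform:
    """Caches the device orientation state, as described in the disclosure:
    the platform queries the OS in the background and stores the result, and
    the captioning module queries the platform for the cached state."""

    def __init__(self, os_orientation_query):
        # os_orientation_query: callable returning "portrait" or "landscape";
        # stands in for the real operating-system query.
        self._os_orientation_query = os_orientation_query
        self._cached_state = None

    def refresh(self):
        # Background query of the operating system for the device orientation.
        self._cached_state = self._os_orientation_query()

    def orientation_state(self):
        # Query path used by the captioning module.
        if self._cached_state is None:
            self.refresh()
        return self._cached_state
```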
  • the captioning module can determine the one or more elements of the media UI, for example, from the media consumption document (e.g., webpage, mobile application document) that is rendered to provide the media UI that is presented to a user for presenting (e.g., playing) the media item.
  • the media consumption document can include data that indicates, for example, the portion of the media UI that is presenting the media item, the location in the media UI of the portion, the dimensions of the media item that is presented, the location in the media UI of data (e.g., related videos, number of likes, comments, etc.) pertaining to the media item, the data (e.g., related videos, number of likes, comments, etc.) in the media UI that pertains to the media item, etc.
  • the captioning module may determine that the rendering mode for the media item includes the mobile device being in a portrait orientation, a 30% portion of the media UI being used to play the video, the 30% portion being located in the top portion of the media UI, a 70% portion of the media UI being used to provide video information, and the 70% portion being located at the bottom portion of the media UI.
  • the captioning module determines captioning parameters that correspond to the rendering mode of the media item.
  • captioning parameters can include, and are not limited to, font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, animation of the captioning, etc.
  • the captioning module determines the captioning parameters that correspond to the rendering mode from the media consumption document (e.g., webpage, mobile application document) that is rendered to play the media item.
  • the media consumption document may include code (e.g., IF-THEN statements) indicating which captioning parameters correspond to the rendering mode of the media item.
  • the media consumption document can include multiple IF-THEN statements corresponding to different types of captioning.
  • the captioning module may determine that when the rendering mode for the media item includes the mobile device being in a portrait orientation, a 30% portion of the media UI is used to play the video, and the 30% portion is located in the top portion of the media UI, the corresponding set of captioning parameters includes Font-XYZ, Font-Size-AB, kerning of X spacing between characters, position Y as a layer on top of the video, wrapping disabled, horizontal orientation, animation enabled as scrolling, etc.
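Such IF-THEN logic might look like the following sketch. The condition mirrors the example rendering mode above (portrait, 30% portion, top of the UI); the returned parameter values are illustrative placeholders, not values from the disclosure:

```python
def caption_params_for(orientation, video_portion, location):
    """IF-THEN style selection of captioning parameters, mirroring the
    conditional code a media consumption document might carry."""
    if orientation == "portrait" and video_portion == 0.30 and location == "top":
        return {"font": "Font-XYZ", "font_size": "AB", "wrapping": False,
                "caption_orientation": "horizontal", "animation": "scrolling"}
    if orientation == "landscape":
        # Large-video rendering: bigger, static captioning.
        return {"font": "Font-XYZ", "font_size": "large", "wrapping": False,
                "caption_orientation": "horizontal", "animation": None}
    # Fallback for other portrait layouts, e.g. a small bottom-right portion.
    return {"font": "Font-XYZ", "font_size": "small", "wrapping": True,
            "caption_orientation": "horizontal", "animation": None}
```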
  • the captioning module determines the captioning parameters that correspond to the rendering mode from a configuration file that is stored in a data store that is coupled to the captioning module.
  • the configuration file can map a set of captioning parameters to a rendering mode.
  • the mapping can be configurable and can be user (e.g., system administrator) defined.
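A configuration file of this kind could be as simple as a JSON mapping from a rendering-mode key to a captioning parameter set. The keys and values below are illustrative assumptions; the point is that an administrator could edit the mapping without changing application code:

```python
import json

# Hypothetical contents of a configuration file stored in the data store.
CONFIG_JSON = """
{
  "portrait/top-30": {"font_size": 12, "animation": "scroll-horizontal"},
  "portrait/bottom-right-15": {"font_size": 9, "animation": "scroll-horizontal"},
  "landscape/full": {"font_size": 24, "animation": "none"}
}
"""

def load_caption_config(raw):
    """Parse the configuration file into a rendering-mode -> parameters map."""
    return json.loads(raw)

def params_for_mode(config, mode_key):
    """Look up the captioning parameter set for a rendering-mode key."""
    return config[mode_key]
```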
  • the captioning module sends the rendering mode to a server computer system via one or more networks, and the server determines the captioning parameters that correspond to the rendering mode of the media item.
  • the server computer system provides the captioning parameters that correspond to the rendering mode of the media item to the captioning module.
  • the captioning module provides the captioning for the media item in the media UI based on the set of captioning parameters.
  • the content of the captioning can include, for example, text, characters, symbols, punctuation, pictorial representations, etc.
  • the content can be predetermined.
  • the content can be provided by one or more users (e.g., system administrator, end-users) and/or one or more other systems.
  • the content may be a transcription of the audio portion of a video.
  • the content may be user-defined.
  • the content may include user comments about a video. The user comments may correspond to particular segments of the video.
  • the captioning module formats the content according to the set of captioning parameters and provides the formatted content in the media UI.
  • the captioning module formats a transcription of Video-ABC using Font-XYZ, Font-Size-AB, kerning of X spacing between characters, position Y as a layer on top of the Video-ABC, wrapping disabled, horizontal orientation, animation enabled as scrolling, etc.
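The formatting step can be sketched as follows. `format_captioning` and its parameters are hypothetical names, but the wrapping behavior follows the description above: with wrapping disabled the text stays on a single line (suitable, e.g., for scrolling animation), while with wrapping enabled it is broken into display lines:

```python
import textwrap

def format_captioning(text, font, font_size, wrapping, max_chars=40):
    """Format caption content according to a captioning parameter set."""
    # Wrapping disabled: one line. Wrapping enabled: lines of <= max_chars.
    lines = textwrap.wrap(text, max_chars) if wrapping else [text]
    return {"font": font, "font_size": font_size, "lines": lines}
```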
  • the captioning module sends the rendering mode to the server computer system, and the server computer system determines the captioning parameters that correspond to the rendering mode of the media item and creates a document (e.g., webpage) with the appropriate captioning based on the rendering mode.
  • the server computer system can provide the document (e.g., webpage) with the appropriate captioning to the mobile device, and the mobile device renders the document to present the media item with the appropriate captioning to the user.
  • the captioning module determines whether the rendering mode has changed. For example, the captioning module may receive an event notification from the media application. The change can be based on the orientation of the mobile device and/or user input pertaining to how the video is to be played. For example, the captioning module may determine that the mobile device has changed from a first orientation to a second orientation, such as from a portrait orientation to a landscape orientation, or from a landscape orientation to a portrait orientation.
  • the captioning module may receive user input changing one of the elements of the user interface.
  • a user may initially view a video using a first rendering mode (e.g., rendering mode 131 in Figure 1), and the user may provide input to the captioning module to change the playing of the video to use a second rendering mode (e.g., rendering mode 137 in Figure 1).
  • the first rendering mode may play the video in a portrait orientation using the top 30% portion of the display of the mobile device
  • the second rendering mode may play the video in the portrait orientation using 15% (e.g., bottom right corner) of the display of the mobile device.
  • the orientation of the mobile device does not change, but an element (e.g., portion, location, etc.) in the media UI changes.
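Both change triggers can be illustrated with plain dictionaries: an orientation change alters the orientation field, while user input can change the portion or location fields with the orientation unchanged, as in the mode 131 to mode 137 example above. Field names and values are illustrative:

```python
def mode_changed(old_mode, new_mode):
    """A rendering-mode change is any difference between the two states."""
    return old_mode != new_mode

# e.g., rendering mode 131: portrait, video in the top 30% of the display
MODE_131 = {"orientation": "portrait", "portion": 0.30, "location": "top"}
# e.g., rendering mode 137: still portrait, video in a bottom-right 15% portion
MODE_137 = {"orientation": "portrait", "portion": 0.15, "location": "bottom-right"}
```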
  • the captioning module determines whether the presentation (e.g., playing) of the media item is complete at block 213.
  • the captioning module may receive an event notification, for example, from the media application. If the presentation of the media item is not complete, the captioning module returns to block 207 to determine whether the rendering mode has changed. If the rendering mode has changed (block 207), the captioning module determines the set of captioning parameters that corresponds to the changed rendering mode at block 209. The captioning module can identify the changed rendering mode and can identify the set of captioning parameters that corresponds to the changed rendering mode.
  • the captioning module may determine that the set of captioning parameters for the changed rendering mode includes using Font-123, Font-Size-00, kerning of ZZ spacing between characters, position YY as a layer on top of the video, wrapping disabled, horizontal orientation, animation disabled, etc.
  • the captioning module provides the captioning for the media item in the media UI based on the set of captioning parameters for the changed rendering mode. For example, the captioning module adjusts the existing captioning and/or creates new captioning to conform to Font-123, Font-Size-00, kerning of ZZ spacing between characters, position YY as a layer on top of the video, wrapping disabled, horizontal orientation, animation disabled, etc.
  • the captioning module sends data to the server computer system indicating the changed rendering mode
  • the server computer system creates a document (e.g., webpage) with appropriate captioning based on the changed rendering mode and sends the document to the mobile device.
  • the mobile device can render the document to present the media item with the appropriate captioning to the user.
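The flow at blocks 207 through 211 (detect a changed rendering mode, look up its captioning parameter set, and re-render the captioning) can be sketched as follows. This is an illustrative sketch only; the mode names and parameter values are hypothetical and do not come from the disclosure.

```python
# Illustrative sketch: per-mode captioning parameter sets (blocks 209-211).
# All mode names and parameter values below are hypothetical examples.
CAPTION_PARAMS = {
    "portrait_top_30": {          # video in the top 30% of the display
        "font": "Font-123", "font_size": 14, "kerning": 1.0,
        "position": "overlay", "wrapping": True,
        "orientation": "horizontal", "animation": False,
    },
    "portrait_corner_15": {       # video in the bottom-right 15% of the display
        "font": "Font-123", "font_size": 9, "kerning": 0.5,
        "position": "overlay", "wrapping": False,
        "orientation": "horizontal", "animation": False,
    },
}

def on_rendering_mode_change(new_mode: str) -> dict:
    """Return the captioning parameter set that corresponds to the changed mode."""
    try:
        return CAPTION_PARAMS[new_mode]
    except KeyError:
        raise ValueError(f"no captioning parameters defined for mode {new_mode!r}")
```

A caller would invoke `on_rendering_mode_change` whenever block 207 detects a mode change, then re-render the captioning with the returned parameters.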
  • FIG. 3 illustrates exemplary system architecture 300 in which implementations of the present disclosure can operate.
  • the system architecture 300 can include one or more mobile devices 301, one or more servers 315,317, and one or more data stores 313 coupled to each other over one or more networks 310.
  • the network 310 may be a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), or a combination thereof.
  • the data stores 313 can store media items, such as, and not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, real simple syndication (RSS) feeds, electronic comic books, software applications, etc.
  • a data store 313 can be a persistent storage that is capable of storing data.
  • data store 313 might be a network-attached file server, while in other implementations data store 313 might be some other type of persistent storage such as an object-oriented database, a relational database, and so forth.
  • the mobile devices 301 can be portable computing devices such as cellular telephones, personal digital assistants (PDAs), portable media players, netbooks, laptop computers, an electronic book reader or a tablet computer (e.g., that includes a book reader application), a set-top box, a gaming console, a television, and the like.
  • the mobile devices 301 can run an operating system (OS) that manages hardware and software of the mobile devices 301.
  • a media application 303 can run on the mobile devices 301 (e.g., on the OS of the mobile devices).
  • the media application 303 may be a web browser that can access content served by an application server 317 (e.g., web server).
  • the media application 303 may be an application that can access content served by an application server 317 (e.g., mobile application server).
  • the application server 317 can provide web applications and/or mobile device applications and data for the applications.
  • the recommendation server 315 can provide media items (e.g., videos) that are related to other media items.
  • the sets of related media items can be stored on one or more data stores 313.
  • the servers 315,317 can be hosted on machines, such as, and not limited to, rackmount servers, personal computers, desktop computers, media centers, or any combination of the above.
  • the captioning module 305 can provide custom captioning for a media item in media user interfaces based on how the media item is being rendered. For example, Video- XYZ may be playing using a rendering mode that includes the mobile device 301 being in a landscape orientation and use of a large (e.g., 95%) portion of the display of the mobile device.
  • the captioning module 305 can provide captioning for Video-XYZ according to a set of captioning parameters that corresponds to the rendering mode. For example, the captioning module 305 may format a transcription of the audio of Video-XYZ using Font-Type-1, Font-Size-3, kerning of XYZ spacing between characters, position ABC as a layer on top of the video, wrapping enabled, horizontal orientation, animation disabled, etc.
  • the captioning module 305 can detect that the rendering mode has changed and can dynamically adjust the captioning and/or create new captioning based on the changed rendering mode.
  • the rendering mode may include the mobile device being in a portrait orientation, and the captioning module 305 may decrease the font size of the captioning, may decrease the kerning spacing between characters of the captioning, etc. to adjust for a smaller layout that is associated with the portrait orientation.
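The adjustment described above (reducing font size and kerning when rotating into the smaller layout associated with portrait orientation) could be sketched roughly as follows; the parameter names and the scale factor are assumptions for illustration, not values from the disclosure.

```python
def scale_captioning(params: dict, scale: float) -> dict:
    """Return a copy of the captioning parameters with size-related values
    scaled, e.g. scale < 1.0 when the device rotates from landscape to the
    smaller layout associated with portrait orientation."""
    scaled = dict(params)
    scaled["font_size"] = max(1, round(params["font_size"] * scale))  # never below 1
    scaled["kerning"] = params["kerning"] * scale
    return scaled
```

A caller such as the captioning module 305 might invoke `scale_captioning(landscape_params, 0.5)` on an orientation-change event and re-render the captions with the result.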
  • Figure 4 illustrates a diagram of a machine in an example form of a computer system 400 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the computer system 400 can be mobile device 100 in Figure 1.
  • the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet.
  • the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the example computer system 400 includes a processing device (processor) 402, a main memory 404 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), or Rambus DRAM (RDRAM), etc.), a static memory 406 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 418, which communicate with each other via a bus 430.
  • Processor 402 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 402 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 402 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 402 is configured to execute instructions 422 for performing the operations and steps discussed herein.
  • the computer system 400 may further include a network interface device 408.
  • the computer system 400 also may include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an input device 412 (e.g., a keyboard, an alphanumeric keyboard, a motion sensing input device), a cursor control device 414 (e.g., a mouse), and a signal generation device 416 (e.g., a speaker).
  • the data storage device 418 may include a computer-readable storage medium 428 on which is stored one or more sets of instructions 422 (e.g., software) embodying any one or more of the methodologies or functions described herein.
  • the instructions 422 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting computer-readable storage media.
  • the instructions 422 may further be transmitted or received over a network 420 via the network interface device 408.
  • the instructions 422 include instructions for a captioning module (e.g., captioning module 109 in Figure 1) and/or a software library containing methods that call the captioning module.
  • While the computer-readable storage medium 428 is shown in an exemplary implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
  • The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • The term "article of manufacture" is intended to encompass a computer program accessible from any computer-readable device or storage media.
  • Certain implementations of the present disclosure also relate to an apparatus for performing the operations herein.
  • This apparatus may be constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • The words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion.

Abstract

A processing device determines a rendering mode for a media item being presented in a user interface on a mobile device. The rendering mode is one of multiple rendering modes. The processing device determines a set of captioning parameters that corresponds to the rendering mode of the media item and provides captioning for the media item in the user interface based on the set of captioning parameters that corresponds to the rendering mode.

Description

CUSTOMIZING MOBILE MEDIA CAPTIONING
BASED ON MOBILE MEDIA RENDERING
TECHNICAL FIELD
[0001] The present disclosure relates to media captioning and, more particularly, to a technique of customizing mobile media captioning based on mobile media rendering.
BACKGROUND
[0002] Traditionally, closed captioning subtitles are provided in videos by encoding the subtitles in the video itself. Conventional solutions may include the subtitles in the pixel scheme of the video, which generally results in the subtitles having static characteristics (e.g., position, font, font size) during the entire time that the video is being played. In another example, some traditional solutions draw the subtitles on top of the video using a third party API (application program interface). The third party API is independent of the media application that is rendering the video, and thus, unaware of how the video is being rendered. The third party API generally places the subtitles in one location on top of the video.
Traditional solutions provide subtitles that are generally immobile in position and have fixed font characteristics. Conventional subtitle solutions may impede a user's experience when the user is watching a video. For example, when a user watches a video on a mobile device in a horizontal orientation, the subtitles may be a large font size and may appear on the lower portion of the video. When the user changes the mobile device to a vertical orientation, the mobile device may change the size of the video, such that the video is much smaller.
However, traditional subtitle solutions generally continue to display the subtitles using the large font size. When the video is smaller in the vertical orientation, users typically find that the large font size makes the subtitles incomprehensible.
SUMMARY
[0003] The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
[0004] A method and apparatus to provide custom mobile media captioning based on mobile media rendering is described. The method includes determining a rendering mode for a media item being presented in a user interface on a mobile device. The rendering mode is one of multiple rendering modes. The method includes determining a set of captioning parameters that corresponds to the rendering mode of the media item and providing captioning for the media item in the user interface based on the set of captioning parameters that corresponds to the rendering mode.
[0005] In one implementation, the captioning parameters include font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, and/or animation of the captioning. In one implementation, the determining of the rendering mode for the media item includes determining an orientation of the mobile device and determining one or more elements of the user interface on the mobile device. In one implementation, the one or more elements of the user interface includes a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, and/or data pertaining to the media item.
[0006] In one implementation, the method further includes determining a change is made to the rendering mode to create a changed rendering mode, determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode, and adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that correspond to the rendering mode. In one implementation, the determining of the change includes determining that the mobile device is changed from being in a first orientation to a second orientation and/or receiving user input changing one of the elements of the user interface. In one implementation, the rendering mode includes the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
[0007] In one implementation, a method includes detecting a change in a rendering mode of a media item presented in a user interface on a mobile device. The rendering mode is one of a plurality of rendering modes. The method further includes modifying, in the user interface, captioning for the media item in accordance with the changed rendering mode.
[0008] An apparatus to provide custom mobile media captioning based on mobile media rendering is also described. The apparatus includes means for determining a rendering mode for a media item being presented in a user interface on a mobile device. The rendering mode is one of multiple rendering modes. The apparatus includes means for determining a set of captioning parameters that corresponds to the rendering mode of the media item and means for providing captioning for the media item in the user interface based on the set of captioning parameters that corresponds to the rendering mode.
[0009] In one implementation, the captioning parameters include font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, and/or animation of the captioning. In one implementation, means for determining the rendering mode for the media item includes means for determining an orientation of the mobile device and means for determining one or more elements of the user interface on the mobile device. In one implementation, the one or more elements of the user interface includes a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, and/or data pertaining to the media item.
[0010] In one implementation, apparatus includes means for detecting a change in a rendering mode of a media item presented in a user interface on a mobile device. The rendering mode is one of a plurality of rendering modes. The apparatus further includes means for modifying, in the user interface, captioning for the media item in accordance with the changed rendering mode.
[0011] In one implementation, the apparatus further includes means for determining a change is made to the rendering mode to create a changed rendering mode, means for determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode, and means for adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that correspond to the rendering mode. In one implementation, means for determining the change includes means for determining that the mobile device is changed from being in a first orientation to a second orientation and/or means for receiving user input changing one of the elements of the user interface. In one implementation, the rendering mode includes the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
[0012] In additional implementations, computing devices for performing the operations of the above described implementations are also implemented. Additionally, in implementations of the disclosure, a computer-readable storage medium may store instructions for performing the operations of the implementations described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various implementations of the disclosure.
[0014] Figure 1 is a diagram illustrating example media user interfaces providing custom captioning for a media item in accordance with one or more implementations.
[0015] Figure 2 is a flow diagram of an implementation of a method providing custom captioning for a media item in a media user interface based on the rendering of the media item.
[0016] Figure 3 illustrates exemplary system architecture in which implementations of the present disclosure can operate.
[0017] Figure 4 is a block diagram of an example computer system that may perform one or more of the operations described herein, in accordance with various implementations.
DETAILED DESCRIPTION
[0018] A system and method for providing custom mobile media captioning based on mobile media rendering is described, according to various implementations. A media consumption document hereinafter refers to a document (e.g., webpage, mobile application document) that is rendered to provide a media user interface (UI) that is presented to a user for presenting (e.g., playing) a media item (e.g., video). For example, when a video has been selected for playing, the media consumption document can be rendered to provide the media UI, which plays the video. Examples of a media item can include, and are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, real simple syndication (RSS) feeds, electronic comic books, software applications, etc. A media item can be a media item consumed via the Internet and/or via an application. As used herein, "media," "media item," "online media item," "digital media," and "digital media item" can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity. For brevity and simplicity, an online video (also hereinafter referred to as a video) is used as an example of a media item throughout this document.
[0019] The user may select to play a video with captioning. Captioning hereinafter refers to closed captioning, subtitling, or any other form of displaying data (e.g., text, characters, symbols, punctuation, pictorial representations, etc.) on a visual display to provide additional information or interpretive information for a media item. For example, captioning can provide a transcription (either verbatim or in edited form) of an audio portion of a video as the corresponding portion of the video has been presented, is being presented, or is to be presented. In another example, captioning can provide user-based comments that relate to a portion of a video that has been presented (e.g., played), is being presented, or is to be presented.
[0020] Implementations of the present disclosure customize the captioning for the media item that is being presented in the media UI based on how the media item is being rendered. For example, if the video is playing while the mobile device is in a portrait (vertical) orientation and is playing using the top third of the display of the mobile device, the captioning may have a small font size and the captioning may be animated. For example, the captioning may be scrolling across the horizontal length of the video. In another example, if the video is playing while the mobile device is in a landscape (horizontal) orientation and is playing using 90% of the display of the mobile device, the captioning may have a large font size and the captioning may not be animated.
[0021] Accordingly, contrary to conventional solutions, which display subtitles using fixed captioning characteristics (e.g., font, font size, position) regardless, for example, of whether the video is being rendered using a large portion of a display or a small portion of the display, implementations of the present disclosure dynamically adjust the characteristics of the captioning based on how the video is being rendered to provide users more comprehensible captioning. For example, the mobile device may initially present a video in a landscape mode using a large (e.g., 95%) portion of the mobile device display. The captioning may be displayed using a large font size. When the user changes the position of the mobile device to a portrait orientation, the video may be scaled down according to the layout that is available in the portrait orientation. Implementations of the present disclosure can dynamically scale the captioning characteristics (e.g., font size, spacing between characters, animation, position, etc.) accordingly to provide comprehensible captioning in the portrait orientation.
[0022] Figure 1 is a diagram illustrating example media user interfaces (UIs) 117A-B providing custom captioning for a media item in accordance with one implementation of the present disclosure. The mobile device 100 can include a media application 107 to present (e.g., play) videos 103A-D. The media application 107 can be, for example, a mobile application, a web application, etc. For example, the media application 107 may be a media player embedded in a webpage that is accessible via a web browser. In another example, the media application is a mobile device application for presenting (e.g., playing) media items (e.g., videos). The media application 107 can include a captioning module 109 to provide custom captioning in the media user interfaces 117A,B based on how the video 103A-D that is being presented is rendered. For example, the captioning module 109 can take into account the orientation 101,115 of the mobile device 100, the portion of the mobile device display that is presenting the video 103A-D, the size of the video 103A-D, etc. A mobile device 100 can be in a portrait orientation 101 or in a landscape orientation 115. For example, the mobile device 100 may be in a portrait orientation 101 and can provide a media UI 117A in the portrait orientation 101. In another example, the mobile device 100 may be in a landscape orientation 115 and can provide a media UI 117B in the landscape orientation 115.
[0023] The media application 107 can present a media item (e.g., video 103A-D) using one of multiple rendering modes. For example, when the mobile device 100 is in a portrait orientation 101, the media UI 117A can render the video 103A-D using one of multiple rendering modes. Rendering modes 131,137 illustrate various example rendering modes for when the mobile device 100 is in a portrait orientation 101. In another example, when the mobile device 100 is in landscape orientation 115, the media UI 117B can render a video using one of multiple rendering modes. For example, in one rendering mode, the media application 107 can use the entire media UI 117B or a large percentage (e.g., 95%) of the media UI 117B to play the video.
[0024] Each of the rendering modes can include one or more elements, such as, and not limited to, the portion (e.g., portion 141,153) of the media UI that is presenting the media item (e.g., video 103A-D), a location in the media UI of the portion, dimensions of the media item that is presented, data (e.g., video info 105A-D) pertaining to the media item, etc.
[0025] For example, rendering mode 131 can include a portion 141 (e.g., 30% of the media UI) of the media UI 117A and a corresponding location (e.g., top) in the media UI 117A for the portion 141 that can be used to present video 103A in the portrait orientation 101. The video 103A can have dimensions to appropriately fill the portion 141. The rendering mode 131 can include another portion 143 (e.g., 70% of the media UI) of the media UI 117A and a corresponding location (e.g., bottom) in the media UI 117A for the portion 143 that can be used to provide video information 105A in the portrait orientation 101. Examples of video information 105A-D can include, and are not limited to, a title of the video, a number of views of the video, the time elapsed since the video was uploaded, the time the video was uploaded, the number of likes of the video, the number of recommendations made for the video, the length of the video, a rating of the video, a description of the video, comments, related videos, user input UI elements, etc.
[0026] In another example, rendering mode 137 can include a portion 153 (e.g., 15% of the media UI) of the media UI 117A and a corresponding location (e.g., bottom right) in the media UI 117A for the portion 153 that can be used to present video 103D in the portrait orientation 101. The video 103D may have small dimensions to appropriately fill the portion 153. The rendering mode 137 can include another portion 151 (e.g., 70% of the media UI) of the media UI 117A and a corresponding location (e.g., top) in the media UI 117A for the portion 151 that can be used to provide video information 105D in the portrait orientation 101. For example, the video 103D may be scaled down to smaller dimensions to fit a smaller portion of the mobile device 100 display to allow a user to use a larger portion (e.g., 70%) of the display for scrolling through the video information 105D.
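Rendering modes 131 and 137, each combining a device orientation with UI portions and their locations, could be modeled with a small data structure like the following; the field names and the Python representation are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UiPortion:
    fraction: float  # share of the media UI, e.g. 0.30 for 30%
    location: str    # e.g. "top", "bottom", "bottom_right"

@dataclass(frozen=True)
class RenderingMode:
    device_orientation: str   # "portrait" or "landscape"
    video_portion: UiPortion  # where the video (e.g., 103A or 103D) is presented
    info_portion: UiPortion   # where the video information (e.g., 105A or 105D) goes

# Rendering mode 131: video in the top 30%, video information in the bottom 70%.
MODE_131 = RenderingMode("portrait", UiPortion(0.30, "top"), UiPortion(0.70, "bottom"))
# Rendering mode 137: video in the bottom-right 15%, video information in the top 70%.
MODE_137 = RenderingMode("portrait", UiPortion(0.15, "bottom_right"), UiPortion(0.70, "top"))
```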
[0027] In one implementation, the captioning module 109 determines how the video 103A-D is being rendered and can provide custom captioning (e.g., captioning 121A-D) in the media UI 117A-B depending on how the video 103A-D is being rendered. One implementation of customizing the captioning in the media UI based on how the video is being rendered is described in greater detail below in conjunction with Figure 2. For example, the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 121A in the media UI 117A, such that the captioning 121A is a layer on top of the video 103A and in a horizontal orientation. In another example, the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 121B in the media UI 117A, such that the video information 105B is moved down and the captioning 121B is provided below the video 103B. In another example, the captioning module 109 may determine that rendering mode 131 is being used and may provide captioning 121C in the media UI 117A, such that the captioning 121C is a layer on top of the video 103C and in a vertical orientation. In another example, the captioning module 109 may determine that rendering mode 137 is being used and may provide captioning 121D in the media UI 117A, such that the captioning 121D is a layer on top of the video 103D and in a horizontal orientation.
[0028] The rendering modes 131,137 can be assigned a corresponding set of captioning parameters. The captioning parameters specify the characteristics of the captioning 121A-D. Examples of captioning parameters can include, and are not limited to, font of the captioning, font size of the captioning, kerning (spacing between characters) of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, animation of the captioning, etc. For example, captioning 121D may have a font size that is smaller than captioning 121A,B. In other examples, captioning 121C may be scrolling in a vertical direction, captioning 121A may be scrolling in a horizontal direction, and captioning 121B and captioning 121D may not be scrolling.
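The animation parameter, for instance, could interact with the captioning orientation as sketched below, mirroring how captioning 121A scrolls horizontally, 121C scrolls vertically, and 121B and 121D do not scroll; the function name and return values are hypothetical.

```python
def caption_scroll_behavior(orientation: str, animated: bool) -> str:
    """Map the captioning orientation and animation parameters to a scroll
    behavior: 121A (horizontal, animated) scrolls horizontally, 121C
    (vertical, animated) scrolls vertically, 121B/121D (not animated) are static."""
    if not animated:
        return "static"
    return "scroll_vertical" if orientation == "vertical" else "scroll_horizontal"
```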
[0029] Figure 2 is a flow diagram of an implementation of a method 200 for providing custom captioning for a media item in a media user interface based on the rendering of the media item. The method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. In one implementation, the method 200 may be performed by the captioning module 109 on a mobile device 100 in Figure 1. In another implementation, one or more portions of method 200 may be performed by a server computer system coupled to the mobile device over one or more networks.
[0030] At block 201, the captioning module determines the rendering mode for a media item being presented (e.g., played) in a media user interface (UI) on a mobile device. The captioning module can determine the rendering mode by determining the orientation (e.g., portrait, landscape) of the mobile device and one or more elements of the media UI on the mobile device. Examples of elements of the media UI can include, and are not limited to, the portion of the media UI that is presenting the media item, a location in the media UI of the portion, dimensions of the media item that is presented, data pertaining to the media item, location in the media UI of the data pertaining to the media item, etc.
[0031] In one implementation, there is an application platform that queries the operating system of the mobile device in the background for the orientation of the mobile device and receives the orientation from the operating system. The application platform can store an orientation state, which indicates whether the mobile device is in portrait mode or landscape mode, in a data store. The captioning module can query the application platform for the orientation state and receive a result. In another implementation, the captioning module accesses the stored orientation state in the data store. In another implementation, the captioning module queries the operating system for the orientation of the mobile device and receives a result.
[0032] The captioning module can determine the one or more elements of the media UI, for example, from the media consumption document (e.g., webpage, mobile application document) that is rendered to provide the media UI that is presented to a user for presenting (e.g., playing) the media item. The media consumption document can include data that indicates, for example, the portion of the media UI that is presenting the media item, the location in the media UI of the portion, the dimensions of the media item that is presented, the location in the media UI of data (e.g., related videos, number of likes, comments, etc.) pertaining to the media item, the data in the media UI that pertains to the media item, etc. For example, the captioning module may determine that the rendering mode for the media item includes the mobile device being in a portrait orientation, a 30% portion of the media UI is used to play the video, the 30% portion is located in the top portion of the media UI, a 70% portion of the media UI is used to provide video information, and the 70% portion is located at the bottom portion of the media UI.
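Blocks 201 and [0030]-[0032] amount to classifying a rendering mode from the device orientation plus the layout of the media UI. The following is a minimal sketch of that classification; the mode names, thresholds, and location strings are assumptions for illustration, not taken from the disclosure.

```python
def determine_rendering_mode(orientation, video_fraction, video_location):
    """Classify a rendering mode from the device orientation and the
    portion/location of the UI presenting the media item (block 201).

    orientation:    "portrait" or "landscape"
    video_fraction: fraction of the display playing the video (0.0-1.0)
    video_location: where that portion sits, e.g., "top", "bottom-right"
    """
    if orientation == "landscape":
        return "fullscreen" if video_fraction >= 0.9 else "landscape-partial"
    # Portrait: distinguish a minimized corner player from a top-of-screen player.
    if video_fraction <= 0.2 and video_location == "bottom-right":
        return "portrait-minimized"
    if video_location == "top":
        return "portrait-top"
    return "portrait-other"

# Portrait device, 30% of the UI playing the video at the top.
mode = determine_rendering_mode("portrait", 0.30, "top")
# → "portrait-top"
```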
[0033] At block 203, the captioning module determines captioning parameters that correspond to the rendering mode of the media item. Examples of captioning parameters can include, and are not limited to, font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, animation of the captioning, etc. In one implementation, the captioning module determines the captioning parameters that correspond to the rendering mode from the media consumption document (e.g., webpage, mobile application document) that is rendered to play the media item. For example, the media consumption document may include code (e.g., IF- THEN statements) indicating which captioning parameters correspond to the rendering mode of the media item. The media consumption document can include multiple IF-THEN statements corresponding to different types of captioning.
[0034] For example, the captioning module may determine that when the rendering mode for the media item includes the mobile device being in a portrait orientation, a 30% portion of the media UI is used to play the video, and the 30% portion is located in the top portion of the media UI, the corresponding set of captioning parameters includes Font-XYZ, Font-Size-AB, kerning of X spacing between characters, position Y as a layer on top of the video, wrapping disabled, horizontal orientation, animation enabled as scrolling, etc.
[0035] In another implementation, the captioning module determines the captioning parameters that correspond to the rendering mode from a configuration file that is stored in a data store that is coupled to the captioning module. The configuration file can map a set of captioning parameters to a rendering mode. The mapping can be configurable and can be user (e.g., system administrator) defined.
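The configuration-file approach of paragraph [0035] keeps the mode-to-parameters mapping as data rather than code, so it can be changed by a user (e.g., a system administrator) without modifying the captioning module. A sketch with a hypothetical JSON configuration (the keys and values are illustrative only):

```python
import json

# Hypothetical configuration mapping rendering modes to captioning
# parameter sets, as described in paragraph [0035].
CONFIG = json.loads("""
{
  "portrait-top":       {"font": "Font-XYZ", "font_size": 18, "animation": "scroll"},
  "portrait-minimized": {"font": "Font-XYZ", "font_size": 10, "animation": null},
  "fullscreen":         {"font": "Font-XYZ", "font_size": 24, "animation": null}
}
""")

def params_for_mode(mode, default_mode="portrait-top"):
    """Look up the parameter set for a rendering mode (block 203),
    falling back to a default set for unrecognized modes."""
    return CONFIG.get(mode, CONFIG[default_mode])

assert params_for_mode("fullscreen")["font_size"] == 24
assert params_for_mode("unknown-mode")["font_size"] == 18  # falls back to default
```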
[0036] In one implementation, the captioning module sends the rendering mode to a server computer system via one or more networks, and the server determines the captioning parameters that correspond to the rendering mode of the media item. In one implementation, the server computer system provides the captioning parameters that correspond to the rendering mode of the media item to the captioning module. [0037] At block 205, the captioning module provides the captioning for the media item in the media UI based on the set of captioning parameters. In one implementation, the content (e.g., text, characters, symbols, punctuation, pictorial representations, etc.) for the captioning is stored in a data store that is coupled to the captioning module. The content can be predetermined. The content can be provided by one or more users (e.g., system administrator, end-users) and/or one or more other systems. For example, the content may be a transcription of the audio portion of a video. In another example, the content may be user-defined. For example, the content may include user comments about a video. The user comments may correspond to particular segments of the video.
[0038] The captioning module can format the content according to the set of captioning parameters and provide the formatted content in the media UI. For example, the captioning module may format a transcription of Video-ABC using Font-XYZ, Font-Size-AB, kerning of X spacing between characters, position Y as a layer on top of the Video-ABC, wrapping disabled, horizontal orientation, animation enabled as scrolling, etc. In one implementation, the captioning module sends the rendering mode to the server computer system, and the server computer system determines the captioning parameters that correspond to the rendering mode of the media item and creates a document (e.g., webpage) with the appropriate captioning based on the rendering mode. The server computer system can provide the document (e.g., webpage) with the appropriate captioning to the mobile device, and the mobile device renders the document to present the media item with the appropriate captioning to the user.
[0039] At block 207, the captioning module determines whether the rendering mode has changed. For example, the captioning module may receive an event notification from the media application. The change can be based on the orientation of the mobile device and/or user input pertaining to how the video is to be played. For example, the captioning module may determine that the mobile device is changed from being in a first orientation to a second orientation. For example, the captioning module may determine that the mobile device is changed from being in a portrait orientation to a landscape orientation. In another example, the captioning module may determine that the mobile device is changed from being in a landscape orientation to a portrait orientation.
[0040] In another example, the captioning module may receive user input changing one of the elements of the user interface. For example, a user may initially view a video using a first rendering mode (e.g., rendering mode 131 in Figure 1), and the user may provide input to the captioning module to change the playing of the video to use a second rendering mode (e.g., rendering mode 137 in Figure 1). For example, the first rendering mode may play the video in a portrait orientation using the top 30% portion of the display of the mobile device, and the second rendering mode may play the video in the portrait orientation using 15% (e.g., bottom right corner) of the display of the mobile device. In one implementation, the orientation of the mobile device does not change, but an element (e.g., portion, location, etc.) in the media UI changes.
[0041] If the rendering mode has not changed (block 207), the captioning module determines whether the presentation (e.g., playing) of the media item is complete at block 213. The captioning module may receive an event notification, for example, from the media application. If the presentation of the media item is not complete, the captioning module returns to block 207 to determine whether the rendering mode has changed. If the rendering mode has changed (block 207), the captioning module determines the set of captioning parameters that corresponds to the changed rendering mode at block 209. The captioning module can identify the changed rendering mode and can identify the set of captioning parameters that corresponds to the changed rendering mode. For example, the captioning module may determine that the set of captioning parameters for the changed rendering mode includes using Font-123, Font-Size-00, kerning of ZZ spacing between characters, position YY as a layer on top of the video, wrapping disabled, horizontal orientation, animation disabled, etc. At block 211, the captioning module provides the captioning for the media item in the media UI based on the set of captioning parameters for the changed rendering mode. For example, the captioning module adjusts the existing captioning and/or creates new captioning to conform to Font-123, Font-Size-00, kerning of ZZ spacing between characters, position YY as a layer on top of the video, wrapping disabled, horizontal orientation, animation disabled, etc. In one implementation, the captioning module sends data to the server computer system indicating the changed rendering mode, the server computer system creates a document (e.g., webpage) with appropriate captioning based on the changed rendering mode and sends the document to the mobile device. The mobile device can render the document to present the media item with the appropriate captioning to the user.
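The loop through blocks 207-213 can be sketched as an event-driven routine: re-resolve and re-render the captioning each time the rendering mode changes, and stop when playback completes. The event shape and callback names below are hypothetical stand-ins for the media application's notifications:

```python
def caption_loop(events, params_for_mode, render):
    """Sketch of blocks 207-213. `events` yields ("mode", new_mode) when the
    rendering mode changes (block 207) or ("done", None) when presentation of
    the media item completes (block 213)."""
    current_params = None
    for kind, payload in events:
        if kind == "done":                            # block 213: playback complete
            break
        if kind == "mode":                            # block 207: mode changed
            current_params = params_for_mode(payload)  # block 209: new parameter set
            render(current_params)                     # block 211: re-render captioning
    return current_params

# Simulate a portrait-to-fullscreen change followed by end of playback.
rendered = []
final = caption_loop(
    [("mode", "portrait-top"), ("mode", "fullscreen"), ("done", None)],
    params_for_mode=lambda m: {"mode": m},
    render=rendered.append,
)
assert final == {"mode": "fullscreen"} and len(rendered) == 2
```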
[0042] Figure 3 illustrates an exemplary system architecture 300 in which implementations of the present disclosure can operate. The system architecture 300 can include one or more mobile devices 301, one or more servers 315, 317, and one or more data stores 313 coupled to each other over one or more networks 310. The network 310 may be a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), or a combination thereof. [0043] The data stores 313 can store media items, such as, and not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, really simple syndication (RSS) feeds, electronic comic books, software applications, etc. A data store 313 can be a persistent storage that is capable of storing data. As will be appreciated by those skilled in the art, in some implementations data store 313 might be a network-attached file server, while in other implementations data store 313 might be some other type of persistent storage such as an object-oriented database, a relational database, and so forth.
[0044] The mobile devices 301 can be portable computing devices such as cellular telephones, personal digital assistants (PDAs), portable media players, netbooks, laptop computers, electronic book readers, or tablet computers (e.g., that include a book reader application), as well as a set-top box, a gaming console, a television, and the like.
[0045] The mobile devices 301 can run an operating system (OS) that manages hardware and software of the mobile devices 301. A media application 303 can run on the mobile devices 301 (e.g., on the OS of the mobile devices). For example, the media application 303 may be a web browser that can access content served by an application server 317 (e.g., web server). In another example, the media application 303 may be a mobile application that can access content served by an application server 317 (e.g., mobile application server).
[0046] The application server 317 can provide web applications and/or mobile device applications and data for the applications. The recommendation server 315 can provide media items (e.g., videos) that are related to other media items. The sets of related media items can be stored on one or more data stores 313. The servers 315, 317 can be hosted on machines, such as, and not limited to, rackmount servers, personal computers, desktop computers, media centers, or any combination of the above.
[0047] The captioning module 305 can provide custom captioning for a media item in media user interfaces based on how the media item is being rendered. For example, Video-XYZ may be playing using a rendering mode that includes the mobile device 301 being in a landscape orientation and use of a large (e.g., 95%) portion of the display of the mobile device. The captioning module 305 can provide captioning for Video-XYZ according to a set of captioning parameters that corresponds to the rendering mode. For example, the captioning module 305 may format a transcription of the audio of Video-XYZ using Font-Type-1, Font-Size-3, kerning of XYZ spacing between characters, position ABC as a layer on top of the video, wrapping enabled, horizontal orientation, animation disabled, etc. The captioning module 305 can detect that the rendering mode has changed and can dynamically adjust the captioning and/or create new captioning based on the changed rendering mode. For example, the changed rendering mode may include the mobile device being in a portrait orientation, and the captioning module 305 may decrease the font size of the captioning, may decrease the kerning spacing between characters of the captioning, etc. to adjust for a smaller layout that is associated with the portrait orientation.
[0048] Figure 4 illustrates a diagram of a machine in an example form of a computer system 400 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. The computer system 400 can be mobile device 100 in Figure 1. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in client- server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[0049] The example computer system 400 includes a processing device (processor) 402, a main memory 404 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), or Rambus DRAM (RDRAM), etc.), a static memory 406 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 418, which communicate with each other via a bus 430.
[0050] Processor 402 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 402 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processor 402 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processor 402 is configured to execute instructions 422 for performing the operations and steps discussed herein.
[0051] The computer system 400 may further include a network interface device 408. The computer system 400 also may include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an input device 412 (e.g., a keyboard, an alphanumeric keyboard, a motion sensing input device, etc.), a cursor control device 414 (e.g., a mouse), and a signal generation device 416 (e.g., a speaker).
[0052] The data storage device 418 may include a computer-readable storage medium 428 on which is stored one or more sets of instructions 422 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 422 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting computer-readable storage media. The instructions 422 may further be transmitted or received over a network 420 via the network interface device 408.
[0053] In one implementation, the instructions 422 include instructions for a captioning module (e.g., captioning module 109 in Figure 1) and/or a software library containing methods that call the captioning module. While the computer-readable storage medium 428 (machine-readable storage medium) is shown in an exemplary implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "computer- readable storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0054] In the foregoing description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that the present disclosure may be practiced without these specific details. In some instances, well- known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
[0055] Some portions of the detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[0056] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "determining", "providing", "populating", "changing", "detecting", "modifying", or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0057] For simplicity of explanation, the methods are depicted and described herein as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this
specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media.
[0058] Certain implementations of the present disclosure also relate to an apparatus for performing the operations herein. This apparatus may be constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
[0059] Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same
implementation. In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." Moreover, the words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion.
[0060] It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims

CLAIMS What is claimed is:
1. A method comprising:
determining a rendering mode for a media item presented in a user interface on a mobile device, wherein the rendering mode is one of a plurality of rendering modes;
determining, by a processing device, one of a plurality of sets of captioning parameters that corresponds to the rendering mode of the media item; and
providing captioning for the media item in the user interface based on the one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
2. The method of claim 1, wherein the plurality of captioning parameters comprises at least one of font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, or animation of the captioning.
3. The method of claim 1, wherein determining the rendering mode for the media item comprises:
determining an orientation of the mobile device; and
determining one or more elements of the user interface on the mobile device.
4. The method of claim 3, wherein the one or more elements of the user interface comprises at least one of a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, or data pertaining to the media item.
5. The method of claim 1, further comprising:
determining a change is made to the rendering mode to create a changed rendering mode;
determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode; and
adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
6. The method of claim 5, wherein determining the change comprises at least one of: determining that the mobile device is changed from being in a first orientation to a second orientation, or
receiving user input changing one of the elements of the user interface.
7. The method of claim 1, wherein the rendering mode comprises the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
8. An apparatus comprising:
a memory; and
a processing device coupled with the memory to:
determine a rendering mode for a media item presented in a user interface on a mobile device, wherein the rendering mode is one of a plurality of rendering modes;
determine one of a plurality of sets of captioning parameters that corresponds to the rendering mode of the media item; and
provide captioning for the media item in the user interface based on the one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
9. The apparatus of claim 8, wherein the plurality of captioning parameters comprises at least one of font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, or animation of the captioning.
10. The apparatus of claim 8, wherein to determine the rendering mode for the media item comprises the processing device to:
determine an orientation of the mobile device; and
determine one or more elements of the user interface on the mobile device.
11. The apparatus of claim 10, wherein the one or more elements of the user interface comprises at least one of a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, or data pertaining to the media item.
12. The apparatus of claim 8, wherein the processing device is further to:
determine a change is made to the rendering mode to create a changed rendering mode;
determine another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode; and
adjust the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
13. The apparatus of claim 12, wherein to determine the change comprises the processing device to at least one of:
determine that the mobile device is changed from being in a first orientation to a second orientation, or
receive user input changing one of the elements of the user interface.
14. The apparatus of claim 8, wherein the rendering mode comprises the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
15. A non-transitory computer readable storage medium encoding instructions thereon that, in response to execution by a processing device, cause the processing device to perform operations comprising:
determining a rendering mode for a media item presented in a user interface on a mobile device, wherein the rendering mode is one of a plurality of rendering modes;
determining, by the processing device, one of a plurality of sets of captioning parameters that corresponds to the rendering mode of the media item; and
providing captioning for the media item in the user interface based on the one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
16. The non-transitory computer readable storage medium of claim 15, wherein the plurality of captioning parameters comprises at least one of font of the captioning, font size of the captioning, kerning of the captioning, position of the captioning, wrapping of the captioning, orientation of the captioning, or animation of the captioning.
17. The non-transitory computer readable storage medium of claim 15, wherein determining the rendering mode for the media item comprises:
determining an orientation of the mobile device; and
determining one or more elements of the user interface on the mobile device.
18. The non-transitory computer readable storage medium of claim 17, wherein the one or more elements of the user interface comprises at least one of a portion of the user interface that is presenting the media item, a location in the user interface of the portion, dimensions of the media item that is presented, or data pertaining to the media item.
19. The non-transitory computer readable storage medium of claim 15, the operations further comprising:
determining a change is made to the rendering mode to create a changed rendering mode; determining another one of the plurality of sets of captioning parameters that corresponds to the changed rendering mode; and
adjusting the captioning for the media item in the user interface based on the other one of the plurality of sets of captioning parameters that corresponds to the rendering mode.
20. The non-transitory computer readable storage medium of claim 15, wherein the rendering mode comprises the mobile device being in a portrait orientation and the media item being presented in a lower portion of a display of the mobile device in the portrait orientation.
21. A computer-implemented method comprising:
detecting a change in a rendering mode of a media item presented in a user interface on a mobile device, wherein the rendering mode is one of a plurality of rendering modes; and
modifying, in the user interface, captioning for the media item in accordance with the changed rendering mode.
PCT/US2014/062054 2013-10-23 2014-10-23 Customizing mobile media captioning based on mobile media rendering WO2015061620A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14855223.5A EP3061238A4 (en) 2013-10-23 2014-10-23 Customizing mobile media captioning based on mobile media rendering
CN201480058324.9A CN105659584A (en) 2013-10-23 2014-10-23 Customizing mobile media captioning based on mobile media rendering

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/061,231 2013-10-23
US14/061,231 US20150109532A1 (en) 2013-10-23 2013-10-23 Customizing mobile media captioning based on mobile media rendering

Publications (1)

Publication Number Publication Date
WO2015061620A1 true WO2015061620A1 (en) 2015-04-30

Family

ID=52825887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/062054 WO2015061620A1 (en) 2013-10-23 2014-10-23 Customizing mobile media captioning based on mobile media rendering

Country Status (4)

Country Link
US (1) US20150109532A1 (en)
EP (1) EP3061238A4 (en)
CN (1) CN105659584A (en)
WO (1) WO2015061620A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150348278A1 (en) * 2014-05-30 2015-12-03 Apple Inc. Dynamic font engine
KR101620050B1 (en) * 2015-03-03 2016-05-12 주식회사 카카오 Display method of scenario emoticon using instant message service and user device therefor
US20170236318A1 (en) * 2016-02-15 2017-08-17 Microsoft Technology Licensing, Llc Animated Digital Ink
US10356481B2 (en) 2017-01-11 2019-07-16 International Business Machines Corporation Real-time modifiable text captioning
CN115842907A (en) * 2018-03-27 2023-03-24 京东方科技集团股份有限公司 Rendering method, computer product and display device
US11157156B2 (en) 2019-06-03 2021-10-26 International Business Machines Corporation Speed-based content rendering
CN112395825A (en) * 2019-08-01 2021-02-23 北京字节跳动网络技术有限公司 Method and device for processing special effects of characters
CN117676053B (en) * 2024-01-31 2024-04-16 成都华栖云科技有限公司 Dynamic subtitle rendering method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
US20090060452A1 (en) * 2007-09-04 2009-03-05 Apple Inc. Display of Video Subtitles
US20120092454A1 (en) * 2009-06-24 2012-04-19 Alexandros Tourapis Method for embedding subtitles and/or graphic overlays in a 3d or multi-view video data
US20120206567A1 (en) * 2010-09-13 2012-08-16 Trident Microsystems (Far East) Ltd. Subtitle detection system and method to television video
US20130141551A1 (en) * 2011-12-02 2013-06-06 Lg Electronics Inc. Mobile terminal and control method thereof

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US7106381B2 (en) * 2003-03-24 2006-09-12 Sony Corporation Position and time sensitive closed captioning
US8698812B2 (en) * 2006-08-04 2014-04-15 Ati Technologies Ulc Video display mode control
US9813531B2 (en) * 2007-01-22 2017-11-07 Sisvel International S.A. System and method for screen orientation in a rich media environment
US8817188B2 (en) * 2007-07-24 2014-08-26 Cyberlink Corp Systems and methods for automatic adjustment of text
JP2009177581A (en) * 2008-01-25 2009-08-06 Necディスプレイソリューションズ株式会社 Projection type display device and caption display method
US9087337B2 (en) * 2008-10-03 2015-07-21 Google Inc. Displaying vertical content on small display devices
US20110011180A1 (en) * 2009-07-17 2011-01-20 John Wilson Sensor housing assembly facilitating sensor installation, replacement, recovery and reuse
CA2717519A1 (en) * 2009-10-13 2011-04-13 Research In Motion Limited Mobile wireless communications device to display closed captions and associated methods
US8874090B2 (en) * 2010-04-07 2014-10-28 Apple Inc. Remote control operations in a video conference
JP2011242685A (en) * 2010-05-20 2011-12-01 Hitachi Consumer Electronics Co Ltd Image display device
WO2012030267A1 (en) * 2010-08-30 2012-03-08 Telefonaktiebolaget L M Ericsson (Publ) Methods of launching applications responsive to device orientation and related electronic devices
US9131060B2 (en) * 2010-12-16 2015-09-08 Google Technology Holdings LLC System and method for adapting an attribute magnification for a mobile communication device
KR101919787B1 (en) * 2012-05-09 2018-11-19 엘지전자 주식회사 Mobile terminal and method for controlling thereof


Also Published As

Publication number Publication date
EP3061238A4 (en) 2017-07-05
CN105659584A (en) 2016-06-08
US20150109532A1 (en) 2015-04-23
EP3061238A1 (en) 2016-08-31

Similar Documents

Publication Publication Date Title
US20150109532A1 (en) Customizing mobile media captioning based on mobile media rendering
US9524083B2 (en) Customizing mobile media end cap user interfaces based on mobile device orientation
US10652605B2 (en) Visual hot watch spots in content item playback
US8881055B1 (en) HTML pop-up control
US10928983B2 (en) Mobile user interface for contextual browsing while playing digital content
US11782585B2 (en) Cloud-based tool for creating video interstitials
US20150261834A1 (en) Method and apparatus for providing search result
US20130145244A1 (en) Quick analysis tool for spreadsheet application programs
US20110214086A1 (en) Displaying feed data
US11714529B2 (en) Navigation of a list of content sharing platform media items on a client device via gesture controls and contextual synchronization
US11627362B2 (en) Touch gesture control of video playback
JP2018535461A (en) Touch screen user interface to provide media
BRPI1103186A2 (en) DATA PROCESSING APPLIANCE, METHODS FOR DISPLAYING GRAPHIC ELEMENTS, AND FOR PERFORMING ONE OR MORE COMPUTER PROGRAM, COMPUTER PROGRAM, AND DATA SUB-CARRIER APPLICATIONS
US10909310B2 (en) Assistive graphical user interface for preserving document layout while improving readability
US10444846B2 (en) Adjustable video player
CN107967344A (en) Implementation method, system, equipment and the storage medium of web animation effect
US20150213117A1 (en) Adaptive ui for nested data categories across browser viewports
US10248630B2 (en) Dynamic adjustment of select elements of a document
US20110202858A1 (en) Customisation of actions selectable by a user interface
US10924441B1 (en) Dynamically generating video context
US20110202857A1 (en) Customisation of the appearance of a user interface
WO2016106257A1 (en) Dynamic application of a rendering scale factor
US8635268B1 (en) Content notification
US10127312B1 (en) Mutable list resilient index for canonical addresses of variable playlists
JP2012014495A (en) Selection item control device, selection item control method, and selection item control program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14855223

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2014855223

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014855223

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE