US20140344359A1 - Relevant commentary for media content


Info

Publication number
US20140344359A1
Authority
US
United States
Prior art keywords
commentary
media content
relevant
user
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/901,880
Inventor
Michal Broz
Bernadette A. Carter
Melba I. Lopez
Matthew G. Marum
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Enterprise Solutions Singapore Pte Ltd
Original Assignee
Lenovo Enterprise Solutions Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Enterprise Solutions Singapore Pte Ltd
Priority to US13/901,880
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (assignment of assignors interest; see document for details). Assignors: BROZ, MICHAL; LOPEZ, MELBA I.; CARTER, BERNADETTE A.; MARUM, MATTHEW G.
Assigned to LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD. (assignment of assignors interest; see document for details). Assignor: INTERNATIONAL BUSINESS MACHINES CORPORATION
Publication of US20140344359A1
Status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17306 Intercommunication techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Definitions

  • Embodiments of the present invention generally relate to relevant commentary for media content. More particularly, embodiments relate to providing (and/or receiving) relevant commentary for media content based on a preference, such as a preference for one or more of a temporal perspective, a viewpoint, and/or a state of a social network.
  • Commentary for media content may be provided to users of the media content.
  • the commentary may include a list of posts associated with a video or a log of a conversation that has taken place between two or more users.
  • Social media buzz may also be presented alongside television media content.
  • the commentary may fail to adequately take into consideration a number of factors such as temporal perspective and viewpoint.
  • Embodiments may include a method involving providing relevant commentary to a user.
  • the method may include providing relevant commentary to the user in response to rendering a section of media content.
  • at least a portion of the relevant commentary may be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a method involving receiving relevant commentary.
  • the method may include receiving relevant commentary in response to rendering a section of media content.
  • at least a portion of the relevant commentary may be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a method involving detecting a media content access event by a user.
  • the method may include providing relevant commentary to the user in response to rendering a section of media content.
  • at least a portion of the relevant commentary may be based on a preference, such as two or more of a preference for a temporal perspective, a preference for a viewpoint, and a preference for a state of a social network.
  • Embodiments may include a method involving providing (and/or receiving) at least a portion of the relevant commentary based on a topic related to the section of the media content.
  • the method may include providing (and/or receiving) at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary.
  • the method may include clarifying an ambiguous section of the media content.
  • the method may include simulating an interactive commentary session.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to provide relevant commentary to a user. The computer usable code, if executed, may also cause a computer to provide relevant commentary to the user in response to a render of a section of media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to receive relevant commentary. The computer usable code, if executed, may also cause a computer to receive relevant commentary in response to a render of a section of media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to detect a media content access event by a user. The computer usable code, if executed, may also cause a computer to provide relevant commentary to the user in response to a render of a section of the media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as two or more of a preference for a temporal perspective, a preference for a viewpoint, and a preference for a state of a social network.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to cause at least a portion of the relevant commentary to be based on a topic to be related to the section of the media content. If executed, computer usable code may cause at least a portion of the relevant commentary to be based on an authorship independent of a media content access event by an author of the relevant commentary. The computer usable code, if executed, may cause a computer to clarify an ambiguous section of the media content. The computer usable code, if executed, may cause a computer to simulate an interactive commentary session.
  • FIGS. 1A to 1C are block diagrams of examples of schemes of providing (and/or receiving) relevant commentary in response to rendering a section of media content according to an embodiment
  • FIG. 2 is a block diagram of an example of an architecture including logic to provide (and/or receive) relevant commentary in response to a render of a section of media content according to an embodiment
  • FIG. 3 is a block diagram of an example of an architecture including a variation in logic to provide (and/or receive) relevant commentary in response to a render of a section of media content according to an embodiment
  • FIG. 4 is a flowchart of an example of a method of providing (and/or receiving) relevant commentary in response to rendering a section of media content according to an embodiment
  • FIG. 5 is a block diagram of an example of a computing device according to an embodiment.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • Embodiments may involve detecting a media access event (e.g., a video access request), collecting commentary (e.g., feeds of posts, news articles, blog posts, etc.), including commentary generated when the media content was originally broadcast (e.g., posted, published, etc.), and filtering the commentary according to a preference, such as a preference for a temporal perspective, a viewpoint, and/or a state of a social network.
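  • As an illustrative, non-limiting sketch (not part of the original disclosure), the detect/collect/filter flow described above might be modeled as follows; all names and data are hypothetical:

      from dataclasses import dataclass
      from datetime import datetime
      from typing import Callable, List

      @dataclass
      class Comment:
          author: str
          text: str
          posted_at: datetime      # when the commentary was authored
          topic: str = ""
          viewpoint: str = ""      # e.g., "pro", "con", "neutral"

      def collect_commentary(feeds: List[List[Comment]]) -> List[Comment]:
          """Gather candidate commentary (posts, news articles, blog posts, ...)."""
          return [c for feed in feeds for c in feed]

      def filter_commentary(comments: List[Comment],
                            preference: Callable[[Comment], bool]) -> List[Comment]:
          """Keep only the commentary that matches the user's preference predicate."""
          return [c for c in comments if preference(c)]

      # Example preference: commentary posted around a past original broadcast.
      def made_during_original_broadcast(c: Comment) -> bool:
          return datetime(2012, 10, 1) <= c.posted_at <= datetime(2012, 10, 2)

      feeds = [[Comment("alice", "Strong opening statement", datetime(2012, 10, 1, 21, 5)),
                Comment("bob", "Watching the replay now", datetime(2023, 3, 4, 19, 0))]]
      relevant = filter_commentary(collect_commentary(feeds), made_during_original_broadcast)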
  • the user experience may be augmented by providing past commentary (e.g., commentary made at a time of a past original broadcast, commentary made at a time before the past original broadcast, etc.) and/or present commentary (e.g., commentary made at a time of a present original broadcast, at a time that a user watches a broadcast replay in the present, etc.), which may be related to a topic (e.g., a topic addressed in the media content).
  • Augmenting the user experience may also involve providing an opportunity to enter commentary (e.g., post), which may be shared with a social network and/or used to tailor the user experience.
  • ambiguous media content may be clarified and/or made the subject of a simulated interactive commentary session.
  • the commentary and/or the media content may include any information that may be generated, processed, stored, retrieved, rendered, and/or exchanged in electronic form.
  • Examples of the commentary and/or the media content may include audio, video, images, text, hypertext links, graphics, and so on, or combinations thereof.
  • the commentary may include a post, a ranking, an instant message, a chat, an email, polling data, and so on, or combinations thereof.
  • the media content may include a video, a song, a television program, a picture, and so on, or combinations thereof.
  • the commentary and/or the media content may refer to a section thereof.
  • the section of the commentary may include one or more comments from among a string of comments by the same or different individual.
  • the section of the commentary and/or of the media content may include a frame of a video, an area of an image, a segment of audio, a domain of a hypertext link, a chapter of a book, a paragraph of an article, and so on, or combinations thereof.
  • the commentary and/or the media content may include a live (e.g., real-time) communication, a recorded communication, and so on, or combinations thereof.
  • a media content access event may include generating, processing, storing, retrieving, rendering, and/or exchanging information in electronic form.
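  • Purely as a hedged illustration of the terminology above (hypothetical names, not taken from the disclosure), a "section" of media content might be represented as a simple data structure:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class MediaSection:
          """A section of media content, e.g., a frame range of a video or a chapter of a book."""
          media_id: str
          kind: str                    # "video", "audio", "image", "text", ...
          start: float                 # start offset (seconds, paragraph index, ...)
          end: float
          label: Optional[str] = None  # e.g., "introduction", "chapter 3"

      # Beginning, intermediate, and final sections of a recorded debate video.
      beginning = MediaSection("debate-2012", "video", 0, 120, "introduction")
      intermediate = MediaSection("debate-2012", "video", 120, 4980)
      final = MediaSection("debate-2012", "video", 4980, 5400, "closing statements")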
  • the scheme 6 may include a computing device 12 having a media render portion 14 to display media content (e.g., video).
  • a media content access event (e.g., a video access event) may involve launching a media player application, launching a web browser, retrieving the media content from storage, receiving the media content from an image capture device (e.g., on or off-device camera), rendering (e.g., displaying) the media content, and so on, or combinations thereof.
  • the media content access event may be detected by the computing device 10 itself, by a remote computing device (e.g., off-platform remote server, off site remote server, etc., not shown), and so on, or combinations thereof.
  • the media content displayed in the media render portion 14 may include a beginning section (e.g., first minutes of a video, introduction section, first chapter of a book, etc.), an intermediate section (e.g., any time between the beginning and end of the media content), a final section (e.g., final minutes of the video, final paragraph of an article, etc.), and so on, or combinations thereof.
  • the media content displayed in the media render portion 14 may also include a plurality of stacked media content (e.g., videos, text, images, etc.), which may be completely overlaid, staggered, side-by-side, and so on, or combinations thereof.
  • a video may be displayed in the media render portion 14 of a debate between candidates 18 , 20 , which may include a live communication (e.g., present original broadcast), a recorded communication (e.g., past original broadcast), and so on, or combinations thereof.
  • the computing device 12 may also include a commentary render portion 16 to display commentary.
  • the commentary may be separate from the media content, the commentary may be overlaid with the media content using varying degrees of transparency among the media render portion 14 and the commentary render portion 16 , the commentary render portion 16 may be provided as a region of the media render portion 14 and vice versa, and so on, or combinations thereof.
  • the commentary render portion 16 may be vacant, may be completely transparent, and so on, or combinations thereof.
  • when a section of the media content is encountered (e.g., an intermediate section of the debate), the commentary render portion 16 may populate by manually and/or automatically becoming less transparent, by adding commentary, and so on, or combinations thereof.
  • the commentary displayed in the commentary render portion 16 may include a plurality of stacked commentary (e.g., videos, text, images, etc.), which may be completely overlaid, staggered, side-by-side, and so on, or combinations thereof.
  • the commentary render portion 16 may display side-by-side textual relevant commentary RC 1 , RC 2 , RC 3 , RC 4 , although it is understood that any number and type of relevant commentary may be displayed (e.g., by scrolling up or down through the render portion 16 , by enlarging the render portion 16 , etc.).
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be provided based on a preference for a temporal perspective.
  • the user may wish to view relevant commentary from a past time period based on the preference for a past perspective.
  • the user may view a recorded video of a debate that occurred in a past time period, a time period may be determined (e.g., a past time period), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 from a past time period (e.g., the time period corresponding approximately to the past original broadcast, the time period corresponding approximately to before the past original broadcast, etc.).
  • the user may wish to view relevant commentary from a present time period based on the preference for a present temporal perspective.
  • the user may therefore view the recorded video of the debate that occurred in the past time period, a time period may be determined (e.g., a present time period), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 from a present time period (e.g., the time period corresponding approximately to the present replay of the broadcast).
  • the user may wish to view a mixture of relevant commentary from a past time period based on the preference for a past perspective and from a present time period based on the preference for a present perspective.
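  • One minimal way to realize the past/present/mixed temporal preference described above might be a predicate over the posting time of each comment; this is only a sketch with hypothetical names and window sizes:

      from datetime import datetime, timedelta

      ORIGINAL_BROADCAST = datetime(2012, 10, 1, 21, 0)   # hypothetical past original broadcast
      REPLAY_START = datetime(2023, 3, 4, 19, 0)          # hypothetical present replay

      def matches_temporal_preference(posted_at: datetime, perspective: str) -> bool:
          """'past' keeps commentary from around the original broadcast; 'present'
          keeps commentary from around the present replay; anything else is mixed."""
          window = timedelta(hours=6)
          if perspective == "past":
              return abs(posted_at - ORIGINAL_BROADCAST) <= window
          if perspective == "present":
              return abs(posted_at - REPLAY_START) <= window
          return True   # "mixed": accept both past and present commentary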
  • the time period employed to impart a temporal perspective to the relevant commentary may be based on any desired time scale.
  • the time scale may include centuries, decades, years, months, weeks, days, seconds, and so on, or combinations thereof.
  • the time period may be set according to any parameter.
  • the time period may be employed according to a variance, such as an approximate six-month variance from the date of creation of the media content.
  • a time period may represent a preference for a past perspective spanning six months before and/or six months after the date of the creation of the media content.
  • the time period may be employed according to a broadcast date of the media content.
  • a time period may represent a preference for a past perspective for a past broadcast of the media content (e.g., comments made at the time of a past original broadcast), a preference for a past perspective before the broadcast of the media content (e.g., comments made before an original past and/or present broadcast), a preference for a present perspective for a present original broadcast of the media content (e.g., comments made during a present original broadcast), a preference for a present perspective for a present replay of a past original broadcast of the media content (e.g., comments made in the present related to a past original broadcast), and so on, or combinations thereof.
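  • The variance-based and broadcast-based time periods described above might be computed, as a rough sketch under assumed conventions (a month approximated as 30 days, names hypothetical), like this:

      from datetime import datetime, timedelta

      def time_window_from_variance(created: datetime, months: int = 6):
          """A window spanning the given variance on either side of the media
          content's creation date (month approximated as 30 days)."""
          delta = timedelta(days=30 * months)
          return created - delta, created + delta

      def time_window_from_broadcast(broadcast: datetime, mode: str):
          """Windows keyed to a broadcast date: during the broadcast, or before it."""
          if mode == "at_broadcast":
              return broadcast, broadcast + timedelta(hours=3)
          if mode == "before_broadcast":
              return datetime.min, broadcast
          raise ValueError(mode)

      start, end = time_window_from_variance(datetime(2012, 10, 1))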
  • a portion of the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be provided based on a topic related to the media content.
  • the user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18 , 20 occurring in real-time), a topic may be determined (e.g., topic related to the section of the media content rendered), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the topic.
  • the relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past about a topic presented in that section of the video.
  • the relevant commentary may be from a present time period, such as comments made at approximately the time of present replay about the topic.
  • the topic may be derived from a user statement, the media content, and so on, or combinations thereof.
  • the user statement may include user commentary entered in response to commentary, to the section of the media content, and so on, or combinations thereof.
  • a section of the media content may be encountered (e.g., a discussion by the candidates 18 , 20 ) to cause the user to enter user commentary (e.g., via voice, text, an opinion such as “thumbs up”, favorite, bookmarking, etc.) representative of the topic.
  • the topic may be derived from a statement made by a narrator of the media content, from an object in the media content (e.g., a statement made by the candidates 18 , 20 , etc.), from an author of the media content, from other information associated with the media content (e.g., metadata, section headings, titles, a quote, etc.), and so on, or combinations thereof.
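  • A very simple (and admittedly naive) sketch of deriving a topic from a user statement and media-content metadata, with hypothetical names, might pick the most frequent non-stopword term:

      from collections import Counter
      import re

      STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "on", "for", "what"}

      def derive_topic(*statements: str) -> str:
          """Derive a topic from a user statement, a speaker/narrator statement,
          metadata, section headings, titles, quotes, etc."""
          words = []
          for s in statements:
              words += [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS]
          return Counter(words).most_common(1)[0][0] if words else ""

      topic = derive_topic("What is their plan for energy policy?",        # user commentary
                           "Tonight the candidates debate energy policy.") # title/metadata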
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be provided based on a preference for a viewpoint.
  • the user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18 , 20 occurring in real-time), a viewpoint may be determined (e.g., a viewpoint associated with a topic and/or a section of the media content), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the viewpoint.
  • the relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past regarding a viewpoint presented in that section of the media content.
  • the relevant commentary may be from a present time period, such as comments made at approximately the time of video replay regarding the viewpoint.
  • the viewpoint may be derived from a user statement, user history information, and/or the media content.
  • the user statement may include user commentary entered in response to a section of the media content.
  • a section of the media content may be encountered (e.g., a topic raised by the candidates 18 , 20 ) to cause the user to enter user commentary (e.g., via voice, text, a vote such as a “thumbs up”, a favorite designation, bookmarking, etc.) representative of the viewpoint.
  • a section of the media content may be encountered to cause the viewpoint to be derived from the user history.
  • the user history may include website search information, favorite information, bookmark information, metadata, opinion information (e.g., “thumbs up”, rankings, etc.), social network membership information, comments made by the user in the past (e.g., posts, etc.), and so on, or combinations thereof.
  • a section of the media content may be encountered to cause the viewpoint to be derived from the media content.
  • the viewpoint may be derived from a statement made by a narrator of the media content, from an object in the media content (e.g., a statement made by the candidates 18 , 20 , etc.), from an author of the media content, from other information associated with the media content (e.g., metadata, section headings, titles, a quote, etc.), and so on, or combinations thereof.
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may also be provided based on one or more of a viewpoint agreement, a viewpoint disagreement, and/or viewpoint neutrality.
  • the user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18 , 20 occurring in real-time), a viewpoint may be determined (e.g., one of the candidates 18 , 20 talks about a specific topic of a certain point of view), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the viewpoint correspondence.
  • the relevant commentary provided may be based on a degree of correspondence with a viewpoint.
  • the user may have a “pro” viewpoint for the topic, which may agree with the viewpoint of the speaker, and the relevant commentary may be provided corresponding to an agreement viewpoint of the user and the speaker (e.g., comments that agree with the viewpoint).
  • the user may have a “pro” viewpoint for the topic, which disagrees with the viewpoint of the speaker, and the relevant commentary may be provided corresponding to a disagreement viewpoint (e.g., comments that disagree with the viewpoint of the user, comments that disagree with the viewpoint of the speaker, etc.).
  • the commentary may be based on a neutral position for the viewpoint, which may provide all comments related to the topic, no comments related to the topic, and so on, or combinations thereof.
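  • As a hedged sketch of the agreement/disagreement/neutral correspondence described above (hypothetical names and data):

      from typing import Dict, List

      def filter_by_viewpoint(comments: List[Dict],
                              user_viewpoint: str,
                              correspondence: str) -> List[Dict]:
          """'agree' keeps comments sharing the user's viewpoint, 'disagree' keeps
          opposing comments, and 'neutral' passes all comments through."""
          if correspondence == "agree":
              return [c for c in comments if c["viewpoint"] == user_viewpoint]
          if correspondence == "disagree":
              return [c for c in comments if c["viewpoint"] != user_viewpoint]
          return list(comments)

      comments = [{"text": "I support this plan", "viewpoint": "pro"},
                  {"text": "This plan will fail", "viewpoint": "con"}]
      agreeing = filter_by_viewpoint(comments, user_viewpoint="pro", correspondence="agree")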
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may also be provided based on one or more other viewpoint factors.
  • the commentary may be based on one or more of a geographic location, age, gender, height, weight, education, and/or career.
  • the relevant commentary (e.g., RC 1 , etc.) may be provided based on a geographic viewpoint (e.g., a Texas viewpoint, a New York viewpoint, etc.), an age viewpoint (e.g., relatively younger voters, relatively older voters, etc.), and so on, or combinations thereof.
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be provided based on a preference for a state of a social network.
  • the user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18 , 20 occurring in real-time), a state of a social network may be determined (e.g., membership of a social network, content accessible via the social network), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the state of the social network.
  • the relevant commentary may be from a past social network, such as by the members of the social network at approximately the time of original broadcast in the past, content accessible to the user via the social network at approximately the time of original broadcast in the past, and so on, or combinations thereof.
  • the relevant commentary may be from a present social network, such as by the members of the social network at approximately the time of video replay, content accessible to the user via the social network at approximately the time of video replay, and so on, or combinations thereof.
  • a social network may include an online social network, such as intranet social network and/or internet social network, where users may interact and/or establish relationships with each other.
  • a social intranet network may include a social community of employees able to communicate over an internal employer computer network.
  • Internet social networks may include, for example, FACEBOOK®, TWITTER®, LINKEDIN® (registered trademarks of Facebook, Twitter, and Linkedin, respectively) web sites.
  • internet social networks may include question-and-answer (Q&A) web sites, such as QUORA®, YAHOO!® ANSWERS, and STACK OVERFLOW® (registered trademarks of Quora, Yahoo, and Stack Overflow, respectively).
  • a social network may include two or more people (e.g., a group) that communicate based on one or more criteria, such as shared interests, particular subjects, and so on, or combinations thereof.
  • a social network may include two or more users that “like” a particular FACEBOOK® web page.
  • any social network may include two or more people that express a relationship with each other, such as a professional, personal, familial, geographic, and/or educational relationship.
  • Users of a social network may establish relationships with each other, such as by joining a group, becoming “friends”, and/or establishing a “connection” to form a candidate social community.
  • a social network may be pre-existing.
  • the relevant commentary may be scoped to the state of a social network to provide (and/or receive) relevant commentary that would be made, that was actually made, and so on, or combinations thereof.
  • the relevant commentary may be scoped to a present state of the social network, which may include a state at approximately the time of a present replay of the media content, a state at approximately the time of a present original broadcast, and so on, or combinations thereof.
  • the scope to the present state may include present commentary (e.g., present comments) and/or past commentary (e.g., past comments) representative of how members of that present social network would (and/or did) comment in response to the media content (and/or similar media content).
  • the relevant commentary may be scoped to a past state of the social network, which may include a state at approximately the time before an original broadcast, a state at approximately the time of a past original broadcast, and so on, or combinations thereof.
  • the scope to the past state may include present commentary (e.g., present comments) and/or past commentary (e.g., past comments) representative of how members of that past social network would (and/or did) comment in response to the media content (and/or similar media content).
  • scoping the relevant commentary to the past social network state may cause the user to receive relevant commentary from users (e.g., members) in the social network (e.g., at a specific time in the past) and/or content that the user may have had access to via the social network (e.g., at the specific time in the past).
  • scoping the relevant commentary to the present social network state may cause the user to receive relevant commentary from users (e.g., members) in the social network (e.g., at a specific time in the present) and/or content that the user may have access to via the social network (e.g., at the specific time in the present).
  • the state of one or more social networks may be utilized to determine which content is to be, and/or is not to be, provided to the user.
  • specifying a state of a social network in the past may cause the content utilized (e.g., as potential relevant commentary) to include posts and/or content that the user may have had access to in the past (e.g., in the year 2012) via the social network in the past.
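  • Scoping commentary to a past or present state of a social network, as described above, might look like the following sketch; the membership history and names are hypothetical:

      from datetime import datetime
      from typing import Dict, List, Optional, Set, Tuple

      # Hypothetical membership history: member -> (joined, left or None if still a member).
      MEMBERSHIP: Dict[str, Tuple[datetime, Optional[datetime]]] = {
          "alice": (datetime(2011, 1, 1), None),
          "bob":   (datetime(2014, 6, 1), None),                   # joined after the 2012 broadcast
          "carol": (datetime(2010, 3, 1), datetime(2013, 2, 1)),
      }

      def members_at(when: datetime) -> Set[str]:
          """The state of the social network (its membership) at a point in time."""
          return {m for m, (joined, left) in MEMBERSHIP.items()
                  if joined <= when and (left is None or when < left)}

      def scope_to_network_state(comments: List[dict], when: datetime) -> List[dict]:
          """Keep only commentary the user could have seen via the network in that state."""
          visible = members_at(when)
          return [c for c in comments if c["author"] in visible]

      # alice and carol were members at the time of the 2012 broadcast; bob was not.
      past_members = members_at(datetime(2012, 10, 1))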
  • the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be provided based on an authorship independent of a media content access event by an author of the relevant commentary.
  • the user may watch a present original broadcast of the media content (e.g., live video of a debate between the candidates 18 , 20 occurring in real-time), it may be determined if an authorship independent of a media content access event occurred (e.g., commentary related to a topic of a section of the media content without viewing the media content), and a portion of the relevant commentary (e.g., RC 1 , etc.) may be provided by, and/or received at, the commentary render portion 16 .
  • the relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past by authors that did not view the broadcast, comments made before the original broadcast, and so on, or combinations thereof.
  • the relevant commentary may be from a present time period, such as comments made at approximately the time of video replay by authors that did not view the original broadcast, the replay, and so on, or combinations thereof.
  • the relevant commentary may include commentary that was generated (e.g., authored) for the media content while the author viewed the media content.
  • the relevant commentary may not necessarily be temporally and/or spatially linked to the media content.
  • the relevant commentary does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof.
  • the relevant commentary may be based on an authorship of the commentary that is related to a viewpoint, a topic, an object, and so on, or combinations thereof.
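  • A minimal sketch of selecting commentary whose authorship is independent of a media content access event (i.e., the author commented on the topic without viewing the content); names and data are hypothetical:

      from typing import Dict, List, Set

      def independent_commentary(comments: List[Dict],
                                 viewers: Set[str],
                                 topic: str) -> List[Dict]:
          """Keep on-topic commentary written by authors who did not access (e.g.,
          view) the media content itself, such as a news article about the topic."""
          return [c for c in comments
                  if c["author"] not in viewers and c.get("topic") == topic]

      comments = [{"author": "newsdesk", "topic": "energy policy", "text": "Analysis piece"},
                  {"author": "alice", "topic": "energy policy", "text": "Watching live!"}]
      independent = independent_commentary(comments, viewers={"alice"}, topic="energy policy")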
  • the scheme 8 includes components having similar reference numerals as those already discussed in the scheme 6 of FIG. 1A, which are to be understood to incorporate similar functionality.
  • the user may enter user commentary UC 1 in the commentary render portion 16 , at the time T 1 .
  • the user commentary UC 1 may be entered by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof.
  • the user commentary UC 1 may be entered in response to encountering a section of the media content (e.g., intermediate section of the video of the debate).
  • the user commentary UC 1 may be entered in response to encountering a topic and/or a viewpoint, for example a topic and/or a viewpoint presented by one or more of the candidates 18 , 20 .
  • the user commentary may also be used to derive the viewpoint and/or the topic.
  • the user commentary UC 1 may be from a present time period, such as a time period approximately at the time of replay of the media content.
  • a portion of the relevant commentary RC 1 , RC 2 , RC 3 , RC 4 may be based on the user commentary UC 1 , such as a post returned based on a topic and/or viewpoint represented by the user commentary UC 1 .
  • the user experience may be enhanced by simulating an interactive commentary session.
  • the user commentary UC 1 may be shared with a social network at the time T 2 , such as one or more present social networks affiliated with the user, to populate respective commentary render portions corresponding to one or more other affiliated members.
  • the user commentary UC 1 may be encountered to cause the commentary render portion 16 to populate with the relevant commentary RC 1 at the time T 2 .
  • the relevant commentary may be based on the user commentary UC 1 , as well as one or more of a preference for a temporal perspective, a viewpoint, a state of a social network, a topic, an authorship, and so on, or combinations thereof.
  • the user may enter further user commentary UC 2 and receive further relevant commentary RC 2 .
  • an interactive commentary session may be simulated at the time T 2 .
  • the relevant commentary RC 1 , RC 2 may be from a past time period
  • the commentary session may be perceived by the user as occurring in the present, in real-time, via the simulation.
  • the user may view a recorded video of a debate which occurred in a past time period, may enter user commentary (e.g., UC 1 ) that disagrees with one of the candidates 18 , 20 (e.g., disagrees with a viewpoint) at the time T 1 , and receive relevant commentary (e.g., RC 1 , etc.) at the time T 2 that an individual also disagreeing would have received (and/or did receive) at the time of the original past broadcast of the debate via the simulation.
  • the relevant commentary may include commentary from members of a past social network state and/or a present social network to provide past sentiments and/or present sentiments associated with the media content via the simulation.
  • members of a past social network may appear to respond via the simulation with relevant commentary representative of their present viewpoints, of their past viewpoints, etc.
  • members of a present social network may appear to respond via the simulation with relevant commentary representative of their present viewpoints, of their past viewpoints, etc., and so on, or combinations thereof.
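  • The simulated interactive commentary session described above might, as a rough sketch with hypothetical names, release past commentary in step with the replay position so the session appears to occur live:

      from datetime import datetime, timedelta
      from typing import Dict, List

      def simulate_session(past_comments: List[Dict],
                           original_broadcast: datetime,
                           playback_position: timedelta) -> List[Dict]:
          """Show each past comment once the replay reaches the offset at which it
          was originally made, so the exchange is perceived as happening now."""
          cutoff = original_broadcast + playback_position
          return [c for c in past_comments if c["posted_at"] <= cutoff]

      broadcast = datetime(2012, 10, 1, 21, 0)
      past = [{"posted_at": datetime(2012, 10, 1, 21, 12), "text": "Great rebuttal"},
              {"posted_at": datetime(2012, 10, 1, 22, 5), "text": "Closing was weak"}]
      # Fifteen minutes into the replay, only the 21:12 comment is surfaced.
      visible_now = simulate_session(past, broadcast, playback_position=timedelta(minutes=15))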
  • the scheme 10 includes components having similar reference numerals as those already discussed in the scheme 6 of FIG. 1A and/or the scheme 8 of FIG. 1B, which are to be understood to incorporate similar functionality.
  • the user may view initial commentary IC 1 in the commentary render portion 16 , at the time T 1 .
  • the initial commentary IC 1 may be displayed by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof.
  • the initial commentary IC 1 may be displayed in response to encountering a section of the media content.
  • the initial commentary IC 1 may be displayed in response to encountering a topic and/or a viewpoint.
  • the initial commentary IC 1 may also be used to derive the viewpoint and/or the topic.
  • the initial commentary IC 1 may be from a present time period, a past time period, and so on, or combinations thereof.
  • the initial commentary IC 1 may be used to determine a user interest.
  • the user interest may involve an interest for a viewpoint and/or topic represented by the initial commentary IC 1 .
  • the user commentary UC 1 may be entered in response to the initial commentary IC 1 at the time T 1 , which may lead to an interaction (e.g., FIG. 1A , FIG. 1B , etc.), described above, at the time T 2 .
  • the initial commentary IC 1 may also be used to clarify an ambiguous section of the media content.
  • the initial commentary IC 1 may include one or more questions made at a render of the ambiguous section, a comment made (e.g., a comment about the subject matter of the content) at the render of the ambiguous section, a comment made (e.g., an answer) in response to a comment made (e.g., a question) at a render of the ambiguous section, and so on, or combinations thereof.
  • the user may view clarifying commentary CC 1 in the commentary render portion 16 , at the time T 1 .
  • the clarifying commentary CC 1 may be displayed by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof.
  • the clarifying commentary CC 1 may be displayed in response to encountering a section of the media content that is ambiguous, determined from questions made, from initial commentary IC 1 , from user commentary UC 1 , from mappings, from metadata, and so on, or combinations thereof.
  • the clarifying commentary CC 1 may be displayed in response to encountering a topic and/or a viewpoint, and/or may be used to derive the viewpoint and/or the topic.
  • the clarifying commentary CC 1 may be a further refinement of the initial commentary IC 1 , or may be the initial commentary IC 1 itself.
  • the clarifying commentary CC 1 may include a comment describing the media content, a link to further comments describing the media content, responses to questions made in the past related to the media content, and so on, or combinations thereof.
  • the user commentary UC 2 may be entered in response to the clarifying commentary CC 1 at the time T 1 (e.g., the comment "that makes sense", a link having a relatively high degree of relatedness to a possible topic, etc.), which may lead to an interaction (e.g., FIG. 1A , FIG. 1B , etc.), as described above, and/or the interaction of FIG. 1C at the time T 2 .
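  • One way the clarification flow above might be approximated (a sketch only; the question threshold, mappings, and names are hypothetical) is to flag sections that drew many questions and attach stored answers to them:

      from typing import Dict, List

      def ambiguous_sections(questions: List[Dict], threshold: int = 3) -> List[str]:
          """Flag sections that drew at least `threshold` questions as ambiguous."""
          counts: Dict[str, int] = {}
          for q in questions:
              counts[q["section"]] = counts.get(q["section"], 0) + 1
          return [section for section, n in counts.items() if n >= threshold]

      def clarifying_commentary(section: str, answers: Dict[str, List[str]]) -> List[str]:
          """Return stored answers/explanations mapped to the ambiguous section."""
          return answers.get(section, [])

      questions = [{"section": "debate-q3"}, {"section": "debate-q3"}, {"section": "debate-q3"}]
      answers = {"debate-q3": ["The candidate is referring to the 2009 stimulus bill."]}
      for section in ambiguous_sections(questions):
          print(clarifying_commentary(section, answers))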
  • FIG. 2 shows an architecture 102 that may be used to provide (and/or receive) relevant commentary in response to rendering a section of media content according to an embodiment.
  • media logic 122 may detect a media content access event.
  • the media logic 122 may detect the generation, processing, storing, retrieving, rendering, and/or exchanging of information in electronic form.
  • the media logic 122 may also identify a section of the media content, such as a frame of the media content, an intermediate section of the media content, and so on, or combinations thereof.
  • user logic 128 may enter user commentary, which may be from a present time period, a past time period, and so on, or combinations thereof.
  • the user commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the user logic 128 may provide a user interface (e.g., a graphical user interface, a command line interface, etc.) to access one or more configurable settings.
  • the user logic 128 may provide access to one or more settings associated with providing and/or receiving relevant commentary.
  • the settings may include options to determine the media access event, to identify a section of the media content, to specify the number and the type of relevant commentary, to specify the manner of displaying the media content and/or commentary, to specify the manner of entering user commentary, initial commentary, and/or clarifying commentary, to derive a viewpoint and/or a topic, to specify a preference for a temporal perspective, for a viewpoint, for a state of social network, to specify an authorship independent of a media content access event by an author of the relevant commentary, to clarify an ambiguous section of the media content, and/or to simulate an interactive commentary session.
  • the settings may include an automatic feature, for example to automatically determine the configurations based on history information, machine learning processes, and so on, or combinations thereof.
  • the time period may be set by the user via the user interface, which may allow the user to input the time period, select the time period, enable (and/or disable) an automatic implementation of a manually and/or automatically derived time period, and so on, or combinations thereof.
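  • The configurable settings described above could be grouped, purely as an assumed illustration, into a single preferences object; every field name here is hypothetical:

      from dataclasses import dataclass
      from datetime import datetime
      from typing import Optional, Tuple

      @dataclass
      class CommentaryPreferences:
          """User-configurable settings for providing/receiving relevant commentary."""
          temporal_perspective: str = "past"         # "past", "present", or "mixed"
          time_period: Optional[Tuple[datetime, datetime]] = None  # explicit window, if any
          viewpoint_correspondence: str = "neutral"  # "agree", "disagree", or "neutral"
          social_network_state: str = "present"      # "past" or "present"
          max_items: int = 4                         # e.g., RC 1..RC 4 shown side by side
          auto_derive_topic: bool = True             # derive the topic automatically

      prefs = CommentaryPreferences(temporal_perspective="past",
                                    time_period=(datetime(2012, 10, 1), datetime(2012, 10, 2)))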
  • relevant commentary logic 134 may filter the commentary.
  • the relevant commentary logic 134 may filter the commentary based on a preference for a temporal perspective, such as a past perspective, a present perspective, and so on, or combinations thereof.
  • the relevant commentary logic 134 may determine and/or employ a time period to impart a temporal perspective to the relevant commentary, which may be on any desired time scale. The time period may be employed based on any parameter, such as a variance, a broadcast date, and so on, or combinations thereof.
  • the relevant commentary logic 134 may filter the commentary based on a preference for a viewpoint, which may be derived from a user statement, user history information, the media content, and so on, or combinations thereof.
  • the relevant commentary logic 134 may filter the commentary based on a viewpoint agreement, a viewpoint disagreement, and/or a viewpoint neutrality. In another example, the relevant commentary logic 134 may filter the commentary based on one or more further viewpoint factors, such as a geographic location, age, gender, and so on, or combinations thereof.
  • the relevant commentary logic 134 may also filter the commentary based on a preference for a state of a social network, such as a past state of a social network, a present state of a social network, and so on, or combinations thereof.
  • the relevant commentary logic 134 may determine the state of the social network, and/or filter the commentary based on the state, to provide present commentary and/or past commentary representative of how members of the past social network and/or the present social network would (and/or did) comment in response to the media content (and/or similar media content), to provide content accessible via the social network according to the state, and so on, or combinations thereof.
  • the relevant commentary logic 134 may determine a topic related to the media content.
  • the relevant commentary logic 134 may derive the topic from a user statement, the media content, and so on, or combinations thereof.
  • the topic may be related to a section of the media content (e.g., a chapter, etc.).
  • the relevant commentary logic 134 may also derive the topic from a comment expressed by one or more of the user and/or the media content.
  • the relevant commentary logic 134 may determine an authorship of the media content.
  • An author of the media content may include a performer of the media content (e.g., writer, singer, etc.), an organization that is the source of the media content (e.g., publisher, source web site, etc.), and so on, or combinations thereof.
  • the relevant commentary logic 134 may determine if an authorship of the relevant commentary is independent of a media content access event by the author of the relevant commentary.
  • the relevant commentary logic 134 may provide commentary that was made for the media content, that does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof.
  • the relevant commentary logic 134 may also enter initial commentary, for example in response to encountering a section of the media content, in response to encountering a topic and/or a viewpoint, and so on, or combinations thereof.
  • the relevant commentary logic 134 may enter initial commentary that may be from a present time period (e.g., as real-time initial commentary), from a past time period (e.g., as stored initial commentary), and so on, or combinations thereof.
  • the initial commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the relevant commentary logic 134 may enter the initial commentary to determine a user interest, to clarify an ambiguous section of the media content, to derive a topic and/or a viewpoint, and so on, or combinations thereof.
  • the relevant commentary logic 134 may clarify an ambiguous section of the media content.
  • the relevant commentary logic 134 may clarify the section by, for example, determining and/or employing information such as a mapping associated with the ambiguous section, metadata associated with the media content, and so on, or combinations thereof.
  • the relevant commentary logic 134 may enter clarifying commentary in response to, for example, encountering a section of the media content that is ambiguous, encountering a topic and/or a viewpoint, and so on, or combinations thereof.
  • the relevant commentary logic 134 may enter clarifying commentary that may be from a present time period (e.g., as real-time initial commentary), from a past time period (e.g., as stored initial commentary), and so on, or combinations thereof.
  • the relevant commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the clarifying commentary may be a further refinement of the initial commentary, may be the initial commentary itself, and so on, or combinations thereof.
  • the relevant commentary logic 134 may enter clarifying commentary to derive a viewpoint and/or a topic.
  • the relevant commentary logic 134 may also simulate an interactive commentary session.
  • the interactive commentary session may provide a user experience where the commentary session may be perceived by the user as occurring in the present (in real-time), even though the relevant commentary may be from a past time period, a present time period, and so on, or combinations thereof.
  • the relevant commentary logic 134 may provide relevant comment data 152 having relevant commentary based on one or more of the logic associated therewith.
  • a user may view a broadcast of a debate that occurred in the past, and the user may receive a stream of posts that were made in real-time during the debate in the past.
  • the user may make a post (e.g., user commentary) related to the content of the broadcast (e.g., a topic) and/or related to the stream of posts (e.g., negative posts, positive posts, general questions, etc.).
  • the user may, in response, receive posts and/or other content (e.g., news articles, blog posts, video responses, etc.) from the past that were responses to similar posts as the user post.
  • the user may also specify a setting to filter the commentary to view comments that coincide (e.g., agree, disagree, are neutral) with the broadcast, the stream of posts, and/or the user post.
  • the user may scope the commentary to past viewpoints of members of a past social network, present viewpoints of members of a present social network, present viewpoints of members of a past social network, and so on, or combinations thereof.
  • the user will be provided a stream of posts (e.g., poll disapproval ratings, twitter posts, etc.) reflecting the opposition during the time of the debate in the past.
  • a user may view a broadcast of a debate that occurred in the past, and the user may receive a stream of posts that are presently being made in real-time at the time of the replay of the debate in the present.
  • the stream of posts may be related to a viewpoint and/or a topic.
  • the user may make a post (i.e., to the architecture 102 and/or the broader social network) related to the content of the broadcast (e.g., a topic), the stream of posts (e.g., negative posts, positive posts, general questions, etc.), and so on, or combinations thereof.
  • the user may, in response, receive posts and/or other content (e.g., news articles, blog posts, video responses, etc.) from the present that are related to the content and/or the user post.
  • the user may also specify a setting to filter the commentary to view comments that coincide (e.g., agree, disagree, are neutral) with the broadcast, the stream of posts, and/or the user post.
  • the user may scope the commentary to past viewpoints of members of a past social network, present viewpoints of members of a present social network, present viewpoints of members of a past social network, and so on, or combinations thereof.
  • the user will be provided a stream of posts (e.g., poll approval ratings, twitter posts, etc.) reflecting the support, which are presently being made at the time of replay of the debate in the present.
  • a user may view a broadcast of a debate that occurred in the past, and a portion of the content may be analyzed for an ambiguous section of the broadcast based on, for example, questions that were made during the broadcast, after the broadcast, and so on, or combinations thereof.
  • the user may view posts made for clarification to provide a greater level of understanding to the user (e.g., an understanding of the media content, of a topic, of a viewpoint, etc.).
  • the architecture 102 may determine the context of the section of the broadcast by analyzing metadata associated with the content, a posting generated approximately at the time of each section of the broadcast, and so on, or combinations thereof.
  • the architecture 102 may also leverage sources for mappings between the media content sections and relevant posts (and/or media content). Mapping information may include tags, time stamp relationships, and prior interaction history with the source (and/or source author) of the media content being viewed, of similar content, and so on, or combinations thereof.
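  • As a hedged sketch of the tag- and timestamp-based mapping mentioned above (hypothetical structures and values):

      from typing import Dict, List

      def map_sections_to_posts(sections: List[Dict], posts: List[Dict]) -> Dict[str, List[Dict]]:
          """Associate each content section with posts that share a tag with it or
          whose timestamp falls within the section's broadcast time range."""
          mapping: Dict[str, List[Dict]] = {s["id"]: [] for s in sections}
          for post in posts:
              for s in sections:
                  shares_tag = set(post.get("tags", [])) & set(s.get("tags", []))
                  in_range = s["start_ts"] <= post["ts"] <= s["end_ts"]
                  if shares_tag or in_range:
                      mapping[s["id"]].append(post)
          return mapping

      sections = [{"id": "debate-q3", "tags": ["energy"], "start_ts": 1200, "end_ts": 1500}]
      posts = [{"ts": 1300, "tags": [], "text": "Interesting point on energy"}]
      mapping = map_sections_to_posts(sections, posts)   # post matches "debate-q3" by timestamp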
  • the user may receive commentary made (e.g., presently made, made in the past, etc.) associated with one or more topics presented in the media content (e.g., debate) without requiring the author of the commentary to view the media content.
  • the commentary may be obtained from, for example, a present article from a news organization associated with one or more topics presented in the media content, a post made in response to reading the present news article, and so on, or combinations thereof.
  • the architecture 102 may determine a viewpoint of the user, the media content, and so on, or combinations thereof.
  • the architecture 102 may filter the commentary based on a temporal preference, a viewpoint, and/or a state of the social network to provide data from the past, the present, or a combination thereof.
  • the user may have the opportunity to post an opinion, and view commentary from any desired time period, from any desired viewpoint, and/or from any desired social network.
  • the user may experience an interactive commentary session which may appear as a live commentary interaction session, although the commentary is not being generated (e.g., authored) in real-time.
  • an architecture 202 is shown that may be used to provide (and/or receive) relevant commentary in response to rendering a section of media content according to an embodiment.
  • Logic components identified in the architecture 202 of FIG. 3 having similar reference numerals as those already discussed in the architecture 102 of FIG. 2 are to be understood to incorporate similar functionality.
  • media logic 222 may include media access detection logic 224 to detect a media content access event.
  • the media access detection logic 224 may detect the generation, processing, storing, retrieving, rendering, and/or exchanging of information in electronic form.
  • the media access detection logic 224 may detect launching of a media player application, launching of a web browser, retrieving of the media content from storage, receiving the media content from an image capture device (e.g., on or off-device camera), rendering (e.g., displaying) the media content, and so on, or combinations thereof.
  • the media logic 222 may include section identification logic 226 to identify a section of the media content.
  • the section identification logic 226 may identify a frame of a video, an area of an image, a segment of audio, a domain of a hypertext link, a chapter of a book, a paragraph of an article, and so on, or combinations thereof.
  • the section identification logic 226 may also identify a beginning section of the media content, an intermediate section of the media content, a final section of the media content, and so on, or combinations thereof.
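  • As a purely illustrative sketch (the function name and the ten-percent boundaries are hypothetical and not part of the disclosed embodiments), section identification for time-based media might be approximated by classifying a playback position:

    def identify_section(position_seconds, total_seconds):
        # Classify a playback position as a beginning, intermediate, or final section.
        fraction = position_seconds / float(total_seconds)
        if fraction < 0.1:
            return "beginning"
        if fraction > 0.9:
            return "final"
        return "intermediate"

    print(identify_section(30, 3600))    # beginning
    print(identify_section(1800, 3600))  # intermediate
    print(identify_section(3550, 3600))  # final
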
  • user logic 228 may include user commentary logic 230 to enter user commentary.
  • the user commentary logic 230 may enter user commentary by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof.
  • the user commentary may be from a present time period, for example as real-time user commentary.
  • the user commentary may be from a past time period, for example as stored user commentary.
  • the user commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the user logic 228 may also include user preference logic 232 .
  • the user preference logic 232 may provide access to one or more settings associated with providing and/or receiving relevant commentary.
  • relevant commentary logic 234 may include temporal logic 236 to filter the commentary based on a preference for a temporal perspective.
  • the temporal logic 236 may filter the commentary based on a preference for a past perspective, a preference for a present perspective, and so on, or combinations thereof.
  • the temporal logic 236 may determine and/or employ a time period to impart a temporal perspective to the relevant commentary, which may be on any desired time scale. The time period may be employed based on any parameter, such as a variance, a broadcast date, and so on, or combinations thereof.
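  • For illustration only, a minimal sketch of such temporal filtering is shown below; the comment structure, the choice of anchor dates, and the six-month variance are hypothetical assumptions rather than a definitive implementation.

    from datetime import datetime, timedelta

    def filter_by_temporal_perspective(comments, perspective, broadcast_date,
                                       replay_date, variance=timedelta(days=182)):
        # Anchor the time window on the past original broadcast or on the present
        # replay, then keep only comments whose timestamps fall inside the variance.
        anchor = broadcast_date if perspective == "past" else replay_date
        return [c for c in comments if abs(c["timestamp"] - anchor) <= variance]

    comments = [
        {"text": "Live reaction in 2012", "timestamp": datetime(2012, 10, 3)},
        {"text": "Retrospective in 2014", "timestamp": datetime(2014, 6, 1)},
    ]
    print(filter_by_temporal_perspective(
        comments, "past", datetime(2012, 10, 3), datetime(2014, 6, 2)))
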
  • the relevant commentary logic 234 may include viewpoint logic 238 to filter the commentary based on a preference for a viewpoint.
  • the viewpoint logic 238 may derive the viewpoint from one or more of a user statement, user history information, the media content (e.g., a section of the media content), and so on, or combinations thereof.
  • the viewpoint logic 238 may filter the commentary based on one or more of a viewpoint agreement, a viewpoint disagreement, a viewpoint neutrality, and so on, or combinations thereof.
  • the viewpoint logic 238 may derive the commentary based on one or more further viewpoint factors, such as a geographic location, age, gender, and so on, or combinations thereof.
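  • A purely illustrative sketch of such viewpoint filtering appears below; the stance labels, correspondence modes, and demographic fields are hypothetical and stand in for whatever viewpoint representation an implementation might use.

    def filter_by_viewpoint(comments, user_viewpoint, mode="agree", factors=None):
        # Keep comments whose stance matches the requested correspondence
        # (agree, disagree, or neutral) and any further viewpoint factors.
        factors = factors or {}
        kept = []
        for comment in comments:
            if mode == "agree" and comment["stance"] != user_viewpoint:
                continue
            if mode == "disagree" and comment["stance"] == user_viewpoint:
                continue
            # "neutral" mode keeps every stance
            if any(comment.get(key) != value for key, value in factors.items()):
                continue
            kept.append(comment)
        return kept

    comments = [
        {"text": "I support this plan", "stance": "pro", "location": "Texas"},
        {"text": "This plan is flawed", "stance": "con", "location": "New York"},
    ]
    print(filter_by_viewpoint(comments, "pro", mode="agree", factors={"location": "Texas"}))
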
  • the relevant commentary logic 234 may include social network logic 240 to filter the commentary based on a preference for a state of a social network.
  • the social network logic 240 may filter the commentary based on a preference for a past state of a social network, a present state of a social network, and so on, or combinations thereof.
  • the social network logic 240 may determine the state of the social network, and/or filter the commentary based on the state, to provide present commentary and/or past commentary representative of how members of the past social network and/or the present social network would (and/or did) comment in response to the media content (and/or similar media content), to provide content accessible via the social network according to the state, and so on, or combinations thereof.
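  • The following non-limiting sketch illustrates one way a past or present state of a social network might be approximated from membership records; the record format and the dates are hypothetical.

    from datetime import datetime

    def filter_by_network_state(comments, memberships, as_of):
        # Keep comments authored by users who were members of the social
        # network as of the requested date (past state or present state).
        members = {user for user, joined, left in memberships
                   if joined <= as_of and (left is None or left > as_of)}
        return [c for c in comments if c["author"] in members]

    memberships = [
        ("alice", datetime(2010, 1, 1), None),                  # still a member
        ("bob",   datetime(2013, 6, 1), None),                  # joined after the broadcast
        ("carol", datetime(2009, 1, 1), datetime(2011, 1, 1)),  # left the network
    ]
    comments = [{"author": "alice", "text": "Watching the debate"},
                {"author": "bob", "text": "Late to the conversation"}]
    print(filter_by_network_state(comments, memberships, datetime(2012, 10, 3)))
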
  • the relevant commentary logic 234 may include topic logic 242 to determine a topic related to the media content.
  • the topic logic 242 may derive the topic from a user statement, the media content, and so on, or combinations thereof.
  • the topic may be related to a section of the media content (e.g., portion thereof, a chapter, etc.).
  • the topic logic 242 may derive the topic from a comment expressed by one or more of the user, the media content, and so on, or combinations thereof.
  • the relevant commentary logic 234 may include authorship logic 244 to determine an authorship of the commentary.
  • the authorship logic 244 may determine if an authorship of the commentary is independent of a media content access event by the author of the relevant commentary.
  • the authorship logic 244 may filter the commentary to provide commentary that was made for the media content, that does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof.
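  • As a minimal, purely illustrative sketch, authorship independence might be approximated by excluding commentary from authors known to have accessed the media content; the author identifiers and the viewer set are hypothetical assumptions.

    def filter_independent_authorship(comments, viewers_of_content):
        # Keep commentary whose author did not access (e.g., view) the media
        # content itself, such as an article written about the same topic.
        return [c for c in comments if c["author"] not in viewers_of_content]

    comments = [
        {"author": "columnist", "text": "Editorial on the energy debate"},
        {"author": "viewer42", "text": "Posted while watching the broadcast"},
    ]
    print(filter_independent_authorship(comments, viewers_of_content={"viewer42"}))
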
  • the relevant commentary logic 234 may include initial commentary logic 246 to provide initial commentary.
  • the initial commentary logic 246 may enter the initial commentary by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof.
  • the initial commentary may be from a present time period, for example as real-time initial commentary.
  • the initial commentary may be from a past time period, for example as stored initial commentary.
  • the initial commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the initial commentary logic 246 may enter the initial commentary to determine a user interest, to clarify an ambiguous section of the media content, to derive a topic and/or a viewpoint, and so on, or combinations thereof.
  • the relevant commentary logic 234 may include clarification logic 248 to clarify an ambiguous section of the media content.
  • the clarification logic 248 may determine and/or employ information to clarify the ambiguous section, such as a mapping associated with the ambiguous section, metadata associated with the media content, and so on, or combinations thereof.
  • the clarification logic 248 may enter clarifying commentary by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof.
  • the clarification logic 248 may enter clarifying commentary in response to, for example, encountering a section of the media content that is ambiguous, encountering a topic and/or a viewpoint, and so on, or combinations thereof.
  • the clarification logic 248 may enter clarifying commentary that may be from a present time period (e.g., as real-time initial commentary), from a past time period (e.g., as stored initial commentary), and so on, or combinations thereof.
  • the relevant commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
  • the clarifying commentary may be a further refinement of the initial commentary, may be the initial commentary itself, and so on, or combinations thereof.
  • the clarification logic 248 may enter clarifying commentary to derive a viewpoint and/or a topic.
  • the relevant commentary logic 234 may include simulation logic 250 to simulate an interactive commentary session.
  • the simulation logic 250 may provide a user experience where the commentary session may appear as occurring in the present, in real-time, even though the relevant commentary may be from a past time period, a present time period, and so on, or combinations thereof.
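  • For illustration only, the sketch below shows one way such a simulation might schedule stored commentary so that it appears to unfold live during a replay; the timestamps and comment structure are hypothetical assumptions.

    from datetime import datetime

    def simulate_session(past_comments, broadcast_start, playback_position_seconds):
        # Release each stored comment once the replay reaches the offset at
        # which it was originally posted during the past broadcast.
        released = []
        for comment in past_comments:
            original_offset = (comment["timestamp"] - broadcast_start).total_seconds()
            if original_offset <= playback_position_seconds:
                released.append(comment["text"])
        return released

    start = datetime(2012, 10, 3, 21, 0)
    past_comments = [
        {"text": "Opening statements look sharp", "timestamp": datetime(2012, 10, 3, 21, 2)},
        {"text": "Big claim on jobs numbers", "timestamp": datetime(2012, 10, 3, 21, 40)},
    ]
    print(simulate_session(past_comments, start, 600))  # ten minutes into the replay
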
  • the relevant commentary logic 234 may provide relevant comment data 252 having relevant commentary based on one or more of the logic associated therewith.
  • FIG. 4 shows a method 302 of providing and/or receiving relevant commentary in response to rendering a section of media content according to an embodiment.
  • Illustrated processing block 354 provides for detecting a media content access event, for example by a user, by a computing platform, and so on, or combinations thereof.
  • the media content access event may correspond to, for example, the media content access event (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • Relevant commentary may be provided and/or received in response to rendering the media content, such as a section of the media content, at block 356 .
  • the relevant commentary may correspond to, for example, the relevant commentary (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • At least a portion of the relevant commentary may be provided and/or received based on a preference for a temporal perspective at block 358 , wherein the temporal perspective in block 358 may correspond to, for example, the temporal perspective (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • At least a portion of the relevant commentary may be provided and/or received based on a preference for a viewpoint at block 360 , wherein the viewpoint in block 360 may correspond to, for example, the viewpoint (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • At least a portion of the relevant commentary may be provided and/or received based on a preference for a state of a social network at block 362 , wherein the state of the social network in block 362 may correspond to, for example, the state (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • the method 302 may also provide and/or receive at least a portion of the relevant commentary based on a topic at block 364 , for example a topic related to the section of the media content that is rendered.
  • the relevant commentary in block 364 may correspond to, for example, the relevant commentary based on a topic (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • the method 302 may provide and/or receive at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary at block 366 .
  • the relevant commentary in block 366 may correspond to, for example, the relevant commentary based on an authorship (e.g., FIG. 1 to FIG. 3 ) already discussed.
  • the method 302 may clarify an ambiguous section of the media content at block 368 .
  • an ambiguous section of the media content at block 368 may be clarified (e.g., FIG. 1 to FIG. 3 ) as already discussed.
  • the method 302 may also simulate an interactive commentary session at block 370 .
  • an interactive commentary session at block 370 may be simulated (e.g., FIG. 1 to FIG. 3 ) as already discussed.
  • the method 302 may provide and/or receive initial commentary to the user related to the section, provide and/or receive user commentary in response to the initial commentary, provide and/or receive at least a portion of the relevant commentary based on the user commentary, and so on, or combinations thereof.
  • the method 302 may provide and/or receive media content and initial commentary from a past time period, user commentary from a present time period, and/or a portion of the relevant commentary from the past time period.
  • the method 302 may also provide and/or receive media content from a past time period, initial commentary and user commentary from a present time period, and a portion of the relevant commentary from the present time period.
  • the method 302 may provide and/or receive a portion of the relevant commentary from a present time period and a past time period.
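  • The following non-limiting sketch ties the blocks of the method 302 together as a simple filter pipeline; the predicate functions, field names, and sample data are hypothetical and merely indicate how the successive preferences might be applied in turn.

    def provide_relevant_commentary(media_event, section, preferences, all_commentary):
        # If an access event was detected, narrow candidate commentary by
        # applying each configured preference (temporal, viewpoint, social
        # network state, topic, authorship) as a predicate in turn.
        if not media_event["detected"]:
            return []
        candidates = all_commentary
        for predicate in preferences.values():
            candidates = [c for c in candidates if predicate(c, section)]
        return candidates

    preferences = {
        "temporal": lambda c, s: c["year"] == s["broadcast_year"],  # past perspective
        "topic":    lambda c, s: s["topic"] in c["tags"],
    }
    commentary = [{"text": "Energy policy take", "year": 2012, "tags": {"energy"}},
                  {"text": "Sports recap", "year": 2012, "tags": {"sports"}}]
    section = {"broadcast_year": 2012, "topic": "energy"}
    print(provide_relevant_commentary({"detected": True}, section, preferences, commentary))
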
  • FIG. 5 shows a computing device 486 having a processor 488 , mass storage 490 (e.g., read only memory/ROM, optical disk, flash memory), a network interface 492 , and system memory 494 (e.g., random access memory/RAM).
  • the processor 488 is configured to execute logic 496 , wherein the logic 496 may implement one or more aspects of the schemes 6, 8 and 10 ( FIG. 1A to FIG. 1C ), the architecture 102 ( FIG. 2 ), the architecture 202 ( FIG. 3 ), and/or the method 302 ( FIG. 4 ), already discussed.
  • the logic 496 may enable the computing device 486 to function to provide (and/or receive) relevant commentary, for example in response to rendering a section of media content.
  • the logic 496 may also be implemented as a software application that is distributed among many computers (e.g., local or remote). Thus, while a single computer could provide the functionality described herein, systems implementing these features can use many interconnected computers (e.g., for scalability as well as modular implementation).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Abstract

Methods, products, apparatuses, and systems may provide and/or receive relevant commentary for media content. Additionally, the relevant commentary may be provided and/or received in response to rendering a section of the media content. In addition, the relevant commentary may be provided and/or received based on one or more of a preference for a temporal perspective, a preference for a viewpoint, and/or a preference for a state of a social network. Moreover, the relevant commentary may be provided and/or received based on a topic related to the section of the media content. The relevant commentary may be provided and/or received based on an authorship independent of a media content access event by an author of the relevant commentary. In addition, an ambiguous section of the media content may be clarified, and/or an interactive commentary session may be simulated.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 13/896,489 filed on May 17, 2013.
  • BACKGROUND
  • Embodiments of the present invention generally relate to relevant commentary for media content. More particularly, embodiments relate to providing (and/or receiving) relevant commentary for media content based on a preference, such as a preference for one or more of a temporal perspective, a viewpoint, and/or a state of a social network.
  • Commentary for media content may be provided to users of the media content. For example, the commentary may include a list of posts associated with a video or a log of a conversation that has taken place between two or more users. Social media buzz may also be presented alongside television media content. The commentary, however, may fail to adequately take into consideration a number of factors such as temporal perspective and viewpoint.
  • BRIEF SUMMARY
  • Embodiments may include a method involving providing relevant commentary to a user. The method may include providing relevant commentary to the user in response to rendering a section of media content. In addition, at least a portion of the relevant commentary may be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a method involving receiving relevant commentary. The method may include receiving relevant commentary in response to rendering a section of media content. In addition, at least a portion of the relevant commentary may be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a method involving detecting a media content access event by a user. The method may include providing relevant commentary to the user in response to rendering a section of media content. In addition, at least a portion of the relevant commentary may be based on a preference, such as two or more of a preference for a temporal perspective, a preference for a viewpoint, and a preference for a state of a social network.
  • Embodiments may include a method involving providing (and/or receiving) at least a portion of the relevant commentary based on a topic related to the section of the media content. The method may include providing (and/or receiving) at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary. In addition, the method may include clarifying an ambiguous section of the media content. Moreover, the method may include simulating an interactive commentary session.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to provide relevant commentary to a user. The computer usable code, if executed, may also cause a computer to provide relevant commentary to the user in response to a render of a section of media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to receive relevant commentary. The computer usable code, if executed, may also cause a computer to receive relevant commentary in response to a render of a section of media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as a preference for a temporal perspective.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to detect a media content access event by a user. The computer usable code, if executed, may also cause a computer to provide relevant commentary to the user in response to a render of a section of the media content. The computer usable code, if executed, may also cause a computer to cause at least a portion of the relevant commentary to be based on a preference, such as two or more of a preference for a temporal perspective, a preference for a viewpoint, and a preference for a state of a social network.
  • Embodiments may include a computer program product having a computer readable storage medium and computer usable code stored on the computer readable storage medium. If executed by a processor, the computer usable code may cause a computer to cause at least a portion of the relevant commentary to be based on a topic related to the section of the media content. If executed, the computer usable code may cause at least a portion of the relevant commentary to be based on an authorship independent of a media content access event by an author of the relevant commentary. The computer usable code, if executed, may cause a computer to clarify an ambiguous section of the media content. The computer usable code, if executed, may cause a computer to simulate an interactive commentary session.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The various advantages of the embodiments of the present invention will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
  • FIGS. 1A to 1C are block diagrams of examples of schemes of providing (and/or receiving) relevant commentary in response to rendering a section of media content according to an embodiment;
  • FIG. 2 is a block diagram of an example of an architecture including logic to provide (and/or receive) relevant commentary in response to a render of a section of media content according to an embodiment;
  • FIG. 3 is a block diagram of an example of an architecture including a variation in logic to provide (and/or receive) relevant commentary in response to a render of a section of media content according to an embodiment;
  • FIG. 4 is a flowchart of an example of a method of providing (and/or receiving) relevant commentary in response to rendering a section of media content according to an embodiment; and
  • FIG. 5 is a block diagram of an example of a computing device according to an embodiment.
  • DETAILED DESCRIPTION
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • When faced with a media access event (e.g., a video access request), it may be valuable to augment a user experience by providing commentary (e.g., feeds of posts, news articles, blog posts, etc.), such as commentary generated when the media content was originally broadcasted (e.g., posted, published, etc.). It may also be valuable to augment the user experience by filtering the commentary according to a preference, such as a preference for a temporal perspective, a viewpoint, and/or a state of a social network. Additionally, the user experience may be augmented by providing past commentary (e.g., commentary made at a time of a past original broadcast, commentary made at a time before the past original broadcast, etc.) and/or present commentary (e.g., commentary made at a time of a present original broadcast, at a time that a user watches a broadcast replay in the present, etc.), which may be related to a topic (e.g., a topic addressed in the media content). Augmenting the user experience may also involve providing an opportunity to enter commentary (e.g., post), which may be shared with a social network and/or used to tailor the user experience. In addition, ambiguous media content may be clarified and/or made the subject of a simulated interactive commentary session.
  • Referring now to FIGS. 1A to 1C, schemes 6, 8 and 10 are shown of providing (and/or receiving) relevant commentary in response to rendering a section of media content according to an embodiment. The commentary and/or the media content may include any information that may be generated, processed, stored, retrieved, rendered, and/or exchanged in electronic form. Examples of the commentary and/or the media content may include audio, video, images, text, hypertext links, graphics, and so on, or combinations thereof. In one example, the commentary may include a post, a ranking, an instant message, a chat, an email, polling data, and so on, or combinations thereof. In another example, the media content may include a video, a song, a television program, a picture, and so on, or combinations thereof.
  • The commentary and/or the media content may refer to a section thereof. For example, the section of the commentary may include one or more comments from among a string of comments by the same or different individual. The section of the commentary and/or of the media content may include a frame of a video, an area of an image, a segment of audio, a domain of a hypertext link, a chapter of a book, a paragraph of an article, and so on, or combinations thereof. The commentary and/or the media content may include a live (e.g., real-time) communication, a recorded communication, and so on, or combinations thereof. Accordingly, a media content access event may include generating, processing, storing, retrieving, rendering, and/or exchanging information in electronic form.
  • In the illustrated example of FIG. 1A, the scheme 6 may include a computing device 12 having a media render portion 14 to display media content (e.g., video). Accordingly, a media content access event (e.g., a video access event) may involve launching a media player application, launching a web browser, retrieving the media content from storage, receiving the media content from an image capture device (e.g., on or off-device camera), rendering (e.g., displaying) the media content, and so on, or combinations thereof. The media content access event may be detected by the computing device 12 itself, by a remote computing device (e.g., off-platform remote server, off site remote server, etc., not shown), and so on, or combinations thereof. The media content displayed in the media render portion 14 may include a beginning section (e.g., first minutes of a video, introduction section, first chapter of a book, etc.), an intermediate section (e.g., any time between the beginning and end of the media content), a final section (e.g., final minutes of the video, final paragraph of an article, etc.), and so on, or combinations thereof. The media content displayed in the media render portion 14 may also include a plurality of stacked media content (e.g., videos, text, images, etc.), which may be completely overlaid, staggered, side-by-side, and so on, or combinations thereof. At time T1, a video may be displayed in the media render portion 14 of a debate between candidates 18, 20, which may include a live communication (e.g., present original broadcast), a recorded communication (e.g., past original broadcast), and so on, or combinations thereof.
  • The computing device 12 may also include a commentary render portion 16 to display commentary. The commentary may be separate from the media content, the commentary may be overlaid with the media content using varying degrees of transparency among the media render portion 14 and the commentary render portion 16, the commentary render portion 16 may be provided as a region of the media render portion 14 and vice versa, and so on, or combinations thereof. At the time T1, the commentary render portion 16 may be vacant, may be completely transparent, and so on, or combinations thereof. At time T2, a section of the media content (e.g., intermediate section of the debate) may be encountered to cause the commentary render portion 16 to populate. In one example, the commentary render portion 16 may populate by manually and/or automatically becoming less transparent, by adding commentary, and so on, or combinations thereof. The commentary displayed in the commentary render portion 16 may include a plurality of stacked commentary (e.g., videos, text, images, etc.), which may be completely overlaid, staggered, side-by-side, and so on, or combinations thereof. At the time T2, the commentary render portion 16 may display side-by-side textual relevant commentary RC1, RC2, RC3, RC4, although it is understood that any number and type of relevant commentary may be displayed (e.g., by scrolling up or down through the render portion 16, by enlarging the render portion 16, etc.).
  • The relevant commentary RC1, RC2, RC3, RC4 may be provided based on a preference for a temporal perspective. In one example, the user may wish to view relevant commentary from a past time period based on the preference for a past perspective. The user may view a recorded video of a debate that occurred in a past time period, a time period may be determined (e.g., a past time period), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 from a past time period (e.g., the time period corresponding approximately to the past original broadcast, the time period corresponding approximately to before the past original broadcast, etc.). In another example, the user may wish to view relevant commentary from a present time period based on the preference for a present temporal perspective. The user may therefore view the recorded video of the debate that occurred in the past time period, a time period may be determined (e.g., a present time period), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 from a present time period (e.g., the time period corresponding approximately to the present replay of the broadcast). In a further example, the user may wish to view a mixture of relevant commentary from a past time period based on the preference for a past perspective and from a present time period based on the preference for a present perspective.
  • The time period employed to impart a temporal perspective to the relevant commentary may be based on any desired time scale. The time scale, for example, may include centuries, decades, years, months, weeks, days, seconds, and so on, or combinations thereof. The time period may be set according to any parameter. In one example, the time period may be employed according to a variance, such as an approximate six-month variance from the date of creation of the media content. Thus, a time period may represent a preference for a past perspective spanning six months before and/or six months after the date of the creation of the media content. In another example, the time period may be employed according to a broadcast date of the media content. Thus, a time period may represent a preference for a past perspective for a past broadcast of the media content (e.g., comments made at the time of a past original broadcast), a preference for a past perspective before the broadcast of the media content (e.g., comments made before an original past and/or present broadcast), a preference for a present perspective for a present original broadcast of the media content (e.g., comments made during a present original broadcast), a preference for a present perspective for a present replay of a past original broadcast of the media content (e.g., comments made in the present related to a past original broadcast), and so on, or combinations thereof.
  • In addition, a portion of the relevant commentary RC1, RC2, RC3, RC4 may be provided based on a topic related to the media content. The user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18, 20 occurring in real-time), a topic may be determined (e.g., topic related to the section of the media content rendered), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the topic. The relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past about a topic presented in that section of the video. The relevant commentary may be from a present time period, such as comments made at approximately the time of present replay about the topic.
  • The topic may be derived from a user statement, the media content, and so on, or combinations thereof. In one example, the user statement may include user commentary entered in response to commentary, to the section of the media content, and so on, or combinations thereof. For example, a section of the media content may be encountered (e.g., a discussion by the candidates 18, 20) to cause the user to enter user commentary (e.g., via voice, text, an opinion such as “thumbs up”, favorite, bookmarking, etc.) representative of the topic. In another example, the topic may be derived from a statement made by a narrator of the media content, from an object in the media content (e.g., a statement made by the candidates 18, 20, etc.), from an author of the media content, from other information associated with the media content (e.g., metadata, section headings, titles, a quote, etc.), and so on, or combinations thereof.
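  • As a purely illustrative sketch, a topic might be approximated by pooling terms from a user statement and from section metadata and taking the most frequent non-trivial term; a real implementation would use richer text analysis, and the stop-word list and sample strings below are hypothetical.

    from collections import Counter

    STOP_WORDS = frozenset({"the", "a", "an", "of", "on", "and", "to"})

    def derive_topic(user_statement, section_metadata):
        # Pool words from the user statement and the section metadata
        # (e.g., headings, quotes) and return the most common candidate term.
        words = (user_statement + " " + " ".join(section_metadata)).lower().split()
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return counts.most_common(1)[0][0] if counts else None

    print(derive_topic("Great point on energy independence",
                       ["Segment 3: Energy", "Candidate quote: energy policy"]))
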
  • In addition, the relevant commentary RC1, RC2, RC3, RC4 may be provided based on a preference for a viewpoint. For example, the user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18, 20 occurring in real-time), a viewpoint may be determined (e.g., a viewpoint associated with a topic and/or a section of the media content), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the viewpoint. The relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past regarding a viewpoint presented in that section of the media content. The relevant commentary may be from a present time period, such as comments made at approximately the time of video replay regarding the viewpoint. The viewpoint may be derived from a user statement, user history information, and/or the media content. In one example, the user statement may include user commentary entered in response to a section of the media content. For example, a section of the media content may be encountered (e.g., a topic raised by the candidates 18, 20) to cause the user to enter user commentary (e.g., via voice, text, a vote such as a "thumbs up", a favorite designation, bookmarking, etc.) representative of the viewpoint.
  • In another example, a section of the media content may be encountered to cause the viewpoint to be derived from the user history. The user history may include website search information, favorite information, bookmark information, metadata, opinion information (e.g., “thumbs up”, rankings, etc.), social network membership information, comments made by the user in the past (e.g., posts, etc.), and so on, or combinations thereof. In a further example, a section of the media content may be encountered to cause the viewpoint to be derived from the media content. For example, the viewpoint may be derived from a statement made by a narrator of the media content, from an object in the media content (e.g., a statement made by the candidates 18, 20, etc.), from an author of the media content, from other information associated with the media content (e.g., metadata, section headings, titles, a quote, etc.), and so on, or combinations thereof.
  • The relevant commentary RC1, RC2, RC3, RC4 may also be provided based on one or more of a viewpoint agreement, a viewpoint disagreement, and/or viewpoint neutrality. The user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18, 20 occurring in real-time), a viewpoint may be determined (e.g., one of the candidates 18, 20 talks about a specific topic of a certain point of view), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the viewpoint correspondence. The relevant commentary provided may be based on a degree of correspondence with a viewpoint. In one example, the user may have a “pro” viewpoint for the topic, which may agree with the viewpoint of the speaker, and the relevant commentary may be provided corresponding to an agreement viewpoint of the user and the speaker (e.g., comments that agree with the viewpoint). In another example, the user may have a “pro” viewpoint for the topic, which disagrees with the viewpoint of the speaker, and the relevant commentary may be provided corresponding to a disagreement viewpoint (e.g., comments that disagree with the viewpoint of the user, comments that disagree with the viewpoint of the speaker, etc.). In a further example, the commentary may be based on a neutral position for the viewpoint, which may provide all comments related to the topic, no comments related to the topic, and so on, or combinations thereof.
  • The relevant commentary RC1, RC2, RC3, RC4 may also be provided based on one or more other viewpoint factors. For example, the commentary may be based on one or more of a geographic location, age, gender, height, weight, education, and/or career. In one example, the relevant commentary (e.g., RC1, etc.) which may be provided when the user is viewing the debate between the candidates 18, 20 may vary according to a geographic viewpoint (e.g., a Texas viewpoint, a New York viewpoint, etc.), according to an age viewpoint (e.g., relatively younger voters, relatively older voters, etc.), and so on, or combinations thereof.
  • In addition, the relevant commentary RC1, RC2, RC3, RC4 may be provided based on a preference for a state of a social network. The user may view a present original broadcast of the media content (e.g., live video of a debate between the candidates 18, 20 occurring in real-time), a state of a social network may be determined (e.g., membership of a social network, content accessible via the social network), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16 in accordance with the state of the social network. The relevant commentary may be from a past social network, such as by the members of the social network at approximately the time of original broadcast in the past, content accessible to the user via the social network at approximately the time of original broadcast in the past, and so on, or combinations thereof. The relevant commentary may be from a present social network, such as by the members of the social network at approximately the time of video replay, content accessible to the user via the social network at approximately the time of video replay, and so on, or combinations thereof.
  • Generally, a social network may include an online social network, such as intranet social network and/or internet social network, where users may interact and/or establish relationships with each other. For the purpose of illustration, a social intranet network may include a social community of employees able to communicate over an internal employer computer network. Internet social networks may include, for example, FACEBOOK®, TWITTER®, LINKEDIN® (registered trademarks of Facebook, Twitter, and Linkedin, respectively) web sites. In addition, internet social networks may include question-and-answer (Q&A) web sites, such as QUORA®, YAHOO!® ANSWERS, and STACK OVERFLOW® (registered trademarks of Quora, Yahoo, and Stack Overflow, respectively). Thus, a social network may include two or more people (e.g., a group) that communicate based on one or more criteria, such as shared interests, particular subjects, and so on, or combinations thereof. For the purpose of illustration, a social network may include two or more users that “like” a particular FACEBOOK® web page. In addition, any social network may include two or more people that express a relationship with each other, such as a professional, personal, familial, geographic, and/or educational relationship. Users of a social network may establish relationships with each other, such as by joining a group, becoming “friends”, and/or establishing a “connection” to form a candidate social community. A social network may be pre-existing.
  • The relevant commentary may be scoped to the state of a social network to provide (and/or receive) relevant commentary that would be made, that was actually made, and so on, or combinations thereof. The relevant commentary may be scoped to a present state of the social network, which may include a state at approximately the time of a present replay of the media content, a state at approximately the time of a present original broadcast, and so on, or combinations thereof. The scope to the present state may include present commentary (e.g., present comments) and/or past commentary (e.g., past comments) representative of how members of that present social network would (and/or did) comment in response to the media content (and/or similar media content). In another example, the relevant commentary may be scoped to a past state of the social network, which may include a state at approximately the time before an original broadcast, a state at approximately the time of a past original broadcast, and so on, or combinations thereof. The scope to the past state may include present commentary (e.g., present comments) and/or past commentary (e.g., past comments) representative of how members of that past social network would (and/or did) comment in response to the media content (and/or similar media content). Thus, the members of a past social network and/or a present social network may appear to respond using a past sentiment and/or present sentiment.
  • In a further example, scoping the relevant commentary to the past social network state may cause the user to receive relevant commentary from users (e.g., members) in the social network (e.g., at a specific time in the past) and/or content that the user may have had access to via the social network (e.g., at the specific time in the past). In a yet another example, scoping the relevant commentary to the present social network state may cause the user to receive relevant commentary from users (e.g., members) in the social network (e.g., at a specific time in the present) and/or content that the user may have access to via the social network (e.g., at the specific time in the present). Thus, the state of one or more social networks may be utilized to determine which content is to be, and/or is not to be, provided to the user. In yet another example, specifying a state of a social network in the past (e.g., a state in the year 2012) may cause the content utilized (e.g., as potential relevant commentary) to include posts and/or content that the user may have had access to in the past (e.g., in the year 2012) via the social network in the past.
  • In addition, the relevant commentary RC1, RC2, RC3, RC4 may be provided based on an authorship independent of a media content access event by an author of the relevant commentary. The user may watch a present original broadcast of the media content (e.g., live video of a debate between the candidates 18, 20 occurring in real-time), it may be determined if an authorship independent of a media content access event occurred (e.g., commentary related to a topic of a section of the media content without viewing the media content), and a portion of the relevant commentary (e.g., RC1, etc.) may be provided by, and/or received at, the commentary render portion 16. The relevant commentary may be from a past time period, such as comments made at approximately the time of original broadcast in the past by authors that did not view the broadcast, comments made before the original broadcast, and so on, or combinations thereof. The relevant commentary may be from a present time period, such as comments made at approximately the time of video replay by authors that did not view the original broadcast, the replay, and so on, or combinations thereof.
  • Thus, while the relevant commentary may include commentary that was generated (e.g., authored) for the media content while the author viewed the media content, the relevant commentary may not necessarily be temporally and/or spatially linked to the media content. In one example, the relevant commentary does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof. In another example, the relevant commentary may be based on an authorship of the commentary that is related to a viewpoint, a topic, an object, and so on, or combinations thereof.
  • In the illustrated example of FIG. 1B, the scheme 8 includes components having similar reference numerals as those already discussed in the scheme 6 of FIG. 1A, which are to be understood to incorporate similar functionality. In this variation, the user may enter user commentary UC1 in the commentary render portion 16, at the time T1. The user commentary UC1 may be entered by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof. In one example, the user commentary UC1 may be entered in response to encountering a section of the media content (e.g., intermediate section of the video of the debate). In another example, the user commentary UC1 may be entered in response to encountering a topic and/or a viewpoint, for example a topic and/or a viewpoint presented by one or more of the candidates 18, 20. The user commentary may also be used to derive the viewpoint and/or the topic. The user commentary UC1 may be from a present time period, such as a time period approximately at the time of replay of the media content. In addition, a portion of the relevant commentary RC1, RC2, RC3, RC4 may be based on the user commentary UC1, such as a post returned based on a topic and/or viewpoint represented by the user commentary UC1.
  • In the illustrated example, the user experience may be enhanced by simulating an interactive commentary session. In one example, the user commentary UC1 may be shared with a social network at the time T2, such as one or more present social networks affiliated with the user, to populate respective commentary render portions corresponding to one or more other affiliated members. In another example, the user commentary UC1 may be encountered to cause the commentary render portion 16 to populate with the relevant commentary RC1 at the time T2. The relevant commentary may be based on the user commentary UC1, as well as one or more of a preference for a temporal perspective, a viewpoint, a state of a social network, a topic, an authorship, and so on, or combinations thereof. The user may enter further user commentary UC2 and receive further relevant commentary RC2. Thus, an interactive commentary session may be simulated at the time T2.
  • Although the relevant commentary RC1, RC2 may be from a past time period, the commentary session may be perceived by the user as occurring in the present, in real-time, via the simulation. The user may view a recorded video of a debate which occurred in a past time period, may enter user commentary (e.g., UC1) that disagrees with one of the candidates 18, 20 (e.g., disagrees with a viewpoint) at the time T1, and receive relevant commentary (e.g., RC1, etc.) at the time T2 that an individual also disagreeing would have (and/or did) receive at the time of the original past broadcast of the debate via the simulation. In addition, the relevant commentary (e.g., RC1, etc.) may include commentary from members of a past social network state and/or a present social network to provide past sentiments and/or present sentiments associated with the media content via the simulation. For example, members of a past social network may appear to respond via the simulation with relevant commentary representative of their present viewpoints, of their past viewpoints, etc., while members of a present social network may appear to respond via the simulation with relevant commentary representative of their present viewpoints, of their past viewpoints, etc., and so on, or combinations thereof.
  • In the illustrated example of FIG. 1C, the scheme 10 includes components having similar reference numerals as those already discussed in the scheme 6 of FIG. 1A and/or scheme 8 of FIG. 1B, which are to be understood to incorporate similar functionality. In this variation, the user may view initial commentary IC1 in the commentary render portion 16, at the time T1. The initial commentary IC1 may be displayed by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof. In one example, the initial commentary IC1 may be displayed in response to encountering a section of the media content. In another example, the initial commentary IC1 may be displayed in response to encountering a topic and/or a viewpoint. The initial commentary IC1 may also be used to derive the viewpoint and/or the topic. The initial commentary IC1 may be from a present time period, a past time period, and so on, or combinations thereof.
  • The initial commentary IC1 may be used to determine a user interest. The user interest may involve an interest for a viewpoint and/or topic represented by the initial commentary IC1. In the illustrated example, the user commentary UC1 may be entered in response to the initial commentary IC1 at the time T1, which may lead to an interaction (e.g., FIG. 1A, FIG. 1B, etc.), described above, at the time T2. For example, at least a portion of the relevant commentary may be provided based on the user commentary. The initial commentary IC1 may also be used to clarify an ambiguous section of the media content. The initial commentary IC1 may include one or more questions made at a render of the ambiguous section, a comment made (e.g., a comment about the subject matter of the content) at the render of the ambiguous section, a comment made (e.g., an answer) in response to a comment made (e.g., a question) at a render of the ambiguous section, and so on, or combinations thereof.
  • In the illustrated example, the user may view clarifying commentary CC1 in the commentary render portion 16, at the time T1. The clarifying commentary CC1 may be displayed by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof. In one example, the clarifying commentary CC1 may be displayed in response to encountering a section of the media content that is ambiguous, determined from questions made, from initial commentary IC1, from user commentary UC1, from mappings, from metadata, and so on, or combinations thereof. The clarifying commentary CC1 may be displayed in response to encountering a topic and/or a viewpoint, and/or may be used to derive the viewpoint and/or the topic. The clarifying commentary CC1 may be a further refinement of the initial commentary IC1, or may be the initial commentary IC1 itself. The clarifying commentary CC1 may include a comment describing the media content, a link to further comments describing the media content, responses to questions made in the past related to the media content, and so on, or combinations thereof. Accordingly, the user commentary UC2 may be entered in response to the clarifying commentary CC1 at the time T1 (e.g., comment "that makes sense", a link having a relatively high degree of relatedness to a possible topic, etc.), which may lead to an interaction (e.g., FIG. 1A, FIG. 1B, etc.), as described above, and/or the interaction of FIG. 1C at the time T2.
  • FIG. 2 shows an architecture 102 that may be used to provide (and/or receive) relevant commentary in response to rendering a section of media content according to an embodiment. In the illustrated example, media logic 122 may detect a media content access event. In one example, the media logic 122 may detect the generation, processing, storing, retrieving, rendering, and/or exchanging of information in electronic form. The media logic 122 may also identify a section of the media content, such as a frame of the media content, an intermediate section of the media content, and so on, or combinations thereof. In the illustrated example, user logic 128 may enter user commentary, which may be from a present time period, a past time period, and so on, or combinations thereof. In one example, the user commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof.
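To make the role of the media logic 122 concrete, the following is a minimal Python sketch of detecting a media content access event and identifying a section of the media content. The class, method, and event names are illustrative assumptions for this description and are not taken from the disclosure.

```python
from dataclasses import dataclass

# Hypothetical event names; any of these is treated as a media content access event.
ACCESS_EVENTS = {"player_launched", "browser_launched", "content_retrieved", "content_rendered"}

@dataclass
class Section:
    content_id: str
    kind: str     # e.g. "frame", "chapter", "paragraph"
    start: float  # offset into the media content (seconds, page, etc.)
    end: float

class MediaLogic:
    """Sketch of media logic 122: detects access events and identifies sections."""

    def detect_access_event(self, event_name: str) -> bool:
        # Treat any known generation/retrieval/render event as a media content access event.
        return event_name in ACCESS_EVENTS

    def identify_section(self, content_id: str, position: float, granularity: float = 30.0) -> Section:
        # Identify the section containing the current render position,
        # here simply by bucketing the timeline into fixed-length segments.
        start = (position // granularity) * granularity
        return Section(content_id, "frame", start, start + granularity)

if __name__ == "__main__":
    media = MediaLogic()
    if media.detect_access_event("content_rendered"):
        print(media.identify_section("debate-2008", position=95.0))
```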
  • The user logic 128 may provide a user interface (e.g., a graphical user interface, a command line interface, etc.) to access one or more configurable settings. In one example, the user logic 128 may provide access to one or more settings associated with providing and/or receiving relevant commentary. The settings may include options to determine the media content access event, to identify a section of the media content, to specify the number and the type of relevant commentary, to specify the manner of displaying the media content and/or commentary, to specify the manner of entering user commentary, initial commentary, and/or clarifying commentary, to derive a viewpoint and/or a topic, to specify a preference for a temporal perspective, for a viewpoint, and/or for a state of a social network, to specify an authorship independent of a media content access event by an author of the relevant commentary, to clarify an ambiguous section of the media content, and/or to simulate an interactive commentary session. The settings may include an automatic feature, for example to automatically determine the configurations based on history information, machine learning processes, and so on, or combinations thereof. In one example, the time period may be set by the user via the user interface, which may allow the user to input the time period, select the time period, enable (and/or disable) an automatic implementation of a manually and/or automatically derived time period, and so on, or combinations thereof.
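The configurable settings described above can be pictured as a simple preferences structure. The sketch below is hypothetical; the field names and defaults are assumptions chosen only to mirror the options listed in this paragraph.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CommentarySettings:
    """Hypothetical grouping of the configurable settings described above."""
    temporal_preference: str = "past"        # "past", "present", or "both"
    viewpoint_preference: str = "disagree"   # "agree", "disagree", or "neutral"
    social_network_state: str = "past"       # state of the social network to draw from
    time_period: Optional[Tuple[str, str]] = None  # e.g. ("2008-10-01", "2008-11-05")
    max_items: int = 25                      # number of relevant comments to show
    author_independent_only: bool = False    # require authorship independent of an access event
    clarify_ambiguous_sections: bool = True
    simulate_live_session: bool = True
    auto_configure: bool = False             # derive settings from history/machine learning

def load_settings(overrides: dict) -> CommentarySettings:
    # Apply user-supplied overrides on top of defaults, ignoring unknown keys.
    defaults = CommentarySettings()
    known = {k: v for k, v in overrides.items() if hasattr(defaults, k)}
    return CommentarySettings(**{**defaults.__dict__, **known})

if __name__ == "__main__":
    print(load_settings({"temporal_preference": "present", "max_items": 10}))
```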
  • In the illustrated example, relevant commentary logic 134 may filter the commentary. The relevant commentary logic 134 may filter the commentary based on a preference for a temporal perspective, such as a past perspective, a present perspective, and so on, or combinations thereof. The relevant commentary logic 134 may determine and/or employ a time period to impart a temporal perspective to the relevant commentary, which may be on any desired time scale. The time period may be employed based on any parameter, such as a variance, a broadcast date, and so on, or combinations thereof. In addition, the relevant commentary logic 134 may filter the commentary based on a preference for a viewpoint, which may be derived from a user statement, user history information, the media content, and so on, or combinations thereof. In one example, the relevant commentary logic 134 may filter the commentary based on a viewpoint agreement, a viewpoint disagreement, and/or a viewpoint neutrality. In another example, the relevant commentary logic 134 may filter the commentary based on one or more further viewpoint factors, such as a geographic location, age, gender, and so on, or combinations thereof.
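A minimal filtering sketch, assuming each comment carries a timestamp and a precomputed stance, may help illustrate how a temporal preference and a viewpoint preference can be applied together. The Comment fields and the function name are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    author: str
    text: str
    timestamp: datetime
    stance: str  # "agree", "disagree", or "neutral" toward a derived viewpoint

def filter_by_temporal_and_viewpoint(comments, start, end, stance_preference):
    """Keep comments inside the preferred time period that match the preferred stance."""
    return [
        c for c in comments
        if start <= c.timestamp <= end and c.stance == stance_preference
    ]

if __name__ == "__main__":
    comments = [
        Comment("a", "I disagree with that answer", datetime(2008, 10, 15, 21, 5), "disagree"),
        Comment("b", "Great point!", datetime(2008, 10, 15, 21, 6), "agree"),
        Comment("c", "Still seems wrong today", datetime(2023, 3, 1, 12, 0), "disagree"),
    ]
    past_window = (datetime(2008, 10, 15), datetime(2008, 10, 16))
    print(filter_by_temporal_and_viewpoint(comments, *past_window, "disagree"))
```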
  • The relevant commentary logic 134 may also filter the commentary based on a preference for a state of a social network, such as a past state of a social network, a present state of a social network, and so on, or combinations thereof. The relevant commentary logic 134 may determine the state of the social network, and/or filter the commentary based on the state, to provide present commentary and/or past commentary representative of how members of the past social network and/or the present social network would (and/or did) comment in response to the media content (and/or similar media content), to provide content accessible via the social network according to the state, and so on, or combinations thereof. In addition, the relevant commentary logic 134 may determine a topic related to the media content. The relevant commentary logic 134 may derive the topic from a user statement, the media content, and so on, or combinations thereof. In one example, the topic may be related to a section of the media content (e.g., a chapter, etc.). The relevant commentary logic 134 may also derive the topic from a comment expressed by one or more of the user and the media content.
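The state-of-a-social-network filter and the topic derivation can likewise be sketched. The snippet below is a hypothetical illustration: the membership is reconstructed from assumed join/leave events, and the topic is derived by naive word counting over user and media statements.

```python
from collections import Counter
from datetime import date

def members_as_of(membership_events, state_date):
    """Reconstruct the set of social-network members at a past state from join/leave events."""
    members = set()
    for event_date, action, member in sorted(membership_events):
        if event_date > state_date:
            break
        (members.add if action == "join" else members.discard)(member)
    return members

def derive_topic(statements, stopwords=frozenset({"the", "a", "is", "of", "and", "to", "i"})):
    """Naive topic derivation: most frequent non-stopword across user/media statements."""
    words = [w.lower().strip(".,!?") for s in statements for w in s.split()]
    counts = Counter(w for w in words if w and w not in stopwords)
    return counts.most_common(1)[0][0] if counts else None

if __name__ == "__main__":
    events = [(date(2008, 1, 1), "join", "alice"), (date(2010, 1, 1), "join", "bob"),
              (date(2012, 1, 1), "leave", "alice")]
    print(members_as_of(events, date(2009, 6, 1)))                                # {'alice'}
    print(derive_topic(["Taxes dominated the debate", "I think taxes went up"]))  # 'taxes'
```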
  • The relevant commentary logic 134 may determine an authorship of the media content. An author of the media content may include a performer of the media content (e.g., writer, singer, etc.), an organization that is the source of the media content (e.g., publisher, source web site, etc.), and so on, or combinations thereof. In one example, the relevant commentary logic 134 may determine if an authorship of the relevant commentary is independent of a media content access event by the author of the relevant commentary. The relevant commentary logic 134 may provide commentary that was made for the media content, that does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof.
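One hypothetical way to test whether the authorship of a candidate comment is independent of a media content access event is sketched below; the comment fields and the specific predicate are assumptions chosen for illustration, not the claimed method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateComment:
    author: str
    text: str
    source: str                       # e.g. "social_post", "news_article", "media_caption"
    written_while_viewing: bool       # whether the author made it during a media access event
    target_content_id: Optional[str]  # content the comment was written about, if any

def is_authorship_independent(comment: CandidateComment, content_id: str) -> bool:
    """True when the comment's authorship does not depend on the author
    having accessed this media content (e.g. a related news article)."""
    return (not comment.written_while_viewing
            and comment.target_content_id != content_id
            and comment.source != "media_caption")

if __name__ == "__main__":
    article = CandidateComment("reporter", "Analysis of the tax plan", "news_article", False, None)
    live_post = CandidateComment("viewer", "Watching now!", "social_post", True, "debate-2008")
    print(is_authorship_independent(article, "debate-2008"))    # True
    print(is_authorship_independent(live_post, "debate-2008"))  # False
```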
  • The relevant commentary logic 134 may also enter initial commentary, for example in response to encountering a section of the media content, in response to encountering a topic and/or a viewpoint, and so on, or combinations thereof. The relevant commentary logic 134 may enter initial commentary that may be from a present time period (e.g., as real-time initial commentary), from a past time period (e.g., as stored initial commentary), and so on, or combinations thereof. In one example, the initial commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof. The relevant commentary logic 134 may enter the initial commentary to determine a user interest, to clarify an ambiguous section of the media content, to derive a topic and/or a viewpoint, and so on, or combinations thereof.
  • The relevant commentary logic 134 may clarify an ambiguous section of the media content. The relevant commentary logic 134 may clarify the section by, for example, determining and/or employing information such as a mapping associated with the ambiguous section, metadata associated with the media content, and so on, or combinations thereof. The relevant commentary logic 134 may enter clarifying commentary in response to, for example, encountering a section of the media content that is ambiguous, encountering a topic and/or a viewpoint, and so on, or combinations thereof. The relevant commentary logic 134 may enter clarifying commentary that may be from a present time period (e.g., as real-time clarifying commentary), from a past time period (e.g., as stored clarifying commentary), and so on, or combinations thereof. In one example, the clarifying commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof. In another example, the clarifying commentary may be a further refinement of the initial commentary, may be the initial commentary itself, and so on, or combinations thereof. The relevant commentary logic 134 may enter clarifying commentary to derive a viewpoint and/or a topic. In addition, the relevant commentary logic 134 may also simulate an interactive commentary session. The interactive commentary session may provide a user experience where the commentary session may be perceived by the user as occurring in the present (in real-time), even though the relevant commentary may be from a past time period, a present time period, and so on, or combinations thereof.
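As a rough illustration of clarifying an ambiguous section, the sketch below flags sections whose commentary is question-heavy and then looks up stored clarifying commentary mapped to them. The ambiguity heuristic and the data shapes are assumptions made only for this example.

```python
def ambiguous_sections(section_comments, question_threshold=0.5):
    """Flag sections whose commentary is dominated by questions (a rough ambiguity signal)."""
    flagged = []
    for section_id, comments in section_comments.items():
        if not comments:
            continue
        questions = sum(1 for c in comments if c.rstrip().endswith("?"))
        if questions / len(comments) >= question_threshold:
            flagged.append(section_id)
    return flagged

def clarifying_commentary(section_id, answer_map):
    """Look up stored answers/links mapped to an ambiguous section."""
    return answer_map.get(section_id, ["No clarification available yet."])

if __name__ == "__main__":
    comments = {"s1": ["What did he mean?", "Which bill is that?"], "s2": ["Strong answer."]}
    answers = {"s1": ["He is referring to the 2007 energy bill (see linked summary)."]}
    for sid in ambiguous_sections(comments):
        print(sid, clarifying_commentary(sid, answers))
```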
  • Accordingly, the relevant commentary logic 134 may provide relevant comment data 152 having relevant commentary based on one or more of the logic associated therewith. In one example, a user may view a broadcast of a debate that occurred in the past, and the user may receive a stream of posts that were made in real-time during the debate in the past. In addition, the user may make a post (e.g., user commentary) related to the content of the broadcast (e.g., a topic) and/or related to the stream of posts (e.g., negative posts, positive posts, general questions, etc.). The user may, in response, receive posts and/or other content (e.g., news articles, blog posts, video responses, etc.) from the past that were responses to similar posts as the user post. The user may also specify a setting to filter the commentary to view comments that coincide (e.g., agree, disagree, are neutral) with the broadcast, the stream of posts, and/or the user post. The user may scope the commentary to past viewpoints of members of a past social network, present viewpoints of members of a present social network, present viewpoints of members of a past social network, and so on, or combinations thereof. Thus, if the user agrees with a candidate's position for a topic presented in the debate and has a preference for an experience of opposition at the time of the original broadcast of the debate in the past, the user will be provided a stream of posts (e.g., poll disapproval ratings, twitter posts, etc.) reflecting the opposition during the time of the debate in the past.
  • In another example, a user may view a broadcast of a debate that occurred in the past, and the user may receive a stream of posts that are presently being made in real-time at the time of the replay of the debate in the present. The stream of posts may be related to a viewpoint and/or a topic. The user may make a post (i.e., to the architecture 102 and/or the broader social network) related to the content of the broadcast (e.g., a topic), the stream of posts (e.g., negative posts, positive posts, general questions, etc.), and so on, or combinations thereof. The user may, in response, receive posts and/or other content (e.g., news articles, blog posts, video responses, etc.) from the present that are related to the content and/or the user post. The user may also specify a setting to filter the commentary to view comments that coincide (e.g., agree, disagree, are neutral) with the broadcast, the stream of posts, and/or the user post. The user may scope the commentary to past viewpoints of members of a past social network, present viewpoints of members of a present social network, present viewpoints of members of a past social network, and so on, or combinations thereof. Thus, if the user agrees with a candidate's position for a topic presented in the debate and has a preference for an experience of support at the time of replay of the debate in the present, the user will be provided a stream of posts (e.g., poll approval ratings, twitter posts, etc.) reflecting the support that are presently being made at the time of replay of the debate in the present.
  • In a further example, a user may view a broadcast of a debate that occurred in the past, and a portion of the content may be analyzed for an ambiguous section of the broadcast based on, for example, questions that were made during the broadcast, after the broadcast, and so on, or combinations thereof. Thus, when an identified ambiguous section of the broadcast is replayed, the user may view posts made for clarification to provide a greater level of understanding to the user (e.g., an understanding of the media content, of a topic, of a viewpoint, etc.). The architecture 102 may determine the context of the section of the broadcast by analyzing metadata associated with the content, a posting generated approximately at the time of each section of the broadcast, and so on, or combinations thereof. The architecture 102 may also leverage sources for mappings between the media content sections and relevant posts (and/or media content). Mapping information may include tags, time stamp relationships, and prior interaction history with the source (and/or source author) of the media content being viewed, of similar content, and so on, or combinations thereof.
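A hypothetical mapping between broadcast sections and relevant posts, using the time stamp relationships and tags mentioned above, might look like the following sketch; the matching window and the tag-overlap threshold are arbitrary assumptions.

```python
from datetime import datetime, timedelta

def map_posts_to_sections(sections, posts, window=timedelta(minutes=2), tag_overlap=1):
    """Associate posts with broadcast sections by timestamp proximity and shared tags.

    sections: list of (section_id, start_datetime, end_datetime, set_of_tags)
    posts:    list of (post_id, posted_at_datetime, set_of_tags)
    """
    mapping = {}
    for section_id, start, end, section_tags in sections:
        matched = [
            post_id for post_id, posted_at, post_tags in posts
            if (start - window <= posted_at <= end + window)
            or len(section_tags & post_tags) >= tag_overlap
        ]
        mapping[section_id] = matched
    return mapping

if __name__ == "__main__":
    t0 = datetime(2008, 10, 15, 21, 0)
    sections = [("opening", t0, t0 + timedelta(minutes=5), {"economy"}),
                ("taxes", t0 + timedelta(minutes=5), t0 + timedelta(minutes=15), {"taxes"})]
    posts = [("p1", t0 + timedelta(minutes=1), {"economy"}),
             ("p2", datetime(2008, 10, 16, 9, 0), {"taxes", "analysis"})]
    print(map_posts_to_sections(sections, posts))
```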
  • In yet another example, the user may receive commentary made (e.g., presently made, made in the past, etc.) associated with one or more topics presented in the media content (e.g., debate) without requiring the author of the commentary to view the media content. The commentary may be obtained from, for example, a present article from a news organization associated with one or more topics presented in the media content, a post made in response to reading the present news article, and so on, or combinations thereof. In yet a further example, the architecture 102 may determine a viewpoint of the user, the media content, and so on, or combinations thereof. Thus, the architecture 102 may filter the commentary based on a temporal preference, a viewpoint, and/or a state of the social network to provide data from the past, the present, or a combination thereof. The user may have the opportunity to post an opinion, and view commentary from any desired time period, from any desired viewpoint, and/or from any desired social network. In addition, the user may experience an interactive commentary session which may appear as a live commentary interaction session, although the commentary is not being generated (e.g., authored) in real-time.
  • Turning now to FIG. 3, an architecture 202 is shown that may be used to provide (and/or receive) relevant commentary in response to rendering a section of media content according to an embodiment. Logic components identified in the architecture 202 of FIG. 3 having similar reference numerals as those already discussed in the architecture 102 of FIG. 2 are to be understood to incorporate similar functionality. In this variation, media logic 222 may include media access detection logic 224 to detect a media content access event. The media access detection logic 224 may detect the generation, processing, storing, retrieving, rendering, and/or exchanging of information in electronic form. In one example, the media access detection logic 224 may detect launching of a media player application, launching of a web browser, retrieving of the media content from storage, receiving the media content from an image capture device (e.g., on or off-device camera), rendering (e.g., displaying) the media content, and so on, or combinations thereof. In addition, the media logic 222 may include section identification logic 226 to identify a section of the media content. The section identification logic 226 may identify a frame of a video, an area of an image, a segment of audio, a domain of a hypertext link, a chapter of a book, a paragraph of an article, and so on, or combinations thereof. The section identification logic 226 may also identify a beginning section of the media content, an intermediate section of the media content, a final section of the media content, and so on, or combinations thereof.
  • In the illustrated example, user logic 228 may include user commentary logic 230 to enter user commentary. The user commentary logic 230 may enter user commentary by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof. The user commentary may be from a present time period, for example as real-time user commentary. The user commentary may be from a past time period, for example as stored user commentary. The user commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof. The user logic 228 may also include user preference logic 232. The user preference logic 232 may provide access to one or more settings associated with providing and/or receiving relevant commentary.
  • In the illustrated example, relevant commentary logic 234 may include temporal logic 236 to filter the commentary based on a preference for a temporal perspective. The temporal logic 236 may filter the commentary based on a preference for a past perspective, a preference for a present perspective, and so on, or combinations thereof. The temporal logic 236 may determine and/or employ a time period to impart a temporal perspective to the relevant commentary, which may be on any desired time scale. The time period may be employed based on any parameter, such as a variance, a broadcast date, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include viewpoint logic 238 to filter the commentary based on a preference for a viewpoint. In one example, the viewpoint logic 238 may derive the viewpoint from one or more of a user statement, user history information, the media content (e.g., a section of the media content), and so on, or combinations thereof. In another example, the viewpoint logic 238 may filter the commentary based on one or more of a viewpoint agreement, a viewpoint disagreement, a viewpoint neutrality, and so on, or combinations thereof. In a further example, the viewpoint logic 238 may filter the commentary based on one or more further viewpoint factors, such as a geographic location, age, gender, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include social network logic 240 to filter the commentary based on a preference for a state of a social network. The social network logic 240 may filter the commentary based on a preference for a past state of a social network, a present state of a social network, and so on, or combinations thereof. The social network logic 240 may determine the state of the social network, and/or filter the commentary based on the state, to provide present commentary and/or past commentary representative of how members of the past social network and/or the present social network would (and/or did) comment in response to the media content (and/or similar media content), to provide content accessible via the social network according to the state, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include topic logic 242 to determine a topic related to the media content. The topic logic 242 may derive the topic from a user statement, the media content, and so on, or combinations thereof. In one example, the topic may be related to a section of the media content (e.g., portion thereof, a chapter, etc.). The topic logic 242 may derive the topic from a comment expressed by one or more of the user, the media content, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include authorship logic 244 to determine an authorship of the media content. The authorship logic 244 may determine if an authorship of the commentary is independent of a media content access event by the author of the relevant commentary. In one example, the authorship logic 244 may filter the commentary to provide commentary that was made for the media content, that does not come from the media content, was not generated specifically for the media content, was not generated while viewing the media content, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include initial commentary logic 246 to provide initial commentary. The initial commentary logic 246 may enter the initial commentary by making the commentary public, by typing in the commentary, by adding the commentary (e.g., copy and paste a link, etc.), by making the commentary opaque, and so on, or combinations thereof. The initial commentary may be from a present time period, for example as real-time initial commentary. The initial commentary may be from a past time period, for example as stored initial commentary. The initial commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof. The initial commentary logic 246 may enter the initial commentary to determine a user interest, to clarify an ambiguous section of the media content, to derive a topic and/or a viewpoint, and so on, or combinations thereof.
  • In the illustrated example, the relevant commentary logic 234 may include clarification logic 248 to clarify an ambiguous section of the media content. The clarification logic 248 may determine and/or employ information to clarify the ambiguous section, such as a mapping associated with the ambiguous section, metadata associated with the media content, and so on, or combinations thereof. The clarification logic 248 may enter clarifying commentary by making the commentary public, by adding the commentary, by making the commentary opaque, and so on, or combinations thereof. The clarification logic 248 may enter clarifying commentary in response to, for example, encountering a section of the media content that is ambiguous, encountering a topic and/or a viewpoint, and so on, or combinations thereof. The clarification logic 248 may enter clarifying commentary that may be from a present time period (e.g., as real-time clarifying commentary), from a past time period (e.g., as stored clarifying commentary), and so on, or combinations thereof. The clarifying commentary may be related to the media content, the section of the media content, a viewpoint, a topic, and so on, or combinations thereof. The clarifying commentary may be a further refinement of the initial commentary, may be the initial commentary itself, and so on, or combinations thereof. The clarification logic 248 may enter clarifying commentary to derive a viewpoint and/or a topic.
  • In the illustrated example, the relevant commentary logic 234 may include simulation logic 250 to simulate an interactive commentary session. The simulation logic 250 may provide a user experience where the commentary session may appear as occurring in the present, in real-time, even though the relevant commentary may be from a past time period, a present time period, and so on, or combinations thereof. Accordingly, the relevant commentary logic 234 may provide relevant comment data 252 having relevant commentary based on one or more of the logic associated therewith.
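The simulated interactive commentary session can be pictured as replaying stored commentary at its original offsets relative to the start of the broadcast, so that past comments appear to arrive live. The sketch below is a simplified assumption of how such a replay loop could be driven (the speedup factor is only for demonstration purposes).

```python
import time
from datetime import datetime

def simulate_live_session(stored_comments, broadcast_start, speedup=60.0, display=print):
    """Replay stored (timestamp, text) comments at offsets relative to the broadcast start,
    so past commentary appears to arrive in real time during the replay."""
    offsets = sorted(
        ((ts - broadcast_start).total_seconds(), text) for ts, text in stored_comments
    )
    elapsed = 0.0
    for offset, text in offsets:
        wait = max(0.0, offset - elapsed) / speedup  # compress waits for the demo
        time.sleep(wait)
        elapsed = offset
        display(f"[{offset:7.1f}s] {text}")

if __name__ == "__main__":
    start = datetime(2008, 10, 15, 21, 0)
    comments = [(datetime(2008, 10, 15, 21, 0, 30), "Here we go."),
                (datetime(2008, 10, 15, 21, 2, 0), "I disagree with that claim.")]
    simulate_live_session(comments, start)
```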
  • FIG. 4 shows a method 302 of providing and/or receiving relevant commentary in response to rendering a section of media content according to an embodiment. Illustrated processing block 354 provides for detecting a media content access event, for example by a user, by a computing platform, and so on, or combinations thereof. Thus, the media content access event may correspond to, for example, the media content access event (e.g., FIG. 1 to FIG. 3) already discussed. Relevant commentary may be provided and/or received in response to rendering the media content, such as a section of the media content, at block 356. The relevant commentary may correspond to, for example, the relevant commentary (e.g., FIG. 1 to FIG. 3) already discussed. At least a portion of the relevant commentary may be provided and/or received based on a preference for a temporal perspective at block 358, wherein the temporal perspective in block 358 may correspond to, for example, the temporal perspective (e.g., FIG. 1 to FIG. 3) already discussed. At least a portion of the relevant commentary may be provided and/or received based on a preference for a viewpoint at block 360, wherein the viewpoint in block 360 may correspond to, for example, the viewpoint (e.g., FIG. 1 to FIG. 3) already discussed. At least a portion of the relevant commentary may be provided and/or received based on a preference for a state of a social network at block 362, wherein the state of the social network in block 362 may correspond to, for example, the state (e.g., FIG. 1 to FIG. 3) already discussed.
  • The method 302 may also provide and/or receive at least a portion of the relevant commentary based on a topic at block 364, for example a topic related to the section of the media content that is rendered. Thus, the relevant commentary in block 364 may correspond to, for example, the relevant commentary based on a topic (e.g., FIG. 1 to FIG. 3) already discussed. In addition, the method 302 may provide and/or receive at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary at block 366. Thus, the relevant commentary in block 366 may correspond to, for example, the relevant commentary based on an authorship (e.g., FIG. 1 to FIG. 3) already discussed. Additionally, the method 302 may clarify an ambiguous section of the media content at block 368. Thus, for example, an ambiguous section of the media content at block 368 may be clarified (e.g., FIG. 1 to FIG. 3) as already discussed. The method 302 may also simulate an interactive commentary session at block 370. Thus, for example, an interactive commentary session at block 370 may be simulated (e.g., FIG. 1 to FIG. 3) as already discussed.
  • While not shown, it is understood that any functionality presented herein may be employed in the operation of the method 302. For example, the method 302 may provide and/or receive initial commentary to the user related to the section, provide and/or receive user commentary in response to the initial commentary, provide and/or receive at least a portion of the relevant commentary based on the user commentary, and so on, or combinations thereof. In addition, the method 302 may provide and/or receive media content and initial commentary from a past time period, user commentary from a present time period, and/or a portion of the relevant commentary from the past time period. The method 302 may also provide and/or receive media content from a past time period, initial commentary and user commentary from a present time period, and a portion of the relevant commentary from the present time period. As a final non-limiting example, the method 302 may provide and/or receive a portion of the relevant commentary from a present time period and a past time period.
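Tying the blocks of FIG. 4 together, the following end-to-end sketch loosely follows blocks 354 through 364: it is a hypothetical composition under assumed data shapes, not the claimed method itself.

```python
def provide_relevant_commentary(event, settings, comment_store):
    """End-to-end sketch loosely following blocks 354-364: detect an access event,
    then assemble relevant commentary filtered by the user's preferences."""
    if event.get("type") != "media_content_access":                        # block 354
        return []

    section = event["section"]                                             # section being rendered
    candidates = comment_store.get(section, [])                            # block 356

    def keep(comment):
        return (comment["period"] == settings["temporal_preference"]       # block 358
                and comment["stance"] == settings["viewpoint_preference"]  # block 360
                and comment["network_state"] == settings["network_state"]  # block 362
                and settings["topic"] in comment["topics"])                # block 364
    return [c["text"] for c in candidates if keep(c)]

if __name__ == "__main__":
    store = {"taxes-segment": [
        {"text": "Polls showed disapproval that night.", "period": "past",
         "stance": "disagree", "network_state": "past", "topics": {"taxes"}},
        {"text": "Looks better in hindsight.", "period": "present",
         "stance": "agree", "network_state": "present", "topics": {"taxes"}},
    ]}
    prefs = {"temporal_preference": "past", "viewpoint_preference": "disagree",
             "network_state": "past", "topic": "taxes"}
    print(provide_relevant_commentary(
        {"type": "media_content_access", "section": "taxes-segment"}, prefs, store))
```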
  • FIG. 5 shows a computing device 486 having a processor 488, mass storage 490 (e.g., read only memory/ROM, optical disk, flash memory), a network interface 492, and system memory 494 (e.g., random access memory/RAM). In the illustrated example, the processor 488 is configured to execute logic 496, wherein the logic 496 may implement one or more aspects of the schemes 6 to 10 (FIG. 1A to FIG. 1C), the architecture 102 (FIG. 2), the architecture 202 (FIG. 3), and/or the method 302 (FIG. 4), already discussed. Thus, the logic 496 may enable the computing device 486 to function to provide (and/or receive) relevant commentary, for example in response to rendering a section of media content. The logic 496 may also be implemented as a software application that is distributed among many computers (e.g., local or remote). Thus, while a single computer could provide the functionality described herein, systems implementing these features can use many interconnected computers (e.g., for scalability as well as modular implementation).
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
  • Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

Claims (20)

We claim:
1. A method comprising:
detecting a media content access event by a user; and
providing relevant commentary to the user in response to rendering a section of the media content, wherein at least a portion of the relevant commentary is based on two or more of a preference for a temporal perspective, a preference for a viewpoint, and a preference for a state of a social network.
2. The method of claim 1, further comprising:
providing initial commentary to the user related to the section;
receiving user commentary in response to the initial commentary; and
providing at least a portion of the relevant commentary based on the user commentary.
3. The method of claim 1, further comprising:
providing at least a portion of the relevant commentary based on a topic related to the section of the media content;
providing at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary;
clarifying an ambiguous section of the media content; and
simulating an interactive commentary session.
4. A method comprising:
receiving relevant commentary in response to rendering a section of media content, wherein at least a portion of the relevant commentary is based on a preference for a temporal perspective.
5. The method of claim 4, further comprising:
receiving initial commentary related to the section;
providing user commentary in response to the initial commentary; and
receiving at least a portion of the relevant commentary based on the user commentary.
6. The method of claim 4, further comprising:
receiving at least a portion of the relevant commentary based on a preference for a viewpoint related to the section of the media content;
receiving at least a portion of the relevant commentary based on a preference for a state of a social network;
receiving at least a portion of the relevant commentary based on a topic related to the section of the media content;
receiving at least a portion of the relevant commentary based on an authorship independent of a media content access event by an author of the relevant commentary;
clarifying an ambiguous section of the media content; and
simulating an interactive commentary session.
7. A method comprising:
providing relevant commentary to a user in response to rendering a section of media content, wherein at least a portion of the relevant commentary is based on a preference for a temporal perspective.
8. The method of claim 7, further comprising:
providing initial commentary to the user related to the section;
receiving user commentary in response to the initial commentary; and
providing at least a portion of the relevant commentary based on the user commentary.
9. The method of claim 8, wherein the media content and the initial commentary are from a past time period, the user commentary is from a present time period, and the portion of the relevant commentary is from the past time period based on the preference for a past perspective.
10. The method of claim 8, wherein the media content is from a past time period, the initial commentary and the user commentary are from a present time period, and the portion of the relevant commentary is from the present time period based on the preference for a present perspective.
11. The method of claim 7, wherein the portion of the relevant commentary is from a present time period and a past time period based on the preference for a present perspective and a past perspective.
12. The method of claim 7, wherein at least a portion of the relevant commentary is provided based on a preference for a viewpoint related to the section.
13. The method of claim 12, wherein the viewpoint is derived from one or more of a user statement, user history information, and the section of the media content.
14. The method of claim 12, wherein the relevant commentary is provided based on one or more of a viewpoint agreement, a viewpoint disagreement, and a viewpoint neutrality.
15. The method of claim 7, wherein at least a portion of the relevant commentary is provided based on a preference for a state of a social network.
16. The method of claim 7, wherein at least a portion of the relevant commentary is provided based on a topic to be related to the section of the media content.
17. The method of claim 16, wherein the topic is derived from a comment expressed by one or more of the user and the media content.
18. The method of claim 7, wherein at least a portion of the relevant commentary is provided based on an authorship independent of a media content access event by an author of the relevant commentary.
19. The method of claim 7, further comprising clarifying an ambiguous section of the media content.
20. The method of claim 7, further comprising simulating an interactive commentary session.
US13/901,880 2013-05-17 2013-05-24 Relevant commentary for media content Abandoned US20140344359A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/901,880 US20140344359A1 (en) 2013-05-17 2013-05-24 Relevant commentary for media content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/896,489 US9509758B2 (en) 2013-05-17 2013-05-17 Relevant commentary for media content
US13/901,880 US20140344359A1 (en) 2013-05-17 2013-05-24 Relevant commentary for media content

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/896,489 Continuation US9509758B2 (en) 2013-05-17 2013-05-17 Relevant commentary for media content

Publications (1)

Publication Number Publication Date
US20140344359A1 true US20140344359A1 (en) 2014-11-20

Family

ID=51896673

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/896,489 Active 2034-01-21 US9509758B2 (en) 2013-05-17 2013-05-17 Relevant commentary for media content
US13/901,880 Abandoned US20140344359A1 (en) 2013-05-17 2013-05-24 Relevant commentary for media content

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/896,489 Active 2034-01-21 US9509758B2 (en) 2013-05-17 2013-05-17 Relevant commentary for media content

Country Status (1)

Country Link
US (2) US9509758B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106470363B (en) 2015-08-18 2019-09-13 阿里巴巴集团控股有限公司 Compare the method and device of race into row written broadcasting live
US11055372B2 (en) * 2018-06-12 2021-07-06 International Business Machines Corporation Managing content on a social network

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020130868A1 (en) * 2000-11-28 2002-09-19 Aston Guardian Limited Method and apparatus for providing financial instrument interface
EP1451744A4 (en) * 2001-12-07 2008-10-29 Philip Helmes Rules based method and system for project performance monitoring
AU2003254269A1 (en) * 2002-07-29 2004-02-16 Opinionlab, Inc. System and method for providing substantially real-time access to collected information concerning user interaction with a web page of a website
US7870480B1 (en) * 2005-03-14 2011-01-11 Actuate Corporation Methods and apparatus for storing and retrieving annotations accessible by a plurality of reports
EP2140441A1 (en) * 2007-04-16 2010-01-06 CAE Inc. Method and system for training
US20080317439A1 (en) 2007-06-22 2008-12-25 Microsoft Corporation Social network based recording
US20090063991A1 (en) * 2007-08-27 2009-03-05 Samuel Pierce Baron Virtual Discussion Forum
US8473377B2 (en) * 2008-02-29 2013-06-25 Accenture Global Services Data management system
US8996621B2 (en) * 2008-05-12 2015-03-31 Adobe Systems Incorporated Asynchronous comment updates
US20100095326A1 (en) * 2008-10-15 2010-04-15 Robertson Iii Edward L Program content tagging system
US20100250445A1 (en) * 2009-03-25 2010-09-30 Solheim Robert J Commitment tracking system
WO2010141260A1 (en) 2009-06-01 2010-12-09 Telcordia Technologies, Inc. System and method for processing commentary that is related to content
WO2011019444A1 (en) * 2009-06-11 2011-02-17 Chacha Search, Inc. Method and system of providing a search tool
US9117203B2 (en) * 2009-09-01 2015-08-25 Nokia Technologies Oy Method and apparatus for augmented social networking messaging
US7934983B1 (en) * 2009-11-24 2011-05-03 Seth Eisner Location-aware distributed sporting events
US10713018B2 (en) * 2009-12-07 2020-07-14 International Business Machines Corporation Interactive video player component for mashup interfaces
US20110154200A1 (en) 2009-12-23 2011-06-23 Apple Inc. Enhancing Media Content with Content-Aware Resources
US20110202544A1 (en) * 2010-02-12 2011-08-18 Praized Media Inc. Real time aggregation and filtering of local data feeds
GB201007191D0 (en) * 2010-04-29 2010-06-09 British Broadcasting Corp Content provision system
US20150229698A1 (en) * 2010-11-29 2015-08-13 Joseph G. Swan Crowd Sourced or Collaborative Generation of Issue Analysis Information Structures
US20120151320A1 (en) 2010-12-10 2012-06-14 Mcclements Iv James Burns Associating comments with playback of media content
US8836771B2 (en) * 2011-04-26 2014-09-16 Echostar Technologies L.L.C. Apparatus, systems and methods for shared viewing experience using head mounted displays
US9066145B2 (en) * 2011-06-30 2015-06-23 Hulu, LLC Commenting correlated to temporal point of video data
US20130170561A1 (en) * 2011-07-05 2013-07-04 Nokia Corporation Method and apparatus for video coding and decoding
US8725704B2 (en) * 2011-09-27 2014-05-13 Barracuda Networks, Inc. Client-server transactional pre-archival apparatus
GB2502037A (en) * 2012-02-10 2013-11-20 Qatar Foundation Topic analytics
US9256343B1 (en) * 2012-05-14 2016-02-09 Google Inc. Dynamically modifying an electronic article based on commentary
US20140280377A1 (en) * 2013-03-14 2014-09-18 Scribestar Ltd. Systems and methods for collaborative document review

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6854087B1 (en) * 1999-08-10 2005-02-08 Fuji Xerox Co., Ltd. Document editing apparatus
US20030101151A1 (en) * 2001-11-26 2003-05-29 Holland Wilson Lee Universal artificial intelligence software program
US20060061595A1 (en) * 2002-05-31 2006-03-23 Goede Patricia A System and method for visual annotation and knowledge representation
US8074184B2 * 2003-11-07 2011-12-06 Microsoft Corporation Modifying electronic documents with recognized content or other associated data
US20060015811A1 (en) * 2004-07-14 2006-01-19 Fuji Xerox Co., Ltd. Document processing apparatus, document processing method and storage medium storing document processing program
US20060218495A1 (en) * 2005-03-25 2006-09-28 Fuji Xerox Co., Ltd. Document processing device
US20060277523A1 (en) * 2005-06-06 2006-12-07 Gary Horen Annotations for tracking provenance
US20070250810A1 (en) * 2006-04-20 2007-10-25 Tittizer Abigail A Systems and methods for managing data associated with computer code
US20080109875A1 (en) * 2006-08-08 2008-05-08 Harold Kraft Identity information services, methods, devices, and systems background
US20080086307A1 (en) * 2006-10-05 2008-04-10 Hitachi Consulting Co., Ltd. Digital contents version management system
US20100121912A1 (en) * 2007-04-27 2010-05-13 Dwango Co., Ltd. Terminal device, comment distribution server, comment transmission method, comment distribution method, and recording medium that houses comment distribution program
US20100136509A1 (en) * 2007-07-02 2010-06-03 Alden Mejer System and method for clinical trial investigator meeting delivery and training including dynamic media enrichment
US20140032481A1 (en) * 2007-09-27 2014-01-30 Adobe Systems Incorporated Commenting dynamic content
US20140033015A1 (en) * 2008-05-12 2014-01-30 Adobe Systems Incorporated Comment presentation in electronic documents
US20120316962A1 (en) * 2010-02-22 2012-12-13 Yogesh Chunilal Rathod System and method for social networking for managing multidimensional life stream related active note(s) and associated multidimensional active resources and actions
US20120143590A1 (en) * 2010-05-07 2012-06-07 For+side.com Co., Ltd Electronic book system and content server
US20130159127A1 (en) * 2011-06-10 2013-06-20 Lucas J. Myslinski Method of and system for rating sources for fact checking
US20130086077A1 (en) * 2011-09-30 2013-04-04 Nokia Corporation Method and Apparatus for Associating Commenting Information with One or More Objects
US20130227016A1 (en) * 2012-02-24 2013-08-29 Mark RISHER Detection and prevention of unwanted content on cloud-hosted services
US20130325954A1 (en) * 2012-06-01 2013-12-05 Microsoft Corporation Syncronization Of Media Interactions Using Context
US20140082096A1 (en) * 2012-09-18 2014-03-20 International Business Machines Corporation Preserving collaboration history with relevant contextual information
US20140181630A1 (en) * 2012-12-21 2014-06-26 Vidinoti Sa Method and apparatus for adding annotations to an image

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11516159B2 (en) 2015-05-29 2022-11-29 Microsoft Technology Licensing, Llc Systems and methods for providing a comment-centered news reader
US10699078B2 (en) * 2015-05-29 2020-06-30 Microsoft Technology Licensing, Llc Comment-centered news reader
US20180150450A1 (en) * 2015-05-29 2018-05-31 Microsoft Technology Licensing, Llc Comment-centered news reader
US10474743B2 (en) * 2015-09-08 2019-11-12 Canon Kabushiki Kaisha Method for presenting notifications when annotations are received from a remote device
US10891322B2 (en) 2015-10-30 2021-01-12 Microsoft Technology Licensing, Llc Automatic conversation creator for news
US20180293278A1 (en) * 2017-04-10 2018-10-11 Linkedln Corporation Usability and resource efficiency using comment relevance
US10771424B2 (en) * 2017-04-10 2020-09-08 Microsoft Technology Licensing, Llc Usability and resource efficiency using comment relevance
US11100294B2 (en) 2018-08-27 2021-08-24 International Business Machines Corporation Encouraging constructive social media interactions
US11283879B2 (en) * 2018-10-08 2022-03-22 Ciambella Ltd. System, apparatus and method for providing end to end solution for networks
US11622089B2 (en) * 2019-08-27 2023-04-04 Debate Me Now Technologies, Inc. Method and apparatus for controlled online debate
US11412271B2 (en) 2019-11-25 2022-08-09 International Business Machines Corporation AI response to viewers of live stream video
US20230097459A1 (en) * 2021-08-14 2023-03-30 David Petrosian Mkervali System and method of conducting a mental confrontation in a form of a mobile application or a computer program
CN113923505A (en) * 2021-12-14 2022-01-11 飞狐信息技术(天津)有限公司 Bullet screen processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
US20140344353A1 (en) 2014-11-20
US9509758B2 (en) 2016-11-29

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROZ, MICHAL;CARTER, BERNADETTE A.;LOPEZ, MELBA I.;AND OTHERS;SIGNING DATES FROM 20130517 TO 20130520;REEL/FRAME:030481/0842

AS Assignment

Owner name: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:034194/0353

Effective date: 20140926

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION