WO2023220274A1 - Entity cards including descriptive content relating to entities from a video - Google Patents

Entity cards including descriptive content relating to entities from a video

Info

Publication number
WO2023220274A1
Authority
WO
WIPO (PCT)
Prior art keywords
entity
video
user interface
card
user
Prior art date
Application number
PCT/US2023/021889
Other languages
French (fr)
Inventor
Jonathan Matthew MALMAUD
Nicolas Paul-Stringall Higuera
Jeff Hsu
Sarah Fay SMITH
Gabriel Rubow Culbertson
Runmin ZHAO
Original Assignee
Google Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/091,837 external-priority patent/US12072934B2/en
Application filed by Google Llc
Publication of WO2023220274A1 publication Critical patent/WO2023220274A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74Browsing; Visualisation therefor
    • G06F16/745Browsing; Visualisation therefor the internal structure of a single video sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • H04N21/4828End-user interface for program selection for searching program descriptors

Definitions

  • the disclosure relates generally to providing entity cards for a user interface in association with a video displayed on a display of a user computing device. More particularly, the disclosure relates to providing entity cards which assist in the understanding of the contents of the video and include descriptive content relating to an entity (e.g., a concept, a term, a topic, and the like) which is mentioned in the video.
  • a user not familiar with the term “sarcophagus” may not fully understand the content of the video.
  • the user may pause the video and navigate to a search page to perform a search for the term “sarcophagus.”
  • the user may have difficulty spelling the term which they wish to search for and may not obtain accurate search results or may experience inconvenience in searching.
  • the user may stop watching the video after finding the content of the video too difficult to understand.
  • a computer-implemented method for a server system includes obtaining a transcription of content from a video, applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content, generating one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities, and providing a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
  • applying the machine learning resource to identify the one or more entities includes obtaining training data to train the machine learning resource based on observational data of users conducting searches in response to viewing only the video.
  • applying the machine learning resource to identify the one or more entities includes identifying a plurality of candidate entities from the video by associating text from the transcription with a knowledge graph, and ranking the candidate entities to obtain the one or more entities, based on one or more of: a relevance of each of the candidate entities to a topic of the video, a relevance of each of the candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases.
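  • For illustration only, the ranking described above might be sketched as follows (a minimal Python sketch; the Candidate fields, weights, and function names are illustrative assumptions, not the patent's implementation):

```python
# Minimal sketch of ranking candidate entities by the signals listed above.
# All field names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    topic_relevance: float    # relevance to the video's topic (0..1)
    peer_relevance: float     # relevance to the other candidate entities (0..1)
    mention_count: int        # number of mentions within this video
    corpus_video_count: int   # number of corpus videos mentioning this entity

def rank_candidates(candidates, corpus_size, top_k=3):
    """Score each candidate and return the top_k most card-worthy entities."""
    def score(c):
        # Entities appearing across much of the corpus are overly broad; penalize.
        breadth = c.corpus_video_count / max(corpus_size, 1)
        return (2.0 * c.topic_relevance
                + 1.0 * c.peer_relevance
                + 0.5 * c.mention_count
                - 3.0 * breadth)
    return sorted(candidates, key=score, reverse=True)[:top_k]
```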
  • applying the machine learning resource to identify the one or more entities includes evaluating user interactions with the user interface, and determining at least one adjustment to the machine learning resource based on the evaluation of the user interactions with the user interface.
  • the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is displayed on the second portion of the user interface at the first timepoint.
  • the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: displaying, on a third portion of the user interface while continuing to play the video, the second entity card in a contracted form, the second entity card in the contracted form referencing the second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, displaying, on the third portion of the user interface while continuing to play the video, the second entity card in a fully expanded form, the second entity card in the fully expanded form including descriptive content relating to the second entity.
  • the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the second entity is mentioned in the video while the video is playing, displaying the second entity card on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is displayed on the second portion of the display by replacing the first entity card at a time when the second entity is mentioned in the video.
  • the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the first entity is mentioned in the video while the video is playing, displaying a notification user interface element on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to the first entity being mentioned in the video while the video is playing and in response to receiving a selection of the notification user interface element, displaying the first entity card on the second portion of the user interface while continuing to play the video.
  • the first entity card includes at least one of a textual summary providing information relating to the first entity or an image relating to the first entity.
  • a computer-implemented method for a user computing device includes receiving a video for playback in a user interface, providing the video for display on a first portion of the user interface displayed on a display of the user computing device, and when a first entity is mentioned in the video while the video is playing: providing a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
  • the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is provided for display on the second portion of the user interface at the first timepoint.
  • the method includes providing, for display on a third portion of the user interface, a contracted second entity card referencing a second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, expanding the contracted second entity card to fully display the second entity card on the third portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity.
  • the method includes, when a second entity is mentioned in the video while the video is playing, providing a second entity card for display on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is provided for display on the second portion of the user interface by replacing the first entity card at a time when the second entity is mentioned in the video.
  • the method includes providing for display on the user interface, one or more entity search user interface elements that, when selected, are configured to perform a search relating to the first entity.
  • the method includes providing for display on the user interface, one or more search query user interface elements that, when selected, are configured to perform a search relating to a topic of the video other than the first entity.
  • the method includes utilizing a machine learning resource to identify the first entity and generate the first entity card.
  • the first entity is an entity among a plurality of entities mentioned in the video that is determined by the machine learning resource as an entity most likely to be searched for by a user viewing the video among the plurality of entities mentioned in the video.
  • the method includes, when the first entity is mentioned in the video while the video is playing, providing a notification user interface element for display on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to receiving a selection of the notification user interface element, providing the first entity card for display on the second portion of the user interface while continuing to play the video.
  • the first entity card includes a textual summary providing information relating to the first entity and/or an image relating to the first entity.
  • a user computing device includes a display, one or more memories to store instructions, and one or more processors to execute the instructions stored in the one or more memories to: receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing: provide a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
  • a server system includes one or more memories to store instructions, and one or more processors to execute the instructions stored in the one or more memories to: obtain a transcription of content from a video, apply a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content, generate one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities, and provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
  • a computer-readable medium (e.g., a non-transitory computer-readable medium) which stores instructions that are executable by one or more processors of a user computing device and/or a server system.
  • the computer-readable medium stores instructions which may include instructions to cause the one or more processors to perform one or more operations of any of the methods described herein (e.g., operations of the server system and/or operations of the user computing device).
  • the computer-readable medium may store additional instructions to execute other aspects of the server system and user computing device and corresponding methods of operation, as described herein.
  • FIG. 1 depicts an example system according to one or more example embodiments of the disclosure
  • FIG. 2 depicts example block diagrams of a user computing device and server computing system according to one or more example embodiments of the disclosure
  • FIG. 3 depicts an example system for generating an entity card for a user interface, according to one or more example embodiments of the disclosure
  • FIGS. 4A-4C depict example user interfaces in which one or more entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure
  • FIGS. 5A-5C depict example user interfaces in which entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure
  • FIGS. 6A-6C depict example user interfaces in which a notification user interface element is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure;
  • FIGS. 7A and 7B depict example user interfaces in which a timeline is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure.
  • FIGS. 8-10 each illustrate flow diagrams of example, non-limiting computer-implemented methods according to one or more example embodiments.
  • a first element may be termed a second element, and similarly, a second element may be termed a first element
  • the term “and/or” includes a combination of a plurality of related listed items or any item of the plurality of related listed items.
  • the scope of the expression or phrase “A and/or B” includes the item “A”, the item “B”, and the combination of items “A and B”.
  • the scope of the expression or phrase “at least one of A or B” is intended to include all of the following: (1) at least one of A, (2) at least one of B, and (3) at least one of A and at least one of B.
  • the scope of the expression or phrase “at least one of A, B, or C” is intended to include all of the following: (1) at least one of A, (2) at least one of B, (3) at least one of C, (4) at least one of A and at least one of B, (5) at least one of A and at least one of C, (6) at least one of B and at least one of C, and (7) at least one of A, at least one of B, and at least one of C.
  • the user interface is provided with an entity card which includes information about the entity that may be helpful to the user’s understanding of the content of the video. For example, if a user is watching a video about the Egyptian pyramids, an entity card may be provided to the user interface to provide further information about the term “sarcophagus,” such as a definition of the term, a photo of a sarcophagus, and the like.
  • the entity card is provided to the user interface while the video continues to play so that the user does not need to navigate away from the application or web page which plays the video in order to learn more about the entity (e.g., the term “sarcophagus”). Therefore, information about a potentially difficult concept or a concept which the user may want to know more about, may be presented to the user to help the user gain a quick understanding of the concept without leaving the application or web page which plays the video, and the user need not perform a separate search regarding the concept.
  • the entirety of the entity card may be visible on the user interface to the user or a portion of the entity card may be visible on the user interface and the user is able to select the portion of the entity card to expand the entity card to also see the hidden portion of the entity card for further information regarding the entity.
  • the entity card may be displayed on the user interface at a same time that the entity is mentioned in the video. That is, the display of the entity card is synchronized with a time that the entity is mentioned in the video.
  • the entity card may be displayed on the user interface every time that the entity is mentioned in the video, only the first time the entity is mentioned in the video, or selectively displayed when the entity is mentioned a plurality of times in the video.
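  • As a minimal sketch of the timing behavior described above (assuming mentions have already been timestamped; the function and policy names are illustrative, not the patent's API):

```python
def active_card(mentions, now, policy="every", min_gap=30.0):
    """Return the entity whose card should be visible at playback time `now`.

    mentions: (entity_id, timepoint_seconds) pairs sorted by timepoint.
    policy: "every" shows a card at every mention, "first" only at an
    entity's first mention, "selective" re-shows a card only if at least
    min_gap seconds have passed since that entity's previous mention.
    """
    first_seen, last_seen, current = {}, {}, None
    for entity_id, t in mentions:
        if t > now:
            break
        trigger = (policy == "every"
                   or (policy == "first" and entity_id not in first_seen)
                   or (policy == "selective"
                       and t - last_seen.get(entity_id, -min_gap) >= min_gap))
        first_seen.setdefault(entity_id, t)
        last_seen[entity_id] = t
        if trigger:
            current = entity_id   # a later card replaces the earlier one
    return current
```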
  • a user interface element separate from the entity card may be provided on the user interface to allow a user to perform a search with respect to the entity. For example, if the user wishes to obtain further information about the entity beyond that which is provided in the entity card, the user can select the user interface element, which causes a search to be performed with respect to the entity, and a search results page may be displayed on the display of the user computing device.
  • one or more user interface elements separate from the entity card may be provided on the user interface that correspond to respective suggested search queries.
  • the one or more user interface elements allow a user to perform a search for information other than the entity itself, for example with respect to other entities or other topics covered in the video. For example, if the user wishes to obtain further information about other entities or other topics covered in the video, the user can select the corresponding user interface element, which causes a search to be performed with respect to the corresponding entity or topic, and a search results page may be displayed on the display of the user computing device.
  • entity cards for each of the plurality of concepts in the video may be displayed on the user interface as a user watches the video and the plurality of concepts are mentioned.
  • the user interface is provided with a separate entity card for each concept which includes information about the concept that may be helpful to the user’s overall understanding of the content of the video.
  • the entity cards are provided to the user interface while the video continues to play so that the user does not need to navigate away from the application or web page which plays the video.
  • information about potentially difficult concepts or concepts which the user may want to know more about may be presented to the user to help the user gain a quick understanding of the concept without leaving the application or web page which plays the video, and the user need not perform a separate search regarding each of the concepts.
  • all of the entity cards associated with a video may be visible on the user interface to the user at a same time while the video is playing, or only some of the entity cards associated with the video may be visible on the user interface to the user at a same time while the video is playing.
  • one or more of the entity cards may be fully expanded so that a user can view the entire contents of the entity card, while some or all of the remaining entity cards may be displayed on the user interface in a contracted or hidden form.
  • the user may view a portion of the entity card and the portion of the entity card may include some identifying information (e.g., an identification of the corresponding entity) so that the user is able to comprehend the relevance of the entity card.
  • the user is able to select the portion of the entity card to expand the entity card to also view the hidden portion of the entity card for further information regarding the entity.
  • entity cards associated with a video may be visible on the user interface to the user as the video progresses, and the user is not able to view an entity card until the corresponding entity is mentioned in the video.
  • a first entity card about a first entity may be displayed on the user interface at a time during the video when the first entity is mentioned in the video (e.g., at a first timepoint).
  • the first entity card may be displayed for a predetermined amount of time while the video continues to play (e.g., for a time sufficient for an average user to read or view the content contained in the first entity card) or until a next entity is mentioned in the video, at which point another entity card is provided on the user interface.
  • a second entity card about a second entity may be displayed on the user interface at a time during the video when the second entity is mentioned in the video (e.g., at a second timepoint).
  • the second entity card may be displayed on the user interface by replacing the first entity card (i.e., by occupying some or all of the space on the user interface which was previously occupied by the first entity card).
  • the second entity card may be in a contracted or hidden form and when the second entity is mentioned in the video at the second timepoint, the second entity card may be expanded to fully display the second entity card on the user interface.
  • the first entity card may be changed to be in the contracted or hidden form at the time the second entity card is expanded if it is not already in the contracted or hidden form prior to the second entity card being displayed.
  • the first and second entity cards may each be fully displayed on the user interface.
  • a notification user interface element is displayed on the user interface while continuing to play the video.
  • the notification user interface element indicates that additional information relating to the entity is available.
  • an entity card is displayed on the user interface while continuing to play the video.
  • the notification user interface element may include an image (e.g., a thumbnail image) of the entity to further make the user aware that the notification user interface element is associated with the entity and an entity card about the entity is available.
  • a timeline may be displayed on the user interface which indicates one or more timepoints along the timeline at which information about one or more entities is available via corresponding entity cards.
  • a corresponding entity card is displayed on the user interface to provide information about the entity.
  • a user interface element may be provided which, when selected, allows the entity cards to cycle through the user interface as the video progresses along the timeline.
  • one or more entity cards are provided to be displayed on the user interface of the display of a user computing device while a video is played, for one or more entities which are mentioned in the video.
  • the entities for which entity cards are provided may be identified from the video by using a machine learning resource.
  • an entity among a plurality of candidate entities mentioned in the video may be identified by the machine learning resource as an entity for which an entity card should be generated when the machine learning resource determines or predicts the entity is likely to be searched for by a user viewing the video (e.g., having a confidence value greater than a threshold value, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like).
  • the machine learning resource may select a candidate entity as an entity for which an entity card should be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases.
  • a server system includes one or more servers which provide the video and the entity cards for display on the user interface of the display of the user computing device.
  • entity cards are generated based on entities which are identified from the video.
  • the video may be analyzed using speech recognition programs to perform automatic speech recognition and obtain a transcription (a text transcript) of the speech.
  • a next operation may include associating some or all of the text from the transcription with knowledge graph entities to obtain a collection of knowledge graph entities associated with the video.
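  • As a rough illustration of that association step (a simplified alias lookup; a production entity linker would also handle multi-word names and disambiguation, and the alias table and entity ids here are hypothetical):

```python
import re

# Hypothetical alias table mapping surface text to knowledge graph entity ids.
ALIASES = {
    "sarcophagus": "/kg/sarcophagus",
    "icarus": "/kg/icarus",
    "daedalus": "/kg/daedalus",
    "crete": "/kg/crete",
}

def link_entities(transcript: str) -> set:
    """Return the knowledge graph entity ids mentioned in a transcript."""
    tokens = re.findall(r"[a-z]+", transcript.lower())
    return {ALIASES[token] for token in tokens if token in ALIASES}
```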
  • Training data for a machine learning resource may be obtained by identifying those knowledge graph entities which also appear in search queries from real users viewing the video.
  • Additional operations for identifying entities from the video which may be important to the understanding of the video content and/or for identifying entities from the video which are likely to be searched for by a user, include determining how relevant an entity is to other entities in the video, determining how broad the entity is using a tf-idf (term frequency-inverse document frequency) signal across a corpus of videos, and determining how related the entity is to the topic of the video (e.g., using a query-based salient term signal).
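  • The tf-idf breadth signal mentioned above could, for example, be computed as follows (a minimal sketch over per-video mention counts; the input shapes and function name are assumptions):

```python
import math

def breadth_tf_idf(entity, video_mentions, corpus):
    """video_mentions: {entity: mention_count} for the current video.
    corpus: one {entity: mention_count} dict per video in the corpus."""
    tf = video_mentions.get(entity, 0) / max(sum(video_mentions.values()), 1)
    df = sum(1 for doc in corpus if entity in doc)   # videos mentioning it
    idf = math.log((1 + len(corpus)) / (1 + df))     # small for broad entities
    return tf * idf  # overly broad entities (high df) score low
```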
  • the machine learning resource may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video, a lower weight may be assigned to an entity which is overly broad and appears frequently in a corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.).
  • the machine learning resource may be applied to evaluate and rank the candidate entities identified in the video. For example, one or more of the highest ranked candidate entities may be selected as entities for which entity cards are to be generated.
  • information regarding the entity may be obtained from various sources to populate the entity card with text and/or image(s) which provide more information about the entity.
  • information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like.
  • the entity card may be limited to less than a predetermined length and/or size (e.g., less than 100 words).
  • the entity card may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information).
  • an image may be limited to a thumbnail image size, a specified resolution, etc.
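  • A minimal sketch of an entity card record with the fields and limits described above (title, subtitle, a summary under 100 words, attribution, and a thumbnail image) might look like this; the field names are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

MAX_SUMMARY_WORDS = 100  # the predetermined length limit described above

@dataclass
class EntityCard:
    title: str                           # e.g., "Barack Obama"
    subtitle: str                        # e.g., "former President of the United States"
    summary: str                         # textual summary, truncated below
    attribution: str                     # source of the descriptive content
    thumbnail_url: Optional[str] = None  # image limited to thumbnail size

    def __post_init__(self):
        words = self.summary.split()
        if len(words) > MAX_SUMMARY_WORDS:
            self.summary = " ".join(words[:MAX_SUMMARY_WORDS]) + "..."
```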
  • a next operation includes rendering a user interface which is to be displayed on a display of a user computing device, the user interface including the video and the one or more entity cards associated with the video which may appear on the user interface at various points during the video or may be displayed (fully or partially) throughout the video.
  • the machine learning resource may be updated or adjusted based on an evaluation of user interactions with the user interface. For example, if users generally do not interact with a particular entity card during the video, there may be an implication that the entity is not a term or topic that users are interested in or have difficulty understanding with respect to the content of the video. Accordingly, the machine learning resource may be adjusted to reflect the user interactions (or lack thereof) with the user interface. Likewise, if users generally do interact with a particular entity card during the video, there may be an implication that the entity is a term or topic that users are interested in or have difficulty understanding with respect to the content of the video, and the machine learning resource may be adjusted to reflect those user interactions as well.
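  • One simple way to realize such an adjustment (purely illustrative; the update rule and learning rate are assumptions, not the patent's method) is to nudge a per-entity weight toward observed interaction behavior:

```python
def update_entity_weights(weights, interactions, lr=0.05):
    """weights: {entity: float}; interactions: (entity, interacted: bool) pairs."""
    for entity, interacted in interactions:
        target = 1.0 if interacted else 0.0
        w = weights.get(entity, 0.5)
        weights[entity] = w + lr * (target - w)  # move toward observed behavior
    return weights
```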
  • the disclosure provides a way for users to easily understand or learn more about an entity (e.g., a term, concept, topic, etc.) associated with a video, and similarly, to easily identify content that the user may wish to consume in further detail.
  • the user is able to more quickly comprehend the entity without the need for performing a separate search and without the need for stopping the video, and the user experience is improved.
  • the user is also able to ascertain whether they are interested in learning more about the entity by performing a search for the entity after having been provided a brief summary (e.g., a snippet) and/or imagery regarding the entity via an entity card.
  • the user is able to avoid performing a search for an entity, loading search results, and reading various content items that may or may not be relevant to the entity, which is more computationally expensive than simply reading information that is already presented, thereby conserving time, processing, memory, and network resources of the computing system (whether server device, client device, or both).
  • user convenience and experience is improved because the user is not discouraged by the complexity of the content of the video and the user will be more likely to watch the video in its entirety.
  • User convenience and experience is also improved because the user is not discouraged by erroneous search results due to spelling errors, as a machine learning resource predicts entities that the user is likely to search for, and the information about the entity is automatically displayed during the video.
  • the user can avoid loading/viewing content from a search results page which again conserves processing, memory, and network resources of the computing system.
  • User convenience and experience is also improved because the user avoids the inconvenience of switching between an application or web page which plays the video and a search results page, and instead the video can be continuously played without disruption while the entity cards are accessible or presented to the user during the video.
  • systems of the type disclosed herein may learn, through one or more various machine learning techniques (e.g., by training a neural network or other machine-learned model), a balance of the types of content items, perspectives, sources, and/or other attributes that are preferred, such as based on different types of content, different user populations, different contexts such as timing and location, etc.
  • data descriptive of actions taken by one or more users can be stored and used as training data to train (e.g., via supervised training techniques) one or more machine-learned models to, after training, generate predictions which assist in providing content (e.g., entity cards) in the user interface which meets the one or more users’ respective preferences.
  • system performance is improved with reduced manual intervention, providing fewer user searches and further conserving processing, memory, and network resources of the computing system (whether server device, client device, or both).
  • FIG. 1 is an example system according to one or more example embodiments of the disclosure.
  • FIG. 1 illustrates an example of a system which includes a user computing device 100, a server computing system 300, video data store 370, entity data store 380, entity card data store 390, and external content 400, each of which may be in communication with one another over a network 200.
  • the user computing device 100 can include any of a personal computer, a smartphone, a laptop, a tablet computer, and the like.
  • the network 200 may include any type of communications network such as a local area network (LAN), wireless local area network (WLAN), wide area network (WAN), personal area network (PAN), virtual private network (VPN), or the like.
  • wireless communication between elements of the example embodiments may be performed via a wireless LAN, Wi-Fi, Bluetooth, ZigBee, Wi-Fi direct (WFD), ultra wideband (UWB), infrared data association (IrDA), Bluetooth low energy (BLE), near field communication (NFC), a radio frequency (RF) signal, and the like.
  • wired communication between elements of the example embodiments may be performed via a pair cable, a coaxial cable, an optical fiber cable, an Ethernet cable, and the like.
  • the server computing system 300 can include a server, or a combination of servers (e.g., a web server, application server, etc.) in communication with one another, for example in a distributed fashion.
  • the server computing system 300 may obtain data or information from one or more of the video data store 370, entity data store 380, entity card data store 390, and external content 400.
  • the video data store 370, entity data store 380, and entity card data store 390 may be integrally provided with the server computing system 300 (e.g., as part of the memory 320 of the server computing system 300) or may be separately provided. The video data store 370, entity data store 380, and entity card data store 390 can be combined as a single data store (database), or may be a plurality of respective data stores. Data stored in one data store (e.g., the entity data store 380) may overlap with some data stored in another data store (e.g., the entity card data store 390). In some implementations, one data store (e.g., the entity card data store 390) may reference data that is stored in another data store (e.g., the entity data store 380).
  • Video data store 370 can store videos and/or information about videos.
  • video data store 370 may store a collection of videos.
  • The videos may be stored, grouped, or classified in any fashion.
  • videos may be stored according to a genre or category, according to a title, according to a date (e.g., of creation or last modification, etc.), etc.
  • Information about the videos may include location information (e.g., a uniform resource locator (URL)) regarding where a video may be stored or accessed.
  • Information about the videos may include transcription information of the videos.
  • a computing system may be configured to perform automatic speech recognition (e.g., using one or more speech recognition programs) with respect to a video stored in the video data store 370 or stored elsewhere, to obtain a transcription (i.e., a textual transcript) of the video, and the textual transcript of the video may be stored in the video data store 370.
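  • For illustration, a video data store record pairing a video's location with its automatically derived transcript might be sketched as follows (the `transcribe` callable stands in for whatever speech recognition program is used and is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class VideoRecord:
    video_id: str
    url: str              # where the video may be stored or accessed
    title: str = ""
    transcript: str = ""  # textual transcript from automatic speech recognition

VIDEO_DATA_STORE: dict[str, VideoRecord] = {}

def ingest(video_id, url, title, transcribe):
    """Run ASR via the injected `transcribe` callable and store the result."""
    record = VideoRecord(video_id, url, title, transcript=transcribe(url))
    VIDEO_DATA_STORE[video_id] = record
    return record
```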
  • Entity data store 380 can store information about entities which are identified from textual transcripts of videos. The identification of entities within a video will be explained in more detail below. Entity cards may be created or generated for a video with respect to an entity that is identified from the video, and may be stored in the entity data store 380. For example, the entities for which entity cards are provided may be identified from the video by using a machine learning resource.
  • an entity among a plurality of candidate entities mentioned in the video may be identified by the machine learning resource as an entity for which an entity card should be generated when the machine learning resource determines or predicts the entity is likely to be searched for by a user viewing the video (e.g., having a confidence value greater than a threshold value, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like)
  • the machine learning resource may select a candidate entity as an entity for which an entity card should be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases (e.g., video data store 370).
  • entities which may be identified from a video entitled “The myth of Icarus and Daedalus” may include “Greek mythology,” “Crete,” “Icarus,” and “Daedalus.” These entities may be stored in the entity data store 380, and may be associated with the video entitled “The myth of Icarus and Daedalus” which may be stored or referenced in the video data store 370.
  • Entity card data store 390 can store information about entity cards which are created or generated from entities which are identified from videos. The creation or generation of entity cards will be explained in more detail below.
  • the entity cards may be created or generated by obtaining information regarding the entity from various sources (e.g., from external content 400) to populate the entity card with text and/or image(s) which provide more information about the entity.
  • information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like.
  • entity cards stored in the entity card data store 390 may be limited to less than a predetermined length and/or size (e.g., less than 100 words).
  • the entity cards stored in the entity card data store 390 may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information).
  • an image that forms part or all of the entity card may be limited to a thumbnail image size, a specified resolution, etc.
  • External content 400 can be any form of external content including news articles, webpages, video files, audio files, image files, written descriptions, ratings, game content, social media content, photographs, commercial offers, transportation methods, weather conditions, or other suitable external content.
  • the user computing device 100 and server computing system 300 can access external content 400 over network 200.
  • External content 400 can be searched by user computing device 100 and server computing system 300 according to known searching methods and search results can be ranked according to relevance, popularity, or other suitable attributes, including location-specific filtering or promotion.
  • a user of the user computing device 100 may transmit a request to view a video which is to be provided via the server computing system 300.
  • the video may be available through a video-sharing website, an online video hosting service, a streaming video service, or other video platform.
  • the user may view the video on a display 160 of the user computing device via a video application 130 or a web browser 140.
  • the server computing system 300 is configured to provide a user interface, to be displayed on the display of the user computing device 100, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
  • FIGS. 4A through 7B which will be discussed in more detail below, provide example user interfaces which illustrate the display of the video on a first portion of the user interface and the display of the first entity card on a second portion of the user interface.
  • the server computing system 300 may store or retrieve the user interface including the video and the first entity card and provide the user interface in response to a request from the user computing device 100 to view the video. In some implementations, the server computing system 300 may store or retrieve the video and first entity card related to the video, dynamically generate the user interface including the video and the first entity card in response to a request from the user computing device 100 to view the video, and transmit the user interface to the user computing device 100.
  • the server computing system 300 may store or retrieve the video, may store or retrieve one or more entities which have been identified from the video, dynamically generate one or more entity cards from the identified one or more entities, and dynamically generate the user interface including the video and one or more entity cards (e.g., including the first entity card) in response to a request from the user computing device 100 to view the video.
  • the server computing system 300 may store or retrieve the video, dynamically identify one or more entities from the video (e.g., after transcribing the video or obtaining a transcript of the video), dynamically generate one or more entity cards from the identified one or more entities, and dynamically generate the user interface including the video and one or more entity cards (e.g., including the first entity card) in response to a request from the user computing device 100 to view the video.
  • Referring to FIG. 2, example block diagrams of a user computing device and server computing system according to one or more example embodiments of the disclosure will now be described.
  • the user computing device 100 may include one or more processors 110, one or more memory devices 120, a video application 130, a web browser 140, an input device 150, and a display 160.
  • the server computing system 300 may include one or more processors 310, one or more memory devices 320, and a user interface generator 330.
  • the one or more processors 110, 310 can be any suitable processing device that can be included in a user computing device 100 or server computing system 300.
  • a processor 110, 310 may include one or more of a processor, processor cores, a controller and an arithmetic logic unit, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an image processor, a microcomputer, a field programmable array, a programmable logic unit, an application-specific integrated circuit (ASIC), a microprocessor, a microcontroller, etc., and combinations thereof, including any other device capable of responding to and executing instructions in a defined manner.
  • the one or more processors 110, 310 can be a single processor or a plurality of processors that are operatively connected, for example in parallel.
  • the memory 120, 320 can include one or more non-transitory computer-readable storage mediums, such as a Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), and flash memory, a USB drive, a volatile memory device such as a Random Access Memory (RAM), a hard disk, floppy disks, a Blu-ray disk, or optical media such as CD ROM discs and DVDs, and combinations thereof.
  • examples of the memory 120, 320 are not limited to the above description, and the memory 120, 320 may be realized by other various devices and structures as would be understood by those skilled in the art.
  • memory 120 can store instructions that, when executed, cause the one or more processors 110 to receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing, provide a first entity card for display on a second portion of the user interface while continuing to play the video, as described according to examples of the disclosure.
  • memory 320 can store instructions that, when executed, cause the one or more processors 310 to provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, as described according to examples of the disclosure.
  • Memory 120 can also include data 122 and instructions 124 that can be retrieved, manipulated, created, or stored by the one or more processor(s) 110.
  • data can be accessed and used as input to receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing, provide a first entity card for display on a second portion of the user interface while continuing to play the video, as described according to examples of the disclosure.
  • Memory 320 can also include data 322 and instructions 324 that can be retrieved, manipulated, created, or stored by the one or more processor(s) 310.
  • such data can be accessed and used as input to provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, as described according to examples of the disclosure.
  • the user computing device 100 includes a video application 130, which may also be referred to as a video player or a video streaming app.
  • the video application 130 enables a user of the user computing device 100 to view a video which is provided through the application and displayed on a user interface of the display 160.
  • the video may be provided to the video application 130 via the server computing system 300.
  • the user computing device 100 includes a web browser 140, which may also be referred to as an internet browser or simply as a browser.
  • the web browser 140 may be any browser which is used to access a website or web page (e.g., via the world wide web).
  • a user of the user computing device 100 may provide an input (e.g., a URL) to the web browser 140 to obtain content (e.g., a video) and display the content on the display 160 of the user computing device.
  • the rendering engine of the web browser 140 may display content on a user interface (e.g., a graphical user interface).
  • a video may be provided or obtained using the web browser 140 through a video-sharing web site, an online video hosting service, a streaming video service, or other video platform.
  • the video may be provided to the web browser 140 via the server computing system 300.
  • the user computing device 100 includes an input device 150 configured to receive an input from a user and may include, for example, one or more of a keyboard (e.g., a physical keyboard, virtual keyboard, etc.), a mouse, a joystick, a button, a switch, an electronic pen or stylus, a gesture recognition sensor (e.g., to recognize gestures of a user including movements of a body part), an input sound device or voice recognition sensor (e.g., a microphone to receive a voice command), a track ball, a remote controller, a portable (e.g., a cellular or smart) phone, a tablet PC, a pedal or footswitch, a virtual-reality device, and so on.
  • the input device 150 may further include a haptic device to provide haptic feedback to a user.
  • the input device 150 may also be embodied by a touch-sensitive display having a touchscreen capability, for example.
  • the input device 150 may be used by a user of the user computing device 100 to provide an input to request to view a video, to provide an input selecting a user interface element displayed on the user interface, to input a search query, etc.
  • the user computing device 100 includes a display 160 which displays information viewable by the user, for example on a user interface (e.g., a graphical user interface).
  • the display 160 may be a non-touch sensitive display or a touch- sensitive display.
  • the display 160 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, active matrix organic light emitting diode (AMOLED), flexible display, 3D display, a plasma display panel (PDP), a cathode ray tube (CRT) display, and the like, for example.
  • the server computing system 300 can include one or more processor(s) 310 and memory 320, which were discussed above.
  • the server computing system 300 may also include a user interface generator 330.
  • the user interface generator 330 may include a video provider 332, entity card provider 334, and a search query provider 336.
  • the user interface generator 330 may generate a user interface for display on the display 160 of the user computing device 100.
  • the user interface may include various portions for displaying various content.
  • the user interface may display a video on a first portion of the user interface, an entity card on a second portion of the user interface, and various user interface elements at other portions of the user interface (e.g., a suggested search query on a third portion of the user interface).
  • the video provider 332 may include information or content (e.g., video content) which may be used to render the user interface so that the video can be displayed and played back at the user computing device 100.
  • the video provider 332 may be configured to retrieve a video (e.g., in response to a request from the user computing device 100 for the video) from the video data store 370.
  • the entity card provider 334 may include information or content (e.g., an entity card including a textual summary and/or an image) which may be used to render the user interface so that the entity card can be displayed together with the video on the user interface at the user computing device 100.
  • the entity card provider 334 may be configured to retrieve an entity card associated with the video from the entity card data store 390.
  • the search query provider 336 may include information or content (e.g., a suggested search query) which may be used to render the user interface so that the search query can be displayed together with at least one of the video or the entity card on the user interface at the user computing device 100.
  • the search query provider 336 may generate one or more suggested search queries based on entities identified with respect to the video and/or based on the content of the entity card(s) associated with the video. For example, the search query provider 336 may generate suggested search queries based on previous user searches in response to watching the video, based on previous user searches regarding a topic of the video, and the like.
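  • as a rough illustration of the behavior described above, a search query provider of this kind might rank past viewer queries that mention an identified entity; the sketch below is hypothetical (the function name and inputs are not taken from the disclosure):

```python
from collections import Counter

def suggest_queries(entity_titles, past_queries, limit=3):
    """Rank queries previously issued by viewers of the video that
    mention an identified entity, most frequent first."""
    matching = Counter(
        q for q in past_queries
        if any(title.lower() in q.lower() for title in entity_titles)
    )
    suggestions = [q for q, _ in matching.most_common(limit)]
    # Fall back to the bare entity titles if no past queries match.
    return suggestions or list(entity_titles)[:limit]
```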
  • FIG. 3 depicts an example system 3000 for generating an entity card for a user interface, according to one or more example embodiments of the disclosure.
  • the example system 3000 includes video 3010, entity generator 3020, external content 400, entity card generator 3030, user interface entity card renderer 3040, and user interface generator 3050.
  • the video 3010, entity generator 3020, entity card generator 3030, user interface entity card renderer 3040, and user interface generator 3050 may be part of the server computing system 300 (e.g., as part of a single server or distributed between a plurality of servers and/or data stores).
  • entity cards may be generated based on entities which are identified from a video.
  • a video 3010 may be provided to entity generator 3020 for one or more entities to be identified from the video 3010.
  • Video 3010 may be obtained from or stored in video data store 370, for example.
  • Entity generator 3020 may include video transcriber 3022, signal annotator 3024, and machine learning resource 3026, for example.
  • the video provider 332 may provide the video 3010 to the entity generator 3020 or a third-party application or service provider may provide the video 3010.
  • Video transcriber 3022 is configured to transcribe the video 3010.
  • video transcriber 3022 may include a speech recognition program which analyzes the video 3010 and performs automatic speech recognition to obtain a transcription (e.g., a text transcript) of the speech from the video 3010.
  • the transcription of the video 3010 may be obtained from video data store 370 or from a third-party application or service provider which generates the transcription of the video, for example by automated speech recognition, in which case entity generator 3020 may not include the video transcriber 3022.
  • Signal annotator 3024 is configured to associate some or all of the text from the transcription obtained by video transcriber 3022 or video data store 370 with knowledge graph entities to obtain a collection of knowledge graph entities associated with the video 3010.
  • a knowledge graph may generally refer to interlinked and/or interrelated descriptions between objects, events, situations, or concepts, for example in the form of a graph-structured data model.
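  • as a minimal sketch of this association step (real systems use full entity-linking models; the simple alias-lookup below is an assumption made for illustration):

```python
def annotate_transcript(transcript, alias_to_entity):
    """Associate transcript text with knowledge graph entities.

    alias_to_entity maps a surface form (e.g., "king tut") to a
    knowledge graph entity id.  Returns (entity_id, offset) pairs
    for every occurrence of an alias in the transcript.
    """
    text = transcript.lower()
    hits = []
    for alias, entity_id in alias_to_entity.items():
        needle = alias.lower()
        start = text.find(needle)
        while start != -1:
            hits.append((entity_id, start))
            start = text.find(needle, start + len(needle))
    return hits
```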
  • Machine learning resource 3026 is configured to predict which entities from the video 3010 (e.g., which knowledge graph entities from the video 3010) are most likely to be searched for by a user viewing the video 3010 among the plurality of entities mentioned in or identified from the video (e.g., having a confidence value greater than a threshold value that a user might perform a search for the entity, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like).
  • Training data for the machine learning resource 3026 may be obtained by identifying or matching those knowledge graph entities which also appear in search queries from real users viewing the video 3010.
  • the machine learning resource 3026 may be configured to identify entities from the video 3010 for which an entity card is to be generated by determining how relevant an entity is to other entities in the video 3010, determining how broad the entity is using a tf-idf (term frequency-inverse document frequency) signal across a corpus of videos (e.g., stored in the video data store 370), and determining how related the entity is to the topic of the video (e.g., using a query-based salient term signal).
  • the machine learning resource 3026 may select a candidate entity as an entity for which an entity card is to be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases (e.g., video data store 370), etc.
  • the machine learning resource 3026 may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video 3010, a lower weight may be assigned to an entity which is overly broad and appears frequently in the corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.).
  • The machine learning resource 3026 may be configured to evaluate and rank the candidate entities identified in the video 3010.
  • the entity generator 3020 may be configured to select one or more of the highest ranked candidate entities identified by the machine learning resource 3026 as entities for which entity cards are to be generated.
  • an entity among a plurality of candidate entities mentioned in or identified from the video 3010 may be identified by the machine learning resource 3026 as an entity for which an entity card should be generated when the machine learning resource 3026 determines or predicts the entity is likely to be searched for by a user viewing the video 3010.
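  • the weighting described above can be pictured as a tf-idf-style score; the simple product below is an assumed combination for illustration, not the model actually claimed:

```python
import math

def score_candidates(mention_counts, corpus_doc_freq, topic_relevance, n_videos):
    """Combine the signals described above: mention frequency (tf),
    broadness across the video corpus (idf), and topic relevance."""
    scores = {}
    for entity, tf in mention_counts.items():
        # Overly broad entities appear in many videos and receive a low
        # (possibly negative) idf, which demotes them.
        idf = math.log(n_videos / (1 + corpus_doc_freq.get(entity, 0)))
        scores[entity] = tf * idf * topic_relevance.get(entity, 0.0)
    # Highest-ranked candidates first.
    return sorted(scores, key=scores.get, reverse=True)
```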
  • the machine learning resource 3026 may be updated or adjusted based on an evaluation of user interactions with a user interface provided to the user computing device 100. For example, if users generally do not interact with a particular entity card while watching the video 3010, there may be an implication that the entity is not a term or topic users are interested in, or is one they already understand, with respect to the content of the video 3010. Accordingly, the machine learning resource 3026 may be adjusted to reflect the user interactions (or lack thereof) with the user interface. Likewise, if users generally do interact with a particular entity card while watching the video 3010, there may be an implication that the entity is a term or topic users are interested in or do not understand with respect to the content of the video 3010.
  • the machine learning resource 3026 may be adjusted to reflect the user interactions with the user interface.
  • the machine learning resource 3026 may re-generate entities associated with video 3010 according to a preset schedule (e.g., every two weeks) which may result in different entities being identified compared to previous entities identified by the entity generator 3020. Accordingly, different entity cards may be displayed in association with the video 3010 after the machine learning resource 3026 is updated or adjusted.
  • the machine learning resource 3026 may also regenerate entities associated with video 3010 according to a user input request to the entity generator 3020.
  • the entity generator 3020 may output, to the entity card generator 3030, one or more entities which the machine learning resource 3026 determines or predicts are likely to be searched for by a user viewing the video.
  • the entity card generator 3030 may be configured to generate an entity card for each of the one or more entities.
  • the entity card generator 3030 may be configured to obtain information regarding the entity from various sources based on the identity of the entity and associated metadata of the entity to populate the entity card with text and/or image(s) which provide more information about the entity.
  • the entity card generator 3030 may be configured to create or generate entity cards by obtaining information regarding the entity from various sources (e.g., from external content 400) to populate the entity card with text and/or image(s) which provide more information about the entity.
  • information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like.
  • entity card generator 3030 may store the generated entity cards in the entity card data store 390.
  • the entity cards may be limited to less than a predetermined length and/or size (e.g., less than 100 words).
  • the entity cards may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information).
  • an image that forms part or all of the entity card may be limited to a thumbnail image size, a specified resolution, etc.
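  • a minimal data structure consistent with the card contents and limits described above might look as follows (the field names and the 100-word cap are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional

MAX_SUMMARY_WORDS = 100  # illustrative cap matching the example limit above

@dataclass
class EntityCard:
    title: str                           # e.g., "Barack Obama"
    subtitle: str                        # e.g., "former President of the United States"
    summary: str                         # textual summary, truncated below
    attribution: str                     # source of the descriptive content
    thumbnail_url: Optional[str] = None  # image limited to thumbnail size

    def __post_init__(self):
        words = self.summary.split()
        if len(words) > MAX_SUMMARY_WORDS:
            self.summary = " ".join(words[:MAX_SUMMARY_WORDS]) + "…"
```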
  • the entity card generator 3030 may correspond to entity card provider 334.
  • User interface entity card renderer 3040 may be configured to render an entity card which is to be provided for at least a portion of the user interface that is to be provided for display on the display 160 of the user computing device 100.
  • User interface generator 3050, which may correspond to user interface generator 330, may be configured to combine the video 3010 and the rendered entity card to generate the user interface that is to be provided for display on the display 160 of the user computing device 100.
  • rendering of the entity card, or rendering of the user interface which includes at least the video and the entity card may be performed at the user computing device 100.
  • rendering of the entity card, or rendering of the user interface which includes at least the video and the entity card may be performed at the server computing system 300.
  • FIGS. 4A-4C depict example user interfaces in which one or more entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure.
  • in FIG. 4A, an example user interface 4000 as displayed on a display 160 of user computing device 100 is shown.
  • the user interface 4000 includes a section in which video 4010 is being played on a first portion 4012 of the user interface 4000.
  • the user interface 4000 also includes a section entitled “In this video” 4020 displayed on a second portion 4022 of the user interface 4000 which summarizes content of the video 4010 at different time points during the video 4010.
  • the user interface 4000 also includes a section entitled “Related topics” 4030 which includes an entity card 4040 displayed on a third portion 4032 of the user interface 4000.
  • an entity which has been identified from the video 4010 is “King Tutankhamun,” and the entity card 4040 includes descriptive content relating to King Tutankhamun, including an image of the pharaoh and a textual summary regarding the pharaoh.
  • the entity may have been identified by applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video 4010, based on a transcription of content from the video 4010.
  • a user of the user computing device 100 may scroll down the user interface 4000 to view the content shown in the entity card 4040 in the third portion 4032.
  • in FIG. 4B, an example user interface 4000’ as displayed on a display 160 of user computing device 100 is shown.
  • the video 4010 may be maintained (anchored) at an upper portion of the user interface 4000’ on a first portion 4012’ of the user interface 4000’.
  • the user interface 4000’ also includes the section entitled “Related topics” which includes the entity card 4040 displayed on a second portion 4022’ of the user interface 4000’.
  • the entity card 4040 includes a title 4042 (King Tutankhamun) of the entity card 4040, a subtitle 4044 (Pharaoh), descriptive content 4046 (a textual summary and thumbnail image), and attribution 4048 which cites to a source of the descriptive content 4046.
  • a suggested search query 4060 may be provided on the user interface 4000’.
  • the suggested search query 4060 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
  • a second entity card 4070 (Valley of the kings) is also displayed.
  • the second entity card 4070 is displayed in a contracted or collapsed form, in contrast to the entity card 4040 which is in an expanded form.
  • the second entity card 4070 displayed in the contracted or collapsed form includes sufficient identifying information (e.g., the title of the entity card such as “Valley of the kings” and a thumbnail image relating to the Valley of the kings) so that a user understands what subject, concept, or topic the second entity card 4070 is concerned with.
  • a user may expand the second entity card 4070 by selecting a user interface element 4080 to obtain a fuller description of the second entity (the Valley of the kings), and a user may collapse the entity card 4040 by selecting a user interface element 4050.
  • the suggested search queries 4090 correspond to search query user interface elements that, when selected, are configured to perform a search relating to a topic of the video 4010 other than the first entity or the second entity (e.g., on a topic of the video other than any of the entities identified from the video 4010).
  • the video 4010 continues to play while a user views the entity card 4040 and/or second entity card 4070. Therefore, viewing of the video 4010 is not interrupted when a user wishes to know more about an entity identified from the video 4010. The user may obtain sufficient information about the entity from the presented entity cards on the user interface 4000’.
  • the entity may be mentioned in the video 4010 at a first timepoint in the video 4010, and the entity card 4040 may be provided for display on a portion of the user interface (e.g., third portion 4032 of user interface 4000 or second portion 4022’ of user interface 4000’) at the first timepoint. Therefore, the entity card 4040 may be provided for display at a time which is synchronized with a discussion of the entity in the video 4010.
  • second entity card 4070 is displayed on a third portion 4032 of the user interface 4000’ in a contracted form.
  • the second entity card 4070 may identify or reference a second entity (Valley of the kings) to be mentioned in the video 4010 at a second timepoint in the video 4010 after the first timepoint.
  • the contracted second entity card 4070 may be automatically expanded to fully display the second entity card 4070 on the third portion 4032’ of the user interface 4000’ while continuing to play the video 4010, the second entity card 4070 including descriptive content relating to the second entity.
  • alternatively, the contracted second entity card 4070 may be automatically expanded to fully display the second entity card 4070 on the second portion 4022’ of the user interface 4000’ while continuing to play the video 4010, the second entity card 4070 replacing the entity card 4040 on the user interface 4000’ at a time when the second entity is mentioned in the video 4010 and including descriptive content relating to the second entity.
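  • one way to realize this timepoint-synchronized expansion is a small scheduler such as the hypothetical sketch below; the same lookup could drive the card-replacement or carousel behaviors described elsewhere in this disclosure:

```python
import bisect

class CardScheduler:
    """Pick which entity card is expanded at a given playback time.

    `schedule` is a list of (timepoint_seconds, card) pairs; the card
    whose timepoint was reached most recently is shown expanded, and
    later cards remain contracted until their timepoints arrive.
    """
    def __init__(self, schedule):
        self.schedule = sorted(schedule, key=lambda pair: pair[0])
        self.times = [t for t, _ in self.schedule]

    def expanded_card(self, playback_time):
        i = bisect.bisect_right(self.times, playback_time) - 1
        return self.schedule[i][1] if i >= 0 else None
```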
  • in FIG. 4C, an example user interface 4000” as displayed on a display 160 of user computing device 100 is shown.
  • User interface 4000” displays a search results page 4092 that is obtained in response to a user selecting the suggested search query 4060 provided on the user interface 4000’.
  • the suggested search query 4060 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
  • the entity search is for the entity King Tutankhamun.
  • the search box 4092 is automatically populated with the entity, thereby relieving the user of having to type the entity name, which could otherwise be misspelled, causing errors in the search results, leading to frustration on the part of the user, and wasting computing resources on inefficient or erroneous searches.
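  • pre-populating the search in this way amounts to building the search request with the entity already encoded as the query; a sketch (the endpoint below is a placeholder, not an actual service URL):

```python
from urllib.parse import urlencode

def entity_search_url(entity_title):
    """Build a search URL with the query pre-filled, so the user never
    has to type (and possibly misspell) the entity name."""
    return "https://search.example.com/search?" + urlencode({"q": entity_title})

# entity_search_url("King Tutankhamun")
# -> 'https://search.example.com/search?q=King+Tutankhamun'
```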
  • all of the entity cards (e.g., entity card 4040 and second entity card 4070) associated with a video may be visible on the user interface to the user at a same time while the video is playing, or only some of the entity cards associated with the video may be visible on the user interface to the user at a same time while the video is playing.
  • one or more of the entity cards (e.g., entity card 4040 and/or second entity card 4070) may be fully expanded so that a user can view the entire contents of the entity card(s), while some or all of the remaining entity cards may be displayed on the user interface in a contracted or collapsed form.
  • the user may view a portion of the entity card and the portion of the entity card may include some identifying information (e.g., an identification of the corresponding entity) so that the user is able to comprehend the relevance of the entity card.
  • the user is able to select a user interface element or some portion of the visible portion of the collapsed entity card to expand the entity card to also view the hidden portion of the entity card for further information regarding the entity. In some implementations, the entity card may be automatically expanded to fully display the entity card on the user interface at a time when the corresponding entity is mentioned in the video.
  • a previously shown entity card may be changed to be in the contracted or collapsed form at the time the second entity card is expanded if it is not already in the contracted or collapsed form prior to the second entity card being displayed.
  • all of the entity cards may each be fully displayed on the user interface throughout the video.
  • FIGS. 5A-5C depict example user interfaces in which entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure.
  • in FIG. 5A, an example user interface 5000 as displayed on a display 160 of user computing device 100 is shown.
  • The user interface 5000 includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000.
  • a transcript (i.e., closed captioning) 5014 of the audio portion of the video may be provided for display in the video 5010 displayed on the user interface 5000.
  • the user interface 5000 also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000 which includes one or more entity cards to be displayed at different time points during the video 5010.
  • the second portion 5022 may include a first sub-portion 5032 which includes first entity card 5030 and a second sub-portion 5052 which includes at least a portion of second entity card 5050.
  • a first entity which has been identified from the video 5010 is “Greek mythology,” and the first entity card 5030 includes descriptive content relating to Greek mythology including an image which relates to Greek mythology and a textual summary regarding Greek mythology.
  • the first entity card 5030 may also include information regarding a timepoint in the video 5010 at which the first entity is being discussed (e.g., 8 seconds into the video 5010).
  • a second entity which has been identified from the video 5010 is “Crete,” and the second entity card 5050 includes descriptive content relating to Crete including an image which relates to Crete and a textual summary regarding Crete.
  • the second entity card 5050 may also include information regarding a timepoint in the video 5010 at which the second entity is being discussed (e.g., 10 seconds into the video 5010).
  • the first sub-portion 5032 may include one or more user interface elements.
  • a suggested search query 5040 may be provided on the first subportion 5032.
  • the suggested search query 5040 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the first entity “Greek mythology").
  • the second entity card 5050 is only partially shown on the user interface 5000 as the topic or concept regarding the second entity is yet to be discussed in the video 5010.
  • the entity cards in the second portion 5020 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play.
  • the video 5010 continues to play while a user views the first entity card 5030 and second entity card 5050.
  • viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010; the user obtains sufficient information about the entities from the presented entity cards on the user interface 5000 (or subsequent user interfaces 5000’, 5000”, etc.) to understand the content of the video 5010 and need not perform a separate search and/or stop the video 5010.
  • in FIG. 5B, an example user interface 5000’ as displayed on a display 160 of user computing device 100 is shown.
  • the user interface 5000’ includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000’.
  • the user interface 5000’ also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000’ which includes one or more entity cards to be displayed at different time points during the video 5010.
  • the second portion 5022 may include a first sub-portion 5032’ which includes at least a portion of first entity card 5030, a second sub-portion 5052’ which includes second entity card 5050, and a third sub-portion 5062’ which includes at least a portion of third entity card 5060.
  • the second entity card 5050 includes descriptive content relating to Crete including an image which relates to Crete and a textual summary regarding Crete.
  • the second entity card 5050 may also include information regarding a timepoint in the video 5010 at which the second entity is being discussed (e.g., 10 seconds into the video 5010).
  • a third entity which has been identified from the video 5010 is “Icarus,” and the third entity card 5060 includes descriptive content relating to Icarus including an image which relates to Icarus and a textual summary regarding Icarus.
  • the third entity card 5060 may also include information regarding a timepoint in the video 5010 at which the third entity is being discussed (e.g., 13 seconds into the video 5010 as shown in FIG. 5C).
  • the second sub-portion 5052’ may include one or more user interface elements.
  • a suggested search query 5040’ may be provided on the second sub-portion 5052’.
  • the suggested search query 5040’ corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the second entity “Crete”).
  • the first entity card 5030 and the third entity card 5060 are only partially shown on the user interface 5000’, as the topic or concept regarding the first entity has already been discussed in the video 5010 and the topic or concept regarding the third entity has yet to be discussed in the video 5010.
  • the entity cards in the second portion 5020 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play.
  • the video 5010 continues to play while a user views the first entity card 5030, the second entity card 5050, and the third entity card 5060.
  • viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010; the user obtains sufficient information about the entities from the presented entity cards on the user interfaces 5000, 5000’, etc. to understand the content of the video 5010 and need not perform a separate search and/or stop the video 5010.
  • in FIG. 5C, the user interface 5000” includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000”.
  • the user interface 5000” also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000” which includes one or more entity cards to be displayed at different time points during the video 5010.
  • the second portion 5022 may include a first sub-portion 5052” which includes at least a portion of second entity card 5050, a second sub-portion 5062” which includes third entity card 5060, and a third sub-portion 5072” which includes at least a portion of fourth entity card 5070.
  • the third entity card 5060 includes descriptive content relating to Icarus including an image which relates to Icarus and a textual summary regarding Icarus.
  • the third entity card 5060 may also include information regarding a timepoint in the video 5010 at which the third entity is being discussed (e.g., 13 seconds into the video 5010).
  • a fourth entity which has been identified from the video 5010 may include “Daedalus,” and the fourth entity card 5070 includes descriptive content relating to Daedalus including an image which relates to Daedalus and a textual summary regarding Daedalus.
  • the fourth entity card 5070 may also include information regarding a timepoint in the video 5010 at which the fourth entity is being discussed (e.g., 20 seconds into the video 5010).
  • the second sub-portion 5062” may include one or more user interface elements.
  • a suggested search query 5040 may be provided on the second sub-portion 5062”.
  • the suggested search query 5040 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the third entity “Icarus”).
  • the second entity card 5050 and fourth entity card 5070 are only partially shown on the user interface 5000” as the topic or concept regarding the second entity has already been discussed in the video 5010 and the topic or concept regarding the fourth entity has yet to be discussed in the video 5010.
  • the entity cards in the second portion 5020 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play.
  • the video 5010 continues to play while a user views the first entity card 5030, the second entity card 5050, the third entity card 5060, the fourth entity card 5070, and so on.
  • viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010; the user obtains sufficient information about the entities from the presented entity cards on the user interfaces 5000, 5000’, 5000”, etc. to understand the content of the video 5010 and need not perform a separate search and/or stop the video 5010.
  • the user is not able to view an entity card fully until the corresponding entity is mentioned in the video.
  • a first entity card about a first entity may be displayed on the user interface at a time during the video when the first entity is mentioned in the video (e.g., at a first timepoint).
  • the first entity card may be displayed for a predetermined amount of time while the video continues to play (e.g., for a time sufficient for an average user to read or view the content contained in the first entity card) or for a time until a next entity is mentioned in the video, at which point another entity card is provided on the user interface.
  • a second entity card about a second entity may be displayed on the user interface at a time during the video when the second entity is mentioned in the video (e.g., at a second timepoint).
  • the second entity card may be displayed on the user interface by replacing the first entity card (i.e., by occupying some or all of the space on the user interface which was previously occupied by the first entity card).
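  • the predetermined display time mentioned above could, for instance, be derived from an assumed average reading speed, as in this illustrative sketch:

```python
AVERAGE_WPM = 200  # assumed average reading speed, words per minute

def display_duration(card_text, min_seconds=4.0):
    """Seconds to keep a card on screen: long enough for an average
    reader, with a floor so very short cards do not flash by."""
    words = len(card_text.split())
    return max(min_seconds, 60.0 * words / AVERAGE_WPM)
```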
  • FIGS. 6A-6C depict example user interfaces in which a notification user interface element is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure.
  • in FIG. 6A, an example user interface 6000 as displayed on a display 160 of user computing device 100 is shown.
  • the user interface 6000 includes a section in which video 6010 is being played on a first portion 6012 of the user interface 6000.
  • the user interface 6000 also includes a section entitled “Related searches” 6020 displayed on a second portion 6022 of the user interface 6000 which includes one or more suggested search queries 6030 relating to the video 6010.
  • in FIG. 6B, an example user interface 6000’ as displayed on a display 160 of user computing device 100 is shown.
  • the user interface 6000’ has a similar configuration as the user interface 6000 of FIG. 6A, except that a notification user interface element 6040 is displayed in response to an entity (for which an entity card exists) being mentioned in the video 6010.
  • the notification user interface element 6040 is displayed on the user interface 6000’ while continuing to play the video 6010.
  • the notification user interface element 6040 indicates that additional information relating to the entity is available.
  • in response to a selection of the notification user interface element 6040, an entity card is displayed on the user interface while continuing to play the video 6010.
  • the notification user interface element 6040 may include some identifying information (e.g., an identification of the corresponding entity and/or an image such as a thumbnail image) of the entity to further make the user aware that the notification user interface element 6040 is associated with the entity and so that the user is able to comprehend the relevance of the entity card which is available.
  • in FIG. 6C, an example user interface 6000” as displayed on a display 160 of user computing device 100 is shown.
  • user interface 6000” may be displayed in response to a user selecting the notification user interface element 6040 as displayed on user interface 6000’.
  • the user interface 6000” includes a section in which video 6010 is being played on a first portion 6012 of the user interface 6000”.
  • the user interface 6000” also includes a section entitled “Related searches” 6020 displayed on a second portion 6022 of the user interface 6000”; however, the second portion 6022 is obscured by a new section entitled “Topics Mentioned” 6050 which is overlaid (e.g., as a pop-up window) on the second portion 6022.
  • the section entitled “Topics Mentioned” 6050 includes the entity card 6060 which is overlaid on the second portion 6022.
  • the entity card 6060 includes a title 6062 (King Tutankhamun) of the entity card 6060, a subtitle 6064 (Pharaoh), descriptive content 6066 (a textual summary and thumbnail image), and attribution 6068 which cites to a source of the descriptive content 6066.
  • a suggested search query 6070 may be provided.
  • the suggested search query 6070 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
  • at least a portion of a second entity card 6080 is also provided.
  • the second entity card 6080 may be related to a next entity to be discussed during the video.
  • the notification user interface element 6040 may be displayed on the user interface 6000’ at a same time that an entity is mentioned in the video 6010.
  • the notification user interface element 6040 may be displayed on the user interface 6000’ throughout the video 6010 and the selection of the notification user interface element 6040 may cause the section entitled “Topics Mentioned” 6050 to be displayed.
  • the section entitled “Topics Mentioned” 6050 is overlaid (e.g., as a pop-up window) on the second portion 6022 and may remain open until closed (e.g., via user interface element 6090).
  • the second entity card 6080 may be displayed fully (e.g., by replacing entity card 6060) at a time when the entity associated with the second entity card 6080 is discussed in the video 6010.
  • the display of the entity cards may be synchronized with a time that an associated entity is mentioned in the video.
  • entity cards may be displayed on the user interface every time that the associated entity is mentioned in the video, only the first time the entity is mentioned in the video, or selectively displayed when the entity is mentioned a plurality of times in the video.
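  • these display policies can be expressed as a simple filter over the mention timeline; the sketch below is illustrative, and the "selective" rule (a per-entity minimum gap) is an assumption:

```python
def mentions_to_display(mentions, policy="first_only", min_gap_seconds=60):
    """Filter (entity_id, timepoint_seconds) mentions per the policies
    described above: every mention, only the first mention, or
    selectively (here: at most once per `min_gap_seconds`)."""
    last_shown = {}
    kept = []
    for entity_id, t in sorted(mentions, key=lambda m: m[1]):
        last = last_shown.get(entity_id)
        if (policy == "every_time"
                or (policy == "first_only" and last is None)
                or (policy == "selective"
                    and (last is None or t - last >= min_gap_seconds))):
            last_shown[entity_id] = t
            kept.append((entity_id, t))
    return kept
```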
  • the availability of entity cards associated with a video may be indicated while a video is playing using a notification user interface element, for example at a time that the associated entity is discussed during the video. Therefore, the user may have the option to view the entity card while viewing the video by deciding to select the notification user interface element.
  • FIGS. 7A and 7B depict example user interfaces in which a timeline is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure.
  • in FIG. 7A, user interface 7000 as displayed on a display 160 of user computing device 100 is shown.
  • user interface 7000 includes a section in which video 7010 is being played on a first portion 7012 of the user interface 7000.
  • the user interface 7000 also includes a section entitled “Related searches” 7020 displayed on a second portion 7022 of the user interface 7000 which includes one or more suggested search queries 7030 relating to the video 7010.
  • the user interface 7000 further includes a persistent timeline section 7042 which is overlaid (e.g., as a pop-up window) on the second portion 7022 to obscure at least a portion of the second portion 7022.
  • the persistent timeline section 7042 includes a persistent timeline 7040, at least a portion of a first entity card 7050, and at least a portion of second entity card 7060.
  • the persistent timeline 7040 may display a timeline of the video 7010 and include one or more points 7044 which indicate when an entity is to be discussed during the video 7010. For example, a next entity to be discussed (e.g., Howard Carter) can be indicated by at least a portion of second entity card 7060 being shown in the persistent timeline section 7042, which includes an image of the entity.
  • the first entity card 7050 and/or second entity card 7060 may be selectable such that the entity cards are expanded as shown in FIG. 7B, for example.
  • user interface 7000’ may be displayed in response to a user selecting the first entity card 7050 as displayed on user interface 7000, or in some implementations the user interface 7000’ may be displayed at a timepoint at which the entity (King Tutankhamun) is being discussed during the video 7010.
  • the user interface 7000’ includes the video 7010 displayed on a first portion 7012’ of the user interface 7000’.
  • the user interface 7000’ also includes a section entitled “Topics to explore” which includes the first entity card 7050 and at least a portion of the second entity card 7060 displayed on a second portion 7022’ of the user interface 7000’.
  • the user interface 7000’ also includes the section entitled “Related searches” 7020 displayed on a third portion 7032’ of the user interface 7000’ which includes one or more suggested search queries 7030 relating to the video 7010.
  • the first entity card 7050 is expanded to include the descriptive content regarding the entity and, similar to the examples discussed previously, includes a title (King Tutankhamun), a subtitle (Ancient Egyptian King), descriptive content (a textual summary and thumbnail image), and attribution which cites to a source of the descriptive content.
  • Second entity card 7060 may be displayed at least partially next to the first entity card 7050 in the second portion 7022’ of the user interface 7000’.
  • a user interface element 7080 may be included in the user interface 7000’ as a selectable element that, when selected, allows the entity cards associated with video 7010 to be cycled through, for example in a carousel fashion, as the video is played.
  • the section entitled “Topics to explore” may also include additional user interface elements.
  • a suggested search query 7070 may be provided.
  • the suggested search query 7070 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
  • a persistent timeline section may be displayed while a video is being played to indicate to a user that entity cards are available in the video.
  • An entity card related to an entity being discussed currently may be displayed centrally (i.e., prominently) in the persistent timeline section, while a next entity card for a next entity to be discussed during the video may also be provided.
  • the user is able to select the entity card as displayed in the persistent timeline section (e.g., in a contracted or collapsed form) to expand the entity card for further information regarding the entity.
  • the entity card may be automatically expanded to fully display the entity card on the user interface at a time when the corresponding entity is mentioned in the video.
  • FIGS. 8-10 illustrate flow diagrams of example, non-limiting computer- implemented methods according to one or more example embodiments of the disclosure.
  • the method 800 includes operation 810 of a user computing device (e.g., user computing device 100) displaying a video on a first portion of a user interface which is displayed on a display 160 of the user computing device 100.
  • the user interface displays a first entity card on a second portion of the user interface while continuing to play the video, the first entity card including descriptive content relating to the first entity.
  • the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
  • the method 900 includes operation 910 of a server computing system (e.g., server computing system 300) obtaining a transcription of content from a video (e.g., via automatic speech recognition).
  • the method includes applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content.
  • identified entities may include the “valley of the kings,” “King Tutankhamun,” and “sarcophagus.”
  • the method includes generating one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities.
  • the method includes providing (or generating) a user interface, to be displayed on a respective display of one or more user computing devices, to: play the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, display a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
  • the method 1000 includes operation 1010 of a server computing system (e.g., server computing system 300) obtaining a transcription of content from a video (e.g., via automatic speech recognition).
  • the method includes associating text from the transcription with a knowledge graph to obtain a collection of knowledge graph entities.
  • the method includes obtaining training data to build a machine learning model for the machine learning resource by identifying (or matching) knowledge graph entities with entities that appear in actual search queries of users watching the video.
  • the method includes weighting entities based on a relevance of the entity to other entities in the video, the broadness of the entity, the relevance of the entity to the topic of the video, and the like.
  • the machine learning resource may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video, a lower weight may be assigned to an entity which is overly broad and appears frequently in a corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.).
  • the method includes applying the machine learning resource to evaluate candidate entities from among candidate entities identified in the video and to rank the candidate entities.
  • the method includes the machine learning resource predicting one or more entities for which entity cards are to be generated for the video.
  • the machine learning resource may select a predetermined number of entities (e.g., three or four) which are the highest ranked candidate entities as entities for which entity cards are to be generated.
  • the machine learning resource may select those entities which are predicted (e.g., with a specified confidence level, with a probability of being searched above a threshold level, etc.) to be searched by a user.
  • the identified entities may subsequently be provided to an entity card generator for generating entity cards and/or the identified entities may be stored in a database (e.g., entity data store 380).
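  • the training-data construction described above amounts to matching candidate entities against real viewer queries; a minimal labeling sketch (the names are hypothetical):

```python
def build_training_labels(candidate_entities, user_queries):
    """Label a candidate entity 1 if it appears in an actual search
    query issued by a user watching the video, else 0 (the matching
    scheme described above for assembling training data)."""
    queries = [q.lower() for q in user_queries]
    return {
        entity: int(any(entity.lower() in q for q in queries))
        for entity in candidate_entities
    }
```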
  • The terms “module” and “unit” may be used herein in association with various features of the disclosure. Such terms may refer to, but are not limited to, a software or hardware component or device, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks.
  • a module or unit may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • Example embodiments may include non-transitory computer-readable media including program instructions to implement various operations embodied by a computer.
  • The media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
  • Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks, Blu-ray disks, and DVDs; magneto-optical media such as optical discs; and other hardware devices that are specially configured to store and perform program instructions, such as semiconductor memory, read-only memory (ROM), random access memory (RAM), flash memory, USB memory, and the like.
  • Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
  • the program instructions may be executed by one or more processors.
  • the described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa.
  • a non-transitory computer-readable storage medium may be distributed among computer systems connected through a network and computer-readable codes or program instructions may be stored and executed in a decentralized manner.
  • the non-transitory computer-readable storage media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA).
  • Each block of the flowchart illustrations may represent a unit, module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of order. For example, two blocks shown in succession may in fact be executed substantially concurrently (simultaneously), or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

A server system obtains a transcription of content from a video and applies a machine learning resource to identify entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content. The server system generates entity cards for each of the entities, each of the entity cards including descriptive content relating to a respective entity among the entities. The server system provides a user interface, to be displayed on a respective display of one or more user computing devices, for playing the video on a first portion of the user interface. When the video is played and a first entity among the entities is mentioned in the video, the user interface displays a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.

Description

ENTITY CARDS INCLUDING DESCRIPTIVE CONTENT RELATING TO ENTITIES
FROM A VIDEO
PRIORITY CLAIM
[0001] The present application is based on and claims priority to United States Application Number 18/091837 having a filing date of December 30, 2022, which claims priority to United States Provisional Application 63/341674 having a filing date of May 13, 2022, each of which is incorporated by reference in its entirety for all purposes.
FIELD
[0002] The disclosure relates generally to providing entity cards for a user interface in association with a video displayed on a display of a user computing device. More particularly, the disclosure relates to providing entity cards which assist in the understanding of the contents of the video and include descriptive content relating to an entity (e.g., a concept, a term, a topic, and the like) which is mentioned in the video.
BACKGROUND
[0003] When users watch a video, for example on a challenging or a new topic, there may be keywords or concepts that the user is not familiar with, but are helpful to understanding the content of the video. For example, in a video about the Egyptian pyramids, the term “sarcophagus” may be an important concept which is discussed extensively.
However, a user not familiar with the term “sarcophagus” may not fully understand the content of the video. The user may pause the video and navigate to a search page to perform a search for the term “sarcophagus.” In some instances, the user may have difficulty spelling the term which they wish to search for and may not obtain accurate search results or may experience inconvenience in searching. In other instances, the user may stop watching the video after finding the content of the video too difficult to understand.
SUMMARY
[0004] Aspects and advantages of embodiments of the disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the example embodiments.
[0005] In one or more example embodiments, a computer-implemented method for a server system includes obtaining a transcription of content from a video, applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content, generating one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities, and providing a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
[0006] In some implementations, applying the machine learning resource to identify the one or more entities includes obtaining training data to train the machine learning resource based on observational data of users conducting searches in response to viewing only the video.
[0007] In some implementations, applying the machine learning resource to identify the one or more entities includes identifying a plurality of candidate entities from the video by associating text from the transcription with a knowledge graph, and ranking the candidate entities to obtain the one or more entities, based on one or more of: a relevance of each of the candidate entities to a topic of the video, a relevance of each of the candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases.
[0008] In some implementations, applying the machine learning resource to identify the one or more entities includes evaluating user interactions with the user interface, and determining at least one adjustment to the machine learning resource based on the evaluation of the user interactions with the user interface.
[0009] In some implementations, the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is displayed on the second portion of the user interface at the first timepoint.
[0010] In some implementations, the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: displaying, on a third portion of the user interface while continuing to play the video, the second entity card in a contracted form, the second entity card in the contracted form referencing the second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, displaying, on the third portion of the user interface while continuing to play the video, the second entity card in a fully expanded form, the second entity card in the fully expanded form including descriptive content relating to the second entity.
[0011] In some implementations, the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the second entity is mentioned in the video while the video is playing, displaying the second entity card on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is displayed on the second portion of the display by replacing the first entity card at a time when the second entity is mentioned in the video.
[0012] In some implementations, the method further includes providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the first entity is mentioned in the video while the video is playing, displaying a notification user interface element on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to the first entity being mentioned in the video while the video is playing and in response to receiving a selection of the notification user interface element, displaying the first entity card on the second portion of the user interface while continuing to play the video.
[0013] In some implementations, the first entity card includes at least one of a textual summary providing information relating to the first entity or an image relating to the first entity.
[0014] In one or more example embodiments, a computer-implemented method for a user computing device includes receiving a video for playback in a user interface, providing the video for display on a first portion of the user interface displayed on a display of the user computing device, and when a first entity is mentioned in the video while the video is playing: providing a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
[0015] In some implementations, the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is provided for display on the second portion of the user interface at the first timepoint.
[0016] In some implementations, the method includes providing, for display on a third portion of the user interface, a contracted second entity card referencing a second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, expanding the contracted second entity card to fully display the second entity card on the third portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity.
[0017] In some implementations, the method includes, when a second entity is mentioned in the video while the video is playing, providing a second entity card for display on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is provided for display on the second portion of the user interface by replacing the first entity card at a time when the second entity is mentioned in the video.
[0018] In some implementations, the method includes providing, for display on the user interface, one or more entity search user interface elements that, when selected, are configured to perform a search relating to the first entity.
[0019] In some implementations, the method includes providing, for display on the user interface, one or more search query user interface elements that, when selected, are configured to perform a search relating to a topic of the video other than the first entity.
[0020] In some implementations, the method includes utilizing a machine learning resource to identify the first entity and generate the first entity card.
[0021] In some implementations, the first entity is an entity among a plurality of entities mentioned in the video that is determined by the machine learning resource as an entity most likely to be searched for by a user viewing the video among the plurality of entities mentioned in the video.
[0022] In some implementations, the method includes, when the first entity is mentioned in the video while the video is playing, providing a notification user interface element for display on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to receiving a selection of the notification user interface element, providing the first entity card for display on the second portion of the user interface while continuing to play the video.
[0023] In some implementations, the first entity card includes a textual summary providing information relating to the first entity and/or an image relating to the first entity.
[0024] In one or more example embodiments, a user computing device includes a display, one or more memories to store instructions, and one or more processors to execute the instructions stored in the one or more memories to: receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing: provide a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
[0025] In one or more example embodiments, a server system includes one or more memories to store instructions, and one or more processors to execute the instructions stored in the one or more memories to: obtain a transcription of content from a video, apply a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content, generate one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities, and provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
[0026] In one or more example embodiments, a computer-readable medium (e.g., a non-transitory computer-readable medium) which stores instructions that are executable by one or more processors of a user computing device and/or a server system is provided. In some implementations, the computer-readable medium stores instructions which may include instructions to cause the one or more processors to perform one or more operations of any of the methods described herein (e.g., operations of the server system and/or operations of the user computing device). The computer-readable medium may store additional instructions to execute other aspects of the server system and user computing device and corresponding methods of operation, as described herein.
[0027] These and other features, aspects, and advantages of various embodiments of the disclosure will become better understood with reference to the following description, drawings, and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the disclosure and, together with the description, serve to explain the related principles.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] Detailed discussion of example embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended drawings, in which:
[0029] FIG. 1 depicts an example system according to one or more example embodiments of the disclosure;
[0030] FIG. 2 depicts example block diagrams of a user computing device and server computing system according to one or more example embodiments of the disclosure;
[0031] FIG. 3 depicts an example system for generating an entity card for a user interface, according to one or more example embodiments of the disclosure;
[0032] FIGS. 4A-4C depict example user interfaces in which one or more entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure;
[0033] FIGS. 5A-5C depict example user interfaces in which entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure;
[0034] FIGS. 6A-6C depict example user interfaces in which a notification user interface element is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure;
[0035] FIGS. 7A and 7B depict example user interfaces in which a timeline is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure; and
[0036] FIGS. 8-10 each illustrate flow diagrams of example, non-limiting computer-implemented methods according to one or more example embodiments.
DETAILED DESCRIPTION
[0037] Reference now will be made to embodiments of the disclosure, one or more examples of which are illustrated in the drawings, wherein like reference characters denote like elements. Each example is provided by way of explanation of the disclosure and is not intended to limit the disclosure. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made to the disclosure without departing from the scope or spirit of the disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the disclosure covers such modifications and variations as come within the scope of the appended claims and their equivalents.
[0038] Terms used herein are used to describe the example embodiments and are not intended to limit and / or restrict the disclosure. The singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In this disclosure, terms such as "including", "having", "comprising", and the like are used to specify features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
[0039] It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, the elements are not limited by these terms.
Instead, these terms are used to distinguish one element from another element. For example, without departing from the scope of the disclosure, a first element may be termed as a second element, and a second element may be termed as a first element.
[0040] It will be understood that when an element is referred to as being “connected” to another element, the expression encompasses an example of a direct connection or direct coupling, as well as a connection or coupling with one or more other elements interposed therebetween.
[0041] The term "and / or" includes a combination of a plurality of related listed items or any item of the plurality of related listed items. For example, the scope of the expression or phrase "A and/or B" includes the item "A", the item "B", and the combination of items "A and B”.
[0042] In addition, the scope of the expression or phrase "at least one of A or B" is intended to include all of the following: (1) at least one of A, (2) at least one of B, and (3) at least one of A and at least one of B. Likewise, the scope of the expression or phrase "at least one of A, B, or C" is intended to include all of the following: (1) at least one of A, (2) at least one of B, (3) at least one of C, (4) at least one of A and at least one of B, (5) at least one of A and at least one of C, (6) at least one of B and at least one of C, and (7) at least one of A, at least one of B, and at least one of C.
[0043] According to example embodiments, as a user watches a video on a user interface of a display of a user computing device and an entity (e.g., a concept, a term, a topic, and the like) is mentioned in the video, the user interface is provided with an entity card which includes information about the entity that may be helpful to the user’s understanding of the content of the video. For example, if a user is watching a video about the Egyptian pyramids, an entity card may be provided to the user interface to provide further information about the term “sarcophagus,” such as a definition of the term, a photo of a sarcophagus, and the like. The entity card is provided to the user interface while the video continues to play so that the user does not need to navigate away from the application or web page which plays the video in order to learn more about the entity (e.g., the term “sarcophagus”). Therefore, information about a potentially difficult concept, or a concept which the user may want to know more about, may be presented to the user to help the user gain a quick understanding of the concept without leaving the application or web page which plays the video, and the user need not perform a separate search regarding the concept.
[0044] In some implementations, the entirety of the entity card may be visible to the user on the user interface, or only a portion of the entity card may be visible on the user interface, in which case the user is able to select the visible portion of the entity card to expand the entity card and also see the hidden portion of the entity card for further information regarding the entity.
[0045] In some implementations, the entity card may be displayed on the user interface at a same time that the entity is mentioned in the video. That is, the display of the entity card is synchronized with a time that the entity is mentioned in the video. The entity card may be displayed on the user interface every time that the entity is mentioned in the video, only the first time the entity is mentioned in the video, or selectively displayed when the entity is mentioned a plurality of times in the video.
[0046] In some implementations, a user interface element separate from the entity card may be provided on the user interface to allow a user to perform a search with respect to the entity. For example, if the user wishes to obtain further information about the entity beyond that which is provided in the entity card, the user can select the user interface element, which causes a search to be performed with respect to the entity, and a search results page may be displayed on the display of the user computing device.
[0047] In some implementations, one or more user interface elements separate from the entity card may be provided on the user interface that correspond to respective suggested search queries. The one or more user interface elements allow a user to perform a search for information other than the entity itself, for example with respect to other entities or other topics covered in the video. For example, if the user wishes to obtain further information about other entities or other topics covered in the video, the user can select the corresponding user interface element, which causes a search to be performed with respect to the corresponding entity or topic, and a search results page may be displayed on the display of the user computing device.
[0048] In some implementations, there may be a plurality of concepts in a video which could potentially be difficult for the user to understand (or potentially be of interest to the user). According to example embodiments, entity cards for each of the plurality of concepts in the video may be displayed on the user interface as a user watches the video and the plurality of concepts are mentioned. For example, the user interface is provided with a separate entity card for each concept which includes information about the concept that may be helpful to the user’s overall understanding of the content of the video. The entity cards are provided to the user interface while the video continues to play so that the user does not need to navigate away from the application or web page which plays the video. Therefore, information about potentially difficult concepts or concepts which the user may want to know more about may be presented to the user to help the user gain a quick understanding of each concept without leaving the application or web page which plays the video, and the user need not perform a separate search regarding each of the concepts.
[0049] In some implementations, all of the entity cards associated with a video may be visible on the user interface to the user at a same time while the video is playing, or only some of the entity cards associated with the video may be visible on the user interface to the user at a same time while the video is playing. For example, one or more of the entity cards may be fully expanded so that a user can view the entire contents of the entity card, while some or all of the remaining entity cards may be displayed on the user interface in a contracted or hidden form. For example, in the contracted or hidden form, the user may view a portion of the entity card and the portion of the entity card may include some identifying information (e.g., an identification of the corresponding entity) so that the user is able to comprehend the relevance of the entity card. For example, the user is able to select the portion of the entity card to expand the entity card to also view the hidden portion of the entity card for further information regarding the entity.
[0050] In some implementations, entity cards associated with a video may be visible on the user interface to the user as the video progresses, and the user is not able to view an entity card until the corresponding entity is mentioned in the video. For example, a first entity card about a first entity may be displayed on the user interface at a time during the video when the first entity is mentioned in the video (e.g., at a first timepoint). The first entity card may be displayed for a predetermined amount of time while the video continues to play (e.g., for a time sufficient for an average user to read or view the content contained in the first entity card) or until a next entity is mentioned in the video at which point another entity card is provided on the user interface. For example, a second entity card about a second entity may be displayed on the user interface at a time during the video when the second entity is mentioned in the video (e.g., at a second timepoint).
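As a purely illustrative, non-limiting sketch of this timing behavior (all card identifiers, timepoints, and the display duration below are hypothetical assumptions, not part of any embodiment), client-side logic might map the current playback time to the entity card to be displayed as follows:

    from bisect import bisect_right

    # Hypothetical schedule of (timepoint_seconds, card_id) pairs, sorted by timepoint.
    CARD_SCHEDULE = [(12.0, "card_sarcophagus"), (47.5, "card_pharaoh")]
    DISPLAY_SECONDS = 20.0  # predetermined display duration per card

    def card_for_time(playback_seconds):
        """Return the card to show at the given playback time, or None."""
        times = [t for t, _ in CARD_SCHEDULE]
        i = bisect_right(times, playback_seconds) - 1
        if i < 0:
            return None  # no entity has been mentioned yet
        timepoint, card_id = CARD_SCHEDULE[i]
        next_timepoint = times[i + 1] if i + 1 < len(times) else float("inf")
        # Show the card for a fixed duration, or until the next entity is mentioned.
        if playback_seconds < min(timepoint + DISPLAY_SECONDS, next_timepoint):
            return card_id
        return None

A selection of this kind would be re-evaluated on each playback-position update so that entity cards appear and disappear in synchronization with the video.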
[0051] In some implementations, the second entity card may be displayed on the user interface by replacing the first entity card (i.e., by occupying some or all of the space on the user interface which was previously occupied by the first entity card).
[0052] In some implementations, the second entity card may be in a contracted or hidden form, and when the second entity is mentioned in the video at the second timepoint, the second entity card may be expanded to fully display the second entity card on the user interface. The first entity card may be changed to be in the contracted or hidden form at the time the second entity card is expanded if it is not already in the contracted or hidden form prior to the second entity card being displayed. In some implementations, the first and second entity cards may each be fully displayed on the user interface.
[0053] In some implementations, when an entity is mentioned in the video while the video is playing, a notification user interface element is displayed on the user interface while continuing to play the video. For example, the notification user interface element indicates that additional information relating to the entity is available. In response to receiving a selection of the notification user interface element, an entity card is displayed on the user interface while continuing to play the video. The notification user interface element may include an image (e.g., a thumbnail image) of the entity to further make the user aware that the notification user interface element is associated with the entity and an entity card about the entity is available.
[0054] In some implementations, while the video is playing a timeline may be displayed on the user interface which indicates one or more timepoints along the timeline at which information about one or more entities is available via corresponding entity cards. As the video approaches or crosses each timepoint, a corresponding entity card is displayed on the user interface to provide information about the entity. For example, a user interface element may be provided which, when selected, allows the entity cards to cycle through the user interface as the video progresses along the timeline.
[0055] According to example embodiments disclosed herein, one or more entity cards are provided to be displayed on the user interface of the display of a user computing device while a video is played, for one or more entities which are mentioned in the video. For example, the entities for which entity cards are provided may be identified from the video by using a machine learning resource. For example, an entity among a plurality of candidate entities mentioned in the video may be identified by the machine learning resource as an entity for which an entity card should be generated when the machine learning resource determines or predicts the entity is likely to be searched for by a user viewing the video (e.g., having a confidence value greater than a threshold value, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like). For example, the machine learning resource may select a candidate entity as an entity for which an entity card should be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases.
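Purely by way of a non-limiting illustration, a ranking of this general shape could combine the four recited signals roughly as in the following Python sketch; the weights, field names, and top-k cutoff are hypothetical assumptions rather than part of any embodiment:

    from dataclasses import dataclass

    @dataclass
    class CandidateEntity:
        name: str
        topic_relevance: float    # relevance to the topic of the video, in [0, 1]
        peer_relevance: float     # relevance to the other candidate entities, in [0, 1]
        mention_count: int        # number of mentions of the entity in this video
        corpus_video_count: int   # number of corpus videos in which the entity appears

    def rank_candidates(candidates, corpus_size, top_k=3):
        # Weighted combination of the four recited signals; weights are illustrative.
        def score(c):
            # Entities appearing in many corpus videos are overly broad, so penalize them.
            breadth_penalty = c.corpus_video_count / corpus_size
            return (2.0 * c.topic_relevance
                    + 1.0 * c.peer_relevance
                    + 0.5 * c.mention_count
                    - 1.5 * breadth_penalty)
        return sorted(candidates, key=score, reverse=True)[:top_k]

In a sketch of this kind, the highest ranked candidates would become the entities for which entity cards are generated.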
[0056] In accordance with example embodiments disclosed herein, a server system includes one or more servers which provide the video and the entity cards for display on the user interface of the display of the user computing device.
[0057] According to examples disclosed herein, entity cards are generated based on entities which are identified from the video. For example, the video may be analyzed using speech recognition programs to perform automatic speech recognition and obtain a transcription (a text transcript) of the speech. A next operation may include associating some or all of the text from the transcription with knowledge graph entities to obtain a collection of knowledge graph entities associated with the video. Training data for a machine learning resource may be obtained by identifying those knowledge graph entities which also appear in search queries from real users viewing the video. Additional operations for identifying entities from the video which may be important to the understanding of the video content and/or for identifying entities from the video which are likely to be searched for by a user include determining how relevant an entity is to other entities in the video, determining how broad the entity is using a tf-idf (term frequency-inverse document frequency) signal across a corpus of videos, and determining how related the entity is to the topic of the video (e.g., using a query-based salient term signal). The machine learning resource may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video, a lower weight may be assigned to an entity which is overly broad and appears frequently in a corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.). The machine learning resource may be applied to evaluate candidate entities from among candidate entities identified in the video and to rank the candidate entities. For example, one or more of the highest ranked candidate entities may be selected as entities for which entity cards are to be generated.
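As an assumed, greatly simplified illustration of the tf-idf breadth signal mentioned above (not necessarily the actual signal used), the computation might take the following form, where an entity that is frequent in the current video but rare across the corpus receives a high score:

    import math

    def tf_idf(entity_mentions_in_video, total_mentions_in_video,
               videos_containing_entity, total_videos):
        # Term frequency: how prominent the entity is within this video.
        tf = entity_mentions_in_video / max(total_mentions_in_video, 1)
        # Inverse document frequency: overly broad entities appear in many videos.
        idf = math.log(total_videos / (1 + videos_containing_entity))
        return tf * idf

    # Hypothetical example: 5 of 200 entity mentions in this video,
    # with the entity appearing in 300 of 1,000,000 corpus videos.
    print(tf_idf(5, 200, 300, 1_000_000))  # relatively high, i.e., not overly broad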
[0058] To generate the entity card, information regarding the entity may be obtained from various sources to populate the entity card with text and/or image(s) which provide more information about the entity. For example, information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like.
[0059] In some implementations, the entity card may be limited to less than a predetermined length and/or size (e.g., less than 100 words). The entity card may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information). For example, an image may be limited to a thumbnail image size, a specified resolution, etc.
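Purely for illustration, the constraints described in this paragraph could be captured in a simple record type such as the following Python sketch; the field names and the way the 100-word limit is enforced are hypothetical assumptions:

    from dataclasses import dataclass
    from typing import Optional

    MAX_SUMMARY_WORDS = 100  # predetermined length limit for the textual summary

    @dataclass
    class EntityCard:
        title: str                           # e.g., "Barack Obama"
        subtitle: str                        # e.g., "former President of the United States"
        summary: str                         # textual summary, limited in length
        attribution: str                     # attribution to the source of the information
        thumbnail_url: Optional[str] = None  # image limited to a thumbnail size

        def __post_init__(self):
            # Truncate the summary to the predetermined limit rather than reject the card.
            words = self.summary.split()
            if len(words) > MAX_SUMMARY_WORDS:
                self.summary = " ".join(words[:MAX_SUMMARY_WORDS]) + "..."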
[0060] According to examples disclosed herein, a next operation includes rendering a user interface which is to be displayed on a display of a user computing device, the user interface including the video and the one or more entity cards associated with the video which may appear on the user interface at various points during the video or may be displayed (fully or partially) throughout the video.
[0061] For example, the machine learning resource may be updated or adjusted based on an evaluation of user interactions with the user interface. For example, if users generally do not interact with a particular entity card during the video, there may be an implication that the entity is not a term or topic users are interested in or do not understand with respect to the content of the video. Accordingly, the machine learning resource may be adjusted to reflect the user interactions (or lack thereof) with the user interface. Likewise, if users generally do interact with a particular entity card during the video, there may be an implication that the entity is a term or topic users are interested in or do not understand with respect to the content of the video. Accordingly, the machine learning resource may be adjusted to reflect the user interactions with the user interface.
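A minimal, non-limiting sketch of such an interaction-driven adjustment, assuming a simple per-entity weight table updated from impression and interaction counts (the names, rates, and the update rule itself are all hypothetical):

    def adjust_entity_weight(weights, entity, impressions, interactions,
                             learning_rate=0.1, baseline_rate=0.05):
        # Nudge the entity's ranking weight toward the observed interaction rate:
        # rarely used cards drift down, frequently used cards drift up.
        if impressions == 0:
            return
        observed_rate = interactions / impressions
        weights[entity] = weights.get(entity, 1.0) + learning_rate * (
            observed_rate - baseline_rate)

    weights = {}
    adjust_entity_weight(weights, "sarcophagus", impressions=1000, interactions=20)
    print(weights)  # weight drifts slightly below 1.0: observed rate is under baseline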
[0062] The systems and methods of the disclosure provide a number of technical effects and benefits. In one example, the disclosure provides a way for users to easily understand or learn more about an entity (e.g., a term, concept, topic, etc.) associated with a video, and similarly, to easily identify content that the user may wish to consume in further detail. By providing such a user interface, the user is able to more quickly comprehend the entity without the need for performing a separate search and without the need for stopping the video, and the user experience is improved. The user is also able to ascertain whether they are interested in learning more about the entity by performing a search for the entity after having been provided a brief summary (e.g., a snippet) and/or imagery regarding the entity via an entity card. In such fashion, the user is able to avoid performing a search for an entity, loading search results, and reading various content items that may or may not be relevant to the entity, which is more computationally expensive than simply reading information that is already presented, thereby conserving time, processing, memory, and network resources of the computing system (whether server device, client device, or both). Likewise, user convenience and experience are improved because the user is not discouraged by the complexity of the content of the video and the user will be more likely to watch the video in its entirety. User convenience and experience are also improved because the user is not discouraged by erroneous search results due to spelling errors, as a machine learning resource predicts entities that the user is likely to search for, and the information about the entity is automatically displayed during the video. Therefore, the user can avoid loading/viewing content from a search results page, which again conserves processing, memory, and network resources of the computing system. User convenience and experience are also improved because the user avoids the inconvenience of switching between an application or web page which plays the video and a search results page, and instead the video can be continuously played without disruption while the entity cards are accessible or presented to the user during the video.
[0063] In some cases, systems of the type disclosed herein may learn through one or more various machine learning techniques (e.g., by training a neural network or other machine-learned model) a balance of the types of content items, perspectives, sources, and/or other attributes that are preferred, such as based on different types of content, different user populations, different contexts such as timing and location, etc. For example, data descriptive of actions taken by one or more users (e.g., “clicks,” “likes,” or similar) with respect to the user interface in various contextual scenarios can be stored and used as training data to train (e.g., via supervised training techniques) one or more machine-learned models to, after training, generate predictions which assist in providing content (e.g., entity cards) in the user interface which meets the one or more users’ respective preferences. In such a way, system performance is improved with reduced manual intervention, providing fewer user searches and further conserving processing, memory, and network resources of the computing system (whether server device, client device, or both).
[0064] Referring now to the drawings, FIG. 1 is an example system according to one or more example embodiments of the disclosure. FIG. 1 illustrates an example of a system which includes a user computing device 100, a server computing system 300, video data store 370, entity data store 380, entity card data store 390, and external content 400, each of which may be in communication with one another over a network 200.
[0065] For example, the user computing device 100 can include any of a personal computer, a smartphone, a laptop, a tablet computer, and the like.
[0066] For example, the network 200 may include any type of communications network such as a local area network (LAN), wireless local area network (WLAN), wide area network (WAN), personal area network (PAN), virtual private network (VPN), or the like. For example, wireless communication between elements of the example embodiments may be performed via a wireless LAN, Wi-Fi, Bluetooth, ZigBee, Wi-Fi direct (WFD), ultra wideband (UWB), infrared data association (IrDA), Bluetooth low energy (BLE), near field communication (NFC), a radio frequency (RF) signal, and the like. For example, wired communication between elements of the example embodiments may be performed via a pair cable, a coaxial cable, an optical fiber cable, an Ethernet cable, and the like.
[0067] For example, the server computing system 300 can include a server, or a combination of servers (e.g., a web server, application server, etc.) in communication with one another, for example in a distributed fashion.
[0068] In example embodiments, the server computing system 300 may obtain data or information from one or more of the video data store 370, entity data store 380, entity card data store 390, and external content 400. The video data store 370, entity data store 380, and entity card data store 390 may be integrally provided with the server computing system 300 (e.g., as part of the memory 320 of the server computing system 300) or may be separately
(e.g., remotely) provided. Further, video data store 370, entity data store 380, and entity card
data store 390 can be combined as a single data store (database), or may be a plurality of respective data stores. Data stored in one data store (e.g., the entity data store 380) may overlap with some data stored in another data store (e.g., the entity card data store 390). In some implementations, one data store (e.g., the entity card data store 390) may reference data that is stored in another data store (e.g., the entity data store 380).
[0069] Video data store 370 can store videos and/or information about videos. For example, video data store 370 may store a collection of videos. The videos may be stored, grouped, or classified in any fashion. For example, videos may be stored according to a genre or category, according to a title, according to a date (e.g., of creation or last modification), etc. Information about the videos may include location information (e.g., a uniform resource locator (URL)) regarding where a video may be stored or accessed. Information about the videos may include transcription information of the videos. For example, a computing system may be configured to perform automatic speech recognition (e.g., using one or more speech recognition programs) with respect to a video stored in the video data store 370 or stored elsewhere, to obtain a transcription (i.e., a textual transcript) of the video, and the textual transcript of the video may be stored in the video data store 370.
[0070] Entity data store 380 can store information about entities which are identified from textual transcripts of videos. The identification of entities within a video will be explained in more detail below. Entity cards may be created or generated for a video with respect to an entity that is identified from the video and which may be stored in the entity data store 380. For example, the entities for which entity cards are provided may be identified from the video by using a machine learning resource. For example, an entity among a plurality of candidate entities mentioned in the video may be identified by the machine learning resource as an entity for which an entity card should be generated when the machine learning resource determines or predicts the entity is likely to be searched for by a user viewing the video (e.g., having a confidence value greater than a threshold value, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like). For example, the machine learning resource may select a candidate entity as an entity for which an entity card should be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases (e.g., video data store 370). For example, entities which may be identified from a video entitled “The myth of Icarus and Daedalus” may include “Greek mythology,” “Crete,” “Icarus,” and “Daedalus.” These entities may be stored in the entity data store 380, and may be associated with the video entitled “The myth of Icarus and Daedalus” which may be stored or referenced in the video data store 370.
[0071] Entity card data store 390 can store information about entity cards which are created or generated from entities which are identified from videos. The creation or generation of entity cards will be explained in more detail below. For example, the entity cards may be created or generated by obtaining information regarding the entity from various sources (e.g., from external content 400) to populate the entity card with text and/or image(s) which provide more information about the entity. For example, information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like. For example, entity cards stored in the entity card data store 390 may be limited to less than a predetermined length and/or size (e.g., less than 100 words). The entity cards stored in the entity card data store 390 may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information). For example, an image that forms part or all of the entity card may be limited to a thumbnail image size, a specified resolution, etc.
[0072] External content 400 can be any form of external content including news articles, webpages, video files, audio files, image files, written descriptions, ratings, game content, social media content, photographs, commercial offers, transportation methods, weather conditions, or other suitable external content. The user computing device 100 and server computing system 300 can access external content 400 over network 200. External content 400 can be searched by user computing device 100 and server computing system 300 according to known searching methods and search results can be ranked according to relevance, popularity, or other suitable attributes, including location-specific filtering or promotion.
[0073] With reference to FIG. 1, in an example embodiment a user of the user computing device 100 may transmit a request to view a video which is to be provided via the server computing system 300. For example, the video may be available through a video-sharing web site, an online video hosting service, a streaming video service, or other video platform. For example, the user may view the video on a display 160 of the user computing device via a video application 130 or a web browser 140. In response to receiving the request, the server computing system 300 is configured to provide a user interface, to be displayed on the display of the user computing device 100, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity. FIGS. 4A through 7B, which will be discussed in more detail below, provide example user interfaces which illustrate the display of the video on a first portion of the user interface and the display of the first entity card on a second portion of the user interface. In some implementations, the server computing system 300 may store or retrieve the user interface including the video and the first entity card and provide the user interface in response to a request from the user computing device 100 to view the video. In some implementations, the server computing system 300 may store or retrieve the video and first entity card related to the video, dynamically generate the user interface including the video and the first entity card in response to a request from the user computing device 100 to view the video, and transmit the user interface to the user computing device 100. In some implementations, the server computing system 300 may store or retrieve the video, may store or retrieve one or more entities which have been identified from the video, dynamically generate one or more entity cards from the identified one or more entities, and dynamically generate the user interface including the video and one or more entity cards (e.g., including the first entity card) in response to a request from the user computing device 100 to view the video. In some implementations, the server computing system 300 may store or retrieve the video, dynamically identify one or more entities from the video (e.g., after transcribing the video or obtaining a transcript of the video), dynamically generate one or more entity cards from the identified one or more entities, and dynamically generate the user interface including the video and one or more entity cards (e.g., including the first entity card) in response to a request from the user computing device 100 to view the video.
[0074] Referring now to FIG. 2, example block diagrams of a user computing device and server computing system according to one or more example embodiments of the disclosure will now be described.
[0075] The user computing device 100 may include one or more processors 110, one or more memory devices 120, a video application 130, a web browser 140, an input device 150, and a display 160. The server computing system 300 may include one or more processors 310, one or more memory devices 320, and a user interface generator 330.
[0076] For example, the one or more processors 110, 310 can be any suitable processing device that can be included in a user computing device 100 or server computing system 300. For example, such a processor 110, 310 may include one or more of a processor, processor cores, a controller and an arithmetic logic unit, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an image processor, a microcomputer, a field programmable array, a programmable logic unit, an application-specific integrated circuit (ASIC), a microprocessor, a microcontroller, etc., and combinations thereof, including any other device capable of responding to and executing instructions in a defined manner. The one or more processors 110, 310 can be a single processor or a plurality of processors that are operatively connected, for example in parallel.
[0077] The memory 120, 320 can include one or more non-transitory computer-readable storage mediums, such as a Read Only Memory (ROM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), and flash memory, a USB drive, a volatile memory device such as a Random Access Memory (RAM), a hard disk, floppy disks, a Blu-ray disk, or optical media such as CD ROM discs and DVDs, and combinations thereof. However, examples of the memory 120, 320 are not limited to the above description, and the memory 120, 320 may be realized by other various devices and structures as would be understood by those skilled in the art.
[0078] For example, memory 120 can store instructions that, when executed, cause the one or more processors 110 to receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing, provide a first entity card for display on a second portion of the user interface while continuing to play the video, as described according to examples of the disclosure. For example, memory 320 can store instructions that, when executed, cause the one or more processors 310 to provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, as described according to examples of the disclosure.
[0079] Memory 120 can also include data 122 and instructions 124 that can be retrieved, manipulated, created, or stored by the one or more processor(s) 110. In some example embodiments, such data can be accessed and used as input to receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing, provide a first entity card for display on a second portion of the user interface while continuing to play the video, as described according to examples of the disclosure. Memory 320 can also include data 322 and instructions 324 that can be retrieved, manipulated, created, or stored by the one or more processor(s) 310. In some example embodiments, such data can be accessed and used as input to provide a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, as described according to examples of the disclosure.
[0080] In FIG. 2, the user computing device 100 includes a video application 130, which may also be referred to as a video player or a video streaming app. The video application 130 enables a user of the user computing device 100 to view a video which is provided through the application and displayed on a user interface of the display 160. For example, the video may be provided to the video application 130 via the server computing system 300.
[0081] In FIG. 2, the user computing device 100 includes a web browser 140, which may also be referred to as an internet browser or simply as a browser. The web browser 140 may be any browser which is used to access a website or web page (e.g., via the world wide web). A user of the user computing device 100 may provide an input (e.g., a URL) to the web browser 140 to obtain content (e.g., a video) and display the content on the display 160 of the user computing device. For example, the rendering engine of the web browser 140 may display content on a user interface (e.g., a graphical user interface). For example, a video may be provided or obtained using the web browser 140 through a video-sharing web site, an online video hosting service, a streaming video service, or other video platform. For example, the video may be provided to the web browser 140 via the server computing system 300.
[0082] In FIG. 2, the user computing device 100 includes an input device 150 configured to receive an input from a user and may include, for example, one or more of a keyboard (e.g., a physical keyboard, virtual keyboard, etc.), a mouse, a joystick, a button, a switch, an electronic pen or stylus, a gesture recognition sensor (e.g., to recognize gestures of a user including movements of a body part), an input sound device or voice recognition sensor (e.g., a microphone to receive a voice command), a track ball, a remote controller, a portable (e.g., a cellular or smart) phone, a tablet PC, a pedal or footswitch, a virtual-reality device, and so on. The input device 150 may further include a haptic device to provide haptic feedback to a user. The input device 150 may also be embodied by a touch-sensitive display having a touchscreen capability, for example. The input device 150 may be used by a user of the user computing device 100 to provide an input to request to view a video, to provide an input selecting a user interface element displayed on the user interface, to input a search query, etc.
[0083] In FIG. 2, the user computing device 100 includes a display 160 which displays information viewable by the user, for example on a user interface (e.g., a graphical user interface). For example, the display 160 may be a non-touch sensitive display or a touch-sensitive display. The display 160 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, active matrix organic light emitting diode (AMOLED), flexible display, 3D display, a plasma display panel (PDP), a cathode ray tube (CRT) display, and the like, for example. However, the disclosure is not limited to these example displays and may include other types of displays.
[0084] In accordance with example embodiments described herein, the server computing system 300 can include one or more processor(s) 310 and memory 320 which were previously discussed above. The server computing system 300 may also include a user interface generator 330. For example, the user interface generator 330 may include a video provider 332, entity card provider 334, and a search query provider 336. The user interface generator 330 may generate a user interface for display on the display 160 of the user computing device 100. The user interface may include various portions for displaying various content. For example, the user interface may display a video on a first portion of the user interface, an entity card on a second portion of the user interface, and various user interface elements at other portions of the user interface (e.g., a suggested search query on a third portion of the user interface).
[0085] The video provider 332 may include information or content (e.g., video content) which may be used to render the user interface so that the video can be displayed and played back at the user computing device 100. In some implementations, the video provider 332 may be configured to retrieve a video (e.g., in response to a request from the user computing device 100 for the video) from the video data store 370.
[0086] The entity card provider 334 may include information or content (e.g., an entity card including a textual summary and/or an image) which may be used to render the user interface so that the entity card can be displayed together with the video on the user interface at the user computing device 100. In some implementations, the entity card provider 334 may be configured to retrieve an entity card associated with the video from the entity card data store 390.
[0087] The search query provider 336 may include information or content (e.g., a suggested search query) which may be used to render the user interface so that the search query can be displayed together with at least one of the video or the entity card on the user interface at the user computing device 100. In some implementations, the search query provider 336 may generate one or more suggested search queries based on entities identified with respect to the video and/or based on the content of the entity card(s) associated with the video. For example, the search query provider 336 may generate suggested search queries based on previous user searches in response to watching the video, based on previous user searches regarding a topic of the video, and the like.
[0088] Additional aspects of the user computing device 100 and server computing system 300 will be discussed in view of the following illustrations shown in FIGS. 3 through 7B and the flow diagrams of FIGS. 8-10.
[0089] FIG. 3 depicts an example system 3000 for generating an entity card for a user interface, according to one or more example embodiments of the disclosure. The example system 3000 includes video 3010, entity generator 3020, external content 400, entity card generator 3030, user interface entity card renderer 3040, and user interface generator 3050. Each of the video 3010, entity generator 3020, entity card generator 3030, user interface entity card renderer 3040, and user interface generator 3050 may be part of the server computing system 300. In some implementations, the video 3010, entity generator 3020, entity card generator 3030, user interface entity card renderer 3040, and user interface generator 3050 may be part of the server computing system 300 (e.g., as part of a single server or distributed between a plurality of servers and/or data stores).
[0090] Referring to FIG. 3, entity cards may be generated based on entities which are identified from a video. For example, a video 3010 may be provided to entity generator 3020 for one or more entities to be identified from the video 3010. Video 3010 may be obtained from or stored in video data store 370, for example. Entity generator 3020 may include video transcriber 3022, signal annotator 3024, and machine learning resource 3026, for example. In some implementations, the video provider 332 may provide the video 3010 to the entity generator 3020, or a third-party application or service provider may provide the video 3010.
[0091] Video transcriber 3022 is configured to transcribe the video 3010. For example, video transcriber 3022 may include a speech recognition program which analyzes the video 3010 and performs automatic speech recognition to obtain a transcription (e.g., a text transcript) of the speech from the video 3010. In some implementations, the transcription of the video 3010 may be obtained from video data store 370 or from a third-party application or service provider which generates the transcription of the video, for example by automated speech recognition, and entity generator 3020 may not include the video transcriber 3022.
[0092] Signal annotator 3024 is configured to associate some or all of the text from the transcription obtained by video transcriber 3022 or video data store 370 with knowledge graph entities to obtain a collection of knowledge graph entities associated with the video 3010. A knowledge graph may generally refer to interlinked and/or interrelated descriptions between objects, events, situations, or concepts, for example in the form of a graph-structured data model.
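As an assumed, greatly simplified sketch of this association step (actual knowledge graph linking involves considerably more machinery, such as disambiguation of surface forms; the mini knowledge graph and identifiers below are hypothetical):

    # Hypothetical mini knowledge graph mapping surface forms to entity identifiers.
    KNOWLEDGE_GRAPH = {
        "icarus": "/kg/icarus",
        "daedalus": "/kg/daedalus",
        "crete": "/kg/crete",
        "greek mythology": "/kg/greek_mythology",
    }

    def link_entities(transcript):
        # Collect knowledge graph entities whose surface forms appear in the transcript.
        text = transcript.lower()
        return {entity_id for surface, entity_id in KNOWLEDGE_GRAPH.items()
                if surface in text}

    print(link_entities("Daedalus built wings so he and Icarus could flee Crete."))
    # {'/kg/daedalus', '/kg/icarus', '/kg/crete'} (set ordering may vary)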
[0093] Machine learning resource 3026 is configured to predict which entities from the video 3010 (e.g., which knowledge graph entities from the video 3010) are most likely to be searched for by a user viewing the video 3010 among the plurality of entities mentioned in or identified from the video (e.g., having a confidence value greater than a threshold value that a user might perform a search for the entity, a probability of being searched for greater than a threshold value, being determined as most likely to be searched for by a user viewing the video compared to other entities mentioned in the video, and the like). Training data for the machine learning resource 3026 may be obtained by identifying or matching those knowledge graph entities which also appear in search queries from real users viewing the video 3010.
[0094] The machine learning resource 3026 may be configured to identify entities from the video 3010 for which an entity card is to be generated by determining how relevant an entity is to other entities in the video 3010, determining how broad the entity is using a tf-idf (term frequency-inverse document frequency) signal across a corpus of videos (e.g., stored in the video data store 370), and determining how related the entity is to the topic of the video (e.g., using a query-based salient term signal). For example, the machine learning resource 3026 may select a candidate entity as an entity for which an entity card is to be generated based on one or more of: a relevance of each of the plurality of candidate entities to a topic of the video, a relevance of each of the plurality of candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases (e.g., video data store 370), etc.
[0095] The machine learning resource 3026 may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video 3010, a lower weight may be assigned to an entity which is overly broad and appears frequently in the corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.). The machine learning resource 3026 may be configured to evaluate candidate entities from among candidate entities identified in the video 3010 and to rank the candidate entities. For example, the entity generator 3020 may be configured to select one or more of the highest ranked candidate entities identified by the machine learning resource 3026 as entities for which entity cards are to be generated. For example, an entity among a plurality of candidate entities mentioned in or identified from the video 3010 may be identified by the machine learning resource 3026 as an entity for which an entity card should be generated when the machine learning resource 3026 determines or predicts the entity is likely to be searched for by a user viewing the video 3010.
[0096] For example, the machine learning resource 3026 may be updated or adjusted based on an evaluation of user interactions with a user interface provided to the user computing device 100. For example, if users generally do not interact with a particular entity card while watching the video 3010, there may be an implication that the entity is not a term or topic users are interested in or do not understand with respect to the content of the video 3010. Accordingly, the machine learning resource 3026 may be adjusted to reflect the user interactions (or lack thereof) with the user interface. Likewise, if users generally do interact with a particular entity card while watching the video 3010, there may be an implication that the entity is a term or topic users are interested in or do not understand with respect to the content of the video 3010. Accordingly, the machine learning resource 3026 may be adjusted to reflect the user interactions with the user interface. For example, the machine learning resource 3026 may re-generate entities associated with video 3010 according to a preset schedule (e.g., every two weeks) which may result in different entities being identified compared to previous entities identified by the entity generator 3020. Accordingly, different entity cards may be displayed in association with the video 3010 after the machine learning resource 3026 is updated or adjusted. The machine learning resource 3026 may also regenerate entities associated with video 3010 according to a user input request to the entity generator 3020.
[0097] The entity generator 3020 may output, to the entity card generator 3030, one or more entities which the machine learning resource 3026 determines or predicts are likely to be searched for by a user viewing the video. In response to receiving the one or more entities from the entity generator 3020, the entity card generator 3030 may be configured to generate an entity card for each of the one or more entities. For example, the entity card generator 3030 may be configured to create or generate entity cards by obtaining information regarding the entity from various sources (e.g., from external content 400), based on the identity of the entity and associated metadata of the entity, to populate the entity card with text and/or image(s) which provide more information about the entity. For example, information regarding the entity may be obtained from one or more websites, an electronic service which provides summaries for topics, and the like. For example, entity card generator 3030 may store the generated entity cards in the entity card data store 390. For example, the entity cards may be limited to less than a predetermined length and/or size (e.g., less than 100 words). For example, the entity cards may include information including one or more of a title (e.g., a title which identifies the entity, such as the title of “Barack Obama”), a subtitle (e.g., “former President of the United States” with respect to the prior entity and title example of Barack Obama), and attribution information (to provide attribution to a source of the information). For example, an image that forms part or all of the entity card may be limited to a thumbnail image size, a specified resolution, etc. In some implementations, the entity card generator 3030 may correspond to entity card provider 334.
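One possible data shape for a generated entity card, mirroring the title/subtitle/summary/attribution fields and the 100-word cap described above; the helper and field names are hypothetical, not the structure of entity card generator 3030.

```python
from dataclasses import dataclass
from typing import Optional

MAX_SUMMARY_WORDS = 100  # the predetermined length limit mentioned above

@dataclass
class EntityCard:
    title: str                       # e.g., "Barack Obama"
    subtitle: str                    # e.g., "former President of the United States"
    summary: str                     # descriptive text, capped at MAX_SUMMARY_WORDS
    attribution: str                 # source of the descriptive content
    thumbnail_url: Optional[str] = None  # limited to thumbnail size/resolution

def build_entity_card(title: str, subtitle: str, summary: str, source: str,
                      thumbnail_url: Optional[str] = None) -> EntityCard:
    """Assemble a card, truncating the summary to the assumed word limit."""
    words = summary.split()
    if len(words) > MAX_SUMMARY_WORDS:
        summary = " ".join(words[:MAX_SUMMARY_WORDS]) + "…"
    return EntityCard(title, subtitle, summary, source, thumbnail_url)
```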
[0098] User interface entity card renderer 3040 may be configured to render an entity card to be presented on at least a portion of the user interface that is to be provided for display on the display 160 of the user computing device 100.
[0099] User interface generator 3050, which may correspond to user interface generator 330, may be configured to combine the video 3010 and the rendered entity card to generate the user interface that is to be provided for display on the display 160 of the user computing device 100. In some implementations, rendering of the entity card, or rendering of the user interface which includes at least the video and the entity card, may be performed at the user computing device 100. In some implementations, rendering of the entity card, or rendering of the user interface which includes at least the video and the entity card, may be performed at the server computing system 300.
[00100] FIGS. 4A-4C depict example user interfaces in which one or more entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure. Referring to FIG. 4A, an example user interface 4000 as displayed on a display 160 of user computing device 100 is shown. The user interface 4000 includes a section in which video 4010 is being played on a first portion 4012 of the user interface 4000. The user interface 4000 also includes a section entitled “In this video” 4020 displayed on a second portion 4022 of the user interface 4000 which summarizes content of the video 4010 at different time points during the video 4010. The user interface 4000 also includes a section entitled “Related topics” 4030 which includes an entity card 4040 displayed on a third portion 4032 of the user interface 4000. In this example, an entity which has been identified from the video 4010 is “King Tutankhamuh,” and the entity card 4040 includes descriptive content relating to King Tutankhamuh including an image of the pharaoh and a textual summary regarding the pharaoh. For example, the entity may have been identified by applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video 4010, based on a transcription of content from the video 4010.
[0101] In some implementations, a user of the user computing device 100 may scroll down the user interface 4000 to view the content shown in the entity card 4040 in the third portion 4032. Referring to FIG. 4B, an example user interface 4000’ as displayed on a display 160 of user computing device 100 is shown. For example, as shown in FIG. 4B, in response to a user scrolling down to view the entity card 4040, the video 4010 may be maintained (anchored) at an upper portion of the user interface 4000’ on a first portion 4012’ of the user interface 4000’. The user interface 4000’ also includes the section entitled “Related topics” which includes the entity card 4040 displayed on a second portion 4022’ of the user interface 4000’. In this example, the entity card 4040 includes a title 4042 (King Tutankhamuh) of the entity card 4040, a subtitle 4044 (Pharaoh), descriptive content 4046 (a textual summary and thumbnail image), and attribution 4048 which cites to a source of the descriptive content 4046.
[0102] For example, in other portions of the user interface 4000’ additional user interface elements and entity cards may be provided. For example, a suggested search query 4060 may be provided on the user interface 4000’. Here, the suggested search query 4060 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
[0103] For example, in a third portion 4032’ of the user interface 4000’ a second entity card 4070 (Valley of the kings) is also displayed. Here, the second entity card 4070 is displayed in a contracted or collapsed form, in contrast to the entity card 4040 which is in an expanded form. The second entity card 4070 displayed in the contracted or collapsed form includes sufficient identifying information (e.g., the title of the entity card such as “Valley of the kings” and a thumbnail image relating to the Valley of the kings) so that a user understands what subject, concept, or topic the second entity card 4070 is concerned with. For example, a user may expand the second entity card 4070 by selecting a user interface element 4080 to obtain a fuller description of the second entity (the Valley of the kings), and a user may collapse the entity card 4040 by selecting a user interface element 4050.
[0104] For example, in the third portion 4032’ of the user interface 4000’ additional suggested search queries 4090 (e.g., “Howard carter” and “Mummification process”) are also displayed. Here, the suggested search queries 4090 correspond to search query user interface elements that, when selected, are configured to perform a search relating to a topic of the video 4010 other than the first entity or the second entity (e.g., on a topic of the video other than any of the entities identified from the video 4010).
[0105] In example embodiments of the disclosure, the video 4010 continues to play while a user views the entity card 4040 and/or second entity card 4070. Therefore, viewing of the video 4010 is not interrupted when a user wishes to know more about an entity identified from the video 4010. The user may obtain sufficient information about the entity from the presented entity cards on the user interface 4000’.
[0106] In some implementations, the entity may be mentioned in the video 4010 at a first timepoint in the video 4010, and the entity card 4040 may be provided for display on a portion of the user interface (e.g., third portion 4032 of user interface 4000 or second portion 4022’ of user interface 4000’) at the first timepoint. Therefore, the entity card 4040 may be provided for display at a time which is synchronized with a discussion of the entity in the video 4010.
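The timepoint synchronization described in paragraph [0106] could be sketched as follows, assuming each card is paired with the timepoint (in seconds) at which its entity is mentioned; the pairing and the card object are hypothetical.

```python
def card_for_timepoint(cards, playback_seconds):
    """cards: list of (timepoint_seconds, card), sorted by timepoint.

    Returns the card whose entity was most recently mentioned, so the card
    display stays synchronized with the discussion in the video.
    """
    current = None
    for timepoint, card in cards:
        if playback_seconds >= timepoint:
            current = card
        else:
            break  # later cards have not been reached yet
    return current
```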
[0107] As shown in FIG. 4B, second entity card 4070 is displayed on a third portion 4032’ of the user interface 4000’ in a contracted form. The second entity card 4070 may identify or reference a second entity (Valley of the kings) to be mentioned in the video 4010 at a second timepoint in the video 4010 after the first timepoint. In some implementations, when the second entity is mentioned in the video 4010 at the second timepoint, the contracted second entity card 4070 may be automatically expanded to fully display the second entity card 4070 on the third portion 4032’ of the user interface 4000’ while continuing to play the video 4010, the second entity card 4070 including descriptive content relating to the second entity. In some implementations, when the second entity is mentioned in the video 4010 at the second timepoint, the contracted second entity card 4070 may be automatically expanded to fully display the second entity card 4070 on the second portion 4022’ of the user interface 4000’ while continuing to play the video 4010, the second entity card 4070 replacing the entity card 4040 on the user interface 4000’ at a time when the second entity is mentioned in the video 4010 and including descriptive content relating to the second entity.

[0108] Referring to FIG. 4C, an example user interface 4000” as displayed on a display 160 of user computing device 100 is shown. User interface 4000” displays a search results page 4092 that is obtained in response to a user selecting the suggested search query 4060 provided on the user interface 4000’. The suggested search query 4060 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity. In the example of FIG. 4C, the entity search is for the entity King Tutankhamun, and the search box is automatically populated with the entity, thereby relieving the user of having to type out the entity name, which could otherwise be misspelled, causing errors in the search results, frustrating the user, and wasting computing resources on inefficient or erroneous searches.
[0109] As discussed above with respect to the example of FIGS. 4A-4C, in some implementations, all of the entity cards (e.g., entity card 4040 and second entity card 4070) associated with a video may be visible on the user interface to the user at a same time while the video is playing, or only some of the entity cards associated with the video may be visible on the user interface to the user at a same time while the video is playing. For example, one or more of the entity cards (e.g., entity card 4040 and/or second entity card 4070) may be fully expanded so that a user can view the entire contents of the entity card(s), while some or all of the remaining entity cards may be displayed on the user interface in a contracted or collapsed form. For example, in the contracted or collapsed form, the user may view a portion of the entity card and the portion of the entity card may include some identifying information (e.g., an identification of the corresponding entity) so that the user is able to comprehend the relevance of the entity card. For example, the user is able to select a user interface element or some portion of the visible portion of the collapsed entity card to expand the entity card to also view the hidden portion of the entity card for further information regarding the entity. In some implementations, the entity card may be automatically expanded to fully display the entity card on the user interface at a time when the corresponding entity is mentioned in the video. To save space on the user interface, a previously shown entity card may be changed to the contracted or collapsed form at the time the second entity card is expanded if it is not already in the contracted or collapsed form prior to the second entity card being displayed. In some implementations, all of the entity cards may each be fully displayed on the user interface throughout the video.
[0110] FIGS. 5A-5C depict example user interfaces in which entity cards are presented during the display of a video, according to one or more example embodiments of the disclosure. Referring to FIG. 5A, an example user interface 5000 as displayed on a display 160 of user computing device 100 is shown. The user interface 5000 includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000. For example, a transcript (i.e., closed captioning) 5014 of the audio portion of the video may be provided for display in the video 5010 displayed on the user interface 5000. The user interface 5000 also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000 which includes one or more entity cards to be displayed at different time points during the video 5010. For example, the second portion 5022 may include a first sub-portion 5032 which includes first entity card 5030 and a second sub-portion 5052 which includes at least a portion of second entity card 5050.
[0111] In the example of FIG. 5A, a first entity which has been identified from the video 5010 is “Greek mythology,” and the first entity card 5030 includes descriptive content relating to Greek mythology including an image which relates to Greek mythology and a textual summary regarding Greek mythology. For example, the first entity card 5030 may also include information regarding a timepoint in the video 5010 at which the first entity is being discussed (e.g., 8 seconds into the video 5010). For example, a second entity which has been identified from the video 5010 is “Crete,” and the second entity card 5050 includes descriptive content relating to Crete including an image which relates to Crete and a textual summary regarding Crete. For example, the second entity card 5050 may also include information regarding a timepoint in the video 5010 at which the second entity is being discussed (e.g., 10 seconds into the video 5010).
[0112] For example, the first sub-portion 5032 may include one or more user interface elements. For example, a suggested search query 5040 may be provided on the first sub-portion 5032. Here, the suggested search query 5040 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the first entity “Greek mythology”).
[0113] As shown in FIG. 5A, the second entity card 5050 is only partially shown on the user interface 5000 as the topic or concept regarding the second entity is yet to be discussed in the video 5010. For example, the entity cards in the second portion 5022 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play. Thus, the video 5010 continues to play while a user views the first entity card 5030 and second entity card 5050. Therefore, viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010: the user obtains sufficient information about the entities from the presented entity cards on the user interface 5000 (or subsequent user interfaces 5000’, 5000”, etc.) to understand the content of the video 5010, and need not perform a separate search and/or stop the video 5010.
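The carousel behavior described above might be driven by a function like the following sketch, which picks the card to feature from a sorted list of mention timepoints; partial rendering of the neighboring cards is assumed to be handled by the UI layer.

```python
import bisect

def carousel_index(timepoints, playback_seconds):
    """Index of the card to feature, given mention timepoints sorted ascending.

    The featured card is fully shown; its neighbors at index-1 and index+1
    can be rendered partially, giving the rotating-carousel effect.
    """
    i = bisect.bisect_right(timepoints, playback_seconds) - 1
    return max(i, 0)
```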
[0114] Referring to FIG. 5B, an example user interface 5000’ as displayed on a display 160 of user computing device 100 is shown. The user interface 5000’ includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000’. The user interface 5000’ also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000’ which includes one or more entity cards to be displayed at different time points during the video 5010. For example, the second portion 5022 may include a first sub-portion 5032’ which includes at least a portion of first entity card 5030, a second sub-portion 5052’ which includes second entity card 5050, and a third sub-portion 5062’ which includes at least a portion of third entity card 5060.
[0115] In the example of FIG. 5B, the second entity card 5050 includes descriptive content relating to Crete including an image which relates to Crete and a textual summary regarding Crete. For example, the second entity card 5050 may also include information regarding a timepoint in the video 5010 at which the second entity is being discussed (e.g., 10 seconds into the video 5010). For example, a third entity which has been identified from the video 5010 is “Icarus,” and the third entity card 5060 includes descriptive content relating to Icarus including an image which relates to Icarus and a textual summary regarding Icarus.
For example, the third entity card 5060 may also include information regarding a timepoint in the video 5010 at which the third entity is being discussed (e.g., 13 seconds into the video 5010 as shown in FIG. 5C).
[0116] For example, the second sub-portion 5052’ may include one or more user interface elements. For example, a suggested search query 5040’ may be provided on the second sub-portion 5052’. Here, the suggested search query 5040’ corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the second entity “Crete”).
[0117] As shown in FIG. 5B, the first entity card 5030 and third entity card 5060 are only partially shown on the user interface 5000’ as the topic or concept regarding the first entity has already been discussed in the video 5010 and the topic or concept regarding the third entity has yet to be discussed in the video 5010. For example, the entity cards in the second portion 5022 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play. Thus, the video 5010 continues to play while a user views the first entity card 5030, the second entity card 5050, and the third entity card 5060. Therefore, viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010: the user obtains sufficient information about the entities from the presented entity cards on the user interfaces 5000, 5000’, etc. to understand the content of the video 5010, and need not perform a separate search and/or stop the video 5010.
[0118] Referring to FIG. 5C, an example user interface 5000” as displayed on a display 160 of user computing device 100 is shown. The user interface 5000” includes a section in which video 5010 is being played on a first portion 5012 of the user interface 5000”. The user interface 5000” also includes a section entitled “Topics to explore” 5020 displayed on a second portion 5022 of the user interface 5000” which includes one or more entity cards to be displayed at different time points during the video 5010. For example, the second portion 5022 may include a first sub-portion 5052” which includes at least a portion of second entity card 5050, a second sub-portion 5062” which includes third entity card 5060, and a third sub-portion 5072” which includes at least a portion of fourth entity card 5070.
[0119] In the example of FIG. 5C, the third entity card 5060 includes descriptive content relating to Icarus including an image which relates to Icarus and a textual summary regarding Icarus. For example, the third entity card 5060 may also include information regarding a timepoint in the video 5010 at which the third entity is being discussed (e.g., 13 seconds into the video 5010). For example, a fourth entity which has been identified from the video 5010 may include “Daedalus,” and the fourth entity card 5070 includes descriptive content relating to Daedalus including an image which relates to Daedalus and a textual summary regarding Daedalus. For example, the fourth entity card 5070 may also include information regarding a timepoint in the video 5010 at which the fourth entity is being discussed (e.g., 20 seconds into the video 5010).
[0120] For example, the second sub-portion 5062” may include one or more user interface elements. For example, a suggested search query 5040” may be provided on the second sub-portion 5062”. Here, the suggested search query 5040” corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity (e.g., a search for the third entity “Icarus”).
[0121] As shown in FIG. 5C, the second entity card 5050 and fourth entity card 5070 are only partially shown on the user interface 5000” as the topic or concept regarding the second entity has already been discussed in the video 5010 and the topic or concept regarding the fourth entity has yet to be discussed in the video 5010. For example, the entity cards in the second portion 5022 may rotate, for example in a carousel fashion, as each entity is being discussed while the video continues to play. Thus, the video 5010 continues to play while a user views the first entity card 5030, the second entity card 5050, the third entity card 5060, the fourth entity card 5070, and so on. Therefore, viewing of the video 5010 is not interrupted if a user wishes to know more about an entity identified from the video 5010: the user obtains sufficient information about the entities from the presented entity cards on the user interfaces 5000, 5000’, 5000”, etc. to understand the content of the video 5010, and need not perform a separate search and/or stop the video 5010.
[0122] As discussed above with respect to the example of FIGS. 5A-5C, in some implementations, entity cards (e.g., entity cards 5030, 5050, 5060, 5070) associated with a video may be visible on the user interface to the user as the video progresses, and the user is not able to view an entity card fully until the corresponding entity is mentioned in the video. For example, a first entity card about a first entity may be displayed on the user interface at a time during the video when the first entity is mentioned in the video (e.g., at a first timepoint). The first entity card may be displayed for a predetermined amount of time while the video continues to play (e.g., for a time sufficient for an average user to read or view the content contained in the first entity card) or for a time until a next entity is mentioned in the video, at which point another entity card is provided on the user interface. For example, a second entity card about a second entity may be displayed on the user interface at a time during the video when the second entity is mentioned in the video (e.g., at a second timepoint). In some implementations, the second entity card may be displayed on the user interface by replacing the first entity card (i.e., by occupying some or all of the space on the user interface which was previously occupied by the first entity card).
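A sketch of the display-duration rule in paragraph [0122]; the minimum reading time is an assumed constant, since the disclosure only specifies "a time sufficient for an average user to read," and the rule of replacing a card at the next mention is one of the two policies described.

```python
MIN_READ_SECONDS = 8.0  # assumed "time sufficient for an average user to read"

def card_visible(card_start, next_card_start, playback_seconds,
                 min_read=MIN_READ_SECONDS):
    """True while the first card should remain on screen.

    The card is replaced when the next entity is mentioned; if there is no
    next entity, it stays up for a fixed reading time.
    """
    if playback_seconds < card_start:
        return False  # the entity has not been mentioned yet
    if next_card_start is not None:
        return playback_seconds < next_card_start
    return playback_seconds < card_start + min_read
```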
[0123] FIGS. 6A-6C depict example user interfaces in which a notification user interface element is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure. Referring to FIG. 6A, an example user interface 6000 as displayed on a display 160 of user computing device 100 is shown. The user interface 6000 includes a section in which video 6010 is being played on a first portion 6012 of the user interface 6000. The user interface 6000 also includes a section entitled “Related searches” 6020 displayed on a second portion 6022 of the user interface 6000 which includes one or more suggested search queries 6030 relating to the video 6010.

[0124] Referring to FIG. 6B, an example user interface 6000’ as displayed on a display 160 of user computing device 100 is shown. The user interface 6000’ has a similar configuration as the user interface 6000 of FIG. 6A, except that a notification user interface element 6040 is displayed in response to an entity (for which an entity card exists) being mentioned in the video 6010. For example, when an entity is mentioned in the video 6010 while the video 6010 is playing, the notification user interface element 6040 is displayed on the user interface 6000’ while continuing to play the video 6010. For example, the notification user interface element 6040 indicates that additional information relating to the entity is available. In response to receiving a selection of the notification user interface element 6040, an entity card is displayed on the user interface while continuing to play the video 6010. The notification user interface element 6040 may include some identifying information (e.g., an identification of the corresponding entity and/or an image such as a thumbnail image) of the entity to further make the user aware that the notification user interface element 6040 is associated with the entity and so that the user is able to comprehend the relevance of the entity card which is available.
[0125] Referring to FIG. 6C, an example user interface 6000” as displayed on a display 160 of user computing device 100 is shown. For example, user interface 6000” may be displayed in response to a user selecting the notification user interface element 6040 as displayed on user interface 6000’. The user interface 6000” includes a section in which video 6010 is being played on a first portion 6012 of the user interface 6000”. The user interface 6000” also includes a section entitled “Related searches” 6020 displayed on a second portion 6022 of the user interface 6000”; however, the second portion 6022 is obscured by a new section entitled “Topics Mentioned” 6050 which is overlaid (e.g., as a pop-up window) on the second portion 6022.
[0126] The section entitled “Topics Mentioned” 6050 includes the entity card 6060 which is overlaid on the second portion 6022. In this example, the entity card 6060 includes a title 6062 (King Tutankhamuh) of the entity card 6060, a subtitle 6064 (Pharaoh), descriptive content 6066 (a textual summary and thumbnail image), and attribution 6068 which cites to a source of the descriptive content 6066.
[0127] For example, in other portions of the section entitled “Topics Mentioned” 6050 additional user interface elements and entity cards may be provided. For example, a suggested search query 6070 may be provided. Here, the suggested search query 6070 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity. For example, at least a portion of a second entity card 6080 is also provided. The second entity card 6080 may be related to a next entity to be discussed during the video.

[0128] In some implementations, the notification user interface element 6040 may be displayed on the user interface 6000’ at a same time that an entity is mentioned in the video 6010. In some implementations, the notification user interface element 6040 may be displayed on the user interface 6000’ throughout the video 6010 and the selection of the notification user interface element 6040 may cause the section entitled “Topics Mentioned” 6050 to be displayed. For example, the section entitled “Topics Mentioned” 6050 is overlaid (e.g., as a pop-up window) on the second portion 6022 and may remain open until closed (e.g., via user interface element 6090). For example, the second entity card 6080 may be displayed fully (e.g., by replacing entity card 6060) at a time when the entity associated with the second entity card 6080 is discussed in the video 6010. That is, the display of the entity cards may be synchronized with a time that an associated entity is mentioned in the video. For example, entity cards may be displayed on the user interface every time that the associated entity is mentioned in the video, only the first time the entity is mentioned in the video, or selectively displayed when the entity is mentioned a plurality of times in the video.
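The display policies listed above (every mention, first mention only, or selective display) might be expressed as in the following sketch; the one-second matching window and the cap used for the "selective" policy are assumptions introduced for illustration.

```python
def should_show_notification(mention_times, playback_seconds, times_notified,
                             policy="first_only", window_seconds=1.0):
    """Decide whether to surface the notification element at this playback time.

    policy: "every_time", "first_only", or "selective" (here: at most twice),
    mirroring the display policies described above.
    """
    near_mention = any(abs(playback_seconds - t) <= window_seconds
                       for t in mention_times)
    if not near_mention:
        return False
    if policy == "every_time":
        return True
    if policy == "first_only":
        return times_notified == 0
    return times_notified < 2  # "selective": an assumed example cap
```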
[0129] As discussed above with respect to the example of FIGS. 6A-6C, in some implementations, the availability of entity cards associated with a video may be indicated while a video is playing using a notification user interface element, for example at a time that the associated entity is discussed during the video. Therefore, the user may have the option to view the entity card while viewing the video by deciding to select the notification user interface element.
[0130] FIGS. 7A and 7B depict example user interfaces in which a timeline is presented for displaying one or more entity cards during the display of a video, according to one or more example embodiments of the disclosure.
[0131] Referring to FIG. 7A, an example user interface 7000 as displayed on a display 160 of user computing device 100 is shown. For example, user interface 7000 includes a section in which video 7010 is being played on a first portion 7012 of the user interface 7000. The user interface 7000 also includes a section entitled “Related searches” 7020 displayed on a second portion 7022 of the user interface 7000 which includes one or more suggested search queries 7030 relating to the video 7010. The user interface 7000 further includes a persistent timeline section 7042 which is overlaid (e.g., as a pop-up window) on the second portion 7022 to obscure at least a portion of the second portion 7022.
[0132] The persistent timeline section 7042 includes a persistent timeline 7040, at least a portion of a first entity card 7050, and at least a portion of second entity card 7060. The persistent timeline 7040 may display a timeline of the video 7010 and include one or more points 7044 which indicate when an entity is to be discussed during the video 7010. For example, a next entity to be discussed (e.g., Howard Carter) can be indicated by at least a portion of second entity card 7060 being shown in the persistent timeline section 7042, which includes an image of the entity.
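Drawing the points 7044 on the persistent timeline 7040 reduces to mapping each entity's mention timepoint to a fractional position along the timeline, as in this minimal sketch.

```python
def timeline_markers(mention_times, video_duration_seconds):
    """Return (fraction, timepoint) pairs for drawing markers on the timeline,
    where fraction in [0, 1] is the marker's position along the video."""
    return [(t / video_duration_seconds, t)
            for t in mention_times
            if 0 <= t <= video_duration_seconds]
```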
[0133] The first entity card 7050 and/or second entity card 7060 may be selectable such that the entity cards are expanded as shown in FIG. 7B, for example.
[0134] Referring to FIG. 7B, user interface 7000’ may be displayed in response to a user selecting the first entity card 7050 as displayed on user interface 7000, or in some implementations the user interface 7000’ may be displayed at a timepoint that the entity (King Tutankhamuh) is being discussed during the video 7010. In FIG. 7B, the user interface 7000’ includes the video 7010 displayed on a first portion 7012’ of the user interface 7000’. The user interface 7000’ also includes a section entitled “Topics to explore” which includes the first entity card 7050 and at least a portion of the second entity card 7060 displayed on a second portion 7022’ of the user interface 7000’. The user interface 7000’ also includes the section entitled “Related searches” 7020 displayed on a third portion 7032’ of the user interface 7000’ which includes one or more suggested search queries 7030 relating to the video 7010.
[0135] As shown in FIG. 7B, the first entity card 7050 is expanded to include the descriptive content regarding the entity and, similar to the examples discussed previously, includes a title (King Tutankhamuh), a subtitle (Ancient Egyptian King), descriptive content (a textual summary and thumbnail image), and attribution which cites to a source of the descriptive content. Second entity card 7060 may be displayed at least partially next to the first entity card 7050 in the second portion 7022’ of the user interface 7000’. A user interface element 7080 may be included in the user interface 7000’ as a selectable element that, when selected, allows the entity cards associated with video 7010 to be cycled through, for example in a carousel fashion, as the video is played.
[0136] In some implementations, the section entitled “Topics to explore” may also include additional user interface elements. For example, a suggested search query 7070 may be provided. Here, the suggested search query 7070 corresponds to an entity search user interface element that, when selected, is configured to perform a search relating to the entity.
[0137] As discussed above with respect to the example of FIGS. 7A-7B, in some implementations, a persistent timeline section may be displayed while a video is being played to indicate to a user the availability of entity cards for the video. An entity card related to an entity currently being discussed may be displayed centrally (i.e., prominently) in the persistent timeline section while a next entity card for a next entity to be discussed during the video may also be provided. For example, the user is able to select the entity card as displayed in the persistent timeline section (e.g., in a contracted or collapsed form) to expand the entity card for further information regarding the entity. In some implementations, the entity card may be automatically expanded to fully display the entity card on the user interface at a time when the corresponding entity is mentioned in the video.
[0138] FIGS. 8-10 illustrate flow diagrams of example, non-limiting computer-implemented methods according to one or more example embodiments of the disclosure.

[0139] Referring to FIG. 8, the method 800 includes operation 810 of a user computing device (e.g., user computing device 100) displaying a video on a first portion of a user interface which is displayed on a display 160 of the user computing device 100. At operation 820, when a first entity is mentioned in the video while the video is playing, the user interface displays a first entity card on a second portion of the user interface while continuing to play the video, the first entity card including descriptive content relating to the first entity. For example, the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
[0140] Referring to FIG. 9, the method 900 includes operation 910 of a server computing system (e.g., server computing system 300) obtaining a transcription of content from a video (e.g., via automatic speech recognition). At operation 920, the method includes applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content. For example, in a video about ancient Egypt, identified entities may include the “valley of the kings,” “King Tutankhamuh,” and “sarcophagus.” At operation 930, the method includes generating one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities. At operation 940, the method includes providing (or generating) a user interface, to be displayed on a respective display of one or more user computing devices, to: play the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, display a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.

[0141] Referring to FIG. 10, the method 1000 includes operation 1010 of a server computing system (e.g., server computing system 300) obtaining a transcription of content from a video (e.g., via automatic speech recognition). At operation 1020, the method includes associating text from the transcription with a knowledge graph to obtain a collection of knowledge graph entities. At operation 1030, the method includes obtaining training data to build a machine learning model for the machine learning resource by identifying (or matching) knowledge graph entities with entities that appear in actual search queries of users watching the video. At operation 1040, the method includes weighing entities based on a relevance of the entity to other entities in the video, broadness of the entity, relevance of the entity to the topic of the video, and the like. For example, the machine learning resource may be trained by applying weights to candidate entities (e.g., a higher weight may be assigned to an entity the more often the term is mentioned in the video, a lower weight may be assigned to an entity which is overly broad and appears frequently in a corpus of videos, a higher weight may be assigned to an entity the more related it is to the topic of the video, etc.). At operation 1050, the method includes applying the machine learning resource to evaluate candidate entities from among candidate entities identified in the video and to rank the candidate entities. At operation 1060, the method includes the machine learning resource predicting one or more entities for which entity cards are to be generated for the video. For example, the machine learning resource may select a predetermined number of entities (e.g., three or four) which are the highest-ranked candidate entities as entities for which entity cards are to be generated. For example, the machine learning resource may select those entities which are predicted (e.g., with a specified confidence level, with a probability of being searched above a threshold level, etc.) to be searched by a user.
The identified entities may subsequently be provided to an entity card generator for generating entity cards and/or the identified entities may be stored in a database (e.g., entity data store 380).
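Tying the operations of method 1000 together, a condensed sketch might look like the following; transcribe() and knowledge_graph.link() are hypothetical helpers standing in for operations 1010 and 1020, and rank_candidates() is the earlier illustrative sketch, not the disclosed implementation.

```python
def select_entities_for_cards(video, knowledge_graph, corpus_size, top_n=3):
    """End-to-end selection corresponding to operations 1010-1060."""
    transcript = transcribe(video)                 # 1010: speech recognition
    candidates = knowledge_graph.link(transcript)  # 1020: knowledge-graph entities
    # 1040-1060: weigh, rank, and keep the top few entities for card generation.
    return rank_candidates(candidates, corpus_size, top_n=top_n)
```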
[0142] Terms such as “module,” “unit,” “provider,” and “generator” may be used herein in association with various features of the disclosure. Such terms may refer to, but are not limited to, a software or hardware component or device, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks. A module or unit may be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, a module or unit may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and modules/units may be combined into fewer components and modules/units or further separated into additional components and modules/units.
[0143] Aspects of the above-described example embodiments may be recorded in computer-readable media (e.g., non-transitory computer-readable media) including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks, Blu-ray discs, and DVDs; magneto-optical media such as optical discs; and other hardware devices that are specially configured to store and perform program instructions, such as semiconductor memory, read-only memory (ROM), random access memory (RAM), flash memory, USB memory, and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The program instructions may be executed by one or more processors. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa. In addition, a non-transitory computer-readable storage medium may be distributed among computer systems connected through a network and computer-readable codes or program instructions may be stored and executed in a decentralized manner. In addition, the non-transitory computer-readable storage media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA).
[0144] Each block of the flowchart illustrations may represent a unit, module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of order. For example, two blocks shown in succession may in fact be executed substantially concurrently (simultaneously), or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
[0145] While the disclosure has been described with respect to various example embodiments, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the disclosure does not preclude inclusion of such modifications, variations and/or additions to the disclosed subject matter as would be readily apparent to one of ordinary skill in the art. For example, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the disclosure covers such alterations, variations, and equivalents.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for a server system, comprising: obtaining a transcription of content from a video; applying a machine learning resource to identify one or more entities which are most likely to be searched for by a user viewing the video, based on the transcription of the content; generating one or more entity cards for each of the one or more entities, each of the one or more entity cards including descriptive content relating to a respective entity among the one or more entities; and providing a user interface, to be displayed on a respective display of one or more user computing devices, for: playing the video on a first portion of the user interface, and when the video is played and a first entity among the one or more entities is mentioned in the video, displaying a first entity card on a second portion of the user interface, the first entity card including descriptive content relating to the first entity.
2. The computer-implemented method of claim 1, wherein applying the machine learning resource to identify the one or more entities includes: obtaining training data to train the machine learning resource based on observational data of users conducting searches in response to viewing only the video.
3. The computer-implemented method of claim 2, wherein applying the machine learning resource to identify the one or more entities further includes: identifying a plurality of candidate entities from the video by associating text from the transcription with a knowledge graph, and ranking the candidate entities to obtain the one or more entities, based on one or more of: a relevance of each of the candidate entities to a topic of the video, a relevance of each of the candidate entities to one or more other candidate entities among the plurality of candidate entities, a number of mentions of the candidate entity in the video, and a number of videos in which the candidate entity appears across a corpus of videos stored in one or more databases.
4. The computer-implemented method of claim 3, wherein applying the machine learning resource to identify the one or more entities further includes: evaluating user interactions with the user interface, and determining at least one adjustment to the machine learning resource based on the evaluation of the user interactions with the user interface.
5. The computer-implemented method of claim 1, wherein the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is displayed on the second portion of the user interface at the first timepoint.
6. The computer-implemented method of claim 5, wherein the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further comprises providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: displaying, on a third portion of the user interface while continuing to play the video, the second entity card in a contracted form, the second entity card in the contracted form referencing the second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, displaying, on the third portion of the user interface while continuing to play the video, the second entity card in a fully expanded form, the second entity card in the fully expanded form including descriptive content relating to the second entity.
7. The computer-implemented method of claim 1, wherein the one or more entities include a second entity and the one or more entity cards include a second entity card, and the method further comprises providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the second entity is mentioned in the video while the video is playing, displaying the second entity card on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is displayed on the second portion of the display by replacing the first entity card at a time when the second entity is mentioned in the video.
8. The computer-implemented method of claim 1, further comprising providing the user interface, to be displayed on the respective display of the one or more user computing devices, for: when the first entity is mentioned in the video while the video is playing, displaying a notification user interface element on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to the first entity being mentioned in the video while the video is playing and in response to receiving a selection of the notification user interface element, displaying the first entity card on the second portion of the user interface while continuing to play the video.
9. The computer-implemented method of claim 1, wherein the first entity card includes at least one of a textual summary providing information relating to the first entity or an image relating to the first entity.
10. A computer-implemented method for a user computing device, comprising: receiving a video for playback in a user interface; providing the video for display on a first portion of the user interface displayed on a display of the user computing device; and when a first entity is mentioned in the video while the video is playing: providing a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
11. The computer-implemented method of claim 10, wherein the first entity is mentioned in the video at a first timepoint in the video, and the first entity card is provided for display on the second portion of the user interface at the first timepoint.
12. The computer-implemented method of claim 11, further comprising providing, for display on a third portion of the user interface, a contracted second entity card referencing a second entity to be mentioned in the video at a second timepoint in the video after the first timepoint, and when the second entity is mentioned in the video at the second timepoint, expanding the contracted second entity card to fully display the second entity card on the third portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity.
13. The computer-implemented method of claim 10, further comprising: when a second entity is mentioned in the video while the video is playing, providing a second entity card for display on the second portion of the user interface while continuing to play the video, the second entity card including descriptive content relating to the second entity, wherein the second entity card is provided for display on the second portion of the user interface by replacing the first entity card at a time when the second entity is mentioned in the video.
14. The computer-implemented method of claim 10, further comprising providing for display on the user interface, one or more entity search user interface elements that, when selected, are configured to perform a search relating to the first entity.
15. The computer-implemented method of claim 14, further comprising providing for display on the user interface, one or more search query user interface elements that, when selected, are configured to perform a search relating to a topic of the video other than the first entity.
16. The computer-implemented method of claim 10, further comprising utilizing a machine learning resource to identify the first entity and generate the first entity card.
17. The computer-implemented method of claim 16, wherein the first entity is an entity among a plurality of entities mentioned in the video that is determined by the machine learning resource as an entity most likely to be searched for by a user viewing the video among the plurality of entities mentioned in the video.
18. The computer-implemented method of claim 10, further comprising: when the first entity is mentioned in the video while the video is playing, providing a notification user interface element for display on a third portion of the user interface while continuing to play the video, the notification user interface element indicating additional information relating to the first entity is available, and in response to receiving a selection of the notification user interface element, providing the first entity card for display on the second portion of the user interface while continuing to play the video.
19. The computer-implemented method of claim 10, wherein the first entity card includes a textual summary providing information relating to the first entity and/or an image relating to the first entity.
20. A user computing device, comprising: a display; one or more memories to store instructions; and one or more processors to execute the instructions stored in the one or more memories to: receive a video for playback in a user interface, provide the video for display on a first portion of the user interface displayed on the display, and when a first entity is mentioned in the video while the video is playing: provide a first entity card for display on a second portion of the user interface while continuing to play the video, wherein the first entity card includes descriptive content relating to the first entity, and the first entity card has been generated in response to automatic recognition of the first entity from a transcription of content of the video.
PCT/US2023/021889 2022-05-13 2023-05-11 Entity cards including descriptive content relating to entities from a video WO2023220274A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263341674P 2022-05-13 2022-05-13
US63/341,674 2022-05-13
US18/091,837 US12072934B2 (en) 2022-05-13 2022-12-30 Entity cards including descriptive content relating to entities from a video
US18/091,837 2022-12-30

Publications (1)

Publication Number Publication Date
WO2023220274A1 true WO2023220274A1 (en) 2023-11-16

Family

ID=86732904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/021889 WO2023220274A1 (en) 2022-05-13 2023-05-11 Entity cards including descriptive content relating to entities from a video

Country Status (1)

Country Link
WO (1) WO2023220274A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002011446A2 (en) * 2000-07-27 2002-02-07 Koninklijke Philips Electronics N.V. Transcript triggers for video enhancement
US20080140385A1 (en) * 2006-12-07 2008-06-12 Microsoft Corporation Using automated content analysis for audio/video content consumption
US20150095938A1 (en) * 2013-09-30 2015-04-02 Hulu, LLC Queue to Display Additional Information for Entities in Captions


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23729904

Country of ref document: EP

Kind code of ref document: A1