WO2010078650A1 - Identification, recommendation and delivery of relevant media content - Google Patents

Identification, recommendation and delivery of relevant media content

Info

Publication number
WO2010078650A1
Authority
WO
WIPO (PCT)
Prior art keywords
media
metadata
emotive
media segment
segment
Prior art date
Application number
PCT/CA2010/000010
Other languages
English (en)
Inventor
Ray Newal
Areef Reza
Oliver Nicholas Komarnycky
Original Assignee
Jigsee Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jigsee Inc. filed Critical Jigsee Inc.
Publication of WO2010078650A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17318Direct or substantially direct transmission and handling of requests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036Insert-editing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors

Definitions

  • the invention relates generally to the identification, recommendation and delivery of relevant media content based on metadata creation and analysis. More specifically, the invention relates to segmenting, annotating, and distributing media content based on emotive metadata.
  • aspects of the present invention address the aforementioned need by providing methods and systems that enable the presentation of relevant advertising and other media content to users experiencing media at moments of heightened emotional engagement.
  • for advertisements inserted within media to be less intrusive, not only do they need to be relevant, but they also need to be presented when the audience is in a suitable emotive state.
  • These moments serve as a catalyst for significant gains in efficiency and effectiveness for both advertisers and publishers alike.
  • a computer-implemented method of generating metadata in association with a media segment of a media content item comprising the steps of: providing at least a portion of the media content item to a user for playback on a user interface of a media device; receiving input from the user defining a starting point of the media segment and an ending point of the media segment; and recording, in association with the media content item, metadata corresponding to the starting and ending points within the media content item and metadata relating to the user.
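The recording step of the claimed method can be sketched as a minimal data-recording routine. All names here (`SegmentMetadata`, `record_segment`, the list standing in for a metadata database) are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SegmentMetadata:
    """Metadata recorded in association with a media content item."""
    content_id: str      # identifies the parent media content item
    start: float         # starting point, seconds from a reference time point
    end: float           # ending point, seconds from a reference time point
    user: dict = field(default_factory=dict)  # metadata relating to the user

def record_segment(content_id, start, end, user_metadata, store):
    """Record start/end metadata and user metadata for a media segment."""
    if not (0 <= start < end):
        raise ValueError("starting point must precede ending point")
    segment = SegmentMetadata(content_id, start, end, dict(user_metadata))
    store.append(segment)  # 'store' stands in for a metadata database
    return segment

# Example: a user marks a 15-second moment within a movie
store = []
seg = record_segment("movie-42", 120.0, 135.0,
                     {"user_id": "u1", "emotion": "joy"}, store)
```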
  • a computer- implemented method of providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device comprising the steps of: a) searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the media segment being played, b) selecting a matching media segment from the list of matching media segments, provided that the selected matching media segment has not been previously selected during a current user session; and c) providing said selected media segment for playback upon completion of said media segment being played.
  • a computer- implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata; b) searching metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the emotive media segment, c) selecting a matching media segment from the list of matching media segments; and d) inserting the selected matching media segment into the media content item after the emotive media segment.
  • a computer- implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata having at least one emotive metadata element common with the emotive metadata of the additional media segment, and b) inserting the additional media segment into the media content item after the emotive media segment.
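The insertion step of the two methods above can be sketched as a playlist operation; the function name and dictionary layout are assumptions for illustration, not the patent's implementation:

```python
def insert_after_emotive_segment(content, additional):
    """Insert 'additional' after the first segment of 'content' that shares
    at least one emotive metadata element with it; return a new list."""
    for i, seg in enumerate(content):
        if set(seg["emotive"]) & set(additional["emotive"]):
            return content[:i + 1] + [additional] + content[i + 1:]
    return list(content)  # no emotive match: leave the content item unchanged

# Example: a 'suspense'-tagged advertisement placed after a suspenseful scene
scenes = [
    {"id": "scene-1", "emotive": ["suspense"]},
    {"id": "scene-2", "emotive": ["relief"]},
]
ad = {"id": "ad-7", "emotive": ["suspense", "excitement"]}
result = insert_after_emotive_segment(scenes, ad)
```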
  • Figure 1 shows a flow chart illustrating a method of generating metadata associated with a media segment according to a first embodiment of the invention.
  • Figure 2 shows a diagram illustrating a media device for use in playback of media content.
  • Figure 3 shows a diagram illustrating an emotive metadata schema.
  • Figure 4 shows a flow chart illustrating a method of generating metadata associated with a media segment according to another embodiment of the invention.
  • Figure 5 is a diagram illustrating a system including a media device for media playback and a server communicating with the media device.
  • Figure 6 is a flow chart illustrating a method of providing additional relevant media segments to a user viewing a first media segment.
  • Figure 7 is an example of a user interface for presenting and recommending video media content according to an embodiment of the invention.
  • Figure 8 is an example illustrating the use of a user interface to facilitate a transaction based on transaction metadata.
  • Figure 9 is a flow chart illustrating a method of inserting an emotively relevant media segment into a media content item based on emotive metadata.
  • the systems described herein are directed to methods and systems of providing relevant media content to users.
  • embodiments of the present invention are disclosed herein. However, the disclosed embodiments are merely exemplary, and it should be understood that the invention may be embodied in many various and alternative forms.
  • the terms, “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in this specification including claims, the terms, “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
  • the terms “about” and “approximately”, when used in conjunction with ranges of dimensions of particles, compositions of mixtures or other physical properties or characteristics, are meant to cover slight variations that may exist in the upper and lower limits of the ranges of dimensions so as to not exclude embodiments where on average most of the dimensions are satisfied but where statistically dimensions may exist outside this region. It is not the intention to exclude embodiments such as these from the present invention.
  • the coordinating conjunction "and/or” is meant to be a selection between a logical disjunction and a logical conjunction of the adjacent words, phrases, or clauses.
  • the phrase “X and/or Y” is meant to be interpreted as “one or both of X and Y” wherein X and Y are any word, phrase, or clause.
  • the term “media content item” means any form of digital media that may be segmented, including, but not limited to, video, audio, animations, slideshows, electronic text, and combinations thereof.
  • a media content item may be stored in any format, or copies of the same or similar media file may be stored in multiple formats.
  • Non-limiting examples of media content items include movies, music videos, television show episodes, video presentations, home and amateur videos, video advertisements, songs, audio advertisements, audiobooks, electronic books, and any portion thereof.
  • Metadata refers to data associated with a media content item that provides information about or related to the content item.
  • metadata may include a plurality of metadata elements. Each of the plurality of metadata elements may provide unique descriptive information relating to a content item. An association may be created to link a media content item to its related metadata. Therefore, metadata or metadata files may be provided in a database and searched in order to identify and locate a desired media content item. Accordingly, metadata associated with a media content item may facilitate the electronic delivery of the media content item in a digital format.
  • the term "emotive metadata” means metadata related to human behavioral traits.
  • Emotive metadata may relate to emotions displayed or conveyed by the media content and/or may relate to emotions experienced by a media user when playing or viewing the media content.
  • Metadata associated with a media content item may comprise emotive data and other forms of metadata, such as, but not limited to, descriptive metadata, metadata related to the facilitation of a transaction associated with a product or service, and metadata related to the location of a media content item.
  • "Emotive metadata” may also comprise metadata relating to the user including the user's interests, interest drivers, personal values, personality attributes, and any combination thereof.
  • the term "media segment” means any portion of a media content item for which associated metadata exists.
  • media segments include a video clip within a video file such as a scene from a movie, an audio clip within an audio file such as a chorus of a song, or a portion of an electronic book such as a chapter or a paragraph.
  • Starting and ending points of a media segment may be defined by a user or pre-defined and referenced by a user.
  • a media segment may comprise any portion of a media content item, including the entirety of the media content item.
  • the term "user” means any person viewing, reading or otherwise experiencing the playback of a media content item or media segment, for any purpose.
  • a user may be an end consumer experiencing the playback of media for purposes of enjoyment, education, or other forms of media consumption.
  • a user may be a person experiencing the playback of media for commercial purposes, such as, but not limited to, analysis of media content for the creation of metadata for use in searching and/or insertion and/or development of relevant advertising.
  • a media segment forms a portion of a media content item such as a digital video or a song.
  • the media segment may be defined by the user by specifying a starting and ending point within the media content item, or alternatively may be identified by selecting a pre-defined segment of a media content item.
  • the resulting media segment, which relates to a "moment" of relevance or interest to a user, is thus defined by metadata describing the starting and ending points of the media segment, and metadata associated with the user selecting the media segment.
  • Defining media segments annotated with user-relevant metadata provides a number of advantages for more semantic and efficient media searching and retrieval, media content generation, and media content delivery.
  • the metadata associated with media segments as defined herein enables dramatically improved media content granularity for suggesting and providing relevant media to users.
  • this enables advertisers to target users with additional media segments having relevant and contextual messaging, and to include transaction metadata in such media segments that enables users to engage immediately in transactions related to products and services.
  • An exemplary yet non-limiting use of media segments defined according to this embodiment of the invention includes the insertion of additional user- relevant media segments, such as advertisements, based on correlations between the media segment metadata and metadata relating to the additional media segments.
  • An additional non-limiting application of media segments defined according to the present embodiment includes the delivery of additional media content, such as additional media segments having metadata correlated with metadata associated with a media segment being played by a user.
  • Figure 1 provides a flow chart illustrating a method of generating metadata relating to a media segment according to the embodiment discussed above, from the perspective of a user.
  • step 100 at least a portion of a media content item is played by a user on a media device.
  • the portion may include the entirety of the media content item. The user thus views, plays, reads or otherwise experiences the media content item.
  • the media device may be any device or system capable of displaying, playing and/or presenting media content.
  • media devices include handheld audio and/or video players such as an iPod®, televisions, smart phones, tablets, kiosks, and a display connected to a computer.
  • media may be locally provided by the media device and/or local systems operatively connected to the media device, and/or media may be remotely provided to the media device either via a server or through a local system or application.
  • FIG. 2 illustrates a non-limiting example of a media device that may be used by the user according to embodiments of the invention.
  • the media device shown generally at 150, comprises a media presentation module 160, an input module 170, and a memory and processor module 180.
  • Modules 160-180 may be housed in a single device 190 such as a handheld media device, or may form a system comprising separate devices such as a computer system comprising a computer processor, a monitor and a keyboard and mouse.
  • the user, while playing the media content item, experiences content of interest or relevancy occurring over a media segment, determines starting and ending points of the media segment in step 105, and provides input defining the starting and ending points in step 110.
  • starting and ending points may take on many forms according to different aspects of the invention.
  • Non-limiting examples of starting and ending points include time stamps relative to a reference time point associated with the media content item, scene changes, cue points, references to dialogue, and frames within a video. Accordingly, starting and ending points may be determined by the user by referencing such exemplary points.
  • the user may reference pre-determined sections of a media content item, such as chapters in an electronic book or scenes in a movie. In such cases, a single reference by a user to a pre-determined section provides the required information to determine the starting and ending points of the chosen media segment.
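The single-reference case above can be sketched as a lookup from a pre-defined section table to starting and ending points; the table contents and function name are hypothetical:

```python
# Pre-defined sections (e.g. scenes in a movie) mapped to start/end points,
# so a single user reference yields both points of the media segment.
SCENES = {
    "scene-1": (0.0, 312.5),
    "scene-2": (312.5, 640.0),
}

def points_from_reference(section_ref):
    """Resolve a pre-determined section reference to (start, end) points."""
    try:
        return SCENES[section_ref]
    except KeyError:
        raise ValueError("unknown section: " + section_ref) from None

start, end = points_from_reference("scene-2")
```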
  • the starting and ending points provided or inferred based on the user input form metadata that is associated with the media segment in step 115. This metadata is recorded in association with the media segment.
  • step 120, which is preferably included in the method, the user provides additional metadata relating to the media segment.
  • metadata provided by the user or associated with the user is recorded in association with the media segment.
  • the metadata provided by the user describing the media segment may comprise multiple types of user-specific metadata.
  • the metadata may comprise a user identifier, such as, but not limited to, a user ID, a user name, a user icon, a user group, and other forms of user identification known to those skilled in the art. Accordingly, a media segment as so defined provides an indication that the media segment was of relevance to a particular user or user group. Such information may be used to provide search results and/or recommendations to related users or user groups.
  • a second user may obtain and play the media segment by searching for media segments having such metadata.
  • this form of media segment generation allows users to search for, or obtain recommendations to, specific and relevant segments within media content items that may serve as launching points for further media content discovery.
  • Metadata relating to the user may additionally or alternatively include metadata describing the user's interests, interest drivers, personal values, personality attributes, and any combination thereof.
  • the user provides metadata relating to a media segment that may comprise metadata describing how or why the media segment is of interest to the user.
  • metadata comprises emotive metadata that is indicative of one or more emotional responses experienced by a user when playing or viewing the media segment, and/or metadata that relates to emotions exhibited by or within the media itself, such as emotions of a character in a movie.
  • Such emotive metadata preferably relates to a vocabulary enabling specific emotive terms to be applied as metadata.
  • the emotive metadata vocabulary may comprise metadata elements relating to the aspirations, interest drivers, personal values, and emotions.
  • a specific emotive metadata vocabulary is provided in Example 1 below.
  • Figure 3 illustrates a specific embodiment of an emotive metadata schema in which emotive metadata may comprise elements relating to self-interest drivers, personality, and emotion.
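A schema of that shape can be sketched as a small dataclass; the vocabulary terms shown are placeholders, not the patent's actual vocabulary (which is given in Example 1):

```python
from dataclasses import dataclass, field

@dataclass
class EmotiveMetadata:
    """Elements grouped under self-interest drivers, personality, and
    emotion, following the grouping of Figure 3. Terms are illustrative."""
    self_interest_drivers: list = field(default_factory=list)  # e.g. "achievement"
    personality: list = field(default_factory=list)            # e.g. "adventurous"
    emotion: list = field(default_factory=list)                # e.g. "joy"

    def elements(self):
        """Flatten all elements for matching against another segment."""
        return (set(self.self_interest_drivers)
                | set(self.personality)
                | set(self.emotion))

a = EmotiveMetadata(emotion=["joy"], personality=["adventurous"])
b = EmotiveMetadata(emotion=["joy", "surprise"])
common = a.elements() & b.elements()  # at least one common element -> a match
```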
  • a user may be a media owner or other party that generates metadata relating to media segments for commercial purposes, such as to aid in the discovery of media segments within selected media content items that are available for purchase.
  • a user may generate metadata relating to a media segment that is an advertisement or relates to a product or service to aid in the discovery of such a media segment by end users.
  • it may be desirable for a media owner to engage, for example through employment or other incentives, users to generate emotive metadata identifying media segments to increase the value and discovery potential of the media content item.
  • the present invention further contemplates a method of generating metadata by providing media content to a user and receiving input from the user for the purpose of generating user-related metadata defining a media segment.
  • Figure 4 provides a flow chart illustrating a method of generating and recording user-specific metadata relating to a media segment selected by a user.
  • step 200 at least a portion of a media content item is provided to a media device for playback.
  • the media content may reside on a local resource such as a local server or database, or may reside on a remote database.
  • a preferred embodiment is shown in Figure 5, where a media device 150 connects through an internal or external network 300 to communicate with a server 310 running an application program interface (API) 320.
  • the API 320 serves the media device 150 with media content items residing in media content database 340.
  • step 205 input is received from the user defining the starting and ending points of the media segment.
  • this input is received by API 320.
  • step 210 metadata provided by the user relating to the media segment is received.
  • step 215 metadata corresponding to the starting and ending positions of the media segment is associated with the media segment and stored either in a metadata database (shown at 330 in Figure 5) or appended to the media content item residing in media content database 340.
  • step 220 the user-specific metadata relating to the media segment is also stored either in metadata database 330 or appended to the media content item residing in media content database 340.
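Steps 215-220 can be sketched as an in-memory stand-in for metadata database 330 of Figure 5; the class and method names are assumptions for illustration:

```python
class MetadataStore:
    """In-memory stand-in for metadata database 330 of Figure 5."""
    def __init__(self):
        self._by_content = {}

    def add(self, content_id, start, end, user_metadata):
        """Store positional metadata (step 215) and user-specific metadata
        (step 220) in association with the media content item."""
        record = {"start": start, "end": end, "user": dict(user_metadata)}
        self._by_content.setdefault(content_id, []).append(record)
        return record

    def segments_for(self, content_id):
        """Return all segment metadata recorded for a media content item."""
        return list(self._by_content.get(content_id, []))

db = MetadataStore()
db.add("movie-42", 120.0, 135.0, {"user_id": "u1", "emotion": "joy"})
```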
  • Figure 5 provides a specific and non-limiting embodiment showing a system for practicing an embodiment of the present invention, and those skilled in the art will readily appreciate that other variants of the system architecture may be possible, and these variants are encompassed within the present invention. While the preceding embodiments describe methods for generating metadata relating to media segments based on user input, metadata relating to user-selected media segments may be additionally or alternatively obtained through automated metadata extraction methods. For example, automated processing using techniques such as text, audio and video analysis may be employed to extract metadata from media segments.
  • media segments may be defined by a user without appending user metadata, yet still represent a media segment of interest to a user.
  • a media segment initially having no metadata associated therewith, may be appended with metadata, such as emotive metadata, using automated metadata extraction methods as discussed below.
  • the media segment of interest to the user may be selected by the user, and the metadata corresponding to the media segment selected by the user may be automatically determined and associated with the selected media segment.
  • a media device may be employed by a user to define a media segment based on sampling of a portion of a media content item being played on a separate media presentation device, and cross-referenced with the full media content item based on automated metadata analysis.
  • a media segment may be defined based on a user viewing a video and using a media device capable of recording an audio portion of the video being viewed.
  • the user may define a media segment within the audio portion recorded by the media device, and subsequently the media segment may be analyzed by performing metadata extraction of the recorded audio portion of the media segment and correlating the metadata with the full media content item for creation of the full media segment with or without associated metadata.
  • generating metadata associated with a media segment as defined herein may involve the incorporation of metadata from multiple sources, such as through an iterative learning process in which correlations are established between the audiovisual properties of film clips, information extracted from collateral texts, and user-provided metadata relating to a media segment.
  • the media content item is video content relating to films, which enables the extraction and discovery of semantically richer metadata due to conventions followed by film-makers that aids in the construction of emotive metadata vocabularies. For example, conventions are followed in the way that videos tell stories, albeit to varying degrees, including the use of editing techniques and formulaic plot structures. Knowledge of such 'film grammar' can be exploited in the development of multimedia information extraction technologies, particularly with regard to metadata vocabularies, for video content relating to film.
  • a user may share media segments with other users, and additional metadata may be collected relating to the identity, type, and interests of users with whom the media segments are shared. For example, metadata describing the type of a user (for example, partners, parents, close friends, and work colleagues) with whom a media segment is shared may be recorded in association with a media segment. Additionally, a user's viewing patterns may be monitored to gather metadata relating to preferred sequences of media segments, and this metadata may be recorded to aid in the suggestion and recommendation of additional media segments.
  • additional metadata relating to a media segment may be obtained by searching online resources, for example, websites such as imdb.com to obtain descriptive metadata such as the Title, Year, Genre, Director, Actors, etc.
  • Metadata may be obtained by matching popular results for a video clip from YouTube® against a media segment, and by leveraging comments made about a media segment by YouTube® users for metadata annotation.
  • lists of famous film quotes could be used to obtain additional metadata by matching them against time-coded subtitles.
  • Metadata may be extracted from audio sources by using a speech-to-text converter and subsequently employing an algorithm such as a natural language processing algorithm for metadata extraction. For example, retrieving media segments featuring a car becomes a matter of looking for 'car' in the screenplay or audio clip.
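The retrieval step of the 'car' example can be sketched as a keyword search over a time-coded transcript. The sketch assumes the speech-to-text conversion has already produced (start, end, text) triples; the real system would obtain these from a speech-to-text converter:

```python
def find_keyword_segments(transcript, keyword):
    """Given (start, end, text) triples from a speech-to-text converter,
    return the (start, end) points of segments mentioning the keyword."""
    keyword = keyword.lower()
    return [(start, end)
            for start, end, text in transcript
            if keyword in text.lower()]

# Hypothetical time-coded transcript of a video's audio track
transcript = [
    (0.0, 4.0, "A quiet street at dawn"),
    (4.0, 9.0, "The car speeds around the corner"),
]
hits = find_keyword_segments(transcript, "car")
```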
  • emotive metadata associated with a media segment being played by a user is used to provide to the user at least one additional media segment having related emotive metadata.
  • This embodiment enables a new manner in which a user may experience media, where media discovery is provided by emotive metadata linking separate media segments that need not belong to a common parent media content item.
  • Prior art media selection and recommendation methods have focused on providing and recommending media content based on generic descriptive or content-related metadata.
  • the present embodiment provides a completely different media playback and browsing experience for a user by preserving the emotional state or mood associated with the viewing of a media segment.
  • the present and inventive method departs from the "more is better" teachings of the prior art, and instead discloses a focused "less is more" approach to emotive-metadata-based recommendation and delivery.
  • Such embodiments provide a potentially rich media playback experience in which an emotion, mood, or feeling associated with the playback of a first media segment may be at least in part preserved during the playback of an additional and automatically selected media segment. All prior art media selection and recommendation methods known to the inventors fail to deliver such an experience.
  • a preferred method is illustrated generally by the flow chart shown in
  • step 400 emotive metadata associated with a media segment being played by a user on a media device is used to search a collection of media segments for a list of media segments, where each matching media segment in the list has at least one common emotive metadata element.
  • This step provides a list of emotively relevant media segments that may be subsequently provided to the user for playback.
  • step 405 a media segment is selected from the list of matching media segments.
  • the selected matching media segment is provided to the user in step 410 for playback.
  • one or more additional media segments from the list of matching media segments may be provided to the user after providing a first matching media segment.
  • a matching media segment for recommendation, delivery and/or playback may be selected using a wide range of criteria.
  • a matching media segment is selected from the list of matching media segments based on a ranking of relevant emotive metadata matches. For example, a media segment having emotive metadata with the largest number of matches with the emotive metadata associated with the media segment being viewed by the user may be selected. It is to be understood that many other selection rules known to those skilled in the art are within the scope of the present invention.
  • a matching media segment may be selected randomly from the list of media segments.
  • a matching media segment recommended, delivered and/or played according to the present embodiment is a media segment that has not already been played during a current user session. This provision avoids repeated playback of media already recommended and viewed.
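The search and selection steps described above (steps 400 and 405) can be sketched as a simple overlap ranking over emotive metadata elements, skipping segments already played in the current session. The segment identifiers and tag values below are hypothetical placeholders, and the ranking rule is one of the selection criteria the disclosure permits, not the only one.

```python
def recommend(current_tags, collection, played):
    """Rank candidate segments by how many emotive metadata elements they
    share with the segment currently playing, excluding segments already
    played during the current user session."""
    current = set(current_tags)
    scored = []
    for segment_id, tags in collection.items():
        if segment_id in played:
            continue
        overlap = len(current & set(tags))
        if overlap > 0:  # require at least one common emotive element
            scored.append((overlap, segment_id))
    scored.sort(reverse=True)  # largest overlap first
    return [segment_id for _, segment_id in scored]

collection = {
    "clip_a": ["joyful", "nostalgic"],
    "clip_b": ["joyful", "uplifting", "nostalgic"],
    "clip_c": ["outraged"],
}
print(recommend(["joyful", "nostalgic", "uplifting"], collection, played={"clip_a"}))
# → ['clip_b']
```

Random selection from the returned list, rather than taking the top-ranked entry, is an equally valid selection rule under the embodiments described above.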
  • the collection of media segments may be located on a local or remote resource or a combination of the two.
  • media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive.
  • at least a portion of the collection of media segments is stored on a server.
  • the collection of media segments may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330.
  • the searching, selection and delivery of additional media segments to the user may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network.
  • server 310 is connected to media device 150 through the internet.
  • Media segments may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited to, streaming a media segment and uploading a media segment to the media device for playback.
  • additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls.
  • the system may further interface with proprietary and/or third party streaming media applications.
  • the system may be configured as or interfaced with a social media application, wherein users may access, discover and share media segments.
  • the emotive metadata associated with the media segment being played and/or the collection of media segments preferably includes emotive metadata elements that pertain to an emotive response of a user when viewing a given media segment.
  • the emotive metadata may relate to emotions exhibited by or within the media itself, such as emotions of a character in a movie.
  • the emotive metadata elements may belong to an emotive metadata vocabulary.
  • media segments are played back to the user on a media device that comprises a user interface.
  • the user interface enables the user to view, read or otherwise playback media segments according to the method disclosed above.
  • a non-limiting example of a user interface for displaying video media content according to one embodiment of the invention is shown in Figures 7 and 8.
  • the user interface 500 comprises a display area where video media content is played back to a user.
  • An optional progress bar is included at 510 that may further include controls relating to playback and volume.
  • the user interface preferably includes metadata information 520 relating to the identity of the media segment being played, and additional controls 530 relating to additional media playback.
  • the user interface preferably includes a control for requesting the delivery and playback of additional media segments having related emotive metadata according to the embodiment of the invention.
  • controls 530 may be included for replaying a selected media segment, sharing a preferred media segment with another user, and adding user-specific metadata related to the clip (in accordance with previously recited embodiments of the present invention).
  • the user interface may be provided on a wide range of media presentation devices, for example, a touchscreen device and a display device having an input interface such as a mouse and keyboard.
  • metadata associated with a matching media segment additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the matching media segment.
  • transaction metadata may be utilized in many different methods to facilitate a transaction.
  • Non-limiting examples of transaction-based metadata include metadata linking the user to a transaction such as a web page, email address, or IP address.
  • Specific non-limiting transaction based metadata includes metadata that can be displayed in the user interface as a link (for example, to a web page or telephone number where a user may purchase a product or service and/or initiate a call to inquire for more information regarding a product or service).
  • the transaction metadata may alternatively be rendered as a button or other selectable item on the user interface.
  • the matching media segment is a portion of a media content item (such as a movie or song) and the transaction metadata facilitates a transaction by which a user may playback, rent or purchase at least a portion of the media content item (as illustrated in Figure 8).
  • the matching media segment is a portion of a media content item
  • the transaction metadata associated with the media segment allows a user to playback, rent or purchase a media segment that is sequentially related to the media segment currently being viewed, thereby facilitating continued viewing of a media content item.
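One way such transaction metadata could be structured and rendered as a selectable link in the user interface is sketched below. The field names, identifiers, and URL are illustrative assumptions for this sketch, not formats defined by the disclosure.

```python
# Hypothetical transaction metadata record attached to a matching media segment.
transaction_metadata = {
    "segment_id": "clip_42",
    "product": "Full-length feature (rental)",
    "action_label": "Rent the full movie",
    "link": "https://example.com/rent/clip_42",  # web page handling the transaction
}

def render_link(meta):
    """Render transaction metadata as an HTML anchor for display in the UI.

    str.format ignores unused keys, so extra metadata fields are harmless."""
    return '<a href="{link}">{action_label}</a>'.format(**meta)

print(render_link(transaction_metadata))
```

The same record could instead drive a button or other selectable UI item, as noted above; only the rendering step changes.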
  • a method for inserting an additional media segment within a media content item, where the media segment has associated emotive metadata that is related to that of a media segment identified within the media content item. This embodiment enables the insertion of emotively relevant content into a media content item.
  • an emotive media segment having emotive metadata is identified within a media content item.
  • the emotive media segment may be identified based on emotive metadata created using, for example, previously disclosed embodiments of the present invention.
  • the emotive media segment may be obtained by searching within metadata associated with the media content item for a specific emotive metadata element, thereby obtaining an emotive media segment having associated emotive metadata including the specific emotive metadata element.
  • a collection of media segments is searched to obtain a list of matching media segments having metadata in which at least one emotive metadata element is common with the metadata relating to the emotive media segment identified in the previous step.
  • a matching media segment is selected from the list for insertion.
  • the matching media segment may be selected based on a wide range of criteria.
  • the matching media segment may be selected based on a ranking of the matches, random selection, or, in the case of the collection of media segments being advertisements, contractual agreements pertaining to the insertion frequency and/or priority.
  • the selected matching media segment is inserted into the media content item after the identified emotive media segment.
  • an additional media segment having associated emotive metadata may be pre-selected for insertion within a media content item. Accordingly, an emotive media segment after which the additional media segment is to be inserted may be identified by searching for a pre-defined media segment within the media content item that has associated metadata including at least one emotive metadata element common with the emotive metadata associated with the additional media segment.
  • the insertion of the additional media segment may be made prior to playback of the media content item by a user, or dynamically during playback of the media content item by the user.
  • the emotive metadata associated with the emotive media segment is user-specific emotive metadata, enabling the insertion of additional media segments targeting the emotive aspects of the user directly.
  • the media content item and/or additional media segments may be stored within a local or remote resource, or a combination of the two.
  • media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive.
  • in the case of storage on a server, such an embodiment may be illustrated again with reference to Figure 5.
  • the media content item, and its associated media segments may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330. Additional media segments for insertion may be stored in a common or separate resource.
  • the identification of an emotive media segment and/or searching, selection and delivery of additional media segments for insertion may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network.
  • server 310 is connected to media device 150 through the internet.
  • Media content may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited to, streaming a media segment and uploading a media segment to the media device for playback.
  • additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls.
  • the system may further interface with proprietary and/or third party streaming media applications.
  • the media segment inserted within the content item is an advertisement.
  • the insertion of an emotively relevant advertising media segment after a related media segment in a media content item assists in preserving the emotion, or the moment, experienced by a viewer, and thereby delivers a far less disruptive user viewing experience.
  • the insertion of emotively relevant media segments, particularly those with additional transaction metadata facilitates improved advertising conversion rates.
  • the emotive metadata either in the media segment identified within the media content item, or in the additional media segment to be inserted is indicative of an anticipated heightened emotional state of a user.
  • the metadata associated with the additional media segment to be inserted preferably additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the inserted media segment.
  • transaction-based metadata may be utilized in many different methods to facilitate a transaction.
  • non-limiting examples of transaction-based metadata include metadata linking the user to a transaction with a link such as a web page, email address, telephone number or IP address.
  • Specific non-limiting transaction based metadata includes metadata that can be displayed in a user interface as a link to a web page where a user may purchase a product or service and/or inquire for more information regarding a product or service.
  • the transaction metadata may alternatively be rendered as a button or other selectable item on the user interface.
  • relevancy tags are grouped into three categories: value profiles, interest drivers, and emotions.
  • value profile tags include:
  • Examples of positive emotion tags include:
  • Examples of negative emotion tags include: • Outraged
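A controlled vocabulary grouped into the three categories named above might be represented as follows. The tag values shown are illustrative placeholders only, not the disclosure's full vocabulary.

```python
# Hypothetical emotive metadata vocabulary grouped into the three
# relevancy-tag categories: value profiles, interest drivers, and emotions.
VOCABULARY = {
    "value_profiles": ["family-oriented", "adventurous"],
    "interest_drivers": ["sports", "travel"],
    "emotions": {
        "positive": ["joyful", "inspired", "amused"],
        "negative": ["outraged", "fearful", "sad"],
    },
}

def is_valid_tag(tag):
    """Check whether a tag belongs to the controlled vocabulary."""
    for group in VOCABULARY.values():
        # Flatten nested sub-groups (e.g. positive/negative emotions).
        tags = sum(group.values(), []) if isinstance(group, dict) else group
        if tag in tags:
            return True
    return False

print(is_valid_tag("outraged"))  # → True
```

Validating annotations against such a vocabulary keeps user-contributed emotive metadata consistent enough to support the matching and ranking steps described earlier.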


Abstract

Systems and methods for the identification, recommendation and delivery of media content using media segments with related metadata. In one aspect, a method is provided for generating user-related metadata for media segments selected by the user, the metadata preferably being emotive in nature. Also disclosed are methods for presenting additional media segments to users, and for inserting such segments into media content, on the basis of related emotive metadata.
PCT/CA2010/000010 2009-01-07 2010-01-07 Identification, recommandation et fourniture d'un contenu média approprié WO2010078650A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US20442609P 2009-01-07 2009-01-07
US61/204,426 2009-01-07

Publications (1)

Publication Number Publication Date
WO2010078650A1 true WO2010078650A1 (fr) 2010-07-15

Family

ID=42316161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2010/000010 WO2010078650A1 (fr) 2009-01-07 2010-01-07 Identification, recommandation et fourniture d'un contenu média approprié

Country Status (1)

Country Link
WO (1) WO2010078650A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013138038A1 (fr) * 2012-03-14 2013-09-19 General Instrument Corporation Mise en correspondance de sentiments dans un élément de contenu multimédia
WO2014159783A3 (fr) * 2013-03-14 2015-01-29 General Instrument Corporation Insertion de publicité
US8965915B2 (en) 2013-03-17 2015-02-24 Alation, Inc. Assisted query formation, validation, and result previewing in a database having a complex schema
US8995822B2 (en) 2012-03-14 2015-03-31 General Instrument Corporation Sentiment mapping in a media content item
EP2904561A4 (fr) * 2012-10-01 2016-05-25 Google Inc Système et procédé permettant d'optimiser des vidéos
US10812853B2 2018-10-23 2020-10-20 AT&T Intellectual Property I, L.P. User classification using a remote control detail record
CN115499704A (zh) * 2022-08-22 2022-12-20 北京奇艺世纪科技有限公司 一种视频推荐方法、装置、可读存储介质及电子设备

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002013065A1 (fr) * 2000-08-03 2002-02-14 Epstein Bruce A Collaboration d'informations et evalutation de fiabilite
US20080059989A1 (en) * 2001-01-29 2008-03-06 O'connor Dan Methods and systems for providing media assets over a network


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8995822B2 (en) 2012-03-14 2015-03-31 General Instrument Corporation Sentiment mapping in a media content item
EP2826254A1 (fr) * 2012-03-14 2015-01-21 General Instrument Corporation Mise en correspondance de sentiments dans un élément de contenu multimédia
WO2013138038A1 (fr) * 2012-03-14 2013-09-19 General Instrument Corporation Mise en correspondance de sentiments dans un élément de contenu multimédia
US9106979B2 (en) 2012-03-14 2015-08-11 Arris Technology, Inc. Sentiment mapping in a media content item
EP2904561A4 (fr) * 2012-10-01 2016-05-25 Google Inc Système et procédé permettant d'optimiser des vidéos
US10194096B2 (en) 2012-10-01 2019-01-29 Google Llc System and method for optimizing videos using optimization rules
US11930241B2 (en) 2012-10-01 2024-03-12 Google Llc System and method for optimizing videos
WO2014159783A3 (fr) * 2013-03-14 2015-01-29 General Instrument Corporation Insertion de publicité
EP2954672A4 (fr) * 2013-03-14 2016-10-19 Arris Entpr Inc Insertion de publicité
US9497507B2 (en) 2013-03-14 2016-11-15 Arris Enterprises, Inc. Advertisement insertion
KR101800098B1 (ko) * 2013-03-14 2017-11-21 제너럴 인스트루먼트 코포레이션 광고 삽입
US8996559B2 (en) 2013-03-17 2015-03-31 Alation, Inc. Assisted query formation, validation, and result previewing in a database having a complex schema
US8965915B2 (en) 2013-03-17 2015-02-24 Alation, Inc. Assisted query formation, validation, and result previewing in a database having a complex schema
US9244952B2 (en) 2013-03-17 2016-01-26 Alation, Inc. Editable and searchable markup pages automatically populated through user query monitoring
US10812853B2 2018-10-23 2020-10-20 AT&T Intellectual Property I, L.P. User classification using a remote control detail record
CN115499704A (zh) * 2022-08-22 2022-12-20 北京奇艺世纪科技有限公司 一种视频推荐方法、装置、可读存储介质及电子设备
CN115499704B (zh) * 2022-08-22 2023-12-29 北京奇艺世纪科技有限公司 一种视频推荐方法、装置、可读存储介质及电子设备


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10729071

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10729071

Country of ref document: EP

Kind code of ref document: A1