WO2008033840A2 - Systems and methods for creating, collecting, and using metadata - Google Patents

Systems and methods for creating, collecting, and using metadata

Info

Publication number
WO2008033840A2
Authority
WO
WIPO (PCT)
Prior art keywords
mix
metadata
mixes
video
module
Prior art date
Application number
PCT/US2007/078162
Other languages
English (en)
Other versions
WO2008033840A3 (fr)
Inventor
David A. Dudas
James H. Kaskade
Kenneth W. O'flaherty
Original Assignee
Eyespot Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyespot Corporation filed Critical Eyespot Corporation
Publication of WO2008033840A2 publication Critical patent/WO2008033840A2/fr
Publication of WO2008033840A3 publication Critical patent/WO2008033840A3/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising

Definitions

  • This invention relates in general to the use of computer technology to create, collect and use metadata in an online video editing environment.

Background
  • Figure 1 is a block diagram illustrating a prior art video editing platform including a creation block 199, a consumption block 198, and a media aggregation, storage, manipulation & delivery infrastructure 108.
  • Figure 1 shows with arrows the paths that currently exist for transferring video content from a particular source, including a DSC 100, a DVC 102, a mobile phone 104, and a webcam 106 to a particular destination viewing device including a DVD player 110, a DSTB 112, a DVR 114, a mobile phone 116, a handheld 118, a video iPod 120, or a PC 122.
  • the only destination device that supports content from all input devices is the PC 122.
  • mobile phone 104 can send video content to another mobile phone 116, and a limited number of today's digital camcorders and digital cameras can create video content on DVDs that can then be viewed on the DVD player 110.
  • these paths are fractured and many of the devices in the creation block 199 have no way of interfacing with many of the devices in the consumption block 198. Beyond the highlighted paths through the media aggregation, storage, manipulation & delivery infrastructure 108, no other practical video transfer paths exist today.
  • online video-sharing sites also face the possibility that they may be serving up objectionable content - for example "hate" content or pornographic content that some of their users have posted to the website.
  • YouTube, for example, receives over 30,000 videos per day.
  • a few video-sharing/editing sites also allow users to reuse the video content of other members of the site (with the author's permission). This has led to active communities of users who "harvest" video content from other users and include the content in their own productions.
  • the content may have been provided by other consumers, or by commercial groups, such as record labels or recording artists, who wish to promote their work by offering up samples (audio, video, or both) for "remixing."
  • One advertising approach is to attach an advertisement to a video that is being viewed - in which case there is a need to select an advertisement that is least likely to annoy or offend the viewer (and that may hopefully be of interest or value to the viewer).
  • the metadata collected about the media and its authors can also be used to mitigate the problems discussed earlier relating to content: to analyze and mine the content and authorship of videos in order to detect, flag, and possibly remove potentially objectionable media - media that may infringe copyrights, or may include hate content or pornographic content.
  • a video-sharing/video-editing site may have partner websites that distribute or publish videos created by the site's users.
  • In syndication cases, there may be a need to select an advertisement without any knowledge about the viewer of the video. This can be accomplished by collecting and analyzing metadata relating to each video, and matching advertisements solely based on what is known about each video, e.g., the subject matter (or category), title, descriptive tags or other related keywords of the video and of all of the component parts of the video (including any harvested content and its authors).
  • Various data mining techniques, including predictive modeling and clustering, are well suited to this task.
  • a system and methods are disclosed for creating, collecting, and using metadata associated with digital content in an online video editing environment.
  • a system and related methods create, collect, and analyze media metadata pertaining to the content and its usage, as well as user metadata pertaining to personal characteristics and behavior, in order to perform intelligent filtering of content, personalization services, target marketing, and monetization.
  • A mix includes a number of digital content components and the transformations that are applied to that content. Video content, clips, pictures, sounds, special effects, title frames, etc., make up the mix.
  • the online video editing environment receives portions of the mixes (video, clips, pictures, sounds) via an upload process.
  • a system administrator, content moderator, or similar person can apply metadata tags to a selected number of the components of a mix and the mixes themselves (e.g., the most popular).
  • the mixes also can have metadata tags applied, either voluntarily or as a requirement, by a user who has created the mix.
  • One or more algorithms can apply additional metadata tags to the mixes.
  • the mixes along with the associated metadata tags are then made available to users of the online video editing community and an action (e.g., filtering, advertising, etc.) can be performed based on the metadata tags.
  • the user metadata can include static personal profile information, such as age and gender, as well as dynamic behavioral and social networking information based on the uploading, editing, harvesting, sharing, group membership and video viewing actions of each user.
  • the media metadata can include static information covering clip titles, photograph titles, audio/music titles, mix titles, album titles, blog titles, and associated commentary text, and descriptive tags applied to content.
  • the media metadata can also include dynamic information relating to the frequency with which individual pieces of content are streamed, copied, harvested or shared, and by whom, and to their geographic and demographic distribution.
  • a single mix can contain inherited content metadata and user metadata, as described, originating from any and all content sources used to create the final product, and their authors.
  • the system can use the user and media metadata, including inherited metadata, to automatically filter content, in order to identify selected types of content, including suspected pornographic content, hate content, and copyright infringement.
  • the system also can use the metadata to dynamically personalize and enhance each user's experience on the website, whether connected through personal computer, mobile device, or digital home network, and to make appropriate recommendations to users regarding social networking and regarding viewing, mixing and sharing of content.
  • Metadata can also be used in relation to an upload process, wherein certain tags can cause only a portion, or certain segments, of the digital content to be uploaded.
  • the system also can use the metadata for purposes of target marketing and advertising, including dynamic selection of the most appropriate advertisement to attach to mixes that are being viewed (pre-rolls, post-rolls, dynamically placed interstitials, or overlays), or to place around the video player, or to communicate as website advertisements, email advertisements, or through other means.
  • the system also can use the metadata for monetization purposes, by automating the allocation of revenues associated with advertising and marketing.
  • Underlying methods can include various data mining techniques applied to the metadata, including prediction, classification, clustering, association, and the use of automatically generated video categories and user categories, in combination with business rules, to assist in real-time decision-making.
  • the system can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities and metadata use, which can include, for example, redirection away from an external website to a website local to the system, wherein the present functionality is employed while still maintaining the look and feel of the external website.
  • Figure 1 is a block diagram illustrating a prior art video editing platform.
  • Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
  • Figure 3 is a block diagram illustrating an example online video platform.
  • Figure 4 is a block diagram illustrating an example online video editor.
  • FIG. 5 is a block diagram illustrating an example video preprocessing application.
  • Figure 6A is a schematic representation summarizing an example data structure of the metadata associated with an example mix created and collected using the system.
  • Figure 6B is a flowchart illustrating an exemplary process for creating, collecting, and using metadata.
  • Figure 7 is a flowchart illustrating an exemplary process for filtering mixes using metadata.
  • Figure 8 is a schematic representation summarizing an example of personalization using metadata.
  • Figure 9 is a flowchart illustrating an exemplary process for the implementation of categorization using predictive modeling.
  • Figure 10 is a flowchart illustrating an exemplary process for the implementation of categorization using clustering.
  • Figure 11 is a flowchart illustrating an exemplary process for the implementation of categorization using a combination of predictive modeling and clustering.
  • Certain examples as disclosed herein provide for the use of computer technology to create, collect, and use metadata associated with content.
  • the content includes mixes. Mixes can include one or more of the different types of content, including video clips ("clips"), songs, audio, pictures, special effects, etc.
  • the term mix refers interchangeably to the mix itself or a portion of the mix.
  • a system and related methods create, collect, and analyze media metadata pertaining to the mixes and their usage, as well as user metadata pertaining to personal characteristics and behavior, in order to perform intelligent filtering of the mixes, personalization services, target marketing, and monetization.
  • FIG. 2 is a block diagram illustrating the functional blocks or modules in an example computer architecture.
  • a system 200 includes an online video platform 206, an online video editor 202, a preprocessing application 204, as well as a content creation block 208 and a content consumption block 210.
  • the illustrated blocks 200-210 can be implemented on one or more servers or server systems.
  • the content creation block 208 can include input data from multiple sources that are provided to the online video platform 206, including personal video creation devices 212, personal photo and music repositories 214, and personally selected online video resources 216.
  • video files are uploaded by consumers from their personal video creation devices 212.
  • video is used as shorthand to refer to any one of a plurality of types of data which include not only video files, but also images, audio files, special effects, and other data formats capable of being combined and/or included in a mix.
  • the personal video creation devices 212 can include DSCs, DVCs, cellular phones equipped with video cameras, and webcams.
  • input to the online video platform 206 is obtained from other sources of digital video and non-video content selected by the user.
  • Non-video sources include the personal photo and music repositories 214, which are stored on the user's PC, or on the video server, or on an external server, such as a photo-sharing application service provider ("ASP").
  • Additional video sources include websites that publish shareable video content, such as news organizations or other external video-sharing sites, which are designated as personally selected online video resources 216.
  • the preprocessing application 204 can perform a number of tasks.
  • One such task is segmenting the video content into a number of manageable upload segments, which can be uploaded in parallel, one at a time, or in different orders depending on how they are prioritized or used.
  • a user can interact with one of the upload segments via the online video editor 202, which can cause a re-prioritization of that particular upload segment.
  • the preprocessing application 204 also contains a metadata module 299.
  • the metadata module is shown as being a component of the preprocessing application 204, but it can also reside, in whole or in part, in the online video editor 202 or the online video platform 206.
  • the metadata module 299 performs a variety of functions such as collecting, generating, and/or creating new metadata tags for uploaded content either through algorithms, through inheritance of tags from earlier mixes that were borrowed from to make the current mix, or manually by a system administrator, a person tasked with indexing/moderating content, or a user.
  • the online video editor 202 (also referred to as the Internet-hosted application service) can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. For example, a user can go to any number of external websites providing an enhanced video editing service.
  • the present system 200 can be used to enable the external websites to provide the video editing capabilities while maintaining the look and feel of the external websites. In that respect, the user of one of the external websites may not be aware that they are using the present system 200, other than the fact that they are using functionality provided by it.
  • the system 200 serves the application to the external IP address of the external website and provides the needed function while at the same time running the application in a manner consistent with the graphical user interface ("GUI") that is already implemented at the external IP address.
  • a user of the external website can cause the invocation of a redirection and GUI recreation module 230, which can cause the user to be redirected to one of the servers used in the present system 200 which provides the needed functionality while at the same time recreating the look and feel of the external website.
  • the online video editor 202 also contains a filtering module 298 and a targeting module 297.
  • the filtering module 298 and the targeting module 297 are shown as being components of the online video editor 202, but they can also reside, in whole or in part, in the preprocessing application 204 or the online video platform 206.
  • the filtering module 298 can perform the scoring and application of business rules (defined in more detail later), which can be used to classify mixes and portions of mixes so that actions can be taken. For example, the filtering module 298 can filter out mixes and portions of mixes that are pornographic in nature while allowing non-objectionable mixes to pass through.
  • the targeting module 297 can be used for a variety of purposes, including applying business rules to mixes using the metadata, which, for example, allow it to make recommendations of other mixes, products, or services to a user.
  • the targeting module 297 can use the metadata to place specific advertisements in association with the mix.
  • the advertisements can be placed such that they are of specific potential relevance to the user based on an analysis of the associated metadata.
  • Video content can be output by the online video platform 206 to the content consumption block 210.
  • Content consumption block 210 can be utilized by a user of a variety of possible destination devices, including, but not limited to, mobile devices 218, computers 220, DVRs 222, DSTBs 224, and DVDs 226.
  • the mobile devices 218 can be, for example, cellular phones or PDAs equipped with video display capability.
  • the computers 220 can include PCs, Apples, or other computers or video viewing devices that download content via the PC or Apple, such as handheld devices (e.g., PalmOne), or an Apple video iPod.
  • the DVDs 226 can be used as a media to output video content to a permanent storage location, as part of a fulfillment service.
  • Delivery by the online video platform 206 to the mobile devices 218 can use a variety of methods, including a multimedia messaging service (“MMS”), a wireless application protocol (“WAP”), and instant messaging (“IM”). Delivery by the online video platform 206 to the computers 220 can use a variety of methods, including: email, IM, uniform resource locator ("URL”) addresses, peer-to-peer file distribution (“P2P”), or really simple syndication (“RSS”).
  • the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310.
  • the online video platform 206 can be implemented on one or more servers, for example, Linux servers.
  • the system 200 can leverage open source applications and an open source software development environment.
  • the system 200 has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability.
  • the application suite can be based on AJAX, Flash, or a hybrid ("client") where the online application behaves as if it resides on the user's local computing device, rather than across the Internet on a remote computing device, such as a server.
  • the client architecture allows users to manipulate data and perform "drag and drop" operations, without the need for page refreshes or other interruptions.
  • the opt-in engine module 300 can be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210.
  • the delivery engine module 302 can be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210.
  • the presence engine module 304 can be a server that determines device priority for delivery of mixes to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
  • FIG. 4 is a block diagram illustrating an example online video editor 202.
  • the online video editor 202 includes an interface module 400, input media 402a-h, and a template 404.
  • a digital content aggregation and control module 406 can also be used in conjunction with the online video editor 202 and thumbnails 408 representing the actual video files can be included in the interface module 400.
  • the online video editor 202 can be an Internet-hosted application, which provides the interface module 400 for selecting video and other digital content (e.g., music, voice, photos) and incorporating the selected contents into mixes via the digital content aggregation and control module 406.
  • the digital content aggregation and control module 406 can be software, hardware, and/or firmware that enables the modification of the mix as well as the visual representation of the user's actions in the interface module 400.
  • the thumbnails 408 are used as a way to preview content in parallel with the upload process.
  • the thumbnails 408 can be generated in a number of manners.
  • the thumbnails can be single still frames created from certain sections within the content.
  • the thumbnails 408 can include multiple selections of frames (e.g., a quadrant of four frames).
  • the thumbnails can include an actual sample of the video in seconds (e.g., a 1 minute video could be represented by the first 5 seconds).
  • the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3 second videos for a total of 12 seconds).
  • the thumbnails 408 are a method of representing the media to be uploaded, whereby the process of creating the representation and uploading it takes significantly less time than either uploading the original media or compressing and uploading the original media.
  • the online video editor 202 allows the user to choose (or create) the template 404 for the mix.
  • the template 404 can represent a timeline sequence and structure for insertion of contents into the mix.
  • the template 404 can be presented in a separate window at the bottom of the screen, and the online video editor 202 via the digital content aggregation and control module 406 can allow the user to drag and drop the thumbnails 408 (representing mixes or portions of mixes) in order to insert them into the timeline to create the new mix.
  • the online video editor 202 can also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project can be shown in a separate window.
  • the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user can choose to keep certain mixes private or to only share the mixes with certain users.
  • FIG. 5 is a block diagram illustrating an example preprocessing application.
  • the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, the metadata module 299, the filtering module 298, the targeting module 297, and video segment files 520.
  • the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 includes Microsoft foundation classes ("MFCs").
  • the foundation classes 508 provide an object-oriented programming model over the Windows APIs.
  • in another example, the preprocessing application 204 is written such that the foundation classes 508 are in a format suitable for the operating system module 510 to be the Linux operating system.
  • the video segment upload module 516 can be an application that uses a Model-View-Controller (“MVC") architecture.
  • the MVC architecture separates the data model module 502, the user interface module 506, and the control module 504 into three distinct components.
  • the preprocessing application 204 automatically segments, compresses, and uploads content from the user's PC, regardless of length.
  • the preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks.
  • the uploading method works in parallel with the online video editor 202, allowing the user to begin editing the content immediately, while the content is in the process of being uploaded.
  • the content can be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment.
  • the video source 518 can be a digital video camcorder or other video source device.
  • the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it can automatically segment the content stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
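  • By way of illustration, a minimal sketch of the kind of scene-transition segmentation the video segmentation module 512 might perform is shown below. It assumes frames arrive as grayscale numpy arrays; the mean-difference threshold and the function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def segment_by_scene(frames, threshold=30.0):
    """Split a stream of frames into segments at scene transitions.

    frames: iterable of grayscale frames as 2-D numpy arrays.
    threshold: hypothetical mean-absolute-difference cutoff; a large
    jump between consecutive frames is treated as a scene cut.
    """
    segments, current, prev = [], [], None
    for frame in frames:
        if prev is not None:
            # Mean absolute pixel difference between consecutive frames.
            diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16)).mean()
            if diff > threshold and current:
                segments.append(current)  # close the segment at the cut
                current = []
        current.append(frame)
        prev = frame
    if current:
        segments.append(current)
    return segments
```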
  • content can be transferred to a local computing device whereby an intelligent uploader can be deployed.
  • the content can be sent directly from the video source 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206.
  • This alternative bypasses the need to involve a local computing device or a client computer.
  • this example is most useful when the content is either very short, or highly compressed, or both.
  • when the content is not compressed, or is long, or both, and is therefore relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful.
  • an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy content.
  • the user would be provided with the ability to interact with the user interface module 506.
  • the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the content is segmented and compressed into the video segment files 520.
  • lengthy content can be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments.
  • Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This can occur by each of the upload segments being uploaded in parallel. Alternatively, each of the upload segments can be uploaded in order: the largest segment first, the smallest segment first, or in any other manner.
  • the online video editor 202 is presented to the user. Through a user interface provided by the user interface module 506, thumbnails representing the video segments in the process of being uploaded are made available to the user. The user would proceed to edit the video content via an interaction with the thumbnails. For example, the user can be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited content.
  • the system 200 ( Figure 2) is configured to behave as if all of the content represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the content is still in the process of being uploaded by the video segment upload module 516.
  • the upload process can be changed. For example, if the upload process was uploading all of the compressed upload segments in sequential order and the user dropped an upload segment representing the last sequential portion of the content into the storyline, the upload process can immediately begin to upload the last sequential portion of the content, thereby lowering the priority of the segments that were currently being uploaded prior to the user's editing action.
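  • One way to realize this re-prioritization is a priority queue over upload segments, where a user's editing action bumps the affected segment to the front. The sketch below is a simplified assumption of how the video segment upload module 516 could behave; the class and method names are invented for illustration.

```python
import heapq
import itertools

class SegmentUploader:
    """Hypothetical sketch of a prioritized upload queue.
    Lower priority numbers upload sooner."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker keeps the heap stable

    def enqueue(self, segment_id, priority):
        heapq.heappush(self._heap, (priority, next(self._order), segment_id))

    def bump(self, segment_id):
        """User interacted with this segment's thumbnail: upload it next."""
        self._heap = [e for e in self._heap if e[2] != segment_id]
        heapq.heapify(self._heap)
        self.enqueue(segment_id, priority=0)

    def next_segment(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

# Usage: sequential upload until the user drops segment 100 into the storyline.
uploader = SegmentUploader()
for i in range(1, 101):
    uploader.enqueue(i, priority=i)
uploader.bump(100)            # editing action re-prioritizes the last segment
assert uploader.next_segment() == 100
```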
  • the online video editor 202 (also called the Internet-hosted application service) provides a mixing platform, whereby users can create, edit, track, and share mixes.
  • the online video editor 202 can provide multiple opportunities for users to create mixes not only by mixing (e.g., trimming, adding transitions or effects to video, adding titles, combining, and overlaying) their own uploaded portions of mixes (video clips, photos, audio/music tracks), but also by "remixing" other mixes “harvested” (reused) from other users, including commercial users who offer high-quality audio and video portions of mixes representing the works of a variety of artists.
  • each portion of a mix used in a mix can itself be a mix; and, more importantly, each portion of a mix used in a remix can itself be a mix or even a remix - it can consist of multiple pieces of content from one or more sources. And the remixing activity can be recursive, whereby there can be several or many levels of remixing, resulting in a hierarchy of authors who have contributed content to the resulting mix. (Note: in this document, the term mix will be used regardless of whether the production is a mix or a remix and is intended to refer to both.)
  • the metadata module 299 maintains detailed information regarding the media sources and authors of the component parts of every mix (including clips or mixes, photos, and audio/music tracks). Each mix can contain inherited media metadata and user metadata, originating from all sources used to create the final product and from all levels in the hierarchy of contributing authors.
  • the online video mixing activities of the user base thus provide a rich source of metadata which the system 200 is in a unique position to mine, and which users of the system 200 are in a unique position to track, in order to gain an understanding of the media, as well as the authors of the media (i.e., the subscribers to the Internet-hosted service) and the relationships between them.
  • the metadata module 299 can maintain the metadata in one or more tables in a data warehouse.
  • the metadata module 299 can aggregate all of the metadata into a single data store.
  • Online analytical processing (OLAP) or other data mining tools can then be used by the metadata module 299.
  • the metadata module 299 can also maintain operational data stores for action at defined intervals (e.g., daily).
  • the operational data stores can be used for real-time action, such as personalization, monetization, filtering, etc.
  • any scoring associated with the data warehouse can be input back to the operational data stores for real-time action, such as creating new categorization models, which are used to score users, mixes, or for matching relevant advertisements.
  • the metadata module 299 maintains the metadata in a data structure of a computing device (e.g., in a set of tables in a relational database).
  • Figure 6A is a schematic representation exemplifying one such data structure; in this schematic, each of the types of metadata (“Author” metadata, “Media” metadata, “Title & Tags” metadata, “Community” metadata, “Segmentation” metadata, and “Behavioral” metadata) might be stored and maintained in one or more tables in a relational database, and would be used for real-time actions such as personalization, monetization or filtering.
  • the metadata in these tables might also be aggregated on a regular basis into a single data store for purposes of analysis and data mining (e.g., in order to build up-to-date categorization models that can be fed back to the operational system for scoring users and/or videos), as sketched below.
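  • For concreteness, the following sqlite3 sketch shows one plausible mapping of the metadata types of Figure 6A onto relational tables; the table and column names are illustrative assumptions rather than the patent's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE media_metadata (
    mix_id      INTEGER PRIMARY KEY,
    title       TEXT,
    category    TEXT,            -- assigned manually or by a model
    upload_time TEXT,
    is_public   INTEGER
);
CREATE TABLE title_tags (        -- descriptive tags, one row per tag
    mix_id INTEGER REFERENCES media_metadata(mix_id),
    tag    TEXT
);
CREATE TABLE behavioral_metadata (
    mix_id   INTEGER REFERENCES media_metadata(mix_id),
    views    INTEGER,
    shares   INTEGER,
    harvests INTEGER             -- times reused in other mixes
);
CREATE TABLE mix_components (    -- pointers realizing the Figure 6A hierarchy
    mix_id       INTEGER REFERENCES media_metadata(mix_id),
    component_id INTEGER REFERENCES media_metadata(mix_id)
);
""")
```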
  • the media metadata that metadata module 299 maintains for each mix or portion of the mix can include keywords from the title, descriptive tags (assigned by the author, and possibly also by administrators or other users), and a category.
  • the system 200 can automatically assign the media category (based on the mix's tags and keywords), using data mining techniques, including: categorization, classification, clustering and association.
  • the system administrators maintain a living master list of categories based on observation of the overall content base of the website. Categories are useful in matching up advertisements with mixes that users are viewing, as well as in personalizing user experiences and in identifying potentially objectionable mixes.
  • categories are discussed in detail later.
  • Various types of categories exist, including video categories, group categories, author categories, and viewer categories. Categories are a form of metadata, and are stored in the metadata data structure.
  • Creative activity metadata also can include: names of albums or other types of collections of files that the user creates or elects to include in their account, which can consist of groups of mixes, videos, photos, and music around a topic; titles and keyword content of blogs that they write about mixes they have created; titles and keyword content of posts they write to the public discussion forum; flagging history (prior cases of creating/posting objectionable mixes); and groups that they create or belong to.
  • the metadata module 299 maintains metadata about every group, including the owner, each member, and each mix posted to the group for viewing (along with keywords from accompanying commentary). Each mix in turn has its own associated metadata, including its author, title, descriptive tags, and creation history (as previously described).
  • the metadata module 299 makes this additional user metadata available to users of the system 200 for every author in the creation hierarchy of a mix, providing further insights into the activities and relationships of the user base, and providing a rich source for filtering, personalization, target marketing, and monetization.
  • the metadata module 299 can also mine metadata relating to creative users of the system 200 in combination with additional metadata that is maintained relating to other users who are not authors but are simply viewers of the available mixes.
  • the suite of media and user metadata is used to segment users into viewer categories and author categories.
  • the combination of author categories, viewer categories and video categories are used as inputs into real-time decisions regarding filtering, personalization, target marketing, and monetization.
  • the metadata module 299 maintains metadata relating to: the current mix being viewed; the current viewer (profile and behavior metadata); clips/mixes previously created by the current viewer (if a member); the author of the current mix (media, profile and behavior metadata); inherited media metadata including all video clips, video mixes, audio/music, photos contributing to the current mix; and inherited user metadata including profile and behavior metadata relating to all authors of clips, mixes, audio/music, photos contributing to the current mix (at all levels of remixing).
  • the metadata module 299 maintains metadata for each mix, for each viewer, and for each author in a data structure in the memory of a computing device.
  • the metadata module 299 can maintain the following: a video category, assigned by the application of a predictive or a clustering model in the metadata module 299; a title; tags; music genre, title, artist, and record label; author; a creation date and time; a ranking; public/private status; a number of views (by mix location); a number of views (by viewer location); a number of video ad impressions (by mix location); a number of page ad impressions (by mix location); a number of shares by email; a number of shares by mobile device; a number of shares by publishing destination; a number of times harvested for reuse in other mixes; links to harvested mixes; title, tags and author of each portion of the mix; and title, tags and author of each photo included in the mix (see the sketch below).
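  • A condensed sketch of such a per-mix record, expressed as a Python dataclass covering an illustrative subset of the fields listed above; the field names are assumptions, not the patent's:

```python
from dataclasses import dataclass, field

@dataclass
class MixMetadata:
    """Illustrative subset of the per-mix metadata listed above."""
    video_category: str            # assigned by a predictive/clustering model
    title: str
    tags: list = field(default_factory=list)
    music: dict = field(default_factory=dict)   # genre, title, artist, label
    author: str = ""
    created: str = ""
    ranking: float = 0.0
    is_public: bool = True
    views_by_mix_location: dict = field(default_factory=dict)
    views_by_viewer_location: dict = field(default_factory=dict)
    shares_by_email: int = 0
    shares_by_mobile: int = 0
    times_harvested: int = 0
    harvested_mix_links: list = field(default_factory=list)
    component_info: list = field(default_factory=list)  # (title, tags, author)
```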
  • the metadata module 299 can maintain metadata in a data structure in the memory of a computing device for each viewer category, assigned by a predictive and/or a clustering model in the metadata module 299, which can be based on: a category, title, tags, music genre of the currently viewed mix; categories, titles, tags, music, and genre of all mixes viewed during current session; categories and names of all groups visited during the current session; search terms used during the current session; categories, titles, tags, music, and genre of mixes most frequently viewed in prior sessions; categories and names of groups most frequently visited during prior sessions; search terms most frequently used in prior sessions; the number of logins per month; an average time spent on site per login; an average number of mixes viewed per login; and a clickthrough behavior (on advertisements).
  • the metadata module 299 can maintain in a data structure of a memory of a computing device the following metadata: an author category, assigned by a predictive or a clustering model in the metadata module 299 based on an author's viewer category and supporting metadata; media metadata, which includes titles and tags of media uploaded (videos, photos, music), titles and tags of media harvested (videos, photos, music), and titles and tags of mixes created; community metadata, which includes album titles, album categories (assigned by a predictive or a clustering model), blog titles, keywords from blog entries, names of groups created, keywords from groups created, names of groups belonged to, keywords from groups belonged to, titles of posts to the public discussion forum, keywords from posts to the public discussion forum, names of destinations shared to, and method shared ("send by email", "send to mobile device (cellular phone)", "published", etc.); and behavioral (historical) data, including a number of logins per month, an average time spent on site per login, a total number of clips uploaded, a total number of cellular phone clips
  • Figure 6A is a schematic representation summarizing an example data structure of the metadata associated with an example mix created and collected using the system 200.
  • Figure 6A includes metadata blocks 600, 602, 604 and 606 associated with the mixes A, C, B, and D respectively.
  • Mix A includes two mixes by other authors (mix B and mix C) with associated metadata blocks 604 and 602.
  • Metadata block 600 incorporates by pointers the metadata blocks 602 and 604 associated with the two mixes B and C, which are included in mix A.
  • the content of the metadata blocks of incorporated mixes can be copied in the metadata block of the mix.
  • Mix B is associated with the metadata block 604.
  • Mix B includes a further mix by another author (mix D).
  • Mix D is associated with metadata block 606, which is associated with block 604 via a pointer.
  • Each mix has associated behavioral metadata tags, author metadata tags, community metadata tags, media metadata tags, title metadata tags, and categories 699.
  • Figure 6A summarizes examples of the metadata associated with each mix. Figure 6A does not show metadata relating to a viewer of the mix, which can also be included.
  • the metadata block 600 includes metadata tags, which are associated with mix A and/or inherited (e.g., incorporated by the already existing metadata tags associated with portions of mixes that were incorporated into the mix), including behavioral metadata tags, author metadata tags, community metadata tags, media metadata tags, and title metadata tags.
  • the behavioral metadata tags include tags associated with, for example, views (by location of media, viewer location, etc.), sharing (email, mobile, publishing destination), ranking, harvesting of data, use in mixes, flagging classification, and campaign information.
  • the author metadata tags include tags associated with, for example, demographics, media ownership, media creation behavior, media viewing behavior, media sharing behavior, social network, flagging behavior, and forum statistics.
  • the community metadata tags include tags associated with, for example, blog post & comment key word information, group post & comment key word information, and album classifications.
  • the media metadata tags include tags associated with, for example, time & date of original mix upload, components of the mix, and private/public status.
  • the title metadata tags include tags associated with, for example, title keywords, tag keywords, and tags derived from audio-to-text conversion. A sketch of how inherited tags can be gathered by following the metadata-block pointers appears below.
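  • The following sketch, using assumed names and the block numbers of Figure 6A, shows how tags could be gathered by recursively following the pointers from a mix's metadata block into the blocks of its incorporated mixes; the sample tags are illustrative.

```python
def inherited_tags(block, blocks):
    """Collect tags from a metadata block and, recursively, from the
    blocks of every incorporated mix (pointer-based inheritance)."""
    tags = set(block["tags"])
    for child_id in block["components"]:       # pointers to other blocks
        tags |= inherited_tags(blocks[child_id], blocks)
    return tags

# Figure 6A example: mix A (600) points to B (604) and C (602);
# B (604) points to D (606). Tags here are invented placeholders.
blocks = {
    600: {"tags": {"Kids"},   "components": [604, 602]},
    604: {"tags": {"HipHop"}, "components": [606]},
    602: {"tags": {"Nature"}, "components": []},
    606: {"tags": {"Cats"},   "components": []},
}
print(inherited_tags(blocks[600], blocks))
# -> {'Kids', 'HipHop', 'Nature', 'Cats'}
```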
  • the metadata enables intelligent filtering of mixes or portions of mixes by the filtering module 298.
  • the filtering module 298 can score all newly uploaded portions of mixes for the likelihood of including objectionable content (or other types of predetermined content) and for likelihood of copyright infringement.
  • a predictive model, built from metadata relating to prior cases of flagged portions of mixes and authors (versus non-flagged mixes and authors), accomplishes real-time scoring. After the filtering module 298 scores each new portion of the mix, it applies business rules to the score to determine the appropriate action.
  • Possible actions taken by the filtering module 298 include (but are not limited to): flagging the author and the portion of the mix for later review (during next business day, for example), flagging the author and the portion of the mix for immediate review (triggering email or a phone call to a designated reviewer, for example), and flagging the author and immediately taking the portion of the mix down from the website.
  • the targeting module 297 can automatically assign the categories 699. Thereafter, the filtering module 298 can score and filter the mix.
  • the author, a system administrator, or an automated audio-to-text conversion program can assign the metadata tags.
  • Figure 6B is a flowchart illustrating an exemplary process for creating, collecting, and using metadata.
  • the preprocessing application 204, the online video platform 206, and the online video editor 202 can carry out the process.
  • the preprocessing application 204 receives a number of mixes at step 1200.
  • the preprocessing application 204 receives one or more of the mixes via an upload process as previously described, or the mixes already reside in the system 200 or at a network location accessible to the preprocessing application 204.
  • a system administrator assigns selected metadata tags to a selected amount of the mixes.
  • the selected amount of the mixes can be a predefined percentage of all of the mixes.
  • the selected amount may be, for example, one hundred of the most popular mixes.
  • the system administrator is a trusted source capable of manually handling the classification of the selected amount of mixes in an effective manner, such as applying the appropriate descriptive words to the mixes or selecting appropriate metadata tags. This includes, for example, the system administrator applying a category 699 ( Figure 6A) or any number of types of metadata tags.
  • the system 200 receives user-selected metadata tags for a selected amount of the mixes.
  • the user can input these second metadata tags either optionally or as a required component of the online video editing function.
  • the user will create keywords or select keywords from a pre-existing list or menu that the user feels are applicable to the mix for searching purposes.
  • the system 200 can also require the user to input a minimum number of tags, such as two metadata tags for each mix or portion of a mix that they upload.
  • These metadata tags include, for example, media metadata tags, title metadata tags, and others.
  • the filtering module 298 applies third metadata tags to the mixes.
  • the filtering module 298 uses a predictive modeling algorithm or a cluster modeling algorithm, or both.
  • the results of the algorithm produce new metadata tags, which are used to further classify the mix including for example the category 699 ( Figure 6A).
  • the predictive modeling algorithm is a computer program that embodies a neural net or similar algorithm.
  • the algorithm has inputs, which define the independent variables and a single output ("score") which is the dependent variable.
  • the algorithm is first trained by applying known inputs and the associated output to the model. A number of numerical coefficients are calculated based on the application of the inputs/output. After a series of "training" passes, the algorithm can be applied to a set of unknown inputs, for which it calculates an output score, all based on the history associated with the training data set.
  • the filtering module 298 can take a group of users who actually responded to an online advertisement, together with a similar-sized random group of users who did not respond to the advertisement.
  • the output variable would be "responded to an advertisement" and this would be set to "Yes" (or "1") for those users who did respond, and to "No" (or "0") for those users who did not;
  • the input variables would be all the metadata associated with the user, the video they were watching, the advertisement metadata, etc.
  • the filtering module 298 would use the data as the "training data set" to program the predictive model, which would learn a pattern from the training data representing the input variable combinations that most closely distinguish between responders and non-responders. This pattern would be saved as a model - a reusable program.
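  • A minimal sketch of this training flow, substituting scikit-learn's logistic regression for the neural net named above; the metadata-derived feature columns and values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: one row per user; columns are metadata-derived features
# (invented: age, sessions/month, minutes/session, viewed-category id).
X_train = np.array([
    [16, 20, 35, 3],
    [34,  4, 10, 1],
    [15, 25, 40, 3],
    [45,  2,  5, 0],
])
y_train = np.array([1, 0, 1, 0])   # 1 = responded to the advertisement

model = LogisticRegression().fit(X_train, y_train)   # the "training" passes

# Score an unknown user: probability of responding, per the learned pattern.
score = model.predict_proba(np.array([[17, 22, 30, 3]]))[0, 1]
print(f"response score: {score:.2f}")
```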
  • Clustering models use algorithms to take a set of input variables and assign each set of input variables to one of several output values.
  • the input variables or "attributes" are used to define the population, and the output values are used to define groups or clusters or subsets of the population, where each cluster has similar attributes.
  • the filtering module 298 directs the clustering algorithm to define ten clusters using all the metadata available around users. At the conclusion of the algorithm's data processing, there may be ten distinct user clusters, which define different types of users.
  • a definition of each cluster is obtained by looking at the strength of the coefficients associated with each attribute used to create the clusters.
  • the filtering module 298 may find that one cluster was created based predominantly on three attributes (which ultimately were the most statistically significant) including age, gender, and video category viewed the most - specifically, there was a large group of users between the ages of 14-17, males, watching skateboard videos.
  • the filtering module 298 might define this cluster as "skate youth" that advertisers specifically want to target.
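  • A scaled-down sketch of this clustering step using scikit-learn's KMeans (two clusters over six invented users, rather than the ten clusters described above); the feature columns are assumptions chosen to echo the "skate youth" example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented user-metadata matrix: [age, is_male, skateboard-category views].
users = np.array([
    [15, 1, 120], [16, 1, 95], [14, 1, 150],   # the "skate youth" pattern
    [42, 0,   1], [35, 0,  3], [28, 1,   0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)           # cluster assignment per user
print(kmeans.cluster_centers_)  # dominant attributes define each cluster
```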
  • an "inappropriate content user score" can be increased for that user. Any future mixes which contain content from that user can be scored as "suspect". The more material a mix contains from that user, or from other users with high "inappropriate content user scores", the higher its "suspect" score, raising that video's priority for review by someone.
  • a counter in a database for each user and a flag for each piece of content may all be tied together with some business logic.
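  • A sketch of such business logic, with an assumed review threshold, tying a per-user counter to a per-mix "suspect" score; all names and values are invented for illustration.

```python
from collections import defaultdict

inappropriate_score = defaultdict(int)   # counter per user
SUSPECT_THRESHOLD = 3                    # assumed review cutoff

def flag_user(user_id):
    """Called whenever a piece of the user's content is flagged."""
    inappropriate_score[user_id] += 1

def suspect_score(component_authors):
    """A mix inherits suspicion from every author whose content it uses."""
    score = sum(inappropriate_score[a] for a in component_authors)
    return score, score >= SUSPECT_THRESHOLD   # (score, needs human review)

flag_user("user42")
flag_user("user42")
print(suspect_score(["user42", "user7"]))   # (2, False): below the cutoff
```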
  • the online video editing environment makes the mixes available to the users. For example, a user can create a new mix that includes an existing mix. The new mix automatically includes all of the metadata associated with the new and existing mixes. Then, the targeting module 297 or the filtering module 298 performs an action at step 1210. The targeting module 297 or the filtering module 298 bases the action, at least in part, on the metadata tags. The action can include associating a targeted advertisement with the mix, making a suggestion based on the nature of the mix, filtering out an inappropriate mix, etc.
  • FIG. 7 is a flowchart illustrating an exemplary process for filtering mixes using metadata.
  • the filtering module 298 shown in Figure 2 carries out this process.
  • the filtering module 298 determines at step 700 whether the video segment upload module 516 is uploading a new mix or a portion of a mix. Step 700 repeats until the upload is complete.
  • the newly uploaded mix is scored at step 702. Scoring assigns a number (or "score") indicating the probability of the uploaded mix including objectionable material; the higher the score, the higher the likelihood of objectionable material. Scoring can take the form of any number of scoring algorithms that are based in part on the metadata that is available, which is associated with the mix. For example, the filtering module 298 can score a category known to contain pornographic or otherwise objectionable mixes highly whereas it can score a category known to contain children's or otherwise non-objectionable mixes lower. Scoring will be defined in more detail later.
  • business rules are applied to the scored, uploaded mix at step 704.
  • the business rules can be input manually or dynamically.
  • the business rules can be accessible to the filtering module 298 for application to the mix.
  • the business rules in one aspect can include a list of commands in the form of If A, then B. Business rules will be defined in more detail later.
  • the filtering module 298 determines an appropriate action to take. In the example of Figure 7 three actions are specified although others are possible. If a first action is required at step 706, the filtering module 298 flags the author and the mix for later review at step 708. This can occur when the scoring and the application of the business rules indicate that the mix should be reviewed later but is most likely not objectionable or does not need to be immediately removed.
  • If a second action is required, the filtering module 298 flags the author and the mix for immediate review at step 712. This can occur when the mix is intermediate in nature (i.e., the system 200 does not score the mix as outright objectionable, but the score indicates it might be objectionable and should be reviewed as soon as possible). If, on the other hand, a third action is required at step 714, the filtering module 298 flags the author and the mix is immediately taken down from the website at step 716. This can occur when the mix is determined to be most likely objectionable and should not be stored by the system 200 even for a short period of time.
  • the filtering module 298 uses metadata factors to score the mix to screen for objectionable or copyright infringing content, where the metadata factors include (but are not limited to) the following: categories, keywords, titles, tags of the current mix; categories of prior mixes created by the author, plus their keywords, titles, and tags; categories of the current author; flagging behavior of the current author (for prior mixes); (if a mix) categories, keywords, titles, and tags of components used in the current mix (including clips, mixes, photos, and audio/music tracks); (if a mix) categories, keywords, titles, and tags associated with authors of components that were used in the current mix; (if a mix) flagging behavior of authors of components that were used in the current mix; and (if a mix) categories, keywords, titles, and tags associated with other mixes/remixes that use any of the same components that are used in the current mix.
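  • A sketch of the score-then-rules flow of Figure 7, with invented weights and thresholds standing in for the trained model and the business rules; the factor names echo the list above.

```python
# Invented weights over a few of the metadata factors listed above.
WEIGHTS = {
    "category_risk": 0.5,           # risk attached to the mix's category
    "author_flag_history": 0.3,     # prior flaggings of the current author
    "component_flag_history": 0.2,  # flaggings of component authors
}

def objectionable_score(factors):
    """Weighted score from normalized metadata factors in [0, 1]."""
    return sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

def apply_business_rules(score):
    """'If A, then B' rules mapping a score to one of the three actions."""
    if score >= 0.8:
        return "flag author; take mix down immediately"
    if score >= 0.5:
        return "flag author and mix for immediate review"
    if score >= 0.2:
        return "flag author and mix for later review"
    return "accept"

print(apply_business_rules(objectionable_score(
    {"category_risk": 0.9, "author_flag_history": 0.8})))
# -> "flag author and mix for immediate review" (0.45 + 0.24 = 0.69)
```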
  • the metadata created and maintained by the metadata module 299 provides insights into the interests, the viewing and mixing history, and the social networking activities of its users.
  • the targeting module 297 can use these insights to dynamically personalize and enhance each user's experience on the website, whether connected through a personal computer, a mobile device, or a digital home network.
  • the targeting module 297 can apply a variety of data mining techniques to the metadata in order to make appropriate recommendations to users.
  • One scheme uses predictive and clustering models, which the targeting module 297 applies to mixes, viewers, authors and groups in order to classify them into the categories 699.
  • the targeting module 297 uses the categories 699 to recommend similar mixes to view, or groups to join having similar categorical content.
  • the targeting module 297 uses clustering models, which it applies to viewers, authors, and groups in order to identify similar interests and make recommendations, such as to view or join a particular group (this is enhanced by the fact that groups already indicate clusters of users with similar interests).
  • the targeting module 297 uses association models, which it applies to mix hierarchies, to sets of titles of mixes viewed in a single session, and to sets of names of groups visited in a single session, in order to identify groupings of similar mixes or groups, and to make mixing, viewing or joining recommendations.
  • the following are examples of personalization based on usage of the metadata that the targeting module 297 maintains:
  • Example 1: Group membership is a type of social networking that provides a natural form of clustering/user-based collaborative filtering, without the need for modeling to identify the clusters: by being members of a group, members have a high likelihood of having similar interests.
  • Example 2: Mix hierarchy provides a looser, unconscious type of social networking similar to item-based collaborative filtering, where content items (clips or mixes) that are used together indicate potential common interests.
  • Example 3: A clustering model (user-based collaborative filtering) applied to group memberships identifies that members of group XXX tend also to be members of groups YYY and ZZZ. Action: recommend to members of group XXX who are not already members of groups YYY and ZZZ that they consider joining them.
  • Example 4: Association rules (item-based collaborative filtering) induced from sets of hierarchies of mixes (category and title) that were used in making mixes identify clips/mixes that are frequently used together (analogous to market baskets of items purchased together). Action: when user A harvests clip X to use in a mix, recommend looking at clips Y and Z that are frequently used together with clip X in other mix hierarchies.
  • Example 5: Association rules (item-based collaborative filtering) induced from sets of titles of mixes viewed within a session identify mixes that are frequently viewed together (analogous to market baskets). Action: when user A views mix V, recommend also viewing mixes U and W that are frequently viewed together with mix V.
  • Example 6: Association rules (item-based collaborative filtering) induced from sets of names of groups visited within a session identify groups that are frequently visited together (analogous to market baskets). Action: when user B visits group G, recommend also visiting groups F and H, which are frequently visited together with group G. (A sketch of this market-basket approach follows these examples.)
  • Example 7: Predictive and clustering models induced from metadata relating to prior mixes are used to classify new mixes into a video category (mixes of the same category have similar features, a form of item-based collaborative filtering); video categories are then used to make viewing recommendations to viewers. Action: when viewer F views mix V of category CCC, recommend also viewing newly created mixes S and T of the same category.
  • Example 8: Predictive and clustering models induced from metadata relating to mixes posted to groups are used to classify new groups into a group category, where the group category is drawn from the same set of categories defined for mixes; group categories are then used to make recommendations regarding viewing or joining groups. Action: if user A makes a mix of category CCC, or if the mix includes a clip or mix of category CCC, recommend that user A visit group XXX, which is also of category CCC (i.e., group XXX features mixes of category CCC).
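  • As an illustration of Examples 5 and 6, the following is a minimal Python sketch of the market-basket approach, using hypothetical session data: pairwise co-occurrence counts over per-session sets of mix titles stand in for full association-rule induction, and titles that frequently co-occur with the currently viewed mix are recommended.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical session logs: each set is the "market basket" of mix
    # titles viewed together in one session (as in Example 5).
    sessions = [
        {"LittleJoey", "CatTrick", "BeachSunset"},
        {"LittleJoey", "CatTrick"},
        {"BeachSunset", "HulaDance"},
        {"LittleJoey", "CatTrick", "HulaDance"},
    ]

    # Count how often each pair of titles appears in the same session.
    pair_counts = defaultdict(int)
    for basket in sessions:
        for a, b in combinations(sorted(basket), 2):
            pair_counts[(a, b)] += 1

    def recommend(title, min_support=2):
        """Return titles frequently viewed together with `title`."""
        recs = [(b if a == title else a, n)
                for (a, b), n in pair_counts.items()
                if title in (a, b) and n >= min_support]
        return [t for t, _ in sorted(recs, key=lambda r: -r[1])]

    print(recommend("LittleJoey"))  # ['CatTrick'] with this sample data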
  • Figure 8 is a schematic representation summarizing an example of personalization using metadata, together with additional examples of personalization actions that the targeting module 297 takes based on the available metadata. Figure 8 includes clips 806 and 808 and mixes 802, 804, and 810.
  • user A has uploaded the mix 802 via a cellular phone multi-media message service ("MMS").
  • User A has made a mix that includes a mix 804 from user B, who in turn used clips 806 and 808 from users C and D.
  • User B's mix 804 is of the category "YoungKids", and has an audio track with music by the Black Eyed Peas (music genre HipHop), which user A retains on her mix 802.
  • User C's clip 806 is of the category "Nature", and user D's clip 808 is of the category "Cats".
  • User A gives her new mix 802 the title "LittleJoey" and the tags "Kids", "Preschool" and "Playground". The mix 802 also inherits various tags from its harvested clips, including "Swingset", "HipHop", "BossaNova", "SergioMendes", "Beach", "Hawaii", "Sunset", "Cat", and "CatTrick".
  • the targeting module 297 automatically classifies the new mix 802 into the category "YoungKids".
  • User A shares her new mix 802 with user E, who has made mixes 810 of the categories "Nature", "Cats", and "Cartoons" and who publishes mixes 810 to a website 812, such as MySpace.com.
  • Users B and D are members of the Group ZZZ, and C frequently views mixes from Group ZZZ.
  • User D publishes frequently to a blog site 814, such as Blogger.com.
  • the targeting module 297 can personalize user A's experience when she visits a website in the online video platform 202; the website can be part of the online video platform 202, or it can be the result of a redirection from an external website whose look and feel is recreated via the redirection and GUI recreation module 230.
  • the targeting module 297 can make a number of recommendations 899 to user A; examples of these recommendations are shown in Figure 8.
  • the insights into the interests, the viewing and mixing history, and the social networking activities of the user base, revealed by the metadata that the metadata module 299 creates, collects, and maintains, are also useful for purposes of target marketing and advertising.
  • Multiple forms of advertising are supported, including dynamic selection of the most appropriate advertisement to attach to mixes that are being viewed. Advertisements can be attached in various ways, including as pre-rolls (inserted at the start); post-rolls (inserted at the end); dynamically placed interstitials (inserted between mixes); overlays (expandable boxes inserted over mixes); advertisements placed around the video player, on the video player page; other website advertisements; email advertisements; and any other advertising means associated with the mixes.
  • the targeting module 297 can also perform several forms of intelligent target marketing and targeted advertising to user A, based on the available metadata.
  • Target marketing includes the following example. Since user A uploaded mixes from her mobile device (cellular phone) using MMS, the targeting module 297 can suggest to user A that she consider purchasing an application that she can download to her phone; the application makes it easier to upload cellular phone mixes and offers other useful features relating to searching, viewing, and sharing mixes on the cellular phone. The targeting module 297 can also promote to her other products and services related to video-enabled cellular phones, or to cellular phones for young children.
  • Targeted advertising includes the following example.
  • the targeting module 297 can also target advertisements to user A that relate to: children's products, toy stores (via user A's metadata); cat products, pet stores (via metadata about user D, from the mix tree); nature lovers (e.g., travel and organic/environmental/ergonomic products) - via metadata about user C (from the mix tree), and user E (from mix sharing); music of genres "HipHop” and "BossaNova” (via metadata about user B, from the mix tree); and music by Black Eyed Peas and by Sergio Mendes (via metadata about user B, from the mix tree).
  • Another example of targeted advertising arises when a viewer selects a mix to view. If the selected mix is less than 10 seconds in length, the targeting module 297 skips the advertising; otherwise, the targeting module 297 retrieves the category of the selected mix. If the video category is "undefined," the targeting module 297 assigns the advertising randomly. If the video category is not "undefined," the targeting module 297 retrieves a rotating list of advertisements requesting this video category as top priority. If the targeting module 297 has already served the advertisement at the top of the rotating list to this viewer on any of the viewer's last several viewings in this session (for example, the last 3), the targeting module 297 can skip this advertisement in favor of the next advertisement in the list, until the targeting module 297 has selected 3 advertisements (or until the list is exhausted).
  • the targeting module 297 can proceed next to a rotating list, representing advertisements requesting this video category as the next highest priority, and the process repeats until the targeting module 297 has selected several, for example 3, advertisements (or until all lists are exhausted).
  • the targeting module 297 can then score each advertisement based on the priority of the matching video category requested for this advertisement (High, Medium or Low), plus additional optional criteria (if available), which include matching the viewer's demographic group against the priority of demographic groups requested for this advertisement, and matching the mix's tags, if any, against the priority of tags requested with this advertisement. A sketch of this selection and scoring flow follows.
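  • The following is a minimal sketch of the rotating-list selection and scoring just described; the data shapes (the Ad record and the rotating lists keyed by category and priority) are assumptions for illustration, not the system's actual structures.

    import random
    from dataclasses import dataclass

    @dataclass
    class Ad:
        ad_id: str
        category_priority: str  # "High", "Medium" or "Low" for this category

    PRIORITY_SCORE = {"High": 3, "Medium": 2, "Low": 1}

    def select_ads(mix_length_s, category, rotating_lists, served, all_ads,
                   max_ads=3):
        """Pick up to max_ads candidate ads for a mix, then score them."""
        if mix_length_s < 10:
            return []                       # short mix: skip advertising
        if category == "undefined":
            return random.sample(all_ads, min(max_ads, len(all_ads)))
        selected = []
        # Walk the rotating lists from highest to lowest requested priority,
        # skipping ads already served to this viewer in this session.
        for priority in ("High", "Medium", "Low"):
            for ad in rotating_lists.get((category, priority), []):
                if ad.ad_id not in served:
                    selected.append(ad)
                if len(selected) == max_ads:
                    break
            if len(selected) == max_ads:
                break
        # Score by the priority the advertiser assigned to this category.
        return sorted(selected,
                      key=lambda a: -PRIORITY_SCORE[a.category_priority])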
  • Targeting based on metadata relating to user A is relatively straightforward; other targeting is more subtle, exploiting the explicit (via sharing) and implicit (via the mix tree) social relationships between users that the targeting module 297 uncovers from the maintained metadata.
  • when user A shares her mix 802 with user E, it is assumed via business rules that there is a likelihood that user A also shares user E's interest in Nature (and, by implication, in products associated with nature lovers).
  • the likelihood that user A is interested in Nature is reinforced by the fact that user A indirectly harvests a mix from user C that is also about Nature.
  • similarly, because user A keeps an audio track harvested from user B's mix 804, it can be inferred that user A shares user B's musical interests.
  • Video categories and video tags are especially strong indicators of content that can be used to infer shared interests. Video tags describing the content of the mix are supplied by the creator (or can optionally be added by system administrators or by other users).
  • One aspect of the targeting module 297 and the filtering module 298 applies a variety of data mining techniques to the media and user metadata in order to perform intelligent content filtering, personalization, target marketing/advertising and monetization. Real-time decisions can be made automatically based on a combination of business rules and data mining outputs.
  • Examples of applied data mining techniques include: predictive modeling, which can be applied to media and author metadata in order to score new media for likelihood of including objectionable content; predictive modeling and clustering, which can be applied to mixes, groups, viewers and authors in order to identify target marketing opportunities with viewers and authors, and to select the most appropriate advertisement to attach to a mix that is being viewed; predictive modeling and clustering, which can be applied to mixes, groups, viewers and authors in order to segment them into categories, which are used for multiple purposes; predictive modeling and clustering, which can be applied to viewers, authors and groups in order to identify similar interests and make recommendations, such as to view or join a particular group (this is sometimes referred to as "user-based collaborative filtering"); and association models, which can be applied to mix hierarchies, to sets of titles of mixes viewed in a single session, and to sets of names of groups visited in a single session, in order to identify groupings of similar mixes or groups, and to make mixing, viewing or joining recommendations (this is sometimes referred to as "item-based collaborative filtering").
  • Examples of business rules that can be applied according to one aspect include: do not serve the same advertisement to the same viewer within the same session; do not make more than three recommendations to a user within one session; and distribute revenue for advertisements based on a revenue-sharing formula.
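  • A short sketch of how such per-session business rules might be enforced (the session-state fields are hypothetical):

    def can_serve_ad(session, ad_id):
        # Rule: do not serve the same advertisement twice in one session.
        return ad_id not in session["ads_served"]

    def can_recommend(session, limit=3):
        # Rule: no more than three recommendations per session.
        return len(session["recommendations_made"]) < limit

    session = {"ads_served": {"ad-42"}, "recommendations_made": ["mix-7"]}
    assert not can_serve_ad(session, "ad-42")
    assert can_recommend(session)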
  • the system 200 can use categorization for a variety of purposes.
  • the targeting module 297 uses a combination of predictive modeling and clustering to segment mixes, groups, viewers and authors into categories. Mixes and groups can share the same set of categories, based on tags, titles and other keywords associated with mixes, whereas viewer categories and author categories can be based on these plus other metadata elements relating to user behavior.
  • Categorization can be particularly valuable in supporting fast, real-time decisions.
  • the targeting module 297 can perform categorization dynamically in real time, such as when determining the category of a newly uploaded mix or portion of a mix. In other cases, the targeting module 297 can use categories assigned previously through a categorization process, such as when selecting in real time the most appropriate advertisement to attach to a mix that a user has requested for viewing.
  • the targeting module 297 can take into account, in the selection of targeted advertisements and other marketing, one or more categories in various ways, such as: viewer category alone (e.g., if the viewer is not also an author of any mixes); video category alone (e.g., for syndicated mixes); author category alone (e.g., if an author is creating a mix); viewer and author categories (e.g., if an author is viewing another author's mix); and mix, viewer and author categories together.
  • the following types of data available to the targeting module 297 can be used in selecting the most appropriate advertisement to serve with a mix that a user has selected for viewing: the video category of the currently selected mix (plus optionally the title and tags); viewer demographic data; viewer category, based on the viewing history of the viewer (categories, plus optionally titles/tags, of viewed mixes, search terms used, and groups visited, in the current and the last n sessions); if the viewer is a member of a group, the member category (based on the set of their video categories, plus optionally titles, tags, blog titles, groups created, and groups joined); the member category of the creator of the currently selected mix; member categories of content contributors to the currently selected mix; and viewer clickthrough data (the type of product/service of advertisements that this viewer has clicked on during the current session or over a recent period, such as today or the past week or month).
  • the video category can be used as an indicator of "genre", and the partner site can request mixes of one or more particular genres; in such a case, the targeting module 297 can attach advertisements to the mix based solely on the video category (absent other metadata).
  • the targeting module 297 can also use video categories to create "channels," which are selections of mixes available for viewing that are grouped according to subject matter.
  • the targeting module 297 can organize channels by name on the website, and all publicly viewable mixes belonging to a channel are easily accessible for viewing under their channel title. In the most straightforward form, the targeting module 297 can equate video categories one-for-one with channels, organizing mixes by category for viewing.
  • the targeting module 297 can also use channels for personalization, in a similar way to groups. For example, after a viewer views a mix, the targeting module 297 can recommend the viewer to visit the channel that features mixes of the same video category as the viewed mix.
  • Video categories constitute the highest-level classification of mixes, and provide a means of matching advertisements to mixes, such that, in real time, the targeting module 297 can select the most appropriate advertisement and attach it to each mix that is being viewed.
  • an "automotive" video category (or an "automotive enthusiast" viewer category) might be identified, based on discovery of a significant cluster of automotive videos (or viewers of such videos); advertisements from manufacturers such as Ford could then be selected to accompany viewings of videos in the "automotive" category, or could be targeted at viewers who have been categorized as "automotive enthusiast".
  • Video categories can be assigned automatically based on the mix's tags and other keywords, using data mining techniques.
  • the system administrators can maintain a living master list of video categories based on observation of the overall content base of the website.
  • the system 200 also can maintain an up-to-date set of predictive models and clustering models incorporating the current master list of video categories, together with the tags and keywords that are used as inputs by the models to classify mixes into a category.
  • Categorizing mixes and categorizing users can involve human intelligence, especially in the initial definition of the set of categories, and then subsequently in regularly updating this set, in order to represent in the most meaningful way the base of mixes and users available to the online video editing environment.
  • the categories should be sufficiently intuitive to enable advertisers (with or without the assistance of the system staff) to select the most appropriate target categories for their ads (or to facilitate automatic matching of ad category to video category and/or user category).
  • the targeting module 297 assigns video categories using various forms of predictive modeling and clustering.
  • in one approach, the targeting module 297 uses a predictive model.
  • Figure 9 is a flowchart illustrating an exemplary process for the implementation of categorization using predictive modeling.
  • the targeting module 297 builds a predictive model from a selected amount of popular, tagged mixes, and then applies the model to new mixes as users add them, to automatically classify them into a category.
  • a system administrator or data analyst assigns a category manually to a selected amount of the most popular mixes.
  • the manual assignment can occur by viewing the mixes, observing the titles and tags of the mix (if any), assigning new tags to the mix, and then assigning the mix to the most appropriate category.
  • the selected amount of the most popular mixes can be a few hundred.
  • the metadata tags the administrator assigns to the mixes may include media metadata tags, for example, and also may include the assignment of a category.
  • the targeting module 297 tests and trains a predictive model based on the manually categorized mixes.
  • the predictive model uses the newly assigned tags as predictive inputs.
  • the predictive model can also take the form of a decision tree (i.e., it may have been created using a decision tree induction algorithm). If the model is in the form of a decision tree, the tree can be enhanced manually based on observed knowledge.
  • the predictive model analyzes the mixes (including any mixes that were newly uploaded) and automatically categorizes them.
  • the video platform can require and/or strongly suggest that users apply at least two tags to each new mix.
  • the system administrator can also manually assign tags to the new mixes prior to applying the predictive model, especially the more popular mixes.
  • at step 906, the targeting module 297 determines whether the predictive model successfully categorized the mix. If not, the mix is categorized as undefined at step 908. After step 908, or if success occurs at step 906, the targeting module 297 determines whether there are any more mixes at step 910. If so, the process repeats at step 904, wherein the predictive model obtains the next mix, and the automatic categorization process repeats until all mixes, including newly uploaded ones, are processed.
  • if there are no more mixes, the targeting module 297 determines whether a time period has passed at step 912. Depending on the system capabilities and size, the time period can be a week, a month, etc. If step 912 is false, step 910 repeats to ensure no new mixes have been uploaded that need categorization. When step 912 becomes true, the process repeats at step 900, wherein the manual categorization of the most popular mixes occurs again. During this iterative process, new categories can also be introduced where appropriate. Therefore, as the predictive model is rebuilt and rerun each time, it should improve, since it is based on the latest set of categories and a possibly larger set of popular mixes. A sketch of this predictive-modeling flow follows.
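  • A minimal sketch of the Figure 9 flow, assuming scikit-learn and a toy set of manually categorized mixes; the tags, categories, and the fallback rule for mixes carrying no known tags are illustrative.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Steps 900-902 analogue: manually categorized, tagged popular mixes.
    labeled_tags = ["kids preschool playground swingset",
                    "beach hawaii sunset nature",
                    "cat cattrick pets"]
    labeled_categories = ["YoungKids", "Nature", "Cats"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(labeled_tags)
    model = DecisionTreeClassifier().fit(X, labeled_categories)

    # Steps 904-908 analogue: classify new mixes, falling back to
    # "undefined" when a mix carries no tags the model has seen.
    def categorize(mix_tags):
        x = vectorizer.transform([mix_tags])
        if x.nnz == 0:
            return "undefined"
        return model.predict(x)[0]

    print(categorize("playground swingset kids"))  # expected: YoungKids
    print(categorize("skateboarding tricks"))      # expected: undefined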
  • the targeting module 297 uses a clustering model to assign mix categories.
  • Figure 10 is a flowchart illustrating an exemplary process for the implementation of categorization using clustering.
  • clustering is used to automatically identify a set of clusters, and the clusters are then manually examined to identify a category that they represent.
  • the clustering model is then applied to new mixes as they are added to automatically assign them to a cluster representing a category.
  • tags are manually assigned where necessary to a selected amount of the most popular mixes.
  • the metadata tags the administrator assigns to the mixes also may include the assignment of a category.
  • the targeting module 297 applies a clustering algorithm to the manually tagged mixes; this algorithm segments the mixes into different clusters based on similarities and differences between the mixes and their assigned tags.
  • the output of the clustering algorithm is a cluster number assigned to each mix, plus a clustering model that can be used on subsequent mixes to automatically assign them to a cluster.
  • the targeting module 297 then runs a decision tree induction algorithm on the manually tagged mixes to identify the rules that define each cluster; the decision tree algorithm uses the cluster number assigned to each manually tagged mix as the target.
  • the output of the algorithm is a decision tree with an associated set of rules indicating which input variables correlate with which cluster.
  • the induced rules for each cluster are manually examined and a category is assigned to each cluster using human intelligence. Several clusters can be determined to represent different subcategories of a major category, in which case the clusters can be assigned to the same category.
  • at step 1008, the next mix is obtained; this is the first of a new batch of mixes that will be classified into a cluster using the cluster model created earlier.
  • the cluster model is run to identify the cluster number for the next mix. The cluster number now automatically identifies the category the mix belongs to.
  • the targeting module 297 determines if there are more mixes. If there are more mixes, the process repeats at step 1008 and the next mix is obtained. The process repeats until all remaining mixes, including any new mixes, are processed. After processing all of the mixes, step 1012 becomes false at which time the targeting module 297 determines whether a time period has elapsed at step 1014.
  • the time period can vary depending on the needs and capabilities of the system 200 ( Figure 2).
  • the time period can be one week or one month. If the time period has not elapsed, the process repeats at step 1012 to ensure no new mixes exist that have not been processed. Once the time period passes, the process repeats at step 1000, wherein the most popular mixes are manually tagged and clustered again, mix categories are manually assigned with the assistance of a decision tree, and the new clustering model is rerun against all of the remaining mixes. A sketch of this clustering flow follows.
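  • A sketch of the Figure 10 flow under the same toy assumptions as the previous sketch: cluster tagged mixes, then induce a decision tree over the cluster numbers so that the rules defining each cluster can be read and a category assigned by hand.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.tree import DecisionTreeClassifier, export_text

    tags = ["kids preschool playground", "kids swingset playground",
            "beach hawaii sunset", "nature sunset hawaii",
            "cat cattrick", "cat pets cattrick"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(tags)

    # Steps 1000-1002 analogue: segment the tagged mixes into clusters.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    clusters = kmeans.fit_predict(X)

    # Steps 1004-1006 analogue: induce a decision tree with the cluster
    # number as target, so the rules behind each cluster can be examined
    # and a category name assigned to each cluster manually.
    tree = DecisionTreeClassifier().fit(X.toarray(), clusters)
    print(export_text(tree,
          feature_names=list(vectorizer.get_feature_names_out())))

    # Steps 1008-1010 analogue: assign a new mix to a cluster/category.
    new_cluster = kmeans.predict(vectorizer.transform(["playground kids"]))
    print(new_cluster[0])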
  • the targeting module 297 uses a hybrid approach that combines both predictive modeling and clustering to assign video categories.
  • Figure 11 is a flowchart illustrating an exemplary process for the implementation of categorization using a combination of predictive modeling and clustering.
  • the targeting module 297 uses a combination of both approaches, fine-tuning each model based on the results of the other model, and then picking whichever model achieves the most accurate result on unseen mixes.
  • tags and categories are manually assigned to a selected amount of the most popular mixes.
  • the metadata tags the administrator assigns to the mixes may include the assignment of a category or the assignment of other metadata tags such as media metadata tags, title metadata tags, etc.
  • a predictive model and a cluster model are built using the manually categorized mixes and the results are compared.
  • the manually assigned categories are compared to the clusters identified by the cluster model.
  • the cluster model is fine-tuned. The fine tuning can occur using different numbers of clusters and different algorithms in order to get a cluster classification that most closely matches the manual classification.
  • at step 1108, the remaining mixes are run against both the cluster model and the predictive model.
  • at step 1110, all or some of the cases where the models disagree on the classification are manually examined.
  • at step 1112, both the predictive model and the cluster model are fine-tuned based on the differences.
  • at step 1114, the targeting module 297 determines whether a satisfactory agreement level has been reached between the models and the manual classifications. If not, the process repeats at step 1108, wherein the mixes continue to be run against the models and the models are fine-tuned until step 1114 is satisfied.
  • once step 1114 is satisfied, the targeting module 297 determines which model achieved the more accurate result at step 1116, and the more accurate model, whether predictive or cluster, is adopted at step 1118. A sketch of this comparison appears below.
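  • A sketch of the comparison at steps 1114-1118, with hypothetical label sequences standing in for the two models' outputs and the manual classifications; the agreement threshold is an assumption.

    def agreement(a, b):
        """Fraction of mixes on which two label sequences agree."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    manual     = ["YoungKids", "Nature", "Cats", "Nature"]
    predictive = ["YoungKids", "Nature", "Cats", "Cats"]
    clustered  = ["YoungKids", "Nature", "Nature", "Nature"]

    SATISFACTORY = 0.7  # assumed agreement threshold for step 1114
    p, c = agreement(manual, predictive), agreement(manual, clustered)
    if max(p, c) < SATISFACTORY:
        print("fine-tune both models and re-run (back to step 1108)")
    else:
        # Steps 1116/1118: adopt whichever model is more accurate.
        print("use the", "predictive" if p >= c else "cluster", "model")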
  • Categorization methods also can be applied to groups.
  • the set of group categories is the same as the set of video categories, and is based on the predominant video category in the set of mixes posted to the group.
  • Group categories are used for various purposes, including personalization in order to recommend to viewers that they visit a particular group which is of the same category as the mix just viewed.
  • the same set of categories can also be applied to viewers and authors of mixes, based on the predominant categories of mixes that they view or create, in order to achieve a simple form of user categorization.
  • Multiple video categories can be applied to a viewer or author.
  • More sophisticated viewer and author categorization schemes take into account additional user static and dynamic behavioral metadata, including search terms used, prior clickthrough behavior, and the frequency and duration of visits to the website.
  • the online video editing environment can also be configured to support syndication of media. Any of the publicly available media available on the website can be syndicated to partners for redistribution or other purposes.
  • the system 200 can have a set of open application programming interfaces ("APIs") that allow partners that can benefit from these services to access metadata associated with users and media. This is potentially a paid service that includes access to "actionable" metadata (filtering metadata, personalization metadata, segmentation / target marketing metadata, and monetization metadata) that follows the media as it is passed from the system's platform to partner platforms (e.g., if publishing a mix to the YouTube site).
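  • A hypothetical illustration of the "actionable" metadata that could accompany a mix through such an API; the field names and JSON wire format are assumptions, since none are specified here, and the values echo the Figure 8 example.

    import json

    mix_metadata = {
        "mix_id": "802",
        "title": "LittleJoey",
        "video_category": "YoungKids",
        "tags": ["Kids", "Preschool", "Playground", "HipHop"],
        "filtering": {"objectionable_score": 0.02},   # filtering metadata
        "personalization": {"recommended_groups": ["ZZZ"]},
        "monetization": {"ad_categories": ["children_products", "toys"]},
    }
    print(json.dumps(mix_metadata, indent=2))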
  • the targeting module 297 also provides semi-automatic and fully automatic methods of matching advertisements to video categories, viewer categories, and author categories. To achieve this, the targeting module 297 can also include a categorization scheme for advertisements, consisting of approximately 40 categories, based on a consolidation of the advertising categories of several leading media companies. The targeting module 297 also can maintain a cross-reference list (built and updated on a regular basis by the system administrators) that defines the most appropriate advertisement categories for each video, viewer and author category.
  • the sets of video, viewer and author categories each can consist of a number of categories; in one example, there are between 50 and 100 categories.
  • matching an advertisement to a video based on video category involves referring to a video category-advertisement category cross-reference list, which in the current example comprises 50-100 entries showing one or more advertisement categories representing the most appropriate advertisement for each video category.
  • This cross-reference list is used either manually, in conjunction with the advertiser or their agent, as an aid in selecting one or more video categories, or automatically, to make an automatic selection of video categories.
  • the selected video categories represent the categories of mixes to which the advertisement can be attached, and guide the decision-making process when a mix is selected for viewing.
  • advertisers can also be given the option of specifying one or more additional keywords describing their target mix, where the keywords would be used to match on tags associated with mixes.
  • the match could be based on a combination of the advertisement category/video category match and a match of one or more advertisement keywords to one or more video tags associated with a mix.
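  • A minimal sketch of the cross-reference lookup combined with the optional keyword-to-tag match; the table contents and category names are illustrative.

    # Video category -> most appropriate advertisement categories.
    CROSS_REFERENCE = {
        "YoungKids":  ["children_products", "toys"],
        "Automotive": ["auto_manufacturers", "auto_parts"],
    }

    def match_ad(video_category, mix_tags, ad_keywords):
        """Return (ad categories, whether advertiser keywords hit the tags)."""
        categories = CROSS_REFERENCE.get(video_category, [])
        keyword_hit = bool(set(ad_keywords) & set(mix_tags))
        return categories, keyword_hit

    print(match_ad("YoungKids", {"Kids", "Playground"}, {"Playground"}))
    # (['children_products', 'toys'], True)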
  • the ability to detail the components of a final mix can be used to create methods to allocate any resulting revenue attributed to a particular creation.
  • this functionality is implemented by the targeting module 297, although a separate module can be implemented as well to carry out this functionality.
  • a final mix can be constructed of three media components, Ma + Mb + Mc, from three different authors, Aa + Ab + Ac. If the final mix is able to generate revenue through any number of means, including but not limited to pay-to-download, subscribe-to-view, and advertising, the resulting revenue can be divided among the authors based on their contributions.
  • Dividing up the revenue can be accomplished using several different methods including but not limited to the following: (1) the duration of the components (Ma, Mb, Mc); (2) the popularity of the components including the number of times viewed, the number of times harvested in other mixes ("mashups"), the number of times shared, the number of times downloaded, and the number of times published to other locations; (3) the method of viewing, sharing and downloading including to a personal computer, to a mobile device, and to a digital home connection; and (4) the fixed cost for the media (e.g., use of licensed content can command a fixed percentage of revenue).
  • video mashup metadata can result in a variety of different monetization methods, any of which can be inserted in a modular fashion into the monetization algorithm.
  • in one example, user A combines collected media assets into a mashup and then publishes the mashup via an API integration.
  • the platform publishes the mashup to a corporate account, as opposed to user A's account.
  • assuming the mashup generates $100 in net revenue, the platform then automatically applies a calculus to determine how the net $100 should be split between users A, B, C, and D, as well as the platform itself.
  • the targeting module 297 transmits the revenue splits to the platform via the API.
  • the targeting module 297 in turn credits appropriate percentages to users A, B, C, and D.
  • user A, B, C, and D can all have individual accounts established through the API.
  • when the targeting module 297 automatically applies a calculus to determine how the net $100 should be split between users A, B, C, and D, the revenue splits are transmitted to the platform via the API for each user.
  • the first step is analysis of the mashup to determine which media assets were used as components of the final mashup video.
  • for each asset so identified, the targeting module 297 determines whether the asset is subject to specific per-usage licensing terms (e.g., $0.03 per video that incorporates a music artist's song) or per-stream licensing terms (e.g., $0.003/content/stream), which come from specific licensing deals with copyright holders. If so, the targeting module 297 subtracts the appropriate amount from the net revenue pool and compensates the asset owner.
  • the targeting module 297 can subtract a percentage, (E%), from the revenue pool remaining to be assigned to the platform as compensation for providing the technology.
  • the targeting module 297 can subtract a percentage, (M%), from the revenue pool and assign it to user A as compensation for creating the mashup. If the mashup is a first-generation creation (i.e., user A did not "remix" another user's work), user A is allowed to keep 100% of this allocation.
  • the targeting module 297 can distribute the remaining revenue in the pool, (R%), to any digital assets (e.g., user-generated content that was used in the mix) that are not subject to specific per-usage or per-stream licensing terms. Revenue allocation can be determined using a weighted percentage algorithm, which is described further below.
  • the duration of all component assets from users whose clips were used in a mix is calculated to determine the total duration of media used in the mashup. Note that this sum can be different from the duration of the mashup itself.
  • for example, assets A, B, and C can be clips of 20 seconds' duration each, and asset D can be an audio soundtrack of 60 seconds, in which case the total sum of all media is 120 seconds, as compared to the 60-second duration of the mashup itself.
  • the relative weight of each media asset is calculated on a percentage basis, using the sum duration of all media (less any copyright content already compensated) as the denominator and the duration of each media asset as the numerator. For example, asset A accounts for 16.67% (20/120) of the mashup, and soundtrack asset D accounts for 50% (60/120). After relative weights are calculated, the remaining revenue pool, R%, is allocated to each asset in accordance with its weight.
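  • A sketch of the weighted percentage allocation using the durations from this example; the size of the remaining revenue pool R is an assumption.

    durations = {"A": 20, "B": 20, "C": 20, "D": 60}  # seconds of media used
    total = sum(durations.values())                   # 120s, not mix length

    R = 50.0  # assumed revenue remaining after licensing, E% and M% cuts
    shares = {asset: round(R * d / total, 2)
              for asset, d in durations.items()}
    print(shares)  # {'A': 8.33, 'B': 8.33, 'C': 8.33, 'D': 25.0}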
  • the fact that remixing and publishing are easy using the present system 200 creates an opportunity for gaming the system: users could remix popular revenue-generating mashups, make insignificant changes, and unjustly gain a revenue share.
  • the targeting module 297 can require remixes to be quantifiably different from their originals before they can be published.
  • a remix can only be published, and thus eligible for monetization, if one or more of the following are true: the entire soundtrack has been changed; 50% or more of the effects have been changed; the number of non-audio media assets (i.e., video, photo, title frames) changed is greater than or equal to 25%; or the total duration of the mashup changes is greater than or equal to 25%.
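  • A sketch of the qualification test above; the RemixDiff fields are hypothetical measurements of how a remix differs from its original.

    from dataclasses import dataclass

    @dataclass
    class RemixDiff:
        soundtrack_changed: bool
        effects_changed: float    # fraction of effects changed, 0..1
        assets_changed: float     # fraction of non-audio assets changed
        duration_changed: float   # fraction of total duration changed

    def publishable(d: RemixDiff) -> bool:
        return (d.soundtrack_changed
                or d.effects_changed >= 0.50
                or d.assets_changed >= 0.25
                or d.duration_changed >= 0.25)

    print(publishable(RemixDiff(False, 0.10, 0.30, 0.05)))  # True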
  • as an example, consider a mix in the online video platform that is a remix in its fourth generation (where each generation qualifies as a remix as defined above) and, therefore, has four mixers involved: M1, M2, M3, and M4, where M4 is the last/final user to mix the media.
  • the mix includes a professional video asset PV1 and a professional audio asset PA1, both from a record label, together with the user-generated assets UV1, UV2, UV3, UP1, and UP2.
  • the asset durations are: UV1: 20 sec; UV2: 10 sec; UV3: 5 sec; UP1: 5 sec; UP2: 5 sec; PV1: 15 sec; PA1: 60 sec.
  • the total mix duration is 60 seconds.
  • the total media duration is 120 seconds.
  • the user-generated media duration is 120 seconds - 15 seconds - 60 seconds, which is 45 seconds.
  • the copyright content pricing is assumed as follows: PA1: $0.003/stream; and PV1: $0.003/stream. A worked sketch of the resulting revenue split follows.
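  • A worked sketch of the monetization waterfall for this example; the stream count, net revenue, E% and M% are assumptions, while the durations and per-stream rates come from the example above.

    streams = 10_000
    net_revenue = 100.0                    # assumed net revenue pool
    licensing = streams * (0.003 + 0.003)  # PA1 + PV1 at $0.003/stream = $60
    pool = net_revenue - licensing         # $40 left after copyright owners

    E, M = 0.25, 0.25                      # assumed platform and mixer cuts
    pool_R = pool * (1 - E - M)            # $20 left for user-generated assets

    ugc = {"UV1": 20, "UV2": 10, "UV3": 5, "UP1": 5, "UP2": 5}  # 45s total
    shares = {a: round(pool_R * d / sum(ugc.values()), 2)
              for a, d in ugc.items()}
    print(shares)  # UV1 gets 20/45 of $20, about $8.89, and so on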
  • Monetization algorithms can also take into account distribution of revenues where the media is syndicated to other websites. For example, if a mix is requested for viewing by another website, in real time the targeting module 297 can attach a targeted advertisement based on the video category, and serve the mix with the advertisement; revenue derived from the advertising can be shared between the two sites and the author of the mix. More sophisticated monetization algorithms can also take into account more complex networks of systems, such as a network consisting of a video creation/mixing site, a separate video sharing site to which the mix is published, an advertising site that attaches advertisements to mixes and distributes them to third-party sites, and the third- party sites that publish the resulting mix with attached advertising.

Abstract

A system and related methods concern an Internet application service for online storage, editing and sharing of digital video. The application enables the creation, collection and analysis of media metadata about mixes or portions of mixes (including titles, tags and keywords) and their usage, as well as static and dynamic user metadata about the personal characteristics and behaviors of the system's users, in order to perform intelligent filtering of mixes, personalization services, targeted marketing activities and monetization.
PCT/US2007/078162 2006-09-12 2007-09-11 Systèmes et procédés de création, de collecte et d'utilisation de métadonnées WO2008033840A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US82536406P 2006-09-12 2006-09-12
US60/825,364 2006-09-12
US86967206P 2006-12-12 2006-12-12
US60/869,672 2006-12-12

Publications (2)

Publication Number Publication Date
WO2008033840A2 true WO2008033840A2 (fr) 2008-03-20
WO2008033840A3 WO2008033840A3 (fr) 2008-10-16

Family

ID=39184512

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/078162 WO2008033840A2 (fr) 2006-09-12 2007-09-11 Systèmes et procédés de création, de collecte et d'utilisation de métadonnées

Country Status (1)

Country Link
WO (1) WO2008033840A2 (fr)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6553178B2 (en) * 1992-02-07 2003-04-22 Max Abecassis Advertisement subsidized video-on-demand system
US20050246541A1 (en) * 1995-02-13 2005-11-03 Intertrust Technologies Corporation Trusted and secure techniques, systems and methods for item delivery and execution
US6216112B1 (en) * 1998-05-27 2001-04-10 William H. Fuller Method for software distribution and compensation with replenishable advertisements
US20010023436A1 (en) * 1998-09-16 2001-09-20 Anand Srinivasan Method and apparatus for multiplexing seperately-authored metadata for insertion into a video data stream
US20030093790A1 (en) * 2000-03-28 2003-05-15 Logan James D. Audio and video program recording, editing and playback systems using metadata
US20020116716A1 (en) * 2001-02-22 2002-08-22 Adi Sideman Online video editor
US20040181545A1 (en) * 2003-03-10 2004-09-16 Yining Deng Generating and rendering annotated video files
US20050114784A1 (en) * 2003-04-28 2005-05-26 Leslie Spring Rich media publishing
US20060087683A1 (en) * 2004-02-15 2006-04-27 King Martin T Methods, systems and computer program products for data gathering in a digital and hard copy document environment
US20060026104A1 (en) * 2004-07-29 2006-02-02 Toshiyasu Abe System and method for making copyrightable material available

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8819559B2 (en) 2009-06-18 2014-08-26 Cyberlink Corp. Systems and methods for sharing multimedia editing projects
US8701008B2 (en) 2009-06-18 2014-04-15 Cyberlink Corp. Systems and methods for sharing multimedia editing projects
WO2011086465A1 (fr) * 2010-01-18 2011-07-21 Stephen Brett Systèmes et procédés permettant une collaboration, une protection et/ou une promotion de musique en ligne
US8577973B2 (en) 2010-06-30 2013-11-05 International Business Machines Corporation Accelerated micro blogging using correlated history and targeted item actions
RU2504085C2 (ru) * 2010-09-23 2014-01-10 Сони Корпорейшн Система и способ для использования процедуры морфинга в сети распределения информации
EP2662821A4 (fr) * 2011-01-04 2016-01-27 Olaworks Inc Procédé, système et support d'enregistrement lisible par un ordinateur pour recommander d'autres utilisateurs ou objets grâce à la prise en compte de la préférence d'au moins un utilisateur
CN103718206B (zh) * 2011-01-04 2017-09-15 英特尔公司 推荐其他用户或对象的方法、系统和装置
US8805954B2 (en) 2011-02-07 2014-08-12 Nokia Corporation Method and apparatus for providing media mixing with reduced uploading
WO2012107630A1 (fr) * 2011-02-07 2012-08-16 Nokia Corporation Procédé et appareil pour permettre un mélange de contenu multimédia à téléversement réduit
US8862717B2 (en) 2011-09-13 2014-10-14 Crytek Ip Holding Llc Management of online content in a network
EP2570979A1 (fr) * 2011-09-13 2013-03-20 Gface GmbH Gestion de contenu en ligne dans un réseau
US20130124517A1 (en) * 2011-11-16 2013-05-16 Google Inc. Displaying auto-generated facts about a music library
US8612442B2 (en) * 2011-11-16 2013-12-17 Google Inc. Displaying auto-generated facts about a music library
US9467490B1 (en) 2011-11-16 2016-10-11 Google Inc. Displaying auto-generated facts about a music library
WO2014043218A1 (fr) * 2012-09-12 2014-03-20 F16Apps, Inc. Publicités inversées
US9143823B2 (en) 2012-10-01 2015-09-22 Google Inc. Providing suggestions for optimizing videos to video owners
CN104813674A (zh) * 2012-10-01 2015-07-29 谷歌公司 用于优化视频的系统和方法
JP2015536101A (ja) * 2012-10-01 2015-12-17 グーグル インコーポレイテッド ビデオを最適化するためのシステムおよび方法
EP2904561A4 (fr) * 2012-10-01 2016-05-25 Google Inc Système et procédé permettant d'optimiser des vidéos
US11930241B2 (en) 2012-10-01 2024-03-12 Google Llc System and method for optimizing videos
EP3675014A1 (fr) * 2012-10-01 2020-07-01 Google LLC Système et procédé permettant d'optimiser des vidéos
US10194096B2 (en) 2012-10-01 2019-01-29 Google Llc System and method for optimizing videos using optimization rules
CN104813674B (zh) * 2012-10-01 2019-01-04 谷歌有限责任公司 用于优化视频的系统和方法
WO2014055565A1 (fr) 2012-10-01 2014-04-10 Google Inc. Système et procédé permettant d'optimiser des vidéos
WO2014064321A1 (fr) * 2012-10-22 2014-05-01 Nokia Corporation Remélange de contenu multimédia personnalisé
US20150205824A1 (en) * 2014-01-22 2015-07-23 Opentv, Inc. System and method for providing aggregated metadata for programming content
CN106415482A (zh) * 2014-01-22 2017-02-15 开放电视公司 提供节目内容的聚集元数据
CN105654332A (zh) * 2014-11-26 2016-06-08 奥多比公司 内容创建、部署合作以及随后的销售活动
US11087282B2 (en) 2014-11-26 2021-08-10 Adobe Inc. Content creation, deployment collaboration, and channel dependent content selection
US10936996B2 (en) 2014-11-26 2021-03-02 Adobe Inc. Content creation, deployment collaboration, activity stream, and task management
CN105631700A (zh) * 2014-11-26 2016-06-01 奥多比公司 内容创建、部署合作以及标记
US20160148278A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Subsequent Marketing Activities
US20160148279A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Badges
US20160148277A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Subsequent Marketing Activities
US20160148249A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Tracking Exposure
CN105654356A (zh) * 2014-11-26 2016-06-08 奥多比公司 内容创建、部署合作以及取决于渠道的内容选择
US11004036B2 (en) 2014-11-26 2021-05-11 Adobe Inc. Content creation, deployment collaboration, and tracking exposure
US20160148280A1 (en) * 2014-11-26 2016-05-26 Adobe Systems Incorporated Content Creation, Deployment Collaboration, and Channel Dependent Content Selection
US10776754B2 (en) 2014-11-26 2020-09-15 Adobe Inc. Content creation, deployment collaboration, and subsequent marketing activities
CN105654332B (zh) * 2014-11-26 2021-01-01 奥多比公司 内容创建、部署合作以及随后的销售活动
CN105654356B (zh) * 2014-11-26 2021-01-08 奥多比公司 内容创建、部署合作以及取决于渠道的内容选择
CN105631700B (zh) * 2014-11-26 2021-01-08 奥多比公司 内容创建、部署合作以及标记
US10929812B2 (en) 2014-11-26 2021-02-23 Adobe Inc. Content creation, deployment collaboration, and subsequent marketing activities
DE102015105590A1 (de) * 2015-04-13 2016-10-13 Jörg Helmholz Verfahren zum Übertragen einer Aneinanderreihung einer Mehrzahl von Videosequenzen
US11429832B2 (en) * 2016-06-02 2022-08-30 Kodak Alaris Inc. System and method for predictive curation, production infrastructure, and personal content assistant
US20220414418A1 (en) * 2016-06-02 2022-12-29 Kodak Alaris Inc. System and method for predictive curation, production infrastructure, and personal content assistant
CN110555131A (zh) * 2018-03-27 2019-12-10 优酷网络技术(北京)有限公司 内容推荐方法、内容推荐装置和电子设备
CN110555135A (zh) * 2018-03-27 2019-12-10 优酷网络技术(北京)有限公司 内容推荐方法、内容推荐装置和电子设备
CN110555157B (zh) * 2018-03-27 2023-04-07 阿里巴巴(中国)有限公司 内容推荐方法、内容推荐装置和电子设备
CN110555131B (zh) * 2018-03-27 2023-04-07 阿里巴巴(中国)有限公司 内容推荐方法、内容推荐装置和电子设备
CN110555135B (zh) * 2018-03-27 2023-04-07 阿里巴巴(中国)有限公司 内容推荐方法、内容推荐装置和电子设备
CN110555157A (zh) * 2018-03-27 2019-12-10 优酷网络技术(北京)有限公司 内容推荐方法、内容推荐装置和电子设备

Also Published As

Publication number Publication date
WO2008033840A3 (fr) 2008-10-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07842246

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 24-07-2009)

122 Ep: pct application non-entry in european phase

Ref document number: 07842246

Country of ref document: EP

Kind code of ref document: A2