WO2007082169A2 - Automatic aggregation of content for use in an online video editing system - Google Patents


Info

Publication number
WO2007082169A2
WO2007082169A2 (PCT/US2007/060177)
Authority
WO
WIPO (PCT)
Prior art keywords
data
video
user
content
external content
Prior art date
Application number
PCT/US2007/060177
Other languages
French (fr)
Other versions
WO2007082169A3 (en)
Inventor
David A. Dudas
James H. Kaskade
Kenneth W. O'Flaherty
Original Assignee
Eyespot Corporation
Priority date
Filing date
Publication date
Application filed by Eyespot Corporation
Publication of WO2007082169A2
Publication of WO2007082169A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation

Definitions

  • This invention relates in general to the use of computer technology to store, edit and share personal digital video material.
  • DSCs (digital still cameras)
  • DVCs (digital video camcorders)
  • webcams (computer-mounted web cameras)
  • FIG. 1 is a block diagram illustrating a prior art video editing platform including a creation block 199, a consumption block 198, and a media aggregation, storage, manipulation & delivery infrastructure 108.
  • Figure 1 shows with arrows the paths that currently exist for transferring video material from a particular source, including a DSC 100, a DVC 102, a mobile phone 104, and a webcam 106 to a particular destination viewing device including a DVD player 110, a DSTB 112, a DVR 114, a mobile phone 116, a handheld 118, a video iPod 120, or a PC 122.
  • the only destination device that supports material from all input devices is the PC 122.
  • mobile phone 104 can send video material to another mobile phone 116, and a limited number of today's digital camcorders and digital cameras can create video material on DVDs that can then be viewed on the DVD player 110.
  • these paths are fractured and many of the devices in the creation block 199 have no way of interfacing with many of the devices in the consumption block 198.
  • Beyond the highlighted paths through the media aggregation, storage, manipulation & delivery infrastructure 108, no other practical video transfer paths exist today.
  • Because the online video sharing websites do not support video editing, there is no mechanism for their members to incorporate external material into their video productions, such as photographs, audio, music, video or animation that may be available over the Internet.
  • Videographers who are adept at using a PC-based video editor may succeed in creating professional-looking productions, but they also have no means of incorporating externally available material, since none of the available desktop video editing applications provide such a feature, having been designed for standalone editing.
  • a system and methods are disclosed for storing, editing and distributing video material in an online environment.
  • the systems and methods automatically aggregate externally available content of interest to each user, such as photographs, audio, music, video and animation, thereby enabling creators of online video productions to easily enhance their productions with selections from such material.
  • an online video platform regularly spiders and indexes Internet sites identified by users to be of interest to them.
  • the digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation.
  • users are presented with graphical thumbnail representations of the materials that have been specifically indexed for them. Users can review the aggregated materials by clicking on specific thumbnails. Users can then select a particular aggregated material by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of a new video creation. The material will then be automatically integrated into the new video.
  • external material including websites and other data sources that do not reside on the user's local machine, as well as local material is spidered.
  • external material is spidered at a first interval, while local material is spidered at a second interval.
  • copy permission is verified for material that is destined to be aggregated.
  • analytics are included so that the system can recommend material that the user may wish to be aggregated.
  • the Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities.
  • the external material may include media from several remote sources and need not reside on the online video platform. Instead, the external content may be cached in realtime on the online video platform when a user views, mixes, or otherwise performs an editing action on the external content.
  • Figure 1 is a block diagram illustrating a prior art video editing platform.
  • Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
  • Figure 3 is a block diagram illustrating an example online video platform.
  • Figure 4 is a block diagram illustrating an example online video editor application.
  • Figure 5 is a block diagram illustrating an example video preprocessing application.
  • Figure 6 is a diagram illustrating an example process for automatically segmenting a video file.
  • Figure 7 is a diagram illustrating an example process for direct uploading and editing.
  • Figure 8 is a diagram illustrating an example process for automatically aggregating video content.
  • Figure 9 is a diagram illustrating an example process for automatically aggregating video content that uses a spidering module.
  • Figure 10 is a diagram illustrating an example process for automatically aggregating video content that ensures permission to copy the content.
  • Figure 11 is a diagram illustrating an example process for making multiple copies of aggregated material.
  • Figure 12 is a diagram illustrating an example process for using analytics to recommend aggregated material.
  • Certain examples as disclosed herein provide for the use of computer technology to store, edit, and share personal digital video material.
  • Various methods, for example, as disclosed herein provide for the automatic aggregation of content of interest to each user for possible incorporation into the user's future video creations, including external content such as photographs, audio, music, video and animation.
  • FIG. 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
  • a system 200 includes an online video platform 206, an online video editor 202, a preprocessing application 204, as well as a content creation block 208 and a content consumption block 210.
  • the content creation block 208 may include input data from multiple sources that are provided to the online video platform 206, including personal video creation devices 212, personal photo and music repositories 214, and personally selected online video resources 216, for example.
  • video files may be uploaded by consumers from their personal video creation devices 212.
  • the personal video creation devices 212 may include, for example, DSCs, DVCs, cell phones equipped with video cameras, and webcams.
  • input to the online video platform 206 may be obtained from other sources of digital video and non-video content selected by the user.
  • Non- video sources include the personal photo and music repositories 214, which may be stored on the user's PC, or on the video server, or on an external server, such as a photo-sharing application service provider ("ASP"), for example.
  • Additional video sources include websites that publish shareable video material, such as news organizations or other external video-sharing sites, which are designated as personally selected online video resources 216, for example.
  • the online video editor 202 (also referred to as the Internet-hosted application service) can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities.
  • a user may go to any number of external websites providing an enhanced video editing service.
  • the present system may be used, for example, to enable the external websites to provide the video editing capabilities while maintaining the look and feel of the external websites.
  • the user of one of the external websites may not be aware that they are using the present system other than the fact that they are using functionality provided by the present system.
  • the system may serve the application to the external IP address of the external website and provide the needed function while at the same time running the application in a manner consistent with the graphical user interface ("GUI") that is already implemented at the external IP address.
  • a user of the external website may cause the invocation of a redirection and GUI recreation module 230, which may cause the user to be redirected to one of the servers used in the present system which provides the needed functionality while at the same time recreating the look and feel of the external website.
  • Video productions may be output by the online video platform 206 to the content consumption block 210.
  • Content consumption block 210 may be utilized by a user of a variety of possible destination devices, including, but not limited to, mobile devices 218, computers 220, DVRs 222, DSTBs 224, and DVDs 226.
  • the mobile devices 218 may be, for example, cell phones or PDAs equipped with video display capability.
  • the computers 220 may include PCs, Apples, or other computers or video viewing devices that download material via the PC or Apple, such as handheld devices (e.g., PalmOne), or an Apple video iPod.
  • the DVDs 226 may be used as a media to output video productions to a permanent storage location, as part of a fulfillment service for example.
  • Delivery by the online video platform 206 to the mobile devices 218 may use a variety of methods, including but not limited to a multimedia messaging service (“MMS”), a wireless application protocol (“WAP”), and instant messaging (“IM”). Delivery by the online video platform 206 to the computers 220 may use a variety of methods, including but not limited to: email, IM, uniform resource locator (“URL”) addresses, peer-to-peer file distribution (“P2P”), or really simple syndication (“RSS”), for example.
  • Figure 3 is a block diagram illustrating an example online video platform.
  • the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310.
  • the online video platform 206 may be implemented on one or more servers, for example, Linux servers.
  • the system can leverage open source applications and an open source software development environment.
  • the system has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability.
  • the application suite may be based on AJAX, where the online application behaves as if it resides on the user's local computing device, rather than across the Internet.
  • the AJAX architecture allows users to manipulate data and perform "drag and drop” operations, without the need for page refreshes or other interruptions.
  • the opt-in engine module 300 may be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210.
  • the delivery engine module 302 may be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210.
  • the presence engine module 304 may be a server that determines device priority for delivery of content to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
  • the transcoding engine module 306 may be a server that performs decoding and encoding tasks on media to achieve optimal format for delivery to target devices.
  • the analytic engine module 308 may be a server that maintains and analyzes statistical data relating to website activity and viewer behavior.
  • the editing engine module 310 may be a server that performs tasks associated with enabling a user to edit productions efficiently in an online environment.
  • FIG. 4 is a block diagram illustrating an example online video editor 202.
  • the online video editor 202 includes an interface 400, input media 402a-h, and a template 404.
  • a digital content aggregation and control module 406 may also be used in conjunction with the online video editor 202 and thumbnails 408 representing the actual video files may be included in the interface 400.
  • the online video editor 202 may be an Internet-hosted application, which provides the interface 400 for selecting video and other digital material (e.g., music, voice, photos) and incorporating the selected materials into a video production via the digital content aggregation and control module 406.
  • the digital content aggregation and control module 406 may be software, hardware, and/or firmware that enables the modification of the video production as well as the visual representation of the user's actions in the interface 400.
  • the input media 402a-h may include such input sources as the shutterfly website 402a, remote media 402b, local media 402c, the napster web service 402d, the real rhapsody website 402e, the garage band website 402f, the flickr website 402g and webshots 402h.
  • the input media 402a-h may be media that the user has selected for possible inclusion in the video production and may be represented as the thumbnails 408 in a working "palette" of available material elements, in the main window of the interface 400.
  • the input media 402a-h may be of diverse types and formats, which may be aggregated together by the digital content aggregation and control module 406.
  • the thumbnails 408 are used as a way to represent material and can be acted on in parallel with the upload process.
  • the thumbnails 408 may be generated in a number of manners.
  • the thumbnails may be single still frames created from certain sections within the video, clip, or mix.
  • the thumbnails 408 may include multiple selections of frames (e.g., a quadrant of four frames).
  • the thumbnails may include an actual sample of the video in seconds (e.g., a 1 minute video could be represented by the first 5 seconds).
  • the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3 second videos for a total of 12 seconds).
  • the thumbnails 408 are a method of representing the media to be uploaded (and after it is uploaded), whereby the process of creating the representation and uploading it takes significantly less time than either uploading the original media or compressing and uploading the original media.
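The savings claimed above can be illustrated with a rough size comparison across the four thumbnail styles just described; the per-frame and per-second figures below are hypothetical, chosen only to make the comparison concrete:

```python
# Hypothetical sizes; the numbers are illustrative, not from the patent.
FRAME_KB = 30           # one still-frame thumbnail
VIDEO_KB_PER_SEC = 200  # compressed video

def representation_size_kb(style, **kw):
    """Estimate upload size for each thumbnail style described above."""
    if style == "single_frame":
        return FRAME_KB
    if style == "frame_quadrant":            # e.g., a quadrant of four frames
        return kw.get("frames", 4) * FRAME_KB
    if style == "video_sample":              # e.g., first 5 seconds of the video
        return kw.get("seconds", 5) * VIDEO_KB_PER_SEC
    if style == "multi_sample":              # e.g., 4 samples of 3 seconds each
        return kw.get("clips", 4) * kw.get("seconds_each", 3) * VIDEO_KB_PER_SEC
    raise ValueError(style)

# A 1-minute original at the same rate would be 60 * 200 = 12,000 KB,
# so every representation above uploads far faster than the original.
```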
  • the online video editor 202 allows the user to choose (or create) the template 404 for the video production.
  • the template 404 may represent a timeline sequence and structure for insertion of materials into the production.
  • the template 404 may be presented in a separate window at the bottom of the screen, and the online video editor 202 via the digital content aggregation and control module 406 may allow the user to drag and drop the thumbnails 408 (representing material content) in order to insert them into the timeline to create the new video production.
  • the online video editor 202 may also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project may be shown in a separate window.
  • a spidering module 414 is included in the digital content aggregation and control module 406.
  • the spidering module may periodically search and index both local content and external content.
  • the spidering module 414 may use the Internet 416 to search for external material periodically for inclusion or aggregation with the production the user is editing.
  • the local storage 418 may be a local source, such as a user's hard disk drive on their local computer, for the spidering module 414 to periodically spider to find additional internal locations of interest and/or local material for possible aggregation.
  • the external content or material spidered by the spidering module 414 may include media from several remote sources that are intended to be aggregated together.
  • the external content need not reside on the online video platform. Instead, the external content may be cached in realtime on the online video platform when a user views, mixes, or otherwise performs an editing action on the external content.
  • many sources of diverse material of different formats may be aggregated on the fly.
  • the latency of producing a final result may vary depending on: 1) what is cached already, 2) the speed of the remote media connection, and 3) the size of the remote media (related to whether the media is compressed).
  • An intelligent caching algorithm may be employed, which takes the above factors into account and can shorten the time for online mixing.
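The intelligent caching idea can be sketched as a simple latency estimate over the three factors listed above; the field names and the parallel-fetch assumption are illustrative, not from the patent:

```python
def estimated_fetch_seconds(media):
    """Estimate the time to bring one remote media item into the cache.

    `media` is a dict with hypothetical fields: `cached` (bool),
    `size_mb` (float), and `link_mbps` (float, speed of the remote
    media connection). Already-cached items cost nothing.
    """
    if media["cached"]:
        return 0.0
    return media["size_mb"] * 8 / media["link_mbps"]  # MB -> Mb, then divide by rate

def mix_latency(sources):
    """Latency to start an online mix: assuming fetches run in parallel,
    the slowest uncached source dominates."""
    return max(estimated_fetch_seconds(m) for m in sources)
```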
  • the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user may choose to keep certain productions private or to only share the productions with certain users.
  • Figure 5 is a block diagram illustrating an example preprocessing application.
  • the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, and video segment files 520.
  • the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 include Microsoft Foundation Classes ("MFCs").
  • an object-oriented programming model is provided to the Windows APIs.
  • in another example, the preprocessing application 204 is written with the foundation classes 508 in a format suitable for the Linux operating system, which serves as the operating system module 510.
  • the video segment upload module 516 may be an application that uses a Model-View-Controller ("MVC") architecture.
  • the MVC architecture separates the data model module 502, the user interface module 506, and the control module 504 into three distinct components.
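The MVC separation can be sketched with three minimal classes standing in for the data model module 502, the user interface module 506, and the control module 504; all names and behavior here are hypothetical:

```python
class UploadModel:                  # stands in for data model module 502
    def __init__(self):
        self.segments = []
    def add_segment(self, name):
        self.segments.append(name)

class UploadView:                   # stands in for user interface module 506
    def render(self, model):
        return f"{len(model.segments)} segment(s) queued"

class UploadController:             # stands in for control module 504
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_segment_ready(self, name):
        self.model.add_segment(name)          # controller mutates the model...
        return self.view.render(self.model)   # ...and asks the view to redraw
```

Because the three components are distinct, the model and controller can be reused unchanged under a different view, which is one motivation for the MVC split.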
  • the preprocessing application 204 automatically segments, compresses, and uploads video material from the user's PC, regardless of length.
  • the preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks.
  • the uploading method works in parallel with the online video editor 202, allowing the user to begin editing the material immediately, while the material is in the process of being uploaded.
  • the material may be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment, for example.
  • the video source 518 may be a digital video camcorder or other video source device.
  • the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it may automatically segment the video stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
  • a video would be captured on any number of devices at the video source block 518. Once the user captured the video (i.e., on their camcorder, cellular phone, etc.), it would be transferred to a local computing device, such as the hard drive of a client computer with Internet access.
  • Alternatively, videos can be transferred to a local computing device whereby an intelligent uploader can be deployed. In some cases, the video can be sent directly from the video source block 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206. This alternative bypasses the need to involve a local computing device or a client computer. However, this example is most useful when the video, clip, or mix is either very short, or highly compressed, or both.
  • When the video is uncompressed or long or both, and therefore relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful.
  • an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy video material.
  • the user would be provided with the ability to interact with the user interface module 506.
  • the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the video material is segmented and compressed into the video segment files 520.
  • a lengthy production may be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments.
  • Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This may occur, for example, by each of the upload segments being uploaded in parallel. Alternatively, the upload segments may be uploaded in order: largest segment first, smallest segment first, or in any other manner.
  • the online video editor 202 is presented to the user. Through a user interface provided by the user interface module 506, thumbnails representing the video segments in the process of being uploaded are made available to the user. The user would proceed to edit the video material via an interaction with the thumbnails.
  • the user may be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited video material.
  • the system is configured to behave as if all of the video represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the material is still in the process of being uploaded by the video segment upload module 516.
  • the upload process may be changed.
  • the upload process may immediately begin to upload the last sequential portion of the production, thereby lowering the priority of the segments that were currently being uploaded prior to the user's editing action.
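The reprioritization described above can be sketched as a queue operation: when the user's editing action touches a segment that has not yet uploaded, that segment jumps to the front of the upload queue. Segment ids are hypothetical:

```python
from collections import deque

def reprioritize(queue, edited_segment):
    """Move the segment the user just edited to the front of the upload
    queue, demoting whatever was about to upload next.
    `queue` is a deque of segment ids, front = next to upload."""
    if edited_segment in queue:
        queue.remove(edited_segment)
        queue.appendleft(edited_segment)
    return queue

uploads = deque(["seg001", "seg002", "seg099", "seg100"])
reprioritize(uploads, "seg100")  # user drags the last segment into the timeline
```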
  • Figure 6 is a diagram illustrating an example process for automatically segmenting a video file. This process can be carried out by the preprocessing application 204 previously described with respect to Figure 2.
  • the video segmentation module 512 of the preprocessing application 204 may be used to carry out one or more of the steps described in Figure 6.
  • step 600 scene transitions within the video material are automatically detected.
  • step 602 the material is segmented into separate files.
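Steps 600 and 602 can be sketched as a threshold on consecutive-frame differences followed by a split at each detected cut; the difference metric and threshold are assumptions, not the patent's method:

```python
def scene_transitions(frame_diffs, threshold=0.5):
    """Return frame indices where a likely scene cut occurs (step 600).
    `frame_diffs[i]` is a normalized difference (0..1) between frame i
    and frame i+1; the threshold value is an assumption."""
    return [i + 1 for i, d in enumerate(frame_diffs) if d > threshold]

def segment(n_frames, cuts):
    """Split the frame range [0, n_frames) into per-scene
    (start, end) ranges at the detected cuts (step 602)."""
    bounds = [0] + cuts + [n_frames]
    return list(zip(bounds[:-1], bounds[1:]))

cuts = scene_transitions([0.1, 0.9, 0.05, 0.7])  # cuts before frames 2 and 4
segments = segment(5, cuts)
```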
  • Step 602 may include the preprocessing application 204 providing for the application of metadata tags by the user for the purpose of defining the subject matter. These additional steps may allow the user to apply one or more descriptive names to each file segment ("segment tags”) at step 604, and further to preview the content of each file segment and to provide additional descriptive names ("deep tags") defining specific points-in-time within the file segment at step 606.
  • Both segment tags and deep tags at steps 604 and 606 can later be used as metadata references in search and retrieval operations by the user on video material stored within a remote computing device, such as a server.
  • any subsequent viewer searching on either of these tags will retrieve the file segment, and the segment will be positioned for viewing at the appropriate point: at the start of the segment if the search term was "harbor” or at the one-minute mark if the search term was "sailboat.”
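The harbor/sailboat behavior can be sketched as a tag lookup that returns both the matching segment and a playback offset; the tag and segment names below are hypothetical:

```python
# Hypothetical index: a segment tag covers the whole file segment
# (offset 0), while a deep tag names a specific point-in-time within it.
SEGMENT_TAGS = {"harbor": "clip_17"}
DEEP_TAGS = {"sailboat": ("clip_17", 60)}  # the one-minute mark, in seconds

def search(term):
    """Return (segment_id, start_seconds) for a tag search, or None."""
    if term in DEEP_TAGS:
        return DEEP_TAGS[term]          # position playback at the deep tag
    if term in SEGMENT_TAGS:
        return (SEGMENT_TAGS[term], 0)  # position playback at the segment start
    return None
```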
  • the drag-and-drop editor will automatically extract the segment beginning at the sailboat scene, rather than requiring the user to manually edit or clip the segment.
  • the deep tags 606 can be used to dynamically serve up advertisements at appropriate times of viewing based on an association between time and the deep tags 606.
  • the separate files may be ready for uploading to a server at this stage, for example.
  • a thumbnail image is created for each file segment.
  • the set of thumbnail images representing all of the video file segments is initially uploaded to the server.
  • the thumbnail images may be selected by copying the first non-blank image in each video file segment, for example, and then uploading them to a remote computing device using the video segment upload module 516.
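Selecting the first non-blank image can be sketched as scanning decoded frames for the first one whose pixel values vary meaningfully; the variance test and threshold are assumptions:

```python
def is_blank(frame, threshold=5.0):
    """Treat a frame as blank when its pixel values barely vary.
    `frame` is a flat list of grayscale values (0-255)."""
    mean = sum(frame) / len(frame)
    variance = sum((p - mean) ** 2 for p in frame) / len(frame)
    return variance < threshold

def first_nonblank(frames):
    """Return the first non-blank frame, for use as the thumbnail;
    fall back to the first frame if every frame is blank."""
    for frame in frames:
        if not is_blank(frame):
            return frame
    return frames[0] if frames else None

frames = [[0, 0, 0, 0], [0, 0, 1, 0], [10, 200, 30, 90]]
thumbnail = first_nonblank(frames)
```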
  • the online video editor 202 also handles uploading of video clips directly from a PC, or cell phone, without the need to use the preprocessing application 204.
  • Figure 7 is a diagram illustrating an example process for direct uploading and editing.
  • the online video editor 202 treats each video clip as a separate video segment, and creates a thumbnail image for each segment (based on the first non-blank image detected in the segment's data stream, for example). If the clip includes transitions, the editor detects these and splits the clip into separate segments, creating a new segment following each transition, and builds an accompanying thumbnail image for each created segment. For each segment, the editor prompts the user to supply one or more segment tags. After each segment has been uploaded, the user can review the segment and create additional deep tags defining specific points-in-time within the segment.
  • When uploading video clips, users are provided with the ability to define a folder at step 1700, which is created to receive a set of clips that they wish to associate together later in the editing process. Upon completion of the upload process, the folder will contain identification information (including tags) for each of the segments relating to the clip set.
  • When users subsequently use the online video editor 202 to create a video production, by accessing a particular folder they retrieve the set of segments that they intended to use together, which are displayed as a set of segment thumbnails at step 1702. They can then drag and drop segment thumbnails into the editor's timeline at step 1704 to create a video sequence out of the segments they wish to include in their new production.
  • External content is provided for selection by tag at step 1706.
  • the user is also provided with the ability to add transitions and special effects, as well as music or voice overlays, at steps 1708 and 1710 before saving the edited work as a new production at step 1712.
  • the drag-and-drop interface provides an extremely simple method of video editing, and is designed to enable the average Internet user to easily edit his or her video material. The process of video editing is thus greatly simplified, by providing a single Internet-hosted source that automatically manages the processes of uploading, storing, organizing, editing, and subsequently sharing video material.
  • the system may also automatically tag all digital content that it has aggregated on behalf of the user. Where a file name or title is supplied with a piece of aggregated material, this may be used as the tag.
  • the system may create a tag in the form of: "Photo mm/dd/yy nnn", "Audio mm/dd/yy nnn", "Music mm/dd/yy nnn", "Video mm/dd/yy nnn" or "Animation mm/dd/yy nnn", for example, where "mm/dd/yy" is the date when the spidering occurred, and "nnn" is a sequential number representing the sequence in which the piece of material was aggregated by the system on the date specified.
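The default tag convention can be sketched directly; the zero-padding width of "nnn" is an assumption:

```python
from datetime import date

def auto_tag(media_type, spider_date, sequence):
    """Build a default tag of the form described above, e.g.
    'Video 01/09/07 003', where the date is when spidering occurred
    and the number is the aggregation sequence on that date."""
    kinds = {"photo": "Photo", "audio": "Audio", "music": "Music",
             "video": "Video", "animation": "Animation"}
    stamp = spider_date.strftime("%m/%d/%y")   # mm/dd/yy
    return f"{kinds[media_type]} {stamp} {sequence:03d}"

tag = auto_tag("video", date(2007, 1, 9), 3)
```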
  • the user can change any of the automatically aggregated material tags to a more meaningful tag name.
  • users can create entire video productions by aggregating together a set of tagged segments or sections of video from any source available within the system, including tagged material from external sources. It thus becomes extremely easy for users to create new video productions from existing material from multiple sources, without the need to introduce their own new material. Any such aggregated production will exist as a separate file, but the system also retains separate files for all of the aggregated segments from which it is constructed.
  • the online video editor 202 includes an application that automatically aggregates content of interest to each user for possible incorporation into the user's future video creations.
  • Figure 8 is a diagram illustrating an example process for automatically aggregating video content.
  • This process can be carried out, for example, by the spidering module 414 previously described with respect to Figure 4.
  • the spidering module 414 regularly spiders and indexes Internet sites identified by users to be of interest to them at step 800. Thumbnail links to current digital material from each source are built at step 802.
  • the digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation, for example.
  • step 804 it is determined whether the user has entered a video editing portion of the application. If so, the user is presented at step 806 with graphical thumbnail representations of the materials that have been specifically indexed for them. Users can review the aggregated materials by clicking on specific thumbnails. Users can then select a particular aggregated material at step 808, for example, by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of a new video creation. If the user selected the aggregated material represented by the thumbnail at step 808 then the material will be automatically integrated into the new video at step 810.
  • the online video editor 202 uses a regular process, in the form of a spidering module, to seek out and aggregate material that enhances the user's experience.
  • Figure 9 is a diagram illustrating an example process for automatically aggregating video content that uses a spidering module. This process can be carried out by the online video editor 202 previously described with respect to Figure 2, and more specifically by the spidering module 414 previously described with respect to Figure 4.
  • after it is determined at step 900 that a first interval has passed, the spidering module 414 spiders and indexes, at step 902, Internet sites (i.e., locations having external content) identified by users to be of interest to them.
  • the first interval may be, for example, one day.
  • the process builds links to current relevant digital material from each source at step 904. Each link may be represented by a thumbnail image at step 906.
  • the external digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation.
  • at step 908 it is determined whether a second interval has passed.
  • the second regular interval may be, for example, one week. If one week has passed, the spidering module 414 may spider the user's local disk storage at step 910 (i.e., locations having internal content).
  • the application may detect whether there is digital material that has not yet been aggregated on behalf of the user. If not, the process repeats at step 900. Otherwise, at step 914 the application aggregates the non-aggregated digital material.
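The two-interval schedule of Figure 9 (steps 900 and 908) can be sketched as a simple due-task check. The scheduler below is an assumption for illustration; the patent gives the example intervals (one day, one week) but not an implementation.

```python
# Illustrative sketch of the Figure 9 schedule: external sites are
# spidered once per first interval (e.g., one day), and the user's
# local disk storage once per second interval (e.g., one week).

DAY = 24 * 3600
WEEK = 7 * DAY

def due_tasks(now, last_external, last_local,
              first_interval=DAY, second_interval=WEEK):
    """Return which spidering passes are due at time `now` (seconds)."""
    tasks = []
    if now - last_external >= first_interval:      # step 900
        tasks.append("spider_external")            # step 902
    if now - last_local >= second_interval:        # step 908
        tasks.append("spider_local")               # step 910
    return tasks
```

For example, one week after both passes last ran, both are due; after only one day, only the external pass is due.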
  • Many Internet sites offering digital media provide an API that supports exporting of digital material from their site, where the creator has authorized free copying of their material.
  • as the spidering module 414 spiders the Internet, it may, in one example, search for such authorized materials. If a user has requested spidering of a site that does not provide an API, and therefore provides no automated way of indicating whether copying permission has been granted on its digital materials, the system may only proceed to spider sites that have been verified manually to offer free copying of their digital content. To verify copying permission, the system may check a list that it maintains containing entries for all Internet sites for which free copying of digital materials has been verified manually by the operators of the service.
  • Figure 10 is a diagram illustrating an example process for automatically aggregating video content that ensures permission to copy the content. This process can be carried out by the online video editor 202 previously described with respect to Figure 2.
  • Internet site spidering by the spidering module 414 proceeds for each source of the aggregated material.
  • at step 1000, for each piece of available digital material that has not previously been aggregated, it is determined whether the provider of the aggregated material also provides an associated API. If so, the system checks for copying permission via the API at step 1002 and saves a link to the source material within the site at step 1004.
  • an appropriate set of commands is built to access and stream the material, and at step 1008 a link and the commands are saved, for example, as a record in a file associated with a thumbnail of the aggregated material.
  • if, on the other hand, step 1000 is false (i.e., the source of the aggregated material does not provide an API), it is determined at step 1010 whether the source is in a list of sites authorized for copying of digital material. If so, the system copies the material over to a disk file on the online video platform 206 at step 1012. In one example, the system creates two copies: one in Flash format and the other in DivX format. Thereafter, the system associates the copies at step 1014 with the thumbnail of the aggregated material. If step 1010 is false (i.e., there is no API granting copy permission and the site is not on a list of authorized sites), then the material is not aggregated at step 1016.
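The Figure 10 branching logic can be rendered as a small decision function. A minimal sketch, assuming hypothetical site names and a hypothetical verified-sites list; the patent specifies only the branches (API check, then manual-verification list), not this code. The disclosure does not state the outcome when a site's API denies permission, so the sketch conservatively does not aggregate in that case.

```python
# Sketch of the copy-permission decision in Figure 10.
# The list below stands in for the manually verified site list
# maintained by the operators of the service (step 1010).
VERIFIED_FREE_COPY_SITES = {"example-media.com"}

def aggregate_item(site, has_api, api_grants_permission):
    if has_api:                                      # step 1000
        if api_grants_permission:                    # step 1002
            # Save a link plus streaming commands (steps 1004-1008).
            return "save_link_and_stream_commands"
        return "not_aggregated"                      # assumption: deny -> skip
    if site in VERIFIED_FREE_COPY_SITES:             # step 1010
        # Copy the material onto platform storage (steps 1012-1014).
        return "copy_to_platform_storage"
    return "not_aggregated"                          # step 1016
```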
  • Figure 11 is a diagram illustrating an example process for making multiple copies of the aggregated material. This process can be carried out by the online video editor 202 previously described with respect to Figure 2.
  • the system may first detect the format and resolution of the subject video material at step 1100. Then at step 1102 the system may select the appropriate decode software module to handle the detected video format.
  • the video material may be decoded from the input format using the selected decode codec. Then, at step 1106, the material may be encoded into Flash format using a Flash codec and into DivX format using a DivX codec. Thereafter, at step 1108, copies of the compressed video are created.
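The Figure 11 pipeline (detect format, select decoder, decode, then encode to both delivery formats) can be sketched as follows. The decoder registry and the string placeholders standing in for real codec calls are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of the Figure 11 transcode pipeline. Real codecs are
# replaced by string placeholders so the control flow stays visible.

DECODERS = {"mpeg4": "mpeg4_decoder", "wmv": "wmv_decoder"}  # hypothetical

def transcode(input_format):
    # Steps 1100-1102: detect the input format, select the decode module.
    decoder = DECODERS.get(input_format)
    if decoder is None:
        raise ValueError("unsupported input format: " + input_format)
    # Step 1104: decode the material with the selected codec.
    raw_frames = f"frames decoded by {decoder}"
    # Steps 1106-1108: encode one copy each in Flash and DivX and keep both.
    return {"flash": f"flash({raw_frames})", "divx": f"divx({raw_frames})"}

copies = transcode("mpeg4")
```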
  • Thumbnails for photos may be implemented as miniature renderings of the actual photos.
  • Thumbnails for video and animation may be represented by the first non-blank image detected in the data stream of the subject video or animation.
  • Thumbnails for music and audio are normally imported from source sites that provide an API; where none is available, the system may supply a default image (e.g., an image of a music note or of an audio speaker).
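The thumbnail rules in the three bullets above map media type to a thumbnail source. A minimal sketch, assuming hypothetical media-type strings and a hypothetical default image filename:

```python
# Sketch of the thumbnail-selection rules described above.
def thumbnail_for(media_type, api_thumbnail=None):
    if media_type == "photo":
        # Miniature rendering of the actual photo.
        return "miniature_rendering"
    if media_type in ("video", "animation"):
        # First non-blank image detected in the data stream.
        return "first_non_blank_frame"
    if media_type in ("music", "audio"):
        # Imported from the source site's API when available; otherwise a
        # default image (e.g., a music note or an audio speaker).
        return api_thumbnail or "default_music_note.png"
    return "default_icon.png"   # assumption: fallback for unknown types
```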
  • upon entering the video editing portion of the application, users are presented with thumbnail representations of the materials that have been specifically indexed for them.
  • the thumbnail representations may be organized in a hierarchical file structure to allow easy browsing of available material. Users can review the aggregated materials by clicking on specific thumbnails. When building a new video production, users can select a particular aggregated material by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of the new production. The material will then be automatically integrated into the production, together with other segments or aggregated material that the user has selected. Users can also add transitions and special effects, as well as music or voice overlays.
  • the system retrieves a copy of the aggregated material and makes it available for viewing and possible inclusion into the user's production. If the system has not previously stored a copy of the material locally, but has instead saved the link to the material and related API commands, the system accesses the material and creates copies in multiple formats, prior to making the material available for viewing and possible inclusion into the user's production. [0081] Users can share productions that include aggregated materials in the same manner in which they can share any production they create. On completion of a video production, the creator has the option of defining whether the video is shareable with other users.
  • the video can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family" group).
  • the sharing hierarchy may be implemented as a system of folders within a directory structure, similar to the structure of a UNIX file system or a Windows file system.
  • the file structure is, in effect, the user's own personal collection of current multimedia material from the Internet, analogous to a music playlist.
  • Each member who creates video productions has such a directory, and a folder may be created within the directory for each group or subgroup that the member defines.
  • for each video production that the member creates, he or she has the ability to define which folders are able to view the video.
  • the person's ID is entered into the appropriate folder, and the person inherits the sharing privileges that are associated with the folder.
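The folder-based inheritance described above can be sketched with a small data structure. All names (`ShareFolder`, the person and production IDs) are illustrative assumptions; the disclosure specifies only that a person entered into a folder inherits that folder's sharing privileges.

```python
# Sketch of the directory-style sharing hierarchy described above.
class ShareFolder:
    def __init__(self, name):
        self.name = name
        self.members = set()     # person IDs entered into the folder
        self.viewable = set()    # production IDs this folder may view

    def add_person(self, person_id):
        # Entering a person's ID grants them the folder's privileges.
        self.members.add(person_id)

def can_view(folders, person_id, production_id):
    """A person may view a production if any folder containing that
    person has been granted access to the production."""
    return any(person_id in f.members and production_id in f.viewable
               for f in folders)

family = ShareFolder("family")
family.add_person("alice")
family.viewable.add("vacation-video")
```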
  • the system incorporates analytic methods to determine the most likely interests of users, and to make recommendations to users in the form of additional material that the system aggregates automatically on their behalf, based on their predicted interests.
  • Figure 12 is a diagram illustrating an example process for using analytics to recommend aggregated material. This process can be carried out by the online video editor 202 previously described with respect to Figure 2. [0084] In the illustrated example, a number of factors are considered at step 1200 and a recommendation is made at step 1202. Recommendations may be based on analyzing such variables as the user's production titles and tags, and the types or genres of external materials that they have requested to be aggregated, for example.
  • the system also may analyze the interests of its user base, and aggregates recommended materials based on similar interests among users. [0085] At step 1204 it is determined whether the user accepted the recommendation. If not, the process repeats. Otherwise, at step 1206, the additional material is aggregated for the user. Thus a user who requests aggregation of a particular musical performer may receive aggregated materials from other musical performers because other users have frequently requested materials from both performers. [0086]
  • the system thus has a self-learning aspect to its aggregation of external content.
  • the combination of aggregated file structures of all users can be considered to be a self-learning Internet file system, which evolves based on the composite set of interests of its user base. Unlike other network file systems or distributed file systems, its content is not constrained by whoever set up the system, but grows organically to reflect the common interests of its online community.
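The co-request heuristic behind the musical-performer example above (users who request performer A often also request performer B) can be sketched as a simple co-occurrence count. The data, the threshold, and the function name are illustrative assumptions; the patent describes the analytic goal, not this algorithm.

```python
from collections import Counter

# Sketch of a co-request recommendation: items frequently requested
# alongside the user's own requests are suggested for aggregation.
def recommend(user_requests, all_users_requests, min_cooccurrence=2):
    co_counts = Counter()
    for other in all_users_requests:
        if user_requests & other:                  # shared interest
            for item in other - user_requests:     # items the user lacks
                co_counts[item] += 1
    return [item for item, n in co_counts.items() if n >= min_cooccurrence]

others = [{"performer_a", "performer_b"},
          {"performer_a", "performer_b"},
          {"performer_a", "performer_c"}]
suggestions = recommend({"performer_a"}, others)
```

Here performer B is suggested (requested together with A by two other users) while performer C, seen only once, falls below the threshold.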


Abstract

A system and related methods comprising an Internet-hosted application service for online storage, editing and sharing of digital video content, whereby the application automatically aggregates content of interest to each user for possible incorporation into the user's future video creations. The Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. The external content or material spidered by the spidering module 414 includes media from several remote sources that are intended to be aggregated together. The external content need not reside on the online video platform. Instead, the external content is cached in realtime on the online video platform when a user views, mixes, or otherwise performs an editing action on the external content.

Description

AUTOMATIC AGGREGATION OF CONTENT FOR USE IN AN ONLINE VIDEO EDITING SYSTEM
[0001] This application hereby incorporates by reference the following U.S. Non-
Provisional Patent Applications.
Figure imgf000003_0001
FIELD OF THE INVENTION
[0002] This invention relates in general to the use of computer technology to store, edit and share personal digital video material.
BACKGROUND
[0003] There are currently around 500 million devices in existence worldwide that are capable of producing video: 350 million video camera phones, 115 million video digital cameras, plus 35 million digital camcorders. The extremely rapid increase in availability of such devices, especially camera phones, has generated a mounting need on the part of consumers to find ways of converting their video material into productions that they can share with others. This amounts mainly to a need for two capabilities: video editing and online video sharing.
[0004] Online sharing of consumer-generated video material via the Internet is a relatively new phenomenon, and is still poorly developed. A variety of websites have come into existence to support online video publishing and sharing. Most of these sites are focused on providing a viewing platform whereby members can upload their short amateur video productions to the website and offer them for viewing by the general public (or in some cases by specified users or groups of users), and whereby visitors to the website can browse and select video productions for viewing. But none of these websites currently support editing of video material, and most of them have severe limitations on the length of videos that they support (typically a maximum of 5-10 minutes). Consequently, most videos available for viewing on these sites are short (typically averaging less than 2 or 3 minutes), and are of poor quality, since they have not been edited.
[0005] Storing, editing, and sharing video is therefore difficult for consumers who create video material today on various electronic devices, including digital still cameras ("DSCs"), digital video camcorders ("DVCs"), mobile phones equipped with video cameras and computer mounted web cameras ("webcams"). These devices create video files of varying sizes, resolutions and formats. Digital video recorders ("DVRs"), in particular, are capable of recording several hours of high-resolution material occupying multiple gigabytes of digital storage. Consumers who generate these video files typically wish to edit their material down to the highlights that they wish to keep, save the resulting edited material on some permanent storage medium, and then share this material with friends and family, or possibly with the public at large. [0006] A wide variety of devices exist for viewing video material, ranging from
DVD players, TV-connected digital set-top boxes ("DSTBs") and DVRs, mobile phones, personal computers ("PCs"), and video viewing devices that download material via the PC, such as handheld devices (e.g., PalmOne), or the Apple video iPod. The video recording formats accepted by each of these viewing devices vary widely, and it is unlikely that the format that a particular delivery device accepts will match the format in which a particular video production will have been recorded. [0007] Figure 1 is a block diagram illustrating a prior art video editing platform including a creation block 199, a consumption block 198, and a media aggregation, storage, manipulation & delivery infrastructure 108. Figure 1 shows with arrows the paths that currently exist for transferring video material from a particular source, including a DSC 100, a DVC 102, a mobile phone 104, and a webcam 106 to a particular destination viewing device including a DVD player 110, a DSTB 112, a DVR 114, a mobile phone 116, a handheld 118, a video iPod 120, or a PC 122. The only destination device that supports material from all input devices is the PC 122.
Otherwise, mobile phone 104 can send video material to another mobile phone 116, and a limited number of today's digital camcorders and digital cameras can create video material on DVDs that can then be viewed on the DVD player 110. In general, these paths are fractured and many of the devices in the creation block 199 have no way of interfacing with many of the devices in the consumption block 198. Beyond the highlighted paths through the media aggregation, storage, manipulation & delivery infrastructure 108, no other practical video transfer paths exist today. [0008] Moreover, since the online video sharing websites do not support video editing, there is no mechanism for their members to incorporate external material into their video productions, such as photographs, audio, music, video or animation that may be available over the Internet. Videographers who are adept at using a PC-based video editor (or Macintosh-based editor) may succeed in creating professional-looking productions, but they also have no means of incorporating externally available material, since none of the available desktop video editing applications provide such a feature, having been designed for standalone editing.
[0009] With the recent rapid growth in popularity of digital cameras and mobile phones that are capable of shooting video, a new class of consumer has emerged with a new need for video editing. These consumers often have large collections of short video clips (typically 15 seconds for cell phones and 15-30 seconds for digital cameras) that they would like to trim, edit and combine into more meaningful video productions. Many of these consumers would also like to include externally available material, such as other short video clips or a music soundtrack, to enhance their productions. [0010] There is thus a need to provide consumers with an online service that facilitates the incorporation of external material into their productions and eliminates many of the drawbacks associated with current schemes.
SUMMARY
[0011] A system and methods are disclosed for storing, editing and distributing video material in an online environment. The systems and methods automatically aggregate externally available content of interest to each user, such as photographs, audio, music, video and animation, thereby enabling creators of online video productions to easily enhance their productions with selections from such material. [0012] In one aspect, an online video platform regularly spiders and indexes
Internet sites identified by users to be of interest to them, and builds thumbnail links to current digital material from each source. The digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation. Upon entering the video editing portion of the application, users are presented with graphical thumbnail representations of the materials that have been specifically indexed for them. Users can review the aggregated materials by clicking on specific thumbnails. Users can then select a particular aggregated material by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of a new video creation. The material will then be automatically integrated into the new video. [0013] In another aspect, both external material (including websites and other data sources that do not reside on the user's local machine) and local material are spidered. In one example, external material is spidered at a first interval, while local material is spidered at a second interval. In another aspect, copy permission is verified for material that is destined to be aggregated. In yet another aspect, analytics are included so that the system can recommend material that the user may wish to be aggregated.
[0014] The Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. The external material may include media from several remote sources and need not reside on the online video platform. Instead, the external content may be cached in realtime on the online video platform when a user views, mixes, or otherwise performs an editing action on the external content. Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings. BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The details of the present invention, both as to its structure and operation, may be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
[0016] Figure 1 is a block diagram illustrating a prior art video editing platform.
[0017] Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
[0018] Figure 3 is a block diagram illustrating an example online video platform.
[0019] Figure 4 is a block diagram illustrating an example online video editor application.
[0020] Figure 5 is a block diagram illustrating an example video preprocessing application.
[0021] Figure 6 is a diagram illustrating an example process for automatically segmenting a video file.
[0022] Figure 7 is a diagram illustrating an example process for direct uploading and editing.
[0023] Figure 8 is a diagram illustrating an example process for automatically aggregating video content.
[0024] Figure 9 is a diagram illustrating an example process for automatically aggregating video content that uses a spidering module.
[0025] Figure 10 is a diagram illustrating an example process for automatically aggregating video content that ensures permission to copy the content. [0026] Figure 11 is a diagram illustrating an example process for making multiple copies of aggregated material.
[0027] Figure 12 is a diagram illustrating an example process for using analytics to recommend aggregated material. DETAILED DESCRIPTION
[0028] Certain examples as disclosed herein provide for the use of computer technology to store, edit, and share personal digital video material. Various methods disclosed herein provide, for example, for the automatic aggregation of content of interest to each user for possible incorporation into the user's future video creations, including external content such as photographs, audio, music, video and animation. [0029] After reading this description it will become apparent to one skilled in the art how to implement the invention in various alternative examples and alternative applications. However, although various examples of the present invention are described herein, it is understood that these examples are presented by way of example only, and not limitation. As such, this detailed description of various alternative examples should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims.
[0030] Those of skill will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein can often be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled persons can implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention. In addition, the grouping of functions within a module, block, circuit or step is for ease of description. Specific functions or steps can be moved from one module, block or circuit to another without departing from the invention. [0031] Referring now to the Figures, Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture. In the illustrated example, a system 200 includes an online video platform 206, an online video editor 202, a preprocessing application 204, as well as a content creation block 208 and a content consumption block 210.
[0032] The content creation block 208 may include input data from multiple sources that are provided to the online video platform 206, including personal video creation devices 212, personal photo and music repositories 214, and personally selected online video resources 216, for example.
[0033] In one example, video files may be uploaded by consumers from their personal video creation devices 212. The personal video creation devices 212 may include, for example, DSCs, DVCs, cell phones equipped with video cameras, and webcams. In another example, input to the online video platform 206 may be obtained from other sources of digital video and non-video content selected by the user. Non- video sources include the personal photo and music repositories 214, which may be stored on the user's PC, or on the video server, or on an external server, such as a photo-sharing application service provider ("ASP"), for example. Additional video sources include websites that publish shareable video material, such as news organizations or other external video-sharing sites, which are designated as personally selected online video resources 216, for example.
[0034] The online video editor 202 (also referred to as the Internet-hosted application service) can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. For example, a user may go to any number of external websites providing an enhanced video editing service. The present system may be used, for example, to enable the external websites to provide the video editing capabilities while maintaining the look and feel of the external websites. In that respect, the user of one of the external websites may not be aware that they are using the present system other than the fact that they are using functionality provided by the present system. In a transparent manner then, the system may serve the application to the external IP address of the external website and provide the needed function while at the same time running the application in a manner consistent with the graphical user interface ("GUI") that is already implemented at the external IP address. Alternatively, a user of the external website may cause the invocation of a redirection and GUI recreation module 230, which may cause the user to be redirected to one of the servers used in the present system which provides the needed functionality while at the same time recreating the look and feel of the external website.
[0035] Video productions may be output by the online video platform 206 to the content consumption block 210. Content consumption block 210 may be utilized by a user of a variety of possible destination devices, including, but not limited to, mobile devices 218, computers 220, DVRs 222, DSTBs 224, and DVDs 226. The mobile devices 218 may be, for example, cell phones or PDAs equipped with video display capability. The computers 220 may include PCs, Apples, or other computers or video viewing devices that download material via the PC or Apple, such as handheld devices (e.g., PalmOne), or an Apple video iPod. The DVDs 226 may be used as a media to output video productions to a permanent storage location, as part of a fulfillment service for example.
[0036] Delivery by the online video platform 206 to the mobile devices 218 may use a variety of methods, including but not limited to a multimedia messaging service ("MMS"), a wireless application protocol ("WAP"), and instant messaging ("IM"). Delivery by the online video platform 206 to the computers 220 may use a variety of methods, including but not limited to: email, IM, uniform resource locator ("URL") addresses, peer-to-peer file distribution ("P2P"), or really simple syndication ("RSS"), for example.
[0037] The functions and the operation of the online video platform 206 will now be described in more detail with reference to Figure 3. Figure 3 is a block diagram illustrating an example online video platform. In the illustrated example, the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310.
[0038] The online video platform 206 may be implemented on one or more servers, for example, Linux servers. The system can leverage open source applications and an open source software development environment. The system has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability.
[0039] The application suite may be based on AJAX where the online application behaves as if it resides on the user's local computing device, rather than across the
Internet on a remote computing device, such as a server. The AJAX architecture allows users to manipulate data and perform "drag and drop" operations, without the need for page refreshes or other interruptions.
[0040] The opt-in engine module 300 may be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210. The delivery engine module 302 may be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210. The presence engine module 304 may be a server that determines device priority for delivery of content to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
[0041] The transcoding engine module 306 may be a server that performs decoding and encoding tasks on media to achieve optimal format for delivery to target devices. The analytic engine module 308 may be a server that maintains and analyzes statistical data relating to website activity and viewer behavior. The editing engine module 310 may be a server that performs tasks associated with enabling a user to edit productions efficiently in an online environment.
[0042] The functions and the operation of the online video editor 202 will now be described in more detail with reference to Figure 4. Figure 4 is a block diagram illustrating an example online video editor 202. In the illustrated example, the online video editor 202 includes an interface 400, input media 402a-h, and a template 404. A digital content aggregation and control module 406 may also be used in conjunction with the online video editor 202 and thumbnails 408 representing the actual video files may be included in the interface 400.
[0043] The online video editor 202 may be an Internet-hosted application, which provides the interface 400 for selecting video and other digital material (e.g., music, voice, photos) and incorporating the selected materials into a video production via the digital content aggregation and control module 406. The digital content aggregation and control module 406 may be software, hardware, and/or firmware that enables the modification of the video production as well as the visual representation of the user's actions in the interface 400. The input media 402a-h may include such input sources as the Shutterfly website 402a, remote media 402b, local media 402c, the Napster web service 402d, the Real Rhapsody website 402e, the GarageBand website 402f, the Flickr website 402g and Webshots 402h. The input media 402a-h may be media that the user has selected for possible inclusion in the video production and may be represented as the thumbnails 408 in a working "palette" of available material elements, in the main window of the interface 400. The input media 402a-h may be of diverse types and formats, which may be aggregated together by the digital content aggregation and control module 406.
[0044] The thumbnails 408 are used as a way to represent material and can be acted on in parallel with the upload process. The thumbnails 408 may be generated in a number of manners. For example, the thumbnails may be single still frames created from certain sections within the video, clip, or mix. Alternatively, the thumbnails 408 may include multiple selections of frames (e.g., a quadrant of four frames). In another example, the thumbnails may include an actual sample of the video in seconds (e.g., a 1 minute video could be represented by the first 5 seconds). In yet another example, the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3 second videos for a total of 12 seconds). In general, the thumbnails 408 are a method of representing the media to be uploaded (and after it is uploaded), whereby the process of creating the representation and uploading it takes significantly less time than either uploading the original media or compressing and uploading the original media.
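The multiple-sample variant above (e.g., four 3-second samples standing in for a longer clip) can be sketched by computing evenly spaced sample spans. The even spacing, rounding, and function name are assumptions; the patent states only the counts and durations in its examples.

```python
# Sketch of the multi-sample thumbnail variant: evenly spaced
# (start, end) spans, in seconds, representing a longer video.
def sample_spans(duration_s, num_samples=4, sample_len_s=3):
    """E.g., four 3-second samples (12 seconds total) for a 60-second clip."""
    step = duration_s / num_samples
    return [(round(i * step, 1), round(i * step + sample_len_s, 1))
            for i in range(num_samples)]

spans = sample_spans(60)   # four samples covering 12 seconds in total
```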
[0045] The online video editor 202 allows the user to choose (or create) the template 404 for the video production. The template 404 may represent a timeline sequence and structure for insertion of materials into the production. The template 404 may be presented in a separate window at the bottom of the screen, and the online video editor 202, via the digital content aggregation and control module 406, may allow the user to drag and drop the thumbnails 408 (representing material content) in order to insert them into the timeline to create the new video production. The online video editor 202 may also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project may be shown in a separate window.
[0046] A spidering module 414 is included in the digital content aggregation and control module 406. The spidering module may periodically search and index both local content and external content. For example, the spidering module 414 may use the Internet 416 to search for external material periodically for inclusion or aggregation with the production the user is editing. Similarly, the local storage 418 may be a local source, such as a user's hard disk drive on their local computer, for the spidering module 414 to periodically spider to find additional internal locations of interest and/or local material for possible aggregation.
[0047] The external content or material spidered by the spidering module 414 may include media from several remote sources that are intended to be aggregated together. The external content need not reside on the online video platform. Instead, the external content may be cached in real time on the online video platform when a user views, mixes, or otherwise performs an editing action on the external content. Thus, many sources of diverse material of different formats may be aggregated on the fly. The latency of producing a final result may vary depending on: 1) what is cached already, 2) the speed of the remote media connection, and 3) the size of the remote media (related to whether the media is compressed). An intelligent caching algorithm may be employed which, by taking the above factors into account, can shorten the time for online mixing.
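One way to read the three latency factors above is as inputs to a prefetch ordering: uncached items that will take longest to transfer are fetched first, so they are more likely to be on-platform by mix time. The sketch below is an illustrative heuristic; the field names and the transfer-time formula are assumptions.

```python
def prefetch_order(items):
    """Order remote media for prefetching so the slowest transfers start first.

    Each item is a dict with 'cached' (bool), 'size_kb' (media size),
    and 'kbps' (speed of the connection to the remote source).
    Already-cached items cost nothing and therefore sort last.
    """
    def est_transfer_seconds(item):
        if item["cached"]:
            return 0.0
        return item["size_kb"] * 8.0 / item["kbps"]  # kilobits / (kbit/s)

    return sorted(items, key=est_transfer_seconds, reverse=True)
```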
[0048] On completion of the project, the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user may choose to keep certain productions private or to share the productions only with certain users.

[0049] The functions and the operation of the preprocessing application 204 will now be described in more detail with reference to Figure 5. Figure 5 is a block diagram illustrating an example preprocessing application. In the illustrated example, the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, and video segment files 520.

[0050] In one example, the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 include Microsoft foundation classes ("MFCs"). In this example, an object-oriented programming model is provided to the Windows APIs. In another example, the preprocessing application 204 is written such that the foundation classes 508 are in a format suitable for use when the operating system module 510 is the Linux operating system. The video segment upload module 516 may be an application that uses a Model-View-Controller ("MVC") architecture. The MVC architecture separates the data model module 502, the user interface module 506, and the control module 504 into three distinct components.

[0051] In operation, the preprocessing application 204 automatically segments, compresses, and uploads video material from the user's PC, regardless of length.
The preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks. The uploading method works in parallel with the online video editor 202, allowing the user to begin editing the material immediately, while the material is in the process of being uploaded. The material may be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment, for example.
[0052] The video source 518 may be a digital video camcorder or other video source device. In one example, the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it may automatically segment the video stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
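Scene-transition detection of this kind is commonly implemented by thresholding the difference between consecutive frames. The sketch below illustrates that generic idea and is not the patent's specific algorithm; the threshold and the difference metric are assumptions.

```python
def segment_by_transitions(frame_diffs, threshold=0.5):
    """Split a video into segments at likely scene cuts.

    frame_diffs[i] is a normalized (0..1) difference between frame i
    and frame i+1; a large jump suggests a scene transition. Returns
    inclusive (start_frame, end_frame) pairs, one per segment.
    """
    segments, start = [], 0
    for i, diff in enumerate(frame_diffs):
        if diff > threshold:
            segments.append((start, i))  # close the segment at the cut
            start = i + 1
    segments.append((start, len(frame_diffs)))  # final segment
    return segments
```

Each (start, end) pair would then be written out as a separate file, mirroring the per-segment video segment files 520.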
[0053] From the user's perspective, a video would be captured on any number of devices at the video source block 518. Once the user has captured the video (i.e., on their camcorder, cellular phone, etc.), it would be transferred to a local computing device, such as the hard drive of a client computer with Internet access.

[0054] Alternatively, videos can be transferred to a local computing device whereby an intelligent uploader can be deployed. In some cases, the video can be sent directly from the video source block 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206. This alternative bypasses the need to involve a local computing device or a client computer. However, this example is most useful when the video, clip, or mix is either very short, or highly compressed, or both.
[0055] In the case that the video is uncompressed, lengthy, or both, and is therefore relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful. In this example, an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy video material. To that end, the user would be provided with the ability to interact with the user interface module 506. Based on user input, the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the video material is segmented and compressed into the video segment files 520. For example, a lengthy production may be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments.
[0056] Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This may occur, for example, by each of the upload segments being uploaded in parallel. Alternatively, each of the upload segments may be uploaded in order: largest segment first, smallest segment first, or in any other manner.

[0057] As the video material is being uploaded, the online video editor 202 is presented to the user. Through a user interface provided by the user interface module 506, thumbnails representing the video segments in the process of being uploaded are made available to the user. The user would proceed to edit the video material via an interaction with the thumbnails. For example, the user may be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited video material.

[0058] The system is configured to behave as if all of the video represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the material is still in the process of being uploaded by the video segment upload module 516. When the user performs an editing action on the thumbnails, for example, by dragging one of the thumbnails into a storyline, the upload process may be changed. For example, if the upload process was uploading all of the compressed upload segments in sequential order and the user dropped an upload segment representing the last sequential portion of the production into the storyline, the upload process may immediately begin to upload the last sequential portion of the production, thereby lowering the priority of the segments that were being uploaded prior to the user's editing action.
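The reprioritization described in paragraph [0058] — promoting a segment to the head of the upload queue the moment the user drags its thumbnail into the storyline — can be sketched as follows. The queue structure and method names are illustrative assumptions.

```python
from collections import deque

class UploadQueue:
    """Sequential upload queue whose order reacts to editing actions."""

    def __init__(self, segment_ids):
        self.pending = deque(segment_ids)  # default (e.g., sequential) order

    def promote(self, segment_id):
        """Move a segment to the front when the user edits its thumbnail."""
        if segment_id in self.pending:
            self.pending.remove(segment_id)
            self.pending.appendleft(segment_id)

    def next_upload(self):
        """Return the next segment to upload, or None when done."""
        return self.pending.popleft() if self.pending else None
```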
[0059] All of the user's editing actions are saved by the online video editor 202.
Once the material is uploaded completely (including the prioritized upload segments and the remaining upload segments), the saved editing actions are applied to the completely uploaded segments. By this point, the user may have already finished the editing process and logged off, or the user may still be logged on. Regardless, the process of applying the edits only when the material is finished uploading saves the user from having to wait for the upload process to finish before editing the material. Once the final edits are applied, various capabilities exist to share, forward, publish, browse, and otherwise use the uploaded video in a number of ways.

[0060] Figure 6 is a diagram illustrating an example process for automatically segmenting a video file. This process can be carried out by the preprocessing application 204 previously described with respect to Figure 2. In particular, the video segmentation module 512 of the preprocessing application 204 may be used to carry out one or more of the steps described in Figure 6. At step 600, scene transitions within the video material are automatically detected. At step 602, the material is segmented into separate files. Step 602 may include the preprocessing application 204 providing for the application of metadata tags by the user for the purpose of defining the subject matter. These additional steps may allow the user to apply one or more descriptive names to each file segment ("segment tags") at step 604, and further to preview the content of each file segment and to provide additional descriptive names ("deep tags") defining specific points-in-time within the file segment at step 606.

[0061] Both segment tags and deep tags at steps 604 and 606 can later be used as metadata references in search and retrieval operations by the user on video material stored within a remote computing device, such as a server.
Thus, for example, if the segment tag "harbor" has been applied to the file segment and the deep tag "sailboat" has been applied to the one-minute mark within the segment where a sailboat appears, then any subsequent viewer searching on either of these tags will retrieve the file segment, and the segment will be positioned for viewing at the appropriate point: at the start of the segment if the search term was "harbor" or at the one-minute mark if the search term was "sailboat."
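The "harbor"/"sailboat" example reduces to a small inverted index in which a segment tag maps to offset zero and a deep tag maps to its point-in-time. The data layout below is an illustrative assumption.

```python
def build_tag_index(segments):
    """Map each tag to (segment_id, start_offset_seconds) pairs.

    Each segment is a dict with 'id', 'tags' (segment tags) and
    'deep_tags' (tag -> seconds into the segment).
    """
    index = {}
    for seg in segments:
        for tag in seg["tags"]:
            index.setdefault(tag, []).append((seg["id"], 0))  # play from start
        for tag, offset in seg["deep_tags"].items():
            index.setdefault(tag, []).append((seg["id"], offset))  # jump to tag
    return index
```

A search on "harbor" then positions playback at the start of the segment, while a search on "sailboat" positions it at the one-minute mark.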
[0062] Furthermore, in any subsequent video editing process, if the user searches on the term "sailboat," the drag-and-drop editor will automatically extract the segment beginning at the sailboat scene, rather than requiring the user to manually edit or clip the segment. In the above example, the deep tags 606 can be used to dynamically serve up advertisements at appropriate times of viewing based on an association between time and the deep tags 606.
[0063] The separate files may be ready for uploading to a server at this stage, for example. At step 608, a thumbnail image is created for each file segment. Then, at step 610, the set of thumbnail images representing all of the video file segments is initially uploaded to the server. In one example, the thumbnail images may be selected by copying the first non-blank image in each video file segment, for example, and then uploading them to a remote computing device using the video segment upload module 516.
[0064] The online video editor 202 also handles uploading of video clips directly from a PC, or cell phone, without the need to use the preprocessing application 204. Figure 7 is a diagram illustrating an example process for direct uploading and editing. During the direct upload process, the online video editor 202 treats each video clip as a separate video segment, and creates a thumbnail image for each segment (based on the first non-blank image detected in the segment's data stream, for example). If the clip includes transitions, the editor detects these and splits the clip into separate segments, creating a new segment following each transition, and builds an accompanying thumbnail image for each created segment. For each segment, the editor prompts the user to supply one or more segment tags. After each segment has been uploaded, the user can review the segment and create additional deep tags defining specific points-in-time within the segment.
[0065] When uploading video clips, users are provided with the ability to define a folder at step 1700, which will receive a set of clips that they wish to associate together later in the editing process. Upon completion of the upload process, the folder will contain identification information (including tags) for each of the segments relating to the clip set. When users subsequently use the online video editor 202 to create a video production, by accessing a particular folder they retrieve the set of segments that they intended to use together, which are displayed as a set of segment thumbnails at step 1702. They can then drag and drop segment thumbnails into the editor's timeline at step 1704 to create a video sequence out of the segments they wish to include in their new production.
[0066] External content is provided for selection by tag at step 1706. The user is also provided with the ability to add transitions, special effects, as well as music or voice overlays at steps 1708 and 1710 before saving the edited work as a new production at step 1712. The drag-and-drop interface provides an extremely simple method of video editing, and is designed to enable the average Internet user to easily edit his or her video material. The process of video editing is thus greatly simplified, by providing a single Internet-hosted source that automatically manages the processes of uploading, storing, organizing, editing, and subsequently sharing video material.

[0067] The system may also automatically tag all digital content that it has aggregated on behalf of the user. Where a file name or title is supplied with a piece of aggregated material, this may be used as the tag. Where no file name or title is supplied, the system may create a tag in the form of: "Photo mm/dd/yy nnn", "Audio mm/dd/yy nnn", "Music mm/dd/yy nnn", "Video mm/dd/yy nnn" or "Animation mm/dd/yy nnn", for example, where "mm/dd/yy" is the date when the spidering occurred, and "nnn" is a sequential number representing the sequence in which the piece of material was aggregated by the system on the date specified. The user can change any of the automatically aggregated material tags to a more meaningful tag name.

[0068] In a further variation of tagging, users can create entire video productions by aggregating together a set of tagged segments or sections of video from any source available within the system, including tagged material from external sources. It thus becomes extremely easy for users to create new video productions from existing material from multiple sources, without the need to introduce their own new material. Any such aggregated production will exist as a separate file, but the system also retains separate files for all of the aggregated segments from which it is constructed.
[0069] The online video editor 202 includes an application that automatically aggregates content of interest to each user for possible incorporation into the user's future video creations. Figure 8 is a diagram illustrating an example process for automatically aggregating video content. This process can be carried out, for example, by the spidering module 414 previously described with respect to Figure 4. In the illustrated example, the spidering module 414 regularly spiders and indexes Internet sites identified by users to be of interest to them at step 800. Thumbnail links to current digital material from each source are built at step 802. The digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation, for example.
[0070] At step 804, it is determined whether the user has entered a video editing portion of the application. If so, the user is presented at step 806 with graphical thumbnail representations of the materials that have been specifically indexed for them. Users can review the aggregated materials by clicking on specific thumbnails. Users can then select a particular aggregated material at step 808, for example, by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of a new video creation. If the user selected the aggregated material represented by the thumbnail at step 808 then the material will be automatically integrated into the new video at step 810.
[0071] In another aspect, the online video editor 202 uses a regularly scheduled process, in the form of the spidering module 414, to seek out and aggregate additional material to enhance the user's experience. Figure 9 is a diagram illustrating an example process for automatically aggregating video content that uses a spidering module. This process can be carried out by the online video editor 202 previously described with respect to Figure 2, and more specifically by the spidering module 414 previously described with respect to Figure 4.
[0072] In the illustrated embodiment, the spidering module 414 regularly spiders and indexes Internet sites (i.e., locations having external content) identified by users to be of interest to them at step 902 after it is determined that a first interval has passed at step 900. The first interval may be, for example, one day. Thereafter, the process builds links to current relevant digital material from each source at step 904. Each link may be represented by a thumbnail image at step 906. The external digital material may consist of any media available on the Internet, including photographs, audio, music, video or animation.
[0073] At step 908 it is determined whether a second interval has passed. The second regular interval may be, for example, one week. If one week has passed, the spidering module 414 may spider the user's local disk storage at step 910 (i.e., locations having internal content). At step 912, the application may detect whether there is digital material that has not yet been aggregated on behalf of the user. If not, the process repeats at step 900. Otherwise, at step 914 the application aggregates the non-aggregated digital material.
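The two nested intervals of Figure 9 (e.g., daily for external sites, weekly for the user's local disk) amount to a simple periodic schedule. The day-granularity tick below is an illustrative assumption.

```python
def spider_tasks(day, external_interval=1, local_interval=7):
    """Return the spidering passes due on a given day number.

    External sites are visited every `external_interval` days (the first
    interval) and local storage every `local_interval` days (the second).
    """
    tasks = []
    if day % external_interval == 0:
        tasks.append("spider_external")
    if day % local_interval == 0:
        tasks.append("spider_local")
    return tasks
```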
[0074] Many Internet sites offering digital media (e.g., Magnatune, Soundlift) provide an API that supports exporting of digital material from their site, where the creator has authorized free copying of their material. When the spidering module 414 spiders the Internet, it may, in one example, search for such authorized materials. If a user has requested spidering of a site that does not provide an API, and therefore provides no automated way of indicating whether copying permission has been granted on its digital materials, the system may only proceed to spider sites that have been verified manually to offer free copying of their digital content. To verify copying permission, the system may check a list that it maintains that contains entries for all Internet sites for which free copying of digital materials has been verified manually, by the operators of the service.

[0075] Figure 10 is a diagram illustrating an example process for automatically aggregating video content that ensures permission to copy the content. This process can be carried out by the online video editor 202 previously described with respect to Figure 2. In the illustrated example, Internet site spidering by the spidering module 414 proceeds for each source of the aggregated material. At step 1000, for each piece of available digital material that has not previously been aggregated, it is determined whether the provider of the aggregated material also provides an associated API. If so, the system checks for copying permission via the API at step 1002 and saves a link to the source material within the site at step 1004. Next, at step 1006, an appropriate set of commands is built to access and stream the material, and at step 1008 the link and the commands are saved, for example, as a record in a file associated with a thumbnail of the aggregated material.
[0076] If, on the other hand, step 1000 is false, (i.e., the source of the aggregated material does not provide an API), it is determined at step 1010 whether the source is in a list of sites authorized for copying of digital material. If so, the system copies the material over to a disk file on the online video platform 206 at step 1012. In one example, the system creates two copies: one in Flash format and the other in DivX format. Thereafter, the system associates the copies at step 1014 with the thumbnail of the aggregated material. If step 1010 is false, (i.e., there is no API granting copy permission and the site is not on a list of authorized sites) then the material is not aggregated at step 1016.
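The decision flow of Figure 10 — check for an API first, then fall back to the manually verified list — reduces to a small function. The dictionary keys and return labels are illustrative assumptions.

```python
def aggregation_action(source, verified_sites):
    """Decide how one piece of external material may be aggregated.

    Returns 'link' - save a link plus streaming commands (API grants copy),
            'copy' - copy to platform disk (site manually verified),
            'skip' - no copying permission could be established.
    """
    if source["has_api"]:
        # Steps 1002-1008: query permission, then keep only a link + commands.
        return "link" if source["api_grants_copy"] else "skip"
    if source["site"] in verified_sites:
        # Steps 1010-1014: manually verified site, copy the material over.
        return "copy"
    return "skip"  # step 1016: material is not aggregated
```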
[0077] Figure 11 is a diagram illustrating an example process for making multiple copies of the aggregated material. This process can be carried out by the online video editor 202 previously described with respect to Figure 2. In the illustrated example, in order to accomplish the creation of two copies of the aggregated material, for example after step 1012 of Figure 10, the system may first detect the format and resolution of the subject video material at step 1100. Then, at step 1102, the system may select the appropriate decode software module to handle the detected video format. At step 1104, the video material may be decoded from the input format using the selected decode codec. Then, at step 1106, the material may be encoded into Flash format using a Flash codec and into DivX format using a DivX codec. Thereafter, at step 1108, copies of the compressed video are created.
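Figure 11's detect-decode-encode pipeline can be sketched with codec functions passed in as stubs. The codec table layout is an illustrative assumption, and the stubs stand in for real Flash and DivX encoders.

```python
def make_copies(material, codecs, targets=("flash", "divx")):
    """Decode once using a decoder chosen by the detected input format,
    then encode one copy per target format (per Figure 11).

    codecs: {"decode": {input_format: fn}, "encode": {target: fn}}.
    """
    decode = codecs["decode"][material["format"]]  # step 1102: pick decoder
    raw = decode(material["data"])                 # step 1104: decode input
    return {t: codecs["encode"][t](raw) for t in targets}  # step 1106: encode
```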
[0078] In one example, the thumbnails (which may be generated at step 906 of Figure 9) may be produced in the following manner. Thumbnails for photos may be implemented as miniature renderings of the actual photos. Thumbnails for video and animation may be represented by the first non-blank image detected in the data stream of the subject video or animation. Thumbnails for music and audio are normally imported from source sites that provide an API; where none is available, the system may supply a default image (e.g., an image of a music note or of an audio speaker).
[0079] Upon entering the video editing portion of the online video editor 202, users are presented with the thumbnail representations of the materials that have been specifically indexed for them. The thumbnail representations may be organized in a hierarchical file structure to allow easy browsing of available material. Users can review the aggregated materials by clicking on specific thumbnails. When building a new video production, users can select a particular aggregated material by clicking on its thumbnail and dragging and dropping the thumbnail into the timeline of the new production. The material will then be automatically integrated into the production, together with other segments or aggregated material that the user has selected. Users can also add transitions, special effects, as well as music or voice overlays.

[0080] If, during the editing process, the user invokes a piece of aggregated material by referencing its thumbnail image, the system retrieves a copy of the aggregated material and makes it available for viewing and possible inclusion into the user's production. If the system has not previously stored a copy of the material locally, but has instead saved the link to the material and related API commands, the system accesses the material and creates copies in multiple formats, prior to making the material available for viewing and possible inclusion into the user's production.

[0081] Users can share productions that include aggregated materials in the same manner in which they can share any production they create. On completion of a video production, the creator has the option of defining whether the video is shareable with other users. In one embodiment, the video can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family" group).
The sharing hierarchy may be implemented as a system of folders within a directory structure, similar to the structure of a UNIX file system or a Windows file system. The file structure is, in effect, the user's own personal collection of current multimedia material from the Internet, analogous to a music playlist.

[0082] Each member who creates video productions has such a directory, and a folder may be created within the directory for each group or subgroup that the member defines. For each video production that the member creates, he or she has the ability to define which folders have the ability to view the video. When a member designates a person as belonging to a group, or when a person accepts a member's invitation to join a group, the person's ID is entered into the appropriate folder, and the person inherits the sharing privileges that are associated with the folder.
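The folder-per-group scheme of paragraph [0082] can be modeled as a directory of folders holding person IDs, with viewing permission granted when a viewer's ID appears in any folder permitted for the production. The class and method names are illustrative assumptions.

```python
class SharingDirectory:
    """A member's directory of group folders used for sharing decisions."""

    def __init__(self):
        self.folders = {}  # folder name -> set of person IDs

    def join(self, folder, person_id):
        """Enter a person's ID into a folder (e.g., on accepting an invitation)."""
        self.folders.setdefault(folder, set()).add(person_id)

    def can_view(self, permitted_folders, person_id):
        """A person inherits viewing rights from any permitted folder."""
        return any(person_id in self.folders.get(f, set())
                   for f in permitted_folders)
```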
[0083] In one aspect, the system incorporates analytic methods to determine the most likely interests of users, and to make recommendations to users in the form of additional material that the system aggregates automatically on their behalf, based on their predicted interests. Figure 12 is a diagram illustrating an example process for using analytics to recommend aggregated material. This process can be carried out by the online video editor 202 previously described with respect to Figure 2.

[0084] In the illustrated example, a number of factors are considered at step 1200 and a recommendation is made at step 1202. Recommendations may be based on analyzing such variables as the user's production titles and tags, and the types or genres of external materials that they have requested to be aggregated, for example. The system also may analyze the interests of its user base, and aggregate recommended materials based on similar interests among users.

[0085] At step 1204, it is determined whether the user accepted the recommendation. If not, the process repeats. Otherwise, at step 1206, the additional material is aggregated for the user. Thus, a user who requests aggregation of a particular musical performer may receive aggregated materials from other musical performers, because other users have frequently requested materials from both performers.

[0086] The system thus has a self-learning aspect to its aggregation of external content. The combination of the aggregated file structures of all users can be considered to be a self-learning Internet file system, which evolves based on the composite set of interests of its user base. Unlike other network file systems or distributed file systems, its content is not constrained by whoever set up the system, but grows organically to reflect the common interests of its online community.
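The musical-performer example in paragraph [0085] is, in effect, a co-occurrence recommendation over the user base's aggregation requests. The sketch below illustrates that reading; its scoring rule is an assumption rather than the patent's stated method.

```python
from collections import Counter

def recommend(user_requests, all_users_requests, top_n=1):
    """Recommend items that frequently co-occur, in other users'
    request sets, with items this user has already requested."""
    wanted = set(user_requests)
    scores = Counter()
    for requests in all_users_requests:
        requests = set(requests)
        if wanted & requests:  # this other user shares an interest
            for item in requests - wanted:
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]
```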
[0087] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. For example, references above to "clips" are not intended to be limited to video but are intended to encompass different types of digital media, including, for example, photographs, audio and multimedia. Thus, it is to be understood that the description and drawings presented herein represent a presently preferred embodiment of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It is further understood that the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and that the scope of the present invention is accordingly limited by nothing other than the appended claims.

Claims

1. A method for aggregating information associated with video material comprising: searching for external content which resides outside a local storage device of a user, that may be of interest to the user; searching for internal content which resides on the local storage device of the user, that may be of interest to the user; indexing the external content after a first interval; building a link in a video editor to a first data at a location of the external content; generating a thumbnail associated with the first data in the video editor; indexing the internal content after a second interval; aggregating a second data associated with the internal content with the first data; and performing one or more editing actions using the first and the second data.
2. The method of claim 1 wherein the step of indexing the external content further comprises determining whether there is an application programming interface (API) associated with the first data.
3. The method of claim 2 wherein the step of determining whether there is an application programming interface further comprises determining whether the API grants copy permission for the first data.
4. The method of claim 3 wherein the step of determining whether there is an application programming interface further comprises building a set of commands to access the first data, if the API grants copy permission.
5. The method of claim 2 wherein the step of indexing the external content further comprises determining whether the location of the external content is on a list of locations granting copy permission, if there is no API associated with the first data.
6. The method of claim 5 wherein the step of indexing the external content further comprises copying the first data to a disk file, if the location of the external content is on the list of locations granting copy permission.
7. The method of claim 6 wherein the step of copying further comprises: generating a Flash version of the first data; and generating a DivX version of the first data.
8. The method of claim 7 further comprising making the Flash version and the DivX version available for viewing and possible inclusion into the video material.
9. The method of claim 1 wherein the step of searching further comprises: considering a number of factors; and making a recommendation to a user about the external content or the internal content based on the factors.
10. The method of claim 9 wherein the factors include one or more of a production title, a tag, a type, a genre, or an interest of a user base.
11. The method of claim 1 wherein the step of performing further comprises caching the first data when the user performs one of the editing actions which is associated with the first data.
12. A system comprising: a video platform which receives video material; a preprocessing application which receives the video material from the video platform and initiates an upload process on the video material from a local computing device to a remote computing device; and an online video editor which displays one or more thumbnails associated with the video material on the remote computing device as the upload process is occurring, the online video editor further comprising, a first spidering module which determines one or more locations of external content which resides outside a local storage device of a user, that may be of interest to the user and which indexes the external content after a first interval, builds links in the online video editor to a first data at the locations of the external content, and generates one or more additional thumbnails in the online video editor associated with the first data, and a second spidering module which determines one or more locations of internal content which resides on the local storage device of the user, that may be of interest to the user and which indexes the internal content after a second interval, the online video editor further configured to aggregate a second data associated with the internal content with the first data.
13. The system of claim 12 wherein the online video editor is further configured to determine whether there is an application programming interface (API) associated with the first data.
14. The system of claim 13 wherein the online video editor is further configured to determine whether the API grants copy permission for the first data.
15. The system of claim 14 wherein the online video editor is further configured to build a set of commands to access the first data, if the API grants copy permission.
16. The system of claim 12 wherein the online video editor is further configured to determine whether one of the locations of the external content is on a list of locations granting copy permission, if there is no API associated with the first data.
17. The system of claim 16 wherein the online video editor is further configured to copy the first data to a disk file, if the one of the locations of the external content is on the list of locations granting copy permission.
17. The system of claim 16 further comprising: a Flash version of the first data generated after copying the first data to the disk file; and a DivX version of the first data generated after copying the first data to the disk file.
18. The system of claim 17 wherein the Flash version and the DivX version are made available for viewing and possible inclusion into the video material.
19. The system of claim 12 wherein the first and the second spidering modules are further configured to consider a number of factors and to make a recommendation to a user about the external content or the internal content based on the factors.
20. The system of claim 19 wherein the factors include one or more of a production title, a tag, a type, a genre, or an interest of a user base.
21. The system of claim 12 further comprising a cache which is used to store the first data when the user performs an editing action which is associated with the first data.
22. An aggregator of information associated with video material comprising: means for searching for external content which resides outside a local storage device of a user that may be of interest to the user; means for searching for internal content which resides on the local storage device of the user that may be of interest to the user; means for indexing the external content after a first interval; means for building a link in a video editor to a first data at a location of the external content; means for generating a thumbnail associated with the first data in the video editor; means for indexing the internal content after a second interval; means for aggregating a second data associated with the internal content with the first data; and means for performing one or more editing actions using the first and the second data.
23. The aggregator of claim 22 wherein the means for indexing the external content further comprises means for determining whether there is an application programming interface (API) associated with the first data.
24. The aggregator of claim 23 wherein the means for determining whether there is an application programming interface (API) further comprises means for determining whether the API grants copy permission for the first data.
25. The aggregator of claim 24 wherein the means for determining whether there is an application programming interface (API) further comprises means for building a set of commands to access the first data, if the API grants copy permission.
26. The aggregator of claim 23 wherein the means for indexing the external content further comprises means for determining whether one of the locations of the external content is on a list of locations granting copy permission, if there is no API associated with the first data.
27. The aggregator of claim 26 wherein the means for indexing the external content further comprises means for copying the first data to a disk file, if the one of the locations of the external content is on the list of locations granting copy permission.
28. The aggregator of claim 27 wherein the means for copying further comprises: means for generating a Flash version of the first data; and means for generating a DivX version of the first data.
29. The aggregator of claim 28 further comprising means for making the Flash version and the DivX version available for viewing and possible inclusion into the video material.
30. The aggregator of claim 22 wherein the means for searching further comprises: means for considering a number of factors; and means for making a recommendation to a user about the external content or the internal content based on the factors.
31. The aggregator of claim 30 wherein the factors include one or more of a production title, a tag, a type, a genre, or an interest of a user base.
32. The aggregator of claim 22 further comprising means for caching the first data when the user performs one of the editing actions which is associated with the first data.
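The decision flow recited in the indexing claims (an API check, a copy-permission check, then copying and transcoding, with a link-only fallback) can be summarized as a small sketch. This is purely illustrative: the patent specifies no implementation, and the host names, the `WHITELIST` and `APIS` tables, and the `resolve_external_content` function are all hypothetical.

```python
# Hypothetical sketch of the external-content decision flow described in the
# claims: check for an API, check copy permission, copy and transcode when
# permitted, otherwise fall back to indexing and linking only.

WHITELIST = {"videos.example.com"}           # sites assumed to grant copy permission
APIS = {"api.example.com": {"copy": True}}   # sites assumed to expose an API

def resolve_external_content(host):
    """Return the action the editor would take for content hosted at `host`."""
    api = APIS.get(host)
    if api is not None:
        if api.get("copy"):
            # Build a set of commands to access the data through the API.
            return {"action": "api_fetch", "commands": ["fetch_media", "fetch_metadata"]}
        return {"action": "link_only"}       # an API exists but denies copying
    if host in WHITELIST:
        # Copy the data to a disk file, then generate preview renditions
        # (the claims name Flash and DivX versions).
        return {"action": "copy", "renditions": ["flash", "divx"]}
    return {"action": "link_only"}           # fallback: index and build a link
```

In this sketch the link-only branch corresponds to content that is still indexed and shown as a thumbnail in the editor, but never copied to local storage.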
PCT/US2007/060177 2006-01-05 2007-01-05 Automatic aggregation of content for use in an online video editing system WO2007082169A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75636906P 2006-01-05 2006-01-05
US60/756,369 2006-01-05

Publications (2)

Publication Number Publication Date
WO2007082169A2 true WO2007082169A2 (en) 2007-07-19
WO2007082169A3 WO2007082169A3 (en) 2008-05-02

Family

ID=38257088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/060177 WO2007082169A2 (en) 2006-01-05 2007-01-05 Automatic aggregation of content for use in an online video editing system

Country Status (1)

Country Link
WO (1) WO2007082169A2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020016786A1 (en) * 1999-05-05 2002-02-07 Pitkow James B. System and method for searching and recommending objects from a categorically organized information repository
US20040030741A1 (en) * 2001-04-02 2004-02-12 Wolton Richard Ernest Method and apparatus for search, visual navigation, analysis and retrieval of information from networks with remote notification and content delivery
US20040098740A1 (en) * 2000-12-07 2004-05-20 Maritzen L. Michael Method and apparatus for using a kiosk and a transaction device in an electronic commerce system
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US20050177716A1 (en) * 1995-02-13 2005-08-11 Intertrust Technologies Corp. Systems and methods for secure transaction management and electronic rights protection

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9805012B2 (en) 2006-12-22 2017-10-31 Google Inc. Annotation framework for video
US11727201B2 (en) 2006-12-22 2023-08-15 Google Llc Annotation framework for video
US11423213B2 (en) 2006-12-22 2022-08-23 Google Llc Annotation framework for video
US10853562B2 (en) 2006-12-22 2020-12-01 Google Llc Annotation framework for video
US10261986B2 (en) 2006-12-22 2019-04-16 Google Llc Annotation framework for video
US8826320B1 (en) 2008-02-06 2014-09-02 Google Inc. System and method for voting on popular video intervals
US9684644B2 (en) 2008-02-19 2017-06-20 Google Inc. Annotating video intervals
US9690768B2 (en) 2008-02-19 2017-06-27 Google Inc. Annotating video intervals
US9684432B2 (en) 2008-06-03 2017-06-20 Google Inc. Web-based system for collaborative generation of interactive videos
US8826357B2 (en) 2008-06-03 2014-09-02 Google Inc. Web-based system for generation of interactive games based on digital videos
US8826117B1 (en) * 2009-03-25 2014-09-02 Google Inc. Web-based system for video editing
US9044183B1 (en) 2009-03-30 2015-06-02 Google Inc. Intra-video ratings
WO2010119181A1 (en) * 2009-04-16 2010-10-21 Valtion Teknillinen Tutkimuskeskus Video editing system

Also Published As

Publication number Publication date
WO2007082169A3 (en) 2008-05-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07709976

Country of ref document: EP

Kind code of ref document: A2