WO2007082166A2 - System and methods for distributed edit processing in an online video editing system - Google Patents

System and methods for distributed edit processing in an online video editing system Download PDF

Info

Publication number
WO2007082166A2
WO2007082166A2 (PCT/US2007/060174)
Authority
WO
WIPO (PCT)
Prior art keywords
video
nodes
weight
tier
remote computing
Prior art date
Application number
PCT/US2007/060174
Other languages
French (fr)
Other versions
WO2007082166A3 (en)
Inventor
David A. Dudas
James H. Kaskade
Kenneth W. O'Flaherty
Charles L. Sismondo
Original Assignee
Eyespot Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyespot Corporation filed Critical Eyespot Corporation
Publication of WO2007082166A2 publication Critical patent/WO2007082166A2/en
Publication of WO2007082166A3 publication Critical patent/WO2007082166A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/162Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
    • H04N7/165Centralised control of user terminal; Registering at central
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2181Source of audio or video content, e.g. local disk arrays comprising remotely distributed storage units, e.g. when movies are replicated over a plurality of video servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/222Secondary servers, e.g. proxy server, cable television Head-end
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client

Definitions

  • FIG. 3 is a block diagram illustrating an example online video platform.
  • the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310.
  • the online video platform 206 may be implemented on one or more servers, for example, Linux servers.
  • the system can leverage open source applications and an open source software development environment.
  • the system has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability.
  • the application suite may be based on AJAX where the online application behaves as if it resides on the user's local computing device, rather than across the Internet on a remote computing device, such as a server.
  • the AJAX architecture allows users to manipulate data and perform "drag and drop" operations, without the need for page refreshes or other interruptions.
  • the opt-in engine module 300 may be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210.
  • the delivery engine module 302 may be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210.
  • the presence engine module 304 may be a server that determines device priority for delivery of content to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
  • the transcoding engine module 306 may be a server that performs decoding and encoding tasks on media to achieve optimal format for delivery to target devices.
  • the analytic engine module 308 may be a server that maintains and analyzes statistical data relating to website activity and viewer behavior.
  • the editing engine module 310 may be a server that performs tasks associated with enabling a user to edit productions efficiently in an online environment.
  • Figure 4 is a block diagram illustrating an example online video editor 202.
  • the online video editor 202 includes an interface 400, input media 402a-h, and a template 404.
  • a digital content aggregation and control module 406 may also be used in conjunction with the online video editor 202 and thumbnails 408 representing the actual video files may be included in the interface 400.
  • the online video editor 202 may be an Internet-hosted application, which provides the interface 400 for selecting video and other digital material (e.g., music, voice, photos) and incorporating the selected materials into a video production via the digital content aggregation and control module 406.
  • the digital content aggregation and control module 406 may be software, hardware, and/or firmware that enables the modification of the video production as well as the visual representation of the user's actions in the interface 400.
  • the input media 402a-h may include such input sources as the shutterfly website 402a, remote media 402b, local media 402c, the napster web service 402d, the real rhapsody website 402e, the garage band website 402f, the flickr website 402g and webshots 402h.
  • the input media 402a-h may be media that the user has selected for possible inclusion in the video production and may be represented as the thumbnails 408 in a working "palette" of available material elements, in the main window of the interface 400.
  • the input media 402a-h may be of diverse types and formats, which may be aggregated together by the digital content aggregation and control module 406.
  • the thumbnails 408 are used as a way to represent material and can be acted on in parallel with the upload process.
  • the thumbnails 408 may be generated in a number of manners.
  • the thumbnails may be single still frames created from certain sections within the video, clip, or mix.
  • the thumbnails 408 may include multiple selections of frames (e.g., a quadrant of four frames).
  • the thumbnails may include an actual sample of the video in seconds (e.g., a 1 minute video could be represented by the first 5 seconds).
  • the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3 second videos for a total of 12 seconds).
  • the thumbnails 408 are a method of representing the media to be uploaded (and after it is uploaded), whereby creating and uploading the representation takes significantly less time than either uploading the original media or compressing and uploading the original media.
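  • For example, the thumbnail strategies above can be sketched with the ffmpeg command-line tool, as shown below. The frame positions and sample lengths are illustrative assumptions, not values prescribed by the system.

```python
import subprocess

def still_thumbnail(src: str, out: str, at_sec: float = 5.0) -> None:
    """Single still frame taken from a point inside the video."""
    subprocess.run(["ffmpeg", "-ss", str(at_sec), "-i", src,
                    "-frames:v", "1", out], check=True)

def quadrant_thumbnails(src: str, duration_sec: float, prefix: str) -> list[str]:
    """Four stills spread across the video, to be tiled into a 2x2 quadrant."""
    outs = []
    for i in range(4):
        out = f"{prefix}_{i}.jpg"
        still_thumbnail(src, out, at_sec=duration_sec * (2 * i + 1) / 8)
        outs.append(out)
    return outs

def sample_clip(src: str, out: str, start_sec: float = 0.0,
                length_sec: float = 5.0) -> None:
    """Short video sample (e.g., the first 5 seconds of a 1-minute video)."""
    subprocess.run(["ffmpeg", "-ss", str(start_sec), "-i", src,
                    "-t", str(length_sec), "-c", "copy", out], check=True)
```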
  • the online video editor 202 allows the user to choose (or create) the template 404 for the video production.
  • the template 404 may represent a timeline sequence and structure for insertion of materials into the production.
  • the template 404 may be presented in a separate window at the bottom of the screen, and the online video editor 202 via the digital content aggregation and control module 406 may allow the user to drag and drop the thumbnails 408 (representing material content) in order to insert them into the timeline to create the new video production.
  • the online video editor 202 may also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project may be shown in a separate window.
  • a spidering module 414 is included in the digital content aggregation and control module 406.
  • the spidering module may periodically search and index both local content and external content.
  • the spidering module 414 may use the Internet 416 to search for external material periodically for inclusion or aggregation with the production the user is editing.
  • the local storage 418 may be a local source for the spidering module 414 to periodically spider to find additional internal locations of interest and/or local material for possible aggregation.
  • the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user may choose to keep certain productions private or to only share the productions with certain users.
  • FIG. 5 is a block diagram illustrating an example preprocessing application.
  • the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, and video segment files 520.
  • the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 include Microsoft foundation classes ("MFCs").
  • MFCs Microsoft foundation classes
  • the MFCs provide an object-oriented programming model over the Windows APIs.
  • In another example, the preprocessing application 204 is written with the foundation classes 508 in a format suitable for the operating system module 510 to be the Linux operating system.
  • the video segment upload module 516 may be an application that uses a Model-View-Controller (“MVC") architecture.
  • MVC Model-View-Controller
  • the MVC architecture separates the data model module 502, the user interface module 506, and the control module 504 into three distinct components.
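  • A minimal sketch of that separation, with hypothetical class names standing in for the data model module 502, the user interface module 506, and the control module 504:

```python
class UploadModel:
    """Data model (502): what is known about the segments being uploaded."""
    def __init__(self) -> None:
        self.progress: dict[str, float] = {}  # segment name -> fraction done

class UploadView:
    """User interface (506): renders the model's state to the user."""
    def render(self, model: UploadModel) -> None:
        for name, done in model.progress.items():
            print(f"{name}: {done:.0%} uploaded")

class UploadController:
    """Control (504): receives events and updates the model and view."""
    def __init__(self, model: UploadModel, view: UploadView) -> None:
        self.model, self.view = model, view

    def on_progress(self, name: str, done: float) -> None:
        self.model.progress[name] = done
        self.view.render(self.model)
```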
  • the preprocessing application 204 automatically segments, compresses, and uploads video material from the user's PC, regardless of length.
  • the preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks.
  • the uploading method works in parallel with the online video editor 202, allowing the user to begin editing the material immediately, while the material is in the process of being uploaded.
  • the material may be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment, for example.
  • the video source 518 may be a digital video camcorder or other video source device.
  • the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it may automatically segment the video stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
  • a video would be captured on any number of devices at the video source block 518. Once the user captured the video (i.e., on their camcorder, cellular phone, etc.) it would be transferred to a local computing device, such as the hard drive of a client computer with Internet access.
  • videos can be transferred to a local computing device whereby an intelligent uploader can be deployed.
  • the video can be sent directly from the video source block 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206.
  • This alternative bypasses the need to involve a local computing device or a client computer.
  • this example is most useful when the video, clip, or mix is either very short, or highly compressed, or both.
  • when the video is uncompressed, lengthy, or both, and therefore relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful.
  • an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy video material.
  • the user would be provided with the ability to interact with the user interface module 506.
  • the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the video material is segmented and compressed into the video segment files 520.
  • a lengthy production may be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments.
  • Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This may occur, for example, by each of the upload segments being uploaded in parallel. Alternatively, the upload segments may be uploaded in order: largest segment first, smallest segment first, or in any other manner.
  • the online video editor 202 is presented to the user.
  • thumbnails representing the video segments in the process of being uploaded are made available to the user.
  • the user would proceed to edit the video material via an interaction with the thumbnails.
  • the user may be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited video material.
  • the system is configured to behave as if all of the video represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the material is still in the process of being uploaded by the video segment upload module 516.
  • the upload process may be changed. For example, if the upload process was uploading all of the compressed upload segments in sequential order and the user dropped an upload segment representing the last sequential portion of the production into the storyline, the upload process may immediately begin to upload the last sequential portion of the production, thereby lowering the priority of the segments that were currently being uploaded prior to the user's editing action.
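  • The sketch below shows one way such reprioritization could work: a priority queue of pending segments, where an editing action promotes the affected segment to the front. The class and method names are illustrative assumptions.

```python
import heapq
from typing import Optional

class SegmentUploader:
    """Uploads compressed segments in priority order; dropping a segment
    into the storyline promotes it ahead of everything else."""

    def __init__(self, segments: list[str]) -> None:
        # (priority, name): lower priority value uploads first
        self._queue = [(i, seg) for i, seg in enumerate(segments)]
        heapq.heapify(self._queue)

    def promote(self, segment: str) -> None:
        """Called when the user drags this segment into the timeline."""
        self._queue = [(p, s) for p, s in self._queue if s != segment]
        heapq.heapify(self._queue)
        heapq.heappush(self._queue, (-1, segment))  # jump the queue

    def next_segment(self) -> Optional[str]:
        return heapq.heappop(self._queue)[1] if self._queue else None
```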
  • All of the user's editing actions are saved by the online video editor 202.
  • the saved editing actions are applied to the completely uploaded segments.
  • the user may have already finished the editing process and logged off or the user may still be logged on.
  • the process of applying the edits only when the material is finished uploading saves the user from having to wait for the upload process to finish before editing the material.
  • various capabilities exist to share, forward, publish, browse, and otherwise use the uploaded video in a number of ways.
  • the online video editor 202 also may support the construct of a hyper-template: a shareable definition of how a video production was created, which can be reused by others to help them create their own derivative works. Hyper-templates, therefore, are shareable versions of templates.
  • a template defines the sequence of scenes (edit sequence) that make up a video, and the related soundtrack, transitions, filters or special effects that are used in the production.
  • FIG. 6 is a block diagram illustrating an example edit sequence.
  • four video clips (a 1104, b 1106, c 1108, and d 1110) are combined into a video production 1100.
  • the editing sequence proceeds as follows: first the individual clips are edited; then clips a 1104 and b 1106 are merged with sound added 1102; and then clips c 1108 and d 1110 are combined with the previously merged clips a and b to form the video production 1100.
  • Figure 7 is a block diagram illustrating example data structures that support hyper-templates.
  • data structures 1200 include an edit tree table 1202, an edit dependencies table 1204, an edit command table 1206, a sequence table 1208, and a sequence composition map 1210.
  • the sequence composition map 1210 provides pointers to the four video files (a 1104, b 1106, c 1108, and d 1110) previously described in Figure 6.
  • the edit tree table 1202 identifies a sequence of six editing actions.
  • the edit dependencies table 1204 defines dependencies between editing actions (e.g., editing action E must wait for completion of editing actions A and B).
  • the sequence composition map 1210 identifies the video clips that are used in each sequence step.
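  • A minimal sketch of these tables as in-memory structures, populated with the Figure 6 example; the field layouts are assumptions inferred from the descriptions above.

```python
from dataclasses import dataclass

@dataclass
class HyperTemplate:
    edit_tree: list[str]                # edit tree table 1202: ordered actions
    edit_commands: dict[str, str]       # edit command table 1206
    dependencies: dict[str, list[str]]  # edit dependencies table 1204
    composition: dict[int, list[str]]   # sequence composition map 1210

# Figure 6: edit a..d individually, merge a+b with sound, then merge everything.
template = HyperTemplate(
    edit_tree=["A", "B", "C", "D", "E", "F"],
    edit_commands={"A": "edit clip a", "B": "edit clip b", "C": "edit clip c",
                   "D": "edit clip d", "E": "merge a+b, add sound",
                   "F": "final merge"},
    dependencies={"E": ["A", "B"], "F": ["E", "C", "D"]},  # E waits for A and B
    composition={1: ["a"], 2: ["b"], 3: ["c"], 4: ["d"],
                 5: ["a", "b"], 6: ["a", "b", "c", "d"]},
)
```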
  • the online video editor 202 may be used to provide a growing library of community hyper-templates, based on the work of its members.
  • a user can either use one of the available hyper-templates that have been designated as "shareable,” or create a video and its accompanying template from scratch.
  • the user may drag and drop components from a palette of available video segments into a timeline that defines the sequence for the video production.
  • the user also may drag and drop transitions between segments, and can optionally drag and drop special transitions on to individual segments.
  • On completion of a video production, the creator has the option of defining whether the video is shareable with other users.
  • the video can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family” group).
  • the sharing hierarchy may be implemented as a system of folders within a directory structure, similar to the structure of a UNIX file system or a Windows file system, for example. Each member who creates video productions has such a directory, and a folder is created within the directory for each group or subgroup that the member defines.
  • the system also provides convenient mechanisms for creators of video productions to share their creation processes.
  • On completion of a video production, for example, the user has the option of defining whether the hyper-template used in the production is shareable with other users, and whether the content of the video is also shareable in combination with the hyper-template.
  • the hyper-template can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family” group). Sharing controls for hyper-templates and their content may be implemented using the same method outlined above, for sharing video productions.
  • the user can identify individual segments within the video that are shareable when reusing the hyper-template and which are not.
  • the user can identify which specific groups or subgroups of people can share specific video segments when reusing the hyper-template.
  • the system provides two methods for selecting hyper-templates for reuse: browsing and hyper-linking. Using the first method, members of the video- sharing website browse among the set of hyper-templates designated as available to them for reuse.
  • the hyper-templates may be organized in a variety of classification structures, similar to the structures by which the actual video productions are classified.
  • These structures may include classification schemes based on categories of videos (or "channels"), styles of video production, lengths of videos, tags or titles of videos, a grouping of favorite hyper-templates (based on popularity), and a set of hyper-templates recommended by the website, organized by category.
  • the second method of selecting hyper-templates for reuse involves the use of hyperlinks, and, in particular, hypervideo links.
  • Hyperlinks are a referencing device in hypertext documents. They are used widely on the World Wide Web to act as references that, when clicked on, link dynamically from one webpage to another.
  • the hypervideo concept extends the use of the hyperlink device to provide a link out of a video production (rather than a text document) to another webpage, typically to another section of video.
  • the presently described system and methods use the hypervideo link as a method of transferring control out of a viewed video and into the online video editor 202, such that the viewer can use the template of the viewed video to create his or her own production.
  • hyper-template linking is a special case of hypervideo linking, the special case being that the system always transfers control to the online video editor 202, rather than to a destination defined by the video-creator.
  • video productions created by the online video editor 202 are discreetly watermarked with a small logo that appears in the lower left or right corner of the video, for example.
  • the watermark acts as a hyper-template link, in the sense that, if clicked on, it triggers a hyperlink that takes the viewer seamlessly into the online video editor 202, with the hyper-template of the viewed video pre-loaded and ready to be reused in creating a new video production. This is achieved by structuring the hyperlink in the form of <websiteaddress>/<editor>?<hypertemplateidentifier>, where
  • hypertemplateidentifier identifies the particular video that is being viewed and its hyper-template
  • websiteaddress and "editor” identify the online editor to be linked to.
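  • A short sketch of how such a link might be assembled from those three components; the exact URL layout is an assumption, since only the components are specified here.

```python
def hypertemplate_link(websiteaddress: str, editor: str,
                       hypertemplateidentifier: str) -> str:
    """Builds the watermark hyperlink out of the three named components."""
    return f"https://{websiteaddress}/{editor}?template={hypertemplateidentifier}"

# e.g. hypertemplate_link("videoshare.example", "editor", "vid12345")
```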
  • a hyper-template watermark may be distinguished in several possible ways, such as by having two separate watermarks placed in different areas of the video image, or, in the case of a shared watermark, by a passive appearance for a hyper-template hyperlink (as opposed to flashing, which indicates a hotspot), or by color-coding (e.g., blue indicates a hyper-template link, whereas red indicates a hotspot).
  • a hyper-template hyperlink is initially generated by the online video editor 202 during construction of a video production, and is stored as metadata with the video.
  • the data structures supporting the metadata were described earlier in this section, and shown in Figure 7. If the video is posted on another website (e.g., on a blogger's home page or on a Myspace user's home page), the hyperlink metadata remains associated with it. No matter where the video is viewed, on any website, it still retains the hyperlink that will link back to the original online editor if the hyper-video hyperlink is clicked on. This is because the video is never actually exported, but remains on the video-sharing website which acts as a proxy server that retrieves and streams the video when requested.
  • the hyper- template thus not only provides users with a convenient way of sharing and reusing video creation processes, but also benefits the online video sharing website by generating traffic to the website and potentially enlisting new members.
  • the user may be linked into the online video editor 202 and, in one example, is presented with a webpage showing the hyper-template of the selected video in the form of a timeline at the bottom of the screen, with the shareable segments of the related video displayed on the main palette in the center of the screen.
  • the timeline of the hyper-template is displayed vertically at the left or right side of the screen, with an additional vertical window alongside the timeline to allow insertion of text to be used as a commentary relating to the contents of the video timeline.
  • the positioning of the text can be adjusted to appear alongside the particular video sequence that it relates to.
  • the text can then serve as a teleprompter, and the commentary can then be recorded by the user in synchronization with the video sequence, as the video is played back in a separate window, and a marker moves down the timeline and its associated commentary.
  • Upon selecting a hyper-template, users have a variety of choices regarding content that they may include in their new production. From the selected video, they can reuse any segments that the owner has designated as shareable. Users can also add or remove segments of video. They can select and include material from their own work-in-progress or their own galleries of completed productions, as well as from external sources that they have defined to be of interest and that the system has aggregated on their behalf, such as sources of photos, music, animation and other video content. Users can also change titles, credits and other text that may appear in the production, as well as any of the transitions, filters or special effects. Thus hyper-templates offer users a wide range of options regarding reuse of others' work, ranging from simple substitution of one or more video segments or other elements, to a major restructuring of the video production.
  • FIG. 8 is a diagram illustrating an example editing decision tree.
  • five clips (or files) 800, 802, 804, 806, and 808 will be strung together to create a single production (output file) 810.
  • Each clip 800-808 may have edits, and subsets of the final production may have soundtracks or edits applied before merging the N clips.
  • the tree 812 in the example of Figure 8 has three tiers 814, 816, and 818.
  • Nodes in the first tier 814 are leaf nodes.
  • Leaf nodes may contain clips with clip specific edits to apply such as fading, clipping, etc.
  • the nodes on the second tier 816 may be sets of clips that require audio or edits across the subset such as applying an MP3 or adding a logo. Multiple second tiers can exist.
  • the third tier 818 is the final merged product and can have the same edits as the second tier.
  • editing time varies with the size and editing needs.
  • Distributing clips across all of the servers available at the online video platform 206 and processing them separately could result in unneeded disk I/O and wait times, due to the relative size of the files involved and the type of editing that needs to occur.
  • each node of the tree 812 is given a weight.
  • the base weight of a leaf node may be expressed as a measure of time in seconds based on the time the clip will take to perform a predefined editing function.
  • Each edit produces a function on weight (most commonly a straight multiplier, sometimes an exponent). For example, a 10MB file that processes the basic transcode in 20 seconds may be given weight 20.
  • a logo overlay edit that takes 30 seconds on a 10MB file may be given a weight multiplier of weight*1.5 (i.e., 20*1.5 = 30).
  • Each server in a workhorse cluster can be tracked by weight load.
  • the weight load is an expression of expected load dependent on tasks the server must perform and is not dependent on the current load the server is experiencing.
  • anytime a file is copied, there is a time cost in disk I/O and network transfer. In one example, that time is also taken into account, for example, based on the shortest weight path. Any given tree will be small from a data perspective, and even forcing a 2^n optimization on the weight of the tree will not outweigh the cost of a single wrong choice in transferring a file from one server to another.
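  • Putting the weighting rules above into code, a minimal sketch (the 2 seconds/MB transcode rate is an illustrative assumption, chosen so that a 10MB file gets weight 20):

```python
def base_weight(file_size_mb: float, secs_per_mb: float = 2.0) -> float:
    """Base weight: seconds the predefined linear-time transcode takes."""
    return file_size_mb * secs_per_mb

def apply_edits(weight: float, multipliers: list[float]) -> float:
    """Each edit is a function on weight, most commonly a straight multiplier."""
    for m in multipliers:
        weight *= m
    return weight

w = apply_edits(base_weight(10.0), [1.5])  # logo overlay: 20 * 1.5 = 30.0
```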
  • the second tier A+B 820 is identified as the biggest task with a total weight of 23.
  • the paths for the nodes 822 and 824 on the second tier 816 are 17 and 18.5 respectively.
  • the tier producing 18.5 (806, 808, and 824) becomes the destination server (S1) for the smaller second tier (804 and 822), which is processed on a second server S2. Since 18.5 is still less than 23, the server represented by the node 808 becomes the destination for files after completion that reach the server represented by the node at path 826 (S3).
  • a first method is called second tier distribution. Second tier distribution is based on the total weight of the second tier.
  • FIG. 9 is a diagram illustrating an example editing decision tree with leaf node weights.
  • five clips (or files) 900, 902, 904, 906, and 908 will be strung together to create a single production (output file) 910.
  • Each of the clips 900-908 in the present example resides on a different server.
  • Node 900 is on server S1.
  • Node 902 is on server S2.
  • Node 904 is on server S3.
  • Node 906 is on server S4.
  • Node 908 is on server S5.
  • the tree 912 in the example of Figure 9 has three tiers 914, 916, and 918.
  • Nodes in the first tier 914 are leaf nodes. As the size of individual files increases in combination with many smaller files, a single transcode can be the biggest weight piece of an entire tree. By adding leaf nodes into the distribution, this case can be accounted for. This assumes a larger number of workhorse servers to distribute tasks to.
  • leaf node weights are added.
  • Leaf node B 902 is the largest, and its encode path to a final product is 32.5 (node 902 + node 920), the clear longest path.
  • Node A 900 is processed on the server S1 and then transferred to the server represented by the node 920 (S2) to await addition.
  • node C 904 is on a separate server (S3), and a total path of 12 to its final destination still identifies the server represented in node 910 (S1) as the final destination.
  • Node D 906 is processed on a separate server (S4) as well and so is node E 908 (S5).
  • Node E 908 has less weight than node D 906 and is added to the server S4 after processing.
  • the total path of D+E is 12.5 (since E was processed elsewhere), and makes its way to the server S2 for final merge.
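  • The sketch below captures the placement rule these examples illustrate: process each child where it resides, and merge on the server whose subtree carries the most weight, so the heaviest intermediate files never move. It is one plausible reading of Figures 8 and 9, not a verbatim transcription of the algorithm.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    weight: float                      # expected seconds of work at this node
    server: Optional[str] = None       # where the clip currently resides
    children: list["Node"] = field(default_factory=list)

def path_weight(node: Node) -> float:
    """Total weight of this node plus everything beneath it."""
    return node.weight + sum(path_weight(c) for c in node.children)

def assign_servers(node: Node) -> Optional[str]:
    """Merge each tier on the server holding its heaviest subtree."""
    if not node.children:
        return node.server
    heaviest = max(node.children, key=path_weight)
    node.server = assign_servers(heaviest)
    for child in node.children:
        if child is not heaviest:
            assign_servers(child)      # processed in place, then transferred
    return node.server
```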
  • Another method is termed distribution by user profile.
  • a user hitting the site has an active file catalogue of known weights. Based on common use and transcoding patterns, a virtual tree of likely distribution across the workhorse cluster can be constructed asynchronously. Even a partial match on any pre-distribution saves on I/O overhead, decreasing the time the user waits for the transcode.
  • Two total weights are tallied by the server: actual weight and potential weight. Potential weight is factored in second when other tasks are distributed.
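  • A small sketch of that bookkeeping; the 0.5 discount applied to potential weight is an assumption standing in for "factored in second."

```python
class ServerLoad:
    """Expected load on one workhorse server."""
    def __init__(self) -> None:
        self.actual = 0.0     # weight of tasks actually assigned
        self.potential = 0.0  # weight of profile-predicted (pre-distributed) tasks

    def effective(self) -> float:
        # Potential weight counts less than committed work when placing tasks.
        return self.actual + 0.5 * self.potential
```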
  • Another method is termed user profile preprocessing.
  • workhorse servers will be idle over 90% of the time.
  • Transcoding is completely user driven; by nature there are high and low periods of user activity. Users will have common transcoding edits they want to perform. These can be as simple as always desiring an output format of QuickTime, or fading out the last 3 seconds of each file. Common edit profiles are kept per user and applied to files when the workhorse server is currently idle.
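  • A sketch of that idle-time preprocessing against a stored edit profile; transcode() below is a hypothetical stand-in for the real transcoding engine.

```python
def transcode(path: str, out_format: str, fade_out_secs: int) -> None:
    # Hypothetical stand-in for the real transcoding engine.
    print(f"transcoding {path} -> .{out_format}, fading last {fade_out_secs}s")

def preprocess_when_idle(server_idle: bool, new_files: list[str],
                         edit_profile: dict) -> None:
    """Apply a user's common edits (e.g., QuickTime output, 3-second fade-out)
    to freshly uploaded files whenever the workhorse server has spare cycles."""
    if not server_idle:
        return
    for path in new_files:
        transcode(path,
                  out_format=edit_profile.get("format", "mov"),
                  fade_out_secs=edit_profile.get("fade_out", 3))
```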
  • FIG. 10 is a diagram illustrating an example process for distributed edit processing using a weighted tree.
  • the process can be carried out by the preprocessing application 204 shown in Figure 2.
  • a project is submitted for processing at step 1000 to the online video platform 206.
  • the submission may be, for example, from a user or a group of users who have finished creating a project.
  • the project is transferred from the online video platform 206 to the preprocessing application 204.
  • the preprocessing application 204 represents the editing actions as leaf nodes in a tree structure at step 1004.
  • the tree may reside, for example, in an internal or external memory available to the preprocessing application 204.
  • the preprocessing application 204 assigns weights to each of the leaf nodes in the tree.
  • the weights may be, for example, based on the expected load required to complete the editing action represented by the leaf node.
  • the preprocessing application 204 determines the total weight of the path through the tree to complete the editing actions. This includes, for example, the required processing at each level of the tree, from the initial leaf node processing to second and additional tier processing, until the final merge occurs at the root of the tree.
  • leaf node editing actions are distributed by the preprocessing application 204 across multiple servers based on the total weight of each path through the tree.
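  • An end-to-end sketch of this flow, using a simple greedy least-loaded placement as a stand-in for the weighted-tree algorithms described above; the clip dictionary layout and the 2 seconds/MB transcode rate are illustrative assumptions.

```python
from math import prod

def distribute_project(clips: list[dict], servers: list[str]) -> dict[str, str]:
    """Weight each clip's editing work, then place the heaviest tasks first
    on whichever server currently has the least expected load."""
    load = {s: 0.0 for s in servers}
    placement: dict[str, str] = {}
    tasks = sorted(
        ((clip["size_mb"] * 2.0 * prod(clip.get("edits", [1.0])), clip["name"])
         for clip in clips),
        reverse=True)
    for weight, name in tasks:
        target = min(load, key=load.get)  # least expected load, not current CPU
        placement[name] = target
        load[target] += weight
    return placement

# distribute_project([{"name": "a", "size_mb": 10, "edits": [1.5]},
#                     {"name": "b", "size_mb": 25, "edits": []}],
#                    ["S1", "S2", "S3"])
```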

Abstract

A system and related methods comprising an Internet-hosted application service for online storage, editing and sharing of digital video content. A working file including multimedia files and edits requiring transcoding can be divided and processed on more than one server. Each such request is expressed as a decision tree. Each node of the tree is given a weight. Various algorithms can utilize the weighted tree information to determine if a portion of the working file should be sent to a server for processing and which server. The Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities.

Description

SYSTEM AND METHODS FOR DISTRIBUTED EDIT PROCESSING IN AN ONLINE VIDEO EDITING SYSTEM
[0001] This application hereby incorporates by reference the following U.S.
Non-Provisional Patent Applications.
FIELD OF THE INVENTION
[0002] This invention relates in general to the use of computer technology to store, edit and share personal digital video material, and in particular to a system and methods whereby the processing of editing can be distributed across multiple servers.
BACKGROUND
[0003] Collaboration in the creation of video productions has so far been the limited domain of movie and TV professionals, using expensive computer-based systems and software. None of the popular desktop video editors available to consumer videographers has the ability to support collaborative video production. If two or more amateur videographers were to attempt to collaborate, they would need to transmit large video files back and forth to each other, and would quickly run into storage and bandwidth issues, as well as potential incompatibilities between the hardware and software they use. [0004] There are currently around 500 million devices in existence worldwide that are capable of producing video: 350 million video camera phones, 115 million video digital cameras, plus 35 million digital camcorders. The extremely rapid increase in availability of such devices, especially camera phones, has generated a mounting need on the part of consumers to find ways of converting their video material into productions that they can share with others. This amounts mainly to a need for two capabilities: video editing and online video sharing.
[0005] Online sharing of consumer-generated video material via the
Internet is a relatively new phenomenon, and is still poorly developed. A variety of websites have come into existence to support online video publishing and sharing. Most of these sites are focused on providing a viewing platform whereby members can upload their short amateur video productions to the website and offer them for viewing by the general public (or in some cases by specified users or groups of users), and whereby visitors to the website can browse and select video productions for viewing. But none of these websites currently support editing of video material, and most of them have severe limitations on the length of videos that they support (typically a maximum of 5-10 minutes). Consequently, most videos available for viewing on these sites are short (typically averaging less than 2 or 3 minutes), and are of poor quality, since they have not been edited. [0006] Storing, editing, and sharing video is therefore difficult for consumers who create video material today on various electronic devices, including digital still cameras ("DSCs"), digital video camcorders ("DVCs"), mobile phones equipped with video cameras and computer-mounted web cameras ("webcams"). These devices create video files of varying sizes, resolutions and formats. Digital video recorders ("DVRs"), in particular, are capable of recording several hours of high-resolution material occupying multiple gigabytes of digital storage. Consumers who generate these video files typically wish to edit their material down to the highlights that they wish to keep, save the resulting edited material on some permanent storage medium, and then share this material with friends and family, or possibly with the public at large.
[0007] A wide variety of devices exist for viewing video material, ranging from DVD players, TV-connected digital set-top boxes ("DSTBs") and DVRs, to mobile phones, personal computers ("PCs"), and video viewing devices that download material via the PC, such as handheld devices (e.g., PalmOne), or the Apple video iPod. The video recording formats accepted by each of these viewing devices vary widely, and it is unlikely that the format that a particular delivery device accepts will match the format in which a particular video production will have been recorded.
[0008] Figure 1 is a block diagram illustrating a prior art video editing platform including a creation block 199, a consumption block 198, and a media aggregation, storage, manipulation & delivery infrastructure 108. Figure 1 shows with arrows the paths that currently exist for transferring video material from a particular source, including a DSC 100, a DVC 102, a mobile phone 104, and a webcam 106, to a particular destination viewing device including a DVD player 110, a DSTB 112, a DVR 114, a mobile phone 116, a handheld 118, a video iPod 120, or a PC 122. The only destination device that supports material from all input devices is the PC 122. Otherwise, mobile phone 104 can send video material to another mobile phone 116, and a limited number of today's digital camcorders and digital cameras can create video material on DVDs that can then be viewed on the DVD player 110. In general, these paths are fractured and many of the devices in the creation block 199 have no way of interfacing with many of the devices in the consumption block 198. Beyond the highlighted paths through the media aggregation, storage, manipulation & delivery infrastructure 108, no other practical video transfer paths exist today.
[0009] Most of the desktop video editing products include the concept of a
"template" (or "style"), analogous to a recipe - a format and specification for a particular video production, defining its content in terms of the sequence of scenes and related soundtrack, transitions, filters or special effects that are used to construct the video. Some products offer starter templates to help users create a video production. Starter templates provided with the product and templates derived from the user's own productions can be reused by the user to create new productions. But, since they were not designed with online sharing in mind, desktop video editors do not provide a way for users to access and use templates created by other users.
[0010] Thus the video-creating public who wish to share their work with others today can share short works by joining one of the available online video-sharing websites, but are faced with the choice of either posting their works unedited (and therefore of limited quality), or taking on the task of using a desktop editor. When using a desktop editor, the help they have available in constructing their production is limited to the templates or styles offered with their editing program, or to the templates they have already created themselves. [0011] There are thus no effective ways today for the community to collaborate in the creation of video productions. A small minority of users edit their material using desktop editors, but neither they nor the remaining majority of video-creators have available today any online services that support collaboration in video production. Other forms of creative collaboration exist on the Internet, ranging from "wiki"-style efforts to leverage the community in building online knowledge bases (e.g., the Wikipedia online encyclopedia) to online consumer product reviews (e.g., epinions.com) and consumer reviews of books, music and video (typified by Amazon.com).
[0012] There is thus a need to provide consumers with an online service that provides an easy-to-use editing interface, but also offers ways for users to benefit from and build on the creative work of others and eliminates many of the drawbacks associated with current schemes.
SUMMARY
[0013] A system and methods are disclosed for storing, editing and distributing video material in an online environment. In one aspect, a system and related methods comprise an Internet-hosted application service for online storage, editing and sharing of digital video content. A working file including multimedia files and edits requiring transcoding can be divided and processed on more than one server.
[0014] Each such request may be expressed as a decision tree. The tree may have three tiers. First tier nodes are leaf node files with file specific edits to apply. File specific edits may include, for example, fading, clipping, etc. The second tier nodes are sets of files that require audio or edits across the subset. Audio or edits across the subset include, for example, applying an MP3 sound track or adding a logo. Multiple second tiers can exist. The third tier is the final merged product and can have the same edits as the second tier. [0015] To determine how to distribute the required processing, each node of the tree is given a weight. The weight may be based upon a measure of time in seconds based on the time the file will take to perform a predefined transcode that functions in N time. Each edit produces a function on weight (most commonly a straight multiplier, sometimes an exponent). Various algorithms may utilize the weighted tree information to determine if a portion of the working file should be sent to a server for processing and which server.
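As a minimal sketch of this three-tier tree (the file names and edit labels are illustrative assumptions, not part of the disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class EditNode:
    label: str
    edits: list[str] = field(default_factory=list)
    children: list["EditNode"] = field(default_factory=list)

# First tier: leaf files with file-specific edits.
a = EditNode("a.avi", edits=["fade"])
b = EditNode("b.avi", edits=["clip"])
c = EditNode("c.avi")
# Second tier: a subset needing an edit across the set (an MP3 soundtrack).
ab = EditNode("a+b", edits=["mp3_soundtrack"], children=[a, b])
# Third tier: the final merged product, which may carry second-tier-style edits.
final = EditNode("production", edits=["logo"], children=[ab, c])
```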
[0016] The Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. Other features and advantages of the present invention will become more readily apparent to those of ordinary skill in the art after reviewing the following detailed description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The details of the present invention, both as to its structure and operation, may be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which: [0018] Figure 1 is a block diagram illustrating a prior art video editing platform.
[0019] Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
[0020] Figure 3 is a block diagram illustrating an example online video platform.
[0021] Figure 4 is a block diagram illustrating an example online video editor application.
[0022] Figure 5 is a block diagram illustrating an example video preprocessing application.
[0023] Figure 6 is a diagram illustrating an example edit sequence.
[0024] Figure 7 is a diagram illustrating example data structures that support hyper-templates.
[0025] Figure 8 is a diagram illustrating an example editing decision tree.
[0026] Figure 9 is a diagram illustrating an example editing decision tree with leaf node weights.
[0027] Figure 10 is a diagram illustrating an example process for distributed edit processing using a weighted tree.
DETAILED DESCRIPTION
[0028] Certain examples as disclosed herein provide for the use of computer technology to store, edit, and share personal digital video material. In one aspect, a working file is provided, which includes multimedia files and edits requiring transcoding that can be divided and processed on more than one server. Each such request is expressed as a decision tree. Each node of the tree is given a weight. Various algorithms can utilize the weighted tree information to determine if a portion of the working file should be sent to a server for processing and which server.
[0029] After reading this description it will become apparent to one skilled in the art how to implement the invention in various alternative examples and alternative applications. However, although various examples of the present invention are described herein, it is understood that these examples are presented by way of example only, and not limitation. As such, this detailed description of various alternative examples should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims. [0030] Those of skill will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein can often be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled persons can implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention. In addition, the grouping of functions within a module, block, circuit or step is for ease of description. Specific functions or steps can be moved from one module, block or circuit to another without departing from the invention. [0031] Referring now to the Figures, Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture. In the illustrated example, a system 200 includes an online video platform 206, an online video editor 202, a preprocessing application 204, as well as a content creation block 208 and a content consumption block 210.
[0032] The content creation block 208 may include input data from multiple sources that are provided to the online video platform 206, including personal video creation devices 212, personal photo and music repositories 214, and personally selected online video resources 216, for example. [0033] In one example, video files may be uploaded by consumers from their personal video creation devices 212. The personal video creation devices 212 may include, for example, DSCs, DVCs, cell phones equipped with video cameras, and webcams. In another example, input to the online video platform 206 may be obtained from other sources of digital video and non-video content selected by the user. Non-video sources include the personal photo and music repositories 214, which may be stored on the user's PC, or on the video server, or on an external server, such as a photo-sharing application service provider ("ASP"), for example. Additional video sources include websites that publish shareable video material, such as news organizations or other external video- sharing sites, which are designated as personally selected online video resources 216, for example.
[0034] The online video editor 202 (also referred to as the Internet-hosted application service) can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities. For example, a user may go to any number of external websites providing an enhanced video editing service. The present system may be used, for example, to enable the external websites to provide the video editing capabilities while maintaining the look and feel of the external websites. In that respect, the user of one of the external websites may not be aware that they are using the present system other than the fact that they are using functionality provided by the present system. In a transparent manner then, the system may serve the application to the external IP address of the external website and provide the needed function while at the same time running the application in a manner consistent with the graphical user interface ("GUI") that is already implemented at the external IP address. Alternatively, a user of the external website may cause the invocation of a redirection and GUI recreation module 230, which may cause the user to be redirected to one of the servers used in the present system which provides the needed functionality while at the same time recreating the look and feel of the external website.
[0035] Video productions may be output by the online video platform 206 to the content consumption block 210. Content consumption block 210 may be utilized by a user of a variety of possible destination devices, including, but not limited to, mobile devices 218, computers 220, DVRs 222, DSTBs 224, and DVDs 226. The mobile devices 218 may be, for example, cell phones or PDAs equipped with video display capability. The computers 220 may include PCs, Apples, or other computers or video viewing devices that download material via the PC or Apple, such as handheld devices (e.g., PalmOne), or an Apple video iPod. The DVDs 226 may be used as a media to output video productions to a permanent storage location, as part of a fulfillment service for example. [0036] Delivery by the online video platform 206 to the mobile devices 218 may use a variety of methods, including but not limited to a multimedia messaging service ("MMS"), a wireless application protocol ("WAP"), and instant messaging ("IM"). Delivery by the online video platform 206 to the computers 220 may use a variety of methods, including but not limited to: email, IM, uniform resource locator ("URL") addresses, peer-to-peer file distribution ("P2P"), or really simple syndication ("RSS"), for example.
[0037] The functions and the operation of the online video platform 206 will now be described in more detail with reference to Figure 3. Figure 3 is a block diagram illustrating an example online video platform. In the illustrated example, the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310. [0038] The online video platform 206 may be implemented on one or more servers, for example, Linux servers. The system can leverage open source applications and an open source software development environment. The system has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability. [0039] The application suite may be based on AJAX where the online application behaves as if it resides on the user's local computing device, rather than across the Internet on a remote computing device, such as a server. The AJAX architecture allows users to manipulate data and perform "drag and drop" operations, without the need for page refreshes or other interruptions. [0040] The opt-in engine module 300 may be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210. The delivery engine module 302 may be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210. The presence engine module 304 may be a server that determines device priority for delivery of content to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
[0041] The transcoding engine module 306 may be a server that performs decoding and encoding tasks on media to achieve an optimal format for delivery to target devices. The analytic engine module 308 may be a server that maintains and analyzes statistical data relating to website activity and viewer behavior. The editing engine module 310 may be a server that performs tasks associated with enabling a user to edit productions efficiently in an online environment. [0042] The functions and the operation of the online video editor 202 will now be described in more detail with reference to Figure 4. Figure 4 is a block diagram illustrating an example online video editor 202. In the illustrated example, the online video editor 202 includes an interface 400, input media 402a-h, and a template 404. A digital content aggregation and control module 406 may also be used in conjunction with the online video editor 202, and thumbnails 408 representing the actual video files may be included in the interface 400. [0043] The online video editor 202 may be an Internet-hosted application, which provides the interface 400 for selecting video and other digital material (e.g., music, voice, photos) and incorporating the selected materials into a video production via the digital content aggregation and control module 406. The digital content aggregation and control module 406 may be software, hardware, and/or firmware that enables the modification of the video production as well as the visual representation of the user's actions in the interface 400. The input media 402a-h may include such input sources as the Shutterfly website 402a, remote media 402b, local media 402c, the Napster web service 402d, the Real Rhapsody website 402e, the GarageBand website 402f, the Flickr website 402g, and Webshots 402h. The input media 402a-h may be media that the user has selected for possible inclusion in the video production and may be represented as the thumbnails 408 in a working "palette" of available material elements, in the main window of the interface 400. The input media 402a-h may be of diverse types and formats, which may be aggregated together by the digital content aggregation and control module 406.
[0044] The thumbnails 408 are used as a way to represent material and can be acted on in parallel with the upload process. The thumbnails 408 may be generated in a number of manners. For example, the thumbnails may be single still frames created from certain sections within the video, clip, or mix. Alternatively, the thumbnails 408 may include multiple selections of frames (e.g., a quadrant of four frames). In another example, the thumbnails may include an actual sample of the video in seconds (e.g., a 1-minute video could be represented by the first 5 seconds). In yet another example, the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3-second videos for a total of 12 seconds). In general, the thumbnails 408 are a method of representing the media to be uploaded (and after it is uploaded), whereby the process of creating the representation and uploading it takes significantly less time than either uploading the original media or compressing and uploading the original media.
[0045] The online video editor 202 allows the user to choose (or create) the template 404 for the video production. The template 404 may represent a timeline sequence and structure for insertion of materials into the production. The template 404 may be presented in a separate window at the bottom of the screen, and the online video editor 202 via the digital content aggregation and control module 406 may allow the user to drag and drop the thumbnails 408 (representing material content) in order to insert them into the timeline to create the new video production. The online video editor 202 may also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project may be shown in a separate window.
[0046] A spidering module 414 is included in the digital content aggregation and control module 406. The spidering module may periodically search and index both local content and external content. For example, the spidering module 414 may use the Internet 416 to search for external material periodically for inclusion or aggregation with the production the user is editing. Similarly, the local storage 418 may be a local source for the spidering module 414 to periodically spider to find additional internal locations of interest and/or local material for possible aggregation.
[0047] On completion of the project, the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user may choose to keep certain productions private or to only share the productions with certain users.
[0048] The functions and the operation of the preprocessing application 204 will now be described in more detail with reference to Figure 5. Figure 5 is a block diagram illustrating an example preprocessing application. In the illustrated example, the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, and video segment files 520.
[0049] In one example, the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 include Microsoft foundation classes ("MFCs"). In this example, an object-oriented programming model is provided to the Windows APIs. In another example, the preprocessing application 204 is written with the foundation classes 508 in a format suitable for the operating system module 510 being the Linux operating system. The video segment upload module 516 may be an application that uses a Model-View-Controller ("MVC") architecture. The MVC architecture separates the data model module 502, the user interface module 506, and the control module 504 into three distinct components.
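As a minimal sketch of that Model-View-Controller separation (class names and methods here are illustrative assumptions, not the actual modules 502, 504, and 506):

    #include <iostream>
    #include <string>
    #include <vector>

    // The model holds upload state, the view renders it, and the controller
    // mediates user input, mirroring the three-way separation described above.
    class UploadModel {
    public:
        void addSegment(const std::string& name) { segments_.push_back(name); }
        const std::vector<std::string>& segments() const { return segments_; }
    private:
        std::vector<std::string> segments_;
    };

    class UploadView {
    public:
        void render(const UploadModel& model) const {
            for (const auto& s : model.segments())
                std::cout << "segment queued: " << s << "\n";
        }
    };

    class UploadController {
    public:
        UploadController(UploadModel& m, UploadView& v) : model_(m), view_(v) {}
        void onUserSelectsFile(const std::string& path) {
            model_.addSegment(path);  // update the model only; the view reads from it
            view_.render(model_);
        }
    private:
        UploadModel& model_;
        UploadView& view_;
    };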
[0050] In operation, the preprocessing application 204 automatically segments, compresses, and uploads video material from the user's PC, regardless of length. The preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks. The uploading method works in parallel with the online video editor 202, allowing the user to begin editing the material immediately, while the material is in the process of being uploaded. The material may be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment, for example.
[0051] The video source 518 may be a digital video camcorder or other video source device. In one example, the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it may automatically segment the video stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
[0052] From the user's perspective, a video would be captured on any number of devices at the video source block 518. Once the user has captured the video (e.g., on a camcorder or cellular phone), it would be transferred to a local computing device, such as the hard drive of a client computer with Internet access.
[0053] Alternatively videos can be transferred to a local computing device whereby an intelligent uploader can be deployed. In some cases, the video can be sent directly from the video source block 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206. This alternative bypasses the need to involve a local computing device or a client computer. However, this example is most useful when the video, clip, or mix is either very short, or highly compressed, or both.
[0054] In the case that the video is uncompressed or long, or both, and, therefore, relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful. In this example, an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy video material. To that end, the user would be provided with the ability to interact with the user interface module 506. Based on user input, the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the video material is segmented and compressed into the video segment files 520. For example, a lengthy production may be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments. [0055] Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This may occur, for example, with each of the upload segments being uploaded in parallel. Alternatively, the upload segments may be uploaded in order: largest segment first, smallest segment first, or in any other manner.
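A sketch of these ordering options, assuming a simple segment record (all names are illustrative):

    #include <algorithm>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Segment {
        std::string file;     // path of the compressed segment file
        std::uint64_t bytes;  // compressed size
    };

    enum class UploadOrder { AsRecorded, LargestFirst, SmallestFirst };

    // Reorder segments before handing them to the uploader; the resulting
    // queue could then be drained by one worker (sequential) or several
    // workers (parallel).
    void orderSegments(std::vector<Segment>& segs, UploadOrder order) {
        switch (order) {
        case UploadOrder::AsRecorded:
            break;  // keep capture order
        case UploadOrder::LargestFirst:
            std::sort(segs.begin(), segs.end(),
                      [](const Segment& a, const Segment& b) { return a.bytes > b.bytes; });
            break;
        case UploadOrder::SmallestFirst:
            std::sort(segs.begin(), segs.end(),
                      [](const Segment& a, const Segment& b) { return a.bytes < b.bytes; });
            break;
        }
    }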
[0056] As the video material is being uploaded, the online video editor 202 is presented to the user. Through a user interface provided by the user interface module 506, thumbnails representing the video segments in the process of being uploaded are made available to the user. The user would proceed to edit the video material via an interaction with the thumbnails. For example, the user may be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited video material.
[0057] The system is configured to behave as if all of the video represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the material is still in the process of being uploaded by the video segment upload module 516. When the user performs an editing action on the thumbnails, for example, by dragging one of the thumbnails into a storyline, the upload process may be changed. For example, if the upload process was uploading all of the compressed upload segments in sequential order and the user dropped an upload segment representing the last sequential portion of the production into the storyline, the upload process may immediately begin to upload the last sequential portion of the production, thereby lowering the priority of the segments that were currently being uploaded prior to the user's editing action. [0058] All of the user's editing actions are saved by the online video editor
202. Once the material is uploaded completely (including the prioritized upload segments and the remaining upload segments), the saved editing actions are applied to the completely uploaded segments. In this manner, the user may have already finished the editing process and logged off or the user may still be logged on. Regardless, the process of applying the edits only when the material is finished uploading saves the user from having to wait for the upload process to finish before editing the material. Once the final edits are applied, various capabilities exist to share, forward, publish, browse, and otherwise use the uploaded video in a number of ways.
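One way to sketch the reprioritization described above is a priority queue whose entries are promoted when the user drags the corresponding thumbnail into the storyline. This is an assumption about implementation; the patent does not specify one.

    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    struct PendingSegment {
        std::string id;
        int priority;  // higher value uploads sooner
        bool operator<(const PendingSegment& other) const {
            return priority < other.priority;  // max-heap on priority
        }
    };

    class UploadScheduler {
    public:
        void enqueue(std::string id, int priority) {
            queue_.push({std::move(id), priority});
        }
        // Called when the user drops a thumbnail into the storyline: the
        // edited segment is promoted so the uploader fetches it next.
        void onThumbnailDropped(const std::string& id) {
            std::vector<PendingSegment> all;
            while (!queue_.empty()) { all.push_back(queue_.top()); queue_.pop(); }
            for (auto& s : all) {
                if (s.id == id) s.priority = kUrgent;
                queue_.push(s);
            }
        }
        bool next(PendingSegment& out) {
            if (queue_.empty()) return false;
            out = queue_.top();
            queue_.pop();
            return true;
        }
    private:
        static constexpr int kUrgent = 1000;  // arbitrary "upload now" level
        std::priority_queue<PendingSegment> queue_;
    };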
[0059] The online video editor 202 also may support the construct of a
"hyper-template" - a shareable definition of how a video production was created, that can be reused by others to help them create their own derivative works. Hyper-templates, therefore, are shareable versions of templates. A template defines the sequence of scenes (edit sequence) that make up a video, and the related soundtrack, transitions, filters or special effects that are used in the production.
[0060] Figure 6 is a block diagram illustrating an example edit sequence. In the illustrated example, four video clips (a 1104, b 1106, c 1108, and d 1110) are combined into a video production 1100. In the example of Figure 6, the editing sequence occurs whereby first the individual clips are edited, then clips a 1104 and b 1106 are merged with sound added 1102, and then clips c 1108 and d 1110 are combined with the previously merged clips a and b to form the video production 1100.
[0061] Figure 7 is a block diagram illustrating example data structures that support hyper-templates. In the illustrated example, data structures 1200 include an edit tree table 1202, an edit dependencies table 1204, an edit command table 1206, a sequence table 1208, and a sequence composition map 1210. [0062] The sequence composition map 1210 provides pointers to the four video files (a 1104, b 1106, c 1108, and d 1110) previously described in Figure 6. The edit tree table 1202 identifies a sequence of six editing actions. The edit dependencies table 1204 defines dependencies between editing actions (e.g., editing action E must wait for completion of editing actions A and B). The sequence table 1208 identifies the sequence of editing actions and the root of the editing tree (where the Root Flag = "1"). The sequence composition map 1210 identifies the video clips that are used in each sequence step. [0063] The online video editor 202 may be used to provide a growing library of community hyper-templates, based on the work of its members. When creating a video production, a user can either use one of the available hyper-templates that have been designated as "shareable," or create a video and its accompanying template from scratch. When creating a video from scratch, the user may drag and drop components from a palette of available video segments into a timeline that defines the sequence for the video production. The user also may drag and drop transitions between segments, and can optionally drag and drop special transitions on to individual segments. The user can also select still photos and add them into the timeline (e.g., from the Flickr website), and can select and add a soundtrack to the video production (e.g., from the Magnatune website). [0064] On completion of a video production, the creator has the option of defining whether the video is shareable with other users. In one example, the video can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family" group). The sharing hierarchy may be implemented as a system of folders within a directory structure, similar to the structure of a UNIX file system or a Windows file system, for example. Each member who creates video productions has such a directory, and a folder is created within the directory for each group or subgroup that the member defines.
[0065] For each video production that the member creates, he or she has the ability to define which folders have the ability to view the video. When a member designates a person as belonging to a group, or when a person accepts a member's invitation to join a group, the person's ID is entered into the appropriate folder, and the person inherits the sharing privileges that are associated with the folder.
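Referring back to the data structures of Figure 7, a rough in-memory rendering of those tables might look like the following sketch; the field names are illustrative, not taken from an actual schema.

    #include <map>
    #include <string>
    #include <vector>

    struct EditCommand    { std::string commandId; std::string operation; };            // edit command table 1206
    struct EditTreeRow    { std::string editId; std::string commandId; };                // edit tree table 1202
    struct Dependency     { std::string editId; std::string waitsFor; };                 // edit dependencies table 1204
    struct SequenceRow    { std::string editId; int step = 0; bool rootFlag = false; };  // sequence table 1208
    struct CompositionRow { int step = 0; std::string clipFile; };                       // sequence composition map 1210

    struct HyperTemplate {
        std::vector<EditTreeRow>    tree;
        std::vector<Dependency>     dependencies;
        std::vector<EditCommand>    commands;
        std::vector<SequenceRow>    sequence;
        std::vector<CompositionRow> composition;
    };

    // An editing action may run only once every action it depends on has
    // completed (e.g., action E must wait for actions A and B).
    bool readyToRun(const HyperTemplate& t, const std::string& editId,
                    const std::map<std::string, bool>& completed) {
        for (const auto& d : t.dependencies) {
            if (d.editId != editId) continue;
            auto it = completed.find(d.waitsFor);
            if (it == completed.end() || !it->second) return false;
        }
        return true;
    }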
[0066] The system also provides convenient mechanisms for creators of video productions to share their creation processes. On completion of a video production, for example, the user has the option of defining whether the hyper-template used in the production is shareable with other users, and whether the content of the video is also shareable in combination with the hyper-template. In one example, the hyper-template can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family" group). Sharing controls for hyper-templates and their content may be implemented using the same method outlined above, for sharing video productions. [0067] In another example, the user can identify individual segments within the video that are shareable when reusing the hyper-template and which are not. In a further example, the user can identify which specific groups or subgroups of people can share specific video segments when reusing the hyper-template. [0068] The system provides two methods for selecting hyper-templates for reuse: browsing and hyperlinking. Using the first method, members of the video-sharing website browse among the set of hyper-templates designated as available to them for reuse. The hyper-templates may be organized in a variety of classification structures, similar to the structures by which the actual video productions are classified. These include but are not limited to classification schemes based on categories of videos (or "channels"), styles of video production, lengths of videos, tags or titles of videos, a grouping of favorite hyper-templates (based on popularity), and a set of hyper-templates recommended by the website, organized by category.
[0069] The second method of selecting hyper-templates for reuse involves the use of hyperlinks, and, in particular, hypervideo links. Hyperlinks are a referencing device in hypertext documents. They are used widely on the World Wide Web to act as references that, when clicked on, link dynamically from one webpage to another. The hypervideo concept extends the use of the hyperlink device to provide a link out of a video production (rather than a text document) to another webpage, typically to another section of video. [0070] The presently described system and methods use the hypervideo link as a method of transferring control out of a viewed video and into the online video editor 202, such that the viewer can use the template of the viewed video to create his or her own production. In this method, hyper-template linking is a special case of hypervideo linking, the special case being that the system always transfers control to the online video editor 202, rather than to a destination defined by the video-creator. Various implementation techniques exist to implement the special case of a hyper-template link, and to distinguish this from other hypervideo links (i.e., hotspots).
[0071] In one example, video productions created by the online video editor
202 are replayed with a TV-like encasement surrounding the video image, with several control buttons located below the image, one such control button being a "Remix" button which, when clicked on, specifically invokes a hyper-template link into the online video editor. In another example, video productions created by the online video editor 202 are discreetly watermarked with a small logo that appears in the lower left or right corner of the video, for example. At any time during a viewing of the video, the watermark acts as a hyper-template link, in the sense that, if clicked on, it triggers a hyperlink that takes the viewer seamlessly into the online video editor 202, with the hyper-template of the viewed video pre-loaded and ready to be reused in creating a new video production. This is achieved by structuring the hyperlink in the form of "websiteaddress/editor/hypertemplateidentifier", where
"hypertemplateidentifier" identifies the particular video that is being viewed and its hyper-template, and "websiteaddress" and "editor" identify the online editor to be linked to.
[0072] Since a watermark may also be used to identify a hypervideo hotspot, a hyper-template watermark may be distinguished in several possible ways, such as by having two separate watermarks placed in different areas of the video image, or, in the case of a shared watermark, by a passive appearance for a hyper-template hyperlink (as opposed to flashing, which indicates a hotspot), or by color-coding (e.g., blue indicates a hyper-template link, whereas red indicates a hotspot).
[0073] A hyper-template hyperlink is initially generated by the online video editor 202 during construction of a video production, and is stored as metadata with the video. The data structures supporting the metadata were described earlier in this section, and shown in Figure 7. If the video is posted on another website (e.g., on a blogger's home page or on a Myspace user's home page), the hyperlink metadata remains associated with it. No matter where the video is viewed, on any website, it still retains the hyperlink that will link back to the original online editor if the hypervideo hyperlink is clicked on. This is because the video is never actually exported, but remains on the video-sharing website, which acts as a proxy server that retrieves and streams the video when requested. The hyper-template thus not only provides users with a convenient way of sharing and reusing video creation processes, but also benefits the online video sharing website by generating traffic to the website and potentially enlisting new members. [0074] Upon selecting a hyper-template via either of the methods described above, the user may be linked into the online video editor 202 and, in one example, is presented with a webpage showing the hyper-template of the selected video in the form of a timeline at the bottom of the screen, with the shareable segments of the related video displayed on the main palette in the center of the screen. In an alternative example, the timeline of the hyper-template is displayed vertically at the left or right side of the screen, with an additional vertical window alongside the timeline to allow insertion of text to be used as a commentary relating to the contents of the video timeline. The positioning of the text can be adjusted to appear alongside the particular video sequence that it relates to. The text can then serve as a teleprompter, and the commentary can then be recorded by the user in synchronization with the video sequence, as the video is played back in a separate window, and a marker moves down the timeline and its associated commentary.
[0075] Upon selecting a hyper-template, users have a variety of choices regarding content that they may include in their new production. From the selected video, they can reuse any segments that the owner has designated as shareable. Users can also add or remove segments of video. They can select and include material from their own work-in-progress or their own galleries of completed productions, as well as from external sources that they have defined to be of interest and that the system has aggregated on their behalf, such as sources of photos, music, animation and other video content. Users can also change titles, credits and other text that may appear in the production, as well as any of the transitions, filters or special effects. Thus hyper-templates offer users a wide range of options regarding reuse of others' work, ranging from simple substitution of one or more video segments or other elements, to a major restructuring of the video production.
[0076] After creating a project either from scratch or through the use of a hyper-template, the project is submitted for processing by the online video platform 206 to create the final production. However, because the described system is user driven, it can be advantageous to distribute the transcoding across multiple servers. One approach to the distribution of the transcoding uses a weighted tree. This approach will now be described with reference to Figure 8. [0077] Figure 8 is a diagram illustrating an example editing decision tree. In the illustrated example, five clips (or files) 800, 802, 804, 806, and 808 will be strung together to create a single production (output file) 810. Each clip 800-808 may have edits, and subsets of the final production may have soundtracks or edits applied before merging the N clips.
[0078] The tree 812 in the example of Figure 8 has 3 tiers 814, 816, and
818. Nodes in the first tier 814 are leaf nodes. Leaf nodes may contain clips with clip-specific edits to apply, such as fading, clipping, etc. The nodes on the second tier 816 may be sets of clips that require audio or edits across the subset, such as applying an MP3 soundtrack or adding a logo. Multiple second tiers can exist. The third tier
818 is the final merged product and can have the same edits as the second tier
816.
[0079] Editing time, including transcoding time, varies with the size, complexity, and editing needs of each clip. Distributing clips across all servers available at the online video platform 206 and processing them separately could result in unneeded disk I/O and wait times, due to the relative sizes of the files involved and the types of editing that need to occur.
[0080] In one aspect, each node of the tree 812 is given a weight. The base weight of a leaf node may be expressed as a measure of time in seconds, based on the time the clip will take to perform a predefined editing function. Each edit applies a function to the weight (most commonly a straight multiplier, sometimes an exponent). For example, a 10MB file that processes the basic transcode in 20 seconds may be given a weight of 20. A logo overlay edit that takes 30 seconds on a 10MB file may be given a weight multiplier of weight*1.5 (i.e., 30/20=1.5).
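The weight arithmetic of this example can be sketched directly; the Edit record carrying a multiplier is a hypothetical structure for illustration.

    #include <vector>

    struct Edit { double multiplier; };

    // Base weight is the basic transcode time in seconds; each edit applies a
    // function to the weight, most commonly a straight multiplier (30s logo
    // overlay / 20s base transcode = 1.5).
    double leafWeight(double baseTranscodeSeconds, const std::vector<Edit>& edits) {
        double w = baseTranscodeSeconds;
        for (const auto& e : edits)
            w *= e.multiplier;  // an exponent-style edit would transform w instead
        return w;
    }
    // leafWeight(20.0, {{1.5}}) == 30.0 for the 10MB clip with a logo overlay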
[0081] Each server in a workhorse cluster can be tracked by weight load. The weight load is an expression of expected load, dependent on the tasks the server must perform rather than on the current load the server is experiencing. This avoids overloading a server that has a momentary lull in load as several users make simultaneous requests.
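A sketch of tracking servers by assigned weight rather than instantaneous load (names illustrative):

    #include <limits>
    #include <map>
    #include <string>

    // Each workhorse server is tracked by the total weight assigned to it,
    // so that a briefly idle server is not flooded when several users make
    // simultaneous requests.
    class WeightLoadTracker {
    public:
        void registerServer(const std::string& server)           { load_.emplace(server, 0.0); }
        void assign(const std::string& server, double weight)    { load_[server] += weight; }
        void complete(const std::string& server, double weight)  { load_[server] -= weight; }

        // Candidate for the next task: the server with the least assigned weight.
        std::string leastLoaded() const {
            std::string best;
            double min = std::numeric_limits<double>::max();
            for (const auto& [server, weight] : load_)
                if (weight < min) { min = weight; best = server; }
            return best;
        }
    private:
        std::map<std::string, double> load_;  // server -> total assigned weight
    };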
[0082] Anytime a file is copied, there is a cost in time from disk I/O and network speed. In one example, that time is also taken into account, for example, based on the shortest weight path. Any given tree will be small from a data perspective, and even forcing a 2^n optimization on the weight of the tree will not outweigh the cost of a single wrong choice in transferring a file from one server to another. [0083] Referring to the example in Figure 8, the second tier A+B 820 is identified as the biggest task, with a total weight of 23. The paths for the nodes 822 and 824 on the second tier 816 are 17 and 18.5 respectively, the path weighted 17 including nodes 822 and 826, and the path weighted 18.5 including nodes 824 and 826. The tier producing 18.5 (806, 808, and 824) becomes the destination server (S1) for the smaller second tier (804 and 822), which is processed on a second server S2. Since 18.5 is still less than 23, the server represented by the node 808 becomes the destination for files that, after completion, reach the server represented by the node at path 826 (S3). [0084] Using the weight model, several methods can be applied incrementally as the business grows to increase performance across the workhorse clusters. A first method is called second tier distribution. Second tier distribution is based on the total weight of the second tier: servers are chosen with an appropriate current weight load to handle each second tier task. The server that receives the task of processing the highest weight will be the destination server for the final product, and all children will roll from their servers into it. The reason only second tier nodes are used is that they are most likely to carry the biggest total weight. [0085] A second method is illustrated in Figure 9. Figure 9 is a diagram illustrating an example editing decision tree with leaf node weights. In the illustrated example, five clips (or files) 900, 902, 904, 906, and 908 will be strung together to create a single production (output file) 910. Each of the clips 900-908 in the present example resides on a different server. Node 900 is on server S1. Node 902 is on server S2. Node 904 is on server S3. Node 906 is on server S4. Node 908 is on server S5.
[0086] The tree 912 in the example of Figure 9 has 3 tiers 914, 916, and
918. Nodes in the first tier 914 are leaf nodes. As the size of individual files increases in combination with many smaller files, a single transcode can be the biggest-weight piece of an entire tree. By adding leaf nodes into the distribution, this case can be accounted for. This assumes a larger number of workhorse servers to distribute tasks to.
[0087] Using the example method of Figure 9, leaf node weights are added.
Leaf node B 902 is the largest, and its encode path to a final product is 32.5 (node 902 + node 920), the clear longest path. Node A 900 is processed on the server S1 and then transferred to the server represented by the node 920 (S2) to await addition. On the other side of the first tier 914, node C 904 is on a separate server (S3), and a total path of 12 to its final destination still identifies the server represented in node 910 (S1) as the final destination. Node D 906 is processed on a separate server (S4) as well, and so is node E 908 (S5). Node E 908 has less weight than node D 906 and is added to the server S4 after processing. The total path of D+E is 12.5 (since E was processed elsewhere), and makes its way to the server S2 for final merge.
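The path computations in the Figure 8 and Figure 9 examples reduce to summing weights up a subtree and ranking the results. A sketch, reusing the node shape given earlier (again an illustration, not the patent's own code):

    #include <algorithm>
    #include <memory>
    #include <string>
    #include <vector>

    struct EditNode {
        std::string label;
        double weight = 0.0;
        std::vector<std::shared_ptr<EditNode>> children;
    };

    // Total weight of a subtree: the node's own (merge) weight plus all the
    // clip-level work below it, e.g. 18.5 for the {806, 808, 824} subtree in
    // Figure 8, or 32.5 for node B's path in Figure 9.
    double subtreeWeight(const EditNode& n) {
        double total = n.weight;
        for (const auto& c : n.children)
            total += subtreeWeight(*c);
        return total;
    }

    // Rank second-tier subtrees by total weight, heaviest first. Under
    // second tier distribution, the heaviest subtree's server hosts the
    // final merge and the lighter subtrees roll their results into it.
    std::vector<const EditNode*> rankSecondTier(const EditNode& root) {
        std::vector<const EditNode*> tiers;
        for (const auto& c : root.children)
            tiers.push_back(c.get());
        std::sort(tiers.begin(), tiers.end(),
                  [](const EditNode* a, const EditNode* b) {
                      return subtreeWeight(*a) > subtreeWeight(*b);
                  });
        return tiers;  // tiers.front() determines the destination server
    }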
[0088] Another method is termed distribution by user profile. In this method, a user hitting the site has an active file catalogue of known weights. Based on common usage and transcoding patterns, a virtual tree of likely distribution across the workhorse cluster can be constructed asynchronously. Even a partial match on any pre-distribution saves I/O overhead, decreasing the time the user waits for the transcode. Two total weights are tallied by each server: actual weight and potential weight. Potential weight is factored in second when other tasks are distributed.
[0089] Another method is termed user profile preprocessing. In this method, workhorse servers will be idle over 90% of the time: transcoding is completely user driven, and by nature there are high and low periods of user activity. Users will have common transcoding edits they want to perform, ranging from always desiring a QuickTime output format to fading the last 3 seconds of each file. Common edit profiles are kept per user and applied to files when the workhorse server is idle.
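A sketch of applying stored per-user edit profiles during idle periods; the profile fields are illustrative.

    #include <string>
    #include <vector>

    struct EditProfile {
        std::string userId;
        std::string outputFormat;    // e.g., "mov" for a QuickTime preference
        double tailFadeSeconds = 0;  // e.g., 3.0 to fade the end of each file
    };

    struct FileRef { std::string path; std::string userId; };

    // Called by the scheduler when no user-driven work is queued: apply each
    // owner's common edits to newly arrived files speculatively.
    void preprocessWhenIdle(const std::vector<EditProfile>& profiles,
                            const std::vector<FileRef>& newFiles) {
        for (const auto& f : newFiles)
            for (const auto& p : profiles)
                if (p.userId == f.userId) {
                    // apply p.outputFormat and p.tailFadeSeconds to f.path;
                    // the actual transcode call is outside this sketch
                }
    }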
[0090] Most transcoding systems have batch or queue processes that allow servers to self-balance load. Their main goal is simplicity and predictable throughput. The current system can be user driven, with a goal of quick, real-time feedback. Weight not only allows the construction of a shortest path algorithm (i.e., the weighted tree), but also balances load across servers based on the tasks being performed instead of a snapshot of load at any one time. The danger of balancing load based on traditional load measures is that several users may flood a single workhorse server through bad timing.
[0091] Figure 10 is a diagram illustrating an example process for distributed edit processing using a weighted tree. The process can be carried out by the preprocessing application 204 shown in Figure 2. In the illustrated example, a project is submitted for processing at step 1000 to the online video platform 206. The submission may be, for example, from a user or a group of users who have finished creating a project. At step 1002, the project is transferred from the online video platform 206 to the preprocessing application 204. The preprocessing application 204 represents the editing actions as leaf nodes in a tree structure at step 1004. The tree may reside, for example, in an internal or external memory available to the preprocessing application 204.
[0092] At step 1006, the preprocessing application 204 assigns weights to each of the leaf nodes in the tree. The weights may be, for example, based on the expected load required to complete the editing action represented by the leaf node. At step 1008, the preprocessing application 204 determines the total weight of the path through the tree to complete the editing actions. This includes, for example, the required processing at each level of the tree, from the initial leaf node processing to second and additional tier processing, until the final merge occurs at the root of the tree. Finally, at step 1010 leaf node editing actions are distributed by the preprocessing application 204 across multiple servers based on the total weight of each path through the tree.
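Putting the Figure 10 steps together, a compact sketch might look like the following; the weights are made up, loosely echoing the Figure 9 example, and the structures are illustrative.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct LeafEdit {
        std::string clip;
        double leafWeight;        // step 1006: weight assigned to the leaf edit
        double mergeWeightAbove;  // second/third-tier work on its path to the root
        double pathWeight() const { return leafWeight + mergeWeightAbove; }  // step 1008
    };

    int main() {
        std::vector<LeafEdit> leaves = {
            {"A", 10.0, 13.0}, {"B", 20.0, 12.5}, {"C", 6.0, 6.0},
            {"D", 7.0, 5.5},   {"E", 4.0, 5.5},
        };
        // Step 1010: hand the heaviest paths out first across the cluster.
        std::sort(leaves.begin(), leaves.end(),
                  [](const LeafEdit& a, const LeafEdit& b) {
                      return a.pathWeight() > b.pathWeight();
                  });
        const std::vector<std::string> servers = {"S1", "S2", "S3"};
        for (std::size_t i = 0; i < leaves.size(); ++i)
            std::cout << "clip " << leaves[i].clip << " -> "
                      << servers[i % servers.size()] << "\n";
        return 0;
    }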
[0093] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. Thus, it is to be understood that the description and drawings presented herein represent a presently preferred embodiment of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It is further understood that the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and that the scope of the present invention is accordingly limited by nothing other than the appended claims.

Claims

1. A method for distributed processing of video material comprising:
   initiating an upload process on the video material from a local computing device to a remote computing device;
   receiving editing actions from a user for application to the video material on the remote computing device as the upload process is occurring;
   saving the editing actions on the remote computing device as the upload process is occurring;
   distributing portions of the video material to one or more additional remote computing devices; and
   applying the editing actions to the video material on the remote computing device and the one or more additional remote computing devices once the upload process has completed.
2. The method of claim 1 wherein the step of distributing further comprises using a weighted tree to determine which of the portions of the video material are distributed to which of the one or more additional remote computing devices.
3. The method of claim 2 wherein the weighted tree includes leaf nodes, the leaf nodes representing clips with clip specific edits.
4. The method of claim 2 wherein the weighted tree includes a second tier, the second tier including a set of clips with edits across the set.
5. The method of claim 2 wherein the weighted tree includes a third tier, the third tier including a final merged product.
6. The method of claim 2 wherein the weighted tree includes leaf nodes, each of the leaf nodes including a weight.
7. The method of claim 2 wherein the weighted tree includes a first, a second and a third tier, the first tier including the portions of the video material in a plurality of nodes, further comprising:
   determining a weight associated with each of the nodes;
   determining an order for each of the nodes based on the weight; and
   processing the nodes based on the order.
8. The method of claim 7 wherein the order is based on the weight wherein nodes having a smaller weight are processed before nodes having a larger weight.
9. A distributed processing apparatus comprising:
   an upload process which is initiated on the video material from a local computing device to a remote computing device;
   an online video editor which receives one or more editing actions from a user for application to the video material on the remote computing device and saves the editing actions on the remote computing device as the upload process is occurring;
   a preprocessing application which distributes portions of the video material to one or more additional remote computing devices for processing, the editing actions being applied to the video material on the remote computing device once the upload process has completed.
10. The apparatus of claim 9 wherein the preprocessing application is configured to use a weighted tree to determine which of the portions are distributed to which of the one or more additional remote computing devices.
11. The apparatus of claim 10 wherein the weighted tree includes leaf nodes, the leaf nodes representing clips with clip specific edits.
12. The apparatus of claim 10 wherein the weighted tree includes a second tier, the second tier including a set of clips with edits across the set.
13. The apparatus of claim 10 wherein the weighted tree includes a third tier, the third tier including a final merged product.
14. The apparatus of claim 10 wherein the weighted tree includes leaf nodes, each of the leaf nodes including a weight.
15. The apparatus of claim 10 wherein the weighted tree includes a first, a second and a third tier, the first tier including the portions of the video material in a plurality of nodes, further comprising:
   a weight which is associated with each of the nodes;
   an order which is determined for each of the nodes based on the weight; and
   a processor which processes the nodes based on the order.
16. The apparatus of claim 15 wherein the order is based on the weight wherein nodes having a smaller weight are processed by the processor before nodes having a larger weight.
17. A method for distributed edit processing comprising:
   generating a tree structure;
   representing editing actions for a project as leaf nodes in the tree structure;
   assigning weights to each of the leaf nodes;
   determining a total weight of a path to a root of the tree from each of the leaf nodes; and
   distributing the leaf nodes among a plurality of servers based on the total weights of the paths.
18. The method of claim 17 wherein the step of assigning further comprises determining an amount of load required to complete the editing actions.
19. The method of claim 17 wherein the step of assigning further comprises determining a time in seconds for the editing actions to occur on a portion of the project.
20. The method of claim 17 wherein the step of assigning further comprises:
   determining a multiplier based on a weight of a first of the leaf nodes; and
   applying the multiplier to a remainder of the leaf nodes.
PCT/US2007/060174 2006-01-05 2007-01-05 System and methods for distributed edit processing in an online video editing system WO2007082166A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75639406P 2006-01-05 2006-01-05
US60/756,394 2006-01-05

Publications (2)

Publication Number Publication Date
WO2007082166A2 true WO2007082166A2 (en) 2007-07-19
WO2007082166A3 WO2007082166A3 (en) 2008-04-17

Family

ID=38257085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/060174 WO2007082166A2 (en) 2006-01-05 2007-01-05 System and methods for distributed edit processing in an online video editing system

Country Status (1)

Country Link
WO (1) WO2007082166A2 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5408588A (en) * 1991-06-06 1995-04-18 Ulug; Mehmet E. Artificial neural network method and architecture
US20050144284A1 (en) * 1997-11-04 2005-06-30 Collaboration Properties, Inc. Scalable networked multimedia system and applications
US20020116716A1 (en) * 2001-02-22 2002-08-22 Adi Sideman Online video editor
US20050165881A1 (en) * 2004-01-23 2005-07-28 Pipelinefx, L.L.C. Event-driven queuing system and method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9436927B2 (en) 2008-03-14 2016-09-06 Microsoft Technology Licensing, Llc Web-based multiuser collaboration
US8270815B2 (en) 2008-09-22 2012-09-18 A-Peer Holding Group Llc Online video and audio editing
EP2172936A3 (en) * 2008-09-22 2010-06-09 a-Peer Holding Group, LLC Online video and audio editing
US9760573B2 (en) 2009-04-28 2017-09-12 Whp Workflow Solutions, Llc Situational awareness
EP2425586A4 (en) * 2009-04-28 2013-05-22 Whp Workflow Solutions Llc Correlated media for distributed sources
US9214191B2 (en) 2009-04-28 2015-12-15 Whp Workflow Solutions, Llc Capture and transmission of media files and associated metadata
EP2425586A1 (en) * 2009-04-28 2012-03-07 WHP Workflow Solutions, LLC Correlated media for distributed sources
US10419722B2 (en) 2009-04-28 2019-09-17 Whp Workflow Solutions, Inc. Correlated media source management and response control
US10565065B2 (en) 2009-04-28 2020-02-18 Getac Technology Corporation Data backup and transfer across multiple cloud computing providers
US10728502B2 (en) 2009-04-28 2020-07-28 Whp Workflow Solutions, Inc. Multiple communications channel file transfer
US8774604B2 (en) 2010-06-15 2014-07-08 Sony Corporation Information processing apparatus, information processing method, and program
EP2398021A3 (en) * 2010-06-15 2012-01-04 Sony Corporation Information processing apparatus, information processing method, and program
EP2752853A1 (en) * 2013-01-03 2014-07-09 Alcatel Lucent Worklist with playlist and query for video composition by sequentially selecting segments from servers depending on local content availability
US10192583B2 (en) 2014-10-10 2019-01-29 Samsung Electronics Co., Ltd. Video editing using contextual data and content discovery using clusters
CN112804548A (en) * 2021-01-08 2021-05-14 武汉球之道科技有限公司 Online editing system for event videos

Also Published As

Publication number Publication date
WO2007082166A3 (en) 2008-04-17

Similar Documents

Publication Publication Date Title
US20090196570A1 (en) System and methods for online collaborative video creation
US9038108B2 (en) Method and system for providing end user community functionality for publication and delivery of digital media content
US8990214B2 (en) Method and system for providing distributed editing and storage of digital media over a network
US8972862B2 (en) Method and system for providing remote digital media ingest with centralized editorial control
WO2007082166A2 (en) System and methods for distributed edit processing in an online video editing system
US8644679B2 (en) Method and system for dynamic control of digital media content playback and advertisement delivery
US8180826B2 (en) Media sharing and authoring on the web
US8126313B2 (en) Method and system for providing a personal video recorder utilizing network-based digital media content
US8977108B2 (en) Digital media asset management system and method for supporting multiple users
US20070089151A1 (en) Method and system for delivery of digital media experience via common instant communication clients
US20070133609A1 (en) Providing end user community functionality for publication and delivery of digital media content
US9076311B2 (en) Method and apparatus for providing remote workflow management
WO2007082167A2 (en) System and methods for storing, editing, and sharing digital video
US20060236221A1 (en) Method and system for providing digital media management using templates and profiles
US9210482B2 (en) Method and system for providing a personal video recorder utilizing network-based digital media content
US20100169411A1 (en) System And Method For Improved Content Delivery
US20100169786A1 (en) system, method, and apparatus for visual browsing, deep tagging, and synchronized commenting
KR100948608B1 (en) Method for personal media portal service
WO2007082169A2 (en) Automatic aggregation of content for use in an online video editing system
US7610554B2 (en) Template-based multimedia capturing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07701207

Country of ref document: EP

Kind code of ref document: A2