EP1969447A2 - System and methods for storing, editing, and sharing digital video - Google Patents

System and methods for storing, editing, and sharing digital video

Info

Publication number
EP1969447A2
Authority
EP
European Patent Office
Prior art keywords
video
video material
user
upload
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP07701208A
Other languages
German (de)
English (en)
French (fr)
Inventor
David A. Dudas
James H. Kaskade
Kenneth W. O'Flaherty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eyespot Corp
Original Assignee
Eyespot Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyespot Corp
Publication of EP1969447A2

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034: Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information signals recorded by the same method as the main recording
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/61: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/613: Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for the control of the source by the destination

Definitions

  • This invention relates in general to the use of computer technology to store, edit, and share personal digital video material.
  • DSCs: digital still cameras
  • DVCs: digital video camcorders
  • webcams: computer-mounted web cameras
  • DVRs: digital video recorders
  • Consumers who generate these video files typically wish to edit their material down to the highlights that they wish to keep, save the resulting edited material on some permanent storage medium, and then share this material with friends and family, or possibly with the public at large.
  • FIG. 1 is a block diagram illustrating a prior art video editing platform including a creation block 199, a consumption block 198, and a media aggregation, storage, manipulation & delivery infrastructure 108.
  • Figure 1 shows with arrows the paths that currently exist for transferring video material from a particular source, including a DSC 100, a DVC 102, a mobile phone 104, and a webcam 106 to a particular destination viewing device including a DVD player 110, a DSTB 112, a DVR 114, a mobile phone 116, a handheld 118, a video iPod 120, or a PC 122.
  • the only destination device that supports material from all input devices is the PC 122. Otherwise, mobile phone 104 can send video material to another mobile phone 116, and a limited number of today's digital camcorders and digital cameras can create video material on DVDs that can then be viewed on the DVD player 110.
  • In order to assist visitors in searching for videos of interest, video-sharing websites typically request their members to provide information describing each of their video productions, such as a title and one or more descriptive words that characterize the content of the video.
  • the title and descriptions for each video production are stored by the website as very simple metadata associated with the final video production. They can be displayed to visitors in simple ways, sometimes organized by subject matter, and sometimes in the form of a collection of descriptions where the font size varies according to the popularity of the description, the most popular having the largest font.
  • Clicking on a descriptive word brings up a set of thumbnail images of the videos corresponding to the description (often in the form of several successive pages of thumbnail images, one for each referenced video). Clicking on a thumbnail launches the video represented by the thumbnail. No information or data is provided with regard to the elements that make up the video production.
  • descriptions of the video have additional potential value when applied to online video material. For example, a description could be used to quickly access a specific section within a video production in order to view the specific section, or to reuse the section by inserting it into a new video production. Descriptive words may also be used to automatically aggregate and link together two or more video productions or video sections into a new production.
  • none of these capabilities are offered in any of today's online video-sharing systems.
  • a system and method for uploading, editing and distributing video material in an online environment, whereby users can perform the task of editing their video material online while the same material is being uploaded and stored at a remote Internet-hosted service, regardless of the size of the material.
  • One example of the system comprises an Internet-hosted application service for online storage, editing and sharing of digital video content and a companion client PC-based video upload application.
  • the Internet-hosted application may be based on a group of technologies referred to as asynchronous JavaScript and extensible markup language ("AJAX"), which allows the online editing application to behave as if it resides on the user's local computing device, rather than across the Internet on a remote computing device, such as a server.
  • the online editing application provides users with a "drag-and-drop" interface for creating their video productions.
  • the client PC-based video upload application facilitates the input of lengthy camcorder video material by automatically segmenting, compressing and uploading the material from the user's PC, while allowing users to edit their material during the upload process.
  • the Internet-hosted application service can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities.
  • Another aspect enables users to browse or preview video material in an online environment.
  • the example includes variations on the use of thumbnail images, and the use of a virtual joystick to vary the replay speed of the video.
  • users can select the browsing method that they find most effective in previewing video material presented to them.
  • Another aspect stores, edits, and distributes video material in an online environment.
  • One aspect is automated, whereby creators or owners of online video productions may select a production and a destination target, and then publish the production to the destination target with one click.
  • viewers of a video production are allowed to select a destination target and forward the production to any destination with one click.
  • the possible destinations include websites, email recipients, Instant Messaging recipients, mobile phone users, software applications, digital set-top boxes and digital video recorders, as well as any pre-defined combination of these, for example.
  • Another aspect allows users to share the processes by which video productions have been created, in the form of hyper-templates. Users can designate as shareable the template they used in creating a video production, such that other users may reuse the template in their own productions.
  • One method of invoking a template during the viewing of a video production is by clicking on a button or on a watermark within the video that acts as a hyperlink into an online video editor and causes the editor to pre-load the particular template, ready for reuse.
  • styles can be provided which are automated templates. The styles include a template, a question list and a program which automatically applies the template to a user's media.
  • consumers may insert hypervideo links into their video material during the online editing process. Viewers of the video material may optionally follow an inserted hypervideo link by clicking on its visible representation during replay and selecting an alternative non-linear viewing path.
  • Another aspect allows users to classify video material for future use (e.g., filtering, advertising, copyright protection, and making recommendations) by associating tags with specific segments of a video file ("segment tags"), or with specific points in time within a segment ("deep tags"), such that the tags can later be used as search terms to find video productions of particular interest, or to quickly access specific parts of video material for viewing, for reuse in creating a new video production, for advertising, for filtering, or for personalization.
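  • The segment-tag and deep-tag scheme described above can be sketched as a small data model. This is an illustrative sketch, not the patent's implementation; all class, function, and tag names below are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class VideoSegment:
    """One segment of an uploaded video file (names are illustrative)."""
    segment_id: str
    duration_s: float
    segment_tags: list = field(default_factory=list)  # describe the whole segment
    deep_tags: dict = field(default_factory=dict)     # tag -> offset in seconds

def find_tag(segments, term):
    """Locate `term` among a user's stored segments.

    Returns (segment, start_offset_s): a match on a segment tag positions
    playback at the start of the segment, while a match on a deep tag
    positions it at the tagged point in time within the segment.
    """
    for seg in segments:
        if term in seg.segment_tags:
            return seg, 0.0
        if term in seg.deep_tags:
            return seg, seg.deep_tags[term]
    return None, None
```

Under this sketch, a search term that matches a deep tag immediately yields both the segment to retrieve and the playback offset to seek to, which is the behavior the patent describes for viewing, reuse, and advertising.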
  • Figure 1 is a block diagram illustrating a prior art video editing platform.
  • Figure 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
  • Figure 3 is a block diagram illustrating an example online video platform.
  • Figure 4 is a block diagram illustrating an example online video editor application.
  • Figure 5 is a block diagram illustrating an example video preprocessing application.
  • Figure 6 is a diagram illustrating an example process for automatically segmenting a video file.
  • Figure 7 is a diagram illustrating an example process for automatically compressing a video file.
  • Figure 8 is a diagram illustrating an example process for automatically uploading a video file.
  • Figure 9 is a diagram illustrating an example process for allowing immediate online editing of video material, using thumbnails, while the material is being uploaded.
  • Figure 10 is a diagram illustrating an example process for browsing a video file.
  • Figure 11 is a diagram illustrating an example process for automatically transcoding video materials to the appropriate format for a video-receiving destination device.
  • Figure 12 is a block diagram illustrating an example edit sequence.
  • Figure 13 is a block diagram illustrating example data structures that support hyper-templates.
  • Figure 14 is a diagram illustrating an example process for editing video material and distributing the edited video material using a cell phone.
  • Figure 15 is a diagram illustrating an example process for using a hypervideo link.
  • Figure 16 is a diagram illustrating an example process for defining a hotspot.
  • Figure 17 is a diagram illustrating an example process for direct uploading and editing.
  • Certain examples as disclosed herein provide for the use of computer technology to store, edit, and share personal digital video material.
  • Various methods, as disclosed herein, enable a user, for example, to handle large video files created on video recording devices; browse video material in an online environment; publish a video production or forward a viewed production to any destination with one click; view an online video and create a video using the same process that was used to create the viewed video; edit and distribute video material directly from a mobile device on a network, such as a cell phone; pursue multiple possible viewing paths within or outside a video production; use tags with specific segments of a video file; and use the tags to find video productions or portions of video productions of particular interest.
  • FIG. 2 is a block diagram illustrating the functional blocks or modules in an example architecture.
  • a system 200 includes an online video platform 206, an online video editor 202, a preprocessing application 204, as well as a content creation block 208 and a content consumption block 210.
  • the content creation block 208 may include input data from multiple sources that are provided to the online video platform 206, including personal video creation devices 212, personal photo and music repositories 214, and personally selected online video resources 216, for example.
  • video files may be uploaded by consumers from their personal video creation devices 212.
  • the personal video creation devices 212 may include, for example, DSCs, DVCs, mobile devices equipped with video cameras, and webcams.
  • input to the online video platform 206 may be obtained from other sources of digital video and non-video content selected by the user.
  • Non-video sources include the personal photo and music repositories 214, which may be stored on the user's PC, or on the video server, or on an external server, such as a photo-sharing application service provider ("ASP"), for example.
  • Additional video sources include websites that publish shareable video material, such as news organizations or other external video-sharing sites, which are designated as personally selected online video resources 216, for example.
  • Video productions may be output by the online video platform 206 to the content consumption block 210.
  • Content consumption block 210 may be utilized by a user of a variety of possible destination devices, including, but not limited to, mobile devices 218, computers 220, DVRs 222, DSTBs 224, and DVDs 226.
  • the mobile devices 218 may be, for example, cell phones or PDAs equipped with video display capability.
  • the computers 220 may include PCs, Apples, or other computers or video viewing devices that download material via the PC or Apple, such as handheld devices (e.g., PalmOne), or an Apple video iPod.
  • the DVDs 226 may be used as a media to output video productions to a permanent storage location, as part of a fulfillment service for example.
  • the online video editor 202 (also referred to as the Internet-hosted application service) can be used on a dedicated website or its functionality can be served to different websites seeking to provide users with enhanced video editing capabilities.
  • a user may go to any number of external websites providing an enhanced video editing service.
  • the present system may be used, for example, to enable the external websites to provide the video editing capabilities while maintaining the look and feel of the external websites.
  • the user of one of the external websites may not be aware that they are using the present system other than the fact that they are using functionality provided by the present system.
  • the system may serve the application to the external IP address of the external website and provide the needed function while at the same time running the application in a manner consistent with the graphical user interface ("GUI") that is already implemented at the external IP address.
  • a user of the external website may cause the invocation of a redirection and GUI recreation module 230, which may cause the user to be redirected to one of the servers used in the present system which provides the needed functionality while at the same time recreating the look and feel of the external website.
  • Delivery by the online video platform 206 to the mobile devices 218 may use a variety of methods, including but not limited to a multimedia messaging service (“MMS”), a wireless application protocol (“WAP”), and instant messaging (“IM”).
  • Delivery by the online video platform 206 to the computers 220 may use a variety of methods, including but not limited to: email, IM, uniform resource locator ("URL”) addresses, peer-to-peer file distribution (“P2P”), or really simple syndication (“RSS”), for example.
  • FIG. 3 is a block diagram illustrating an example online video platform.
  • the online video platform 206 includes an opt-in engine module 300, a delivery engine module 302, a presence engine module 304, a transcoding engine module 306, an analytic engine module 308, and an editing engine module 310.
  • the online video platform 206 may be implemented on one or more servers, for example, Linux servers.
  • the system can leverage open source applications and an open source software development environment.
  • the system has been architected to be extremely scalable, requiring no system reconfiguration to accommodate a growing number of service users, and to support the need for high reliability.
  • the application suite may be based on AJAX where the online application behaves as if it resides on the user's local computing device, rather than across the Internet on a remote computing device, such as a server.
  • the AJAX architecture allows users to manipulate data and perform "drag and drop" operations, without the need for page refreshes or other interruptions.
  • the opt-in engine module 300 may be a server, which manages distribution relationships between content producers in the content creation block 208 and content consumers in the content consumption block 210.
  • the delivery engine module 302 may be a server that manages the delivery of content from content producers in the content creation block 208 to content consumers in the content consumption block 210.
  • the presence engine module 304 may be a server that determines device priority for delivery of content to each consumer, based on predefined delivery preferences and detection of consumer presence at each delivery device.
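  • The presence engine's device-priority decision might be sketched as follows, under the assumption that delivery preferences form an ordered list and that presence detection yields a per-device boolean; the function and parameter names are illustrative, not from the patent:

```python
def choose_delivery_device(preferences, presence):
    """Pick the highest-priority device at which the consumer is present.

    preferences: device names in the consumer's predefined priority order.
    presence: mapping of device name -> whether the consumer is currently
    detected at that device. Falls back to the top preference when no
    presence is detected anywhere.
    """
    for device in preferences:
        if presence.get(device, False):
            return device
    return preferences[0] if preferences else None
```

For example, a consumer who prefers mobile over PC but is detected only at the PC would receive delivery at the PC.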
  • the transcoding engine module 306 may be a server that performs decoding and encoding tasks on media to achieve optimal format for delivery to target devices.
  • the analytic engine module 308 may be a server that maintains and analyzes statistical data relating to website activity and viewer behavior.
  • the editing engine module 310 may be a server that performs tasks associated with enabling a user to edit productions efficiently in an online environment.
  • Figure 4 is a block diagram illustrating an example online video editor 202.
  • the online video editor 202 includes an interface 400, input media 402a-h, and a template 404.
  • a digital content aggregation and control module 406 may also be used in conjunction with the online video editor 202 and thumbnails 408 representing the actual video files may be included in the interface 400.
  • the online video editor 202 may be an Internet-hosted application, which provides the interface 400 for selecting video and other digital material (e.g., music, voice, photos) and incorporating the selected materials into a video production via the digital content aggregation and control module 406.
  • the digital content aggregation and control module 406 may be software, hardware, and/or firmware that enables the modification of the video production as well as the visual representation of the user's actions in the interface 400.
  • the input media 402a-h may include such input sources as the shutterfly website 402a, remote media 402b, local media 402c, the napster web service 402d, the real rhapsody website 402e, the garage band website 402f, the flickr website 402g and webshots 402h.
  • the input media 402a-h may be media that the user has selected for possible inclusion in the video production and may be represented as the thumbnails 408 in a working "palette" of available material elements, in the main window of the interface 400.
  • the input media 402a-h may be of diverse types and formats, which may be aggregated together by the digital content aggregation and control module 406.
  • the thumbnails 408 are used as a way to represent material and can be acted on in parallel with the upload process.
  • the thumbnails 408 may be generated in a number of manners.
  • the thumbnails may be single still frames created from certain sections within the video, clip, or mix.
  • the thumbnails 408 may include multiple selections of frames (e.g., a quadrant of four frames).
  • the thumbnails may include an actual sample of the video in seconds (e.g., a 1 minute video could be represented by the first 5 seconds).
  • the thumbnails 408 can be multiple samples of video (e.g., 4 thumbnails of 3 second videos for a total of 12 seconds).
  • the thumbnails 408 are a method of representing the media to be uploaded (and after it is uploaded), whereby the process of creating the representation and uploading it takes significantly less time than either uploading the original media or compressing and uploading the original media.
  • the online video editor 202 allows the user to choose (or create) the template 404 for the video production.
  • the template 404 may represent a timeline sequence and structure for insertion of materials into the production.
  • the template 404 may be presented in a separate window at the bottom of the screen, and the online video editor 202 via the digital content aggregation and control module 406 may allow the user to drag and drop the thumbnails 408 (representing material content) in order to insert them into the timeline to create the new video production.
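  • The drag-and-drop insertion of thumbnails into the timeline can be modeled as an ordered list of edit actions; this is a simplified sketch of what such an editor might record, and the action names are hypothetical rather than taken from the patent:

```python
def apply_timeline_edits(timeline, actions):
    """Apply drag-and-drop actions to a timeline of thumbnail IDs.

    Each action is ("add", thumb_id, position) for dropping a thumbnail
    into the timeline, or ("remove", thumb_id) for dragging it back out.
    A real editor would record richer actions (trims, transitions,
    effects) than shown here.
    """
    timeline = list(timeline)  # leave the caller's list untouched
    for action in actions:
        if action[0] == "add":
            _, thumb_id, pos = action
            timeline.insert(pos, thumb_id)
        elif action[0] == "remove":
            timeline.remove(action[1])
    return timeline
```

Recording edits as a replayable action list is also what allows them to be saved and applied later, once the underlying segments have finished uploading.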
  • the online video editor 202 may also allow the user to select from a library of special effects to create transitions between scenes in the video. The work-in-progress of a particular video project may be shown in a separate window.
  • On completion of the project, the online video editor 202 allows the user to publish the video to one or more previously defined galleries / archives 410. Any new video published to the gallery / archive 410 can be made available automatically to all subscribers 412 to the gallery. Alternatively, the user may choose to keep certain productions private or to share them only with certain users.
  • FIG. 5 is a block diagram illustrating an example preprocessing application.
  • the preprocessing application 204 includes a data model module 502, a control module 504, a user interface module 506, foundation classes 508, an operating system module 510, a video segmentation module 512, a video compression module 514, a video segment upload module 516, a video source 518, and video segment files 520.
  • the preprocessing application 204 is written in C++ and runs on a Windows PC, wherein the foundation classes 508 include the Microsoft foundation classes ("MFCs").
  • an object-oriented programming model is thereby provided over the Windows APIs.
  • the preprocessing application 204 is written, wherein the foundation classes 508 are in a format suitable for the operating system module 510 to be the Linux operating system.
  • the video segment upload module 516 may be an application that uses a Model-View-Controller ("MVC") architecture.
  • the preprocessing application 204 automatically segments, compresses, and uploads video material from the user's PC, regardless of length.
  • the preprocessing application 204 uses the video segmentation module 512, the video compression module 514, and the video segment upload module 516 respectively to perform these tasks.
  • the uploading method works in parallel with the online video editor 202, allowing the user to begin editing the material immediately, while the material is in the process of being uploaded.
  • the material may be uploaded to the online video platform 206 and stored as one or more video segment files 520, one file per segment, for example.
  • the video source 518 may be a digital video camcorder or other video source device.
  • the preprocessing application 204 starts automatically when the video source 518 is plugged into the user's PC. Thereafter, it may automatically segment the video stream by scene transition using the video segmentation module 512, and save each of the video segment files 520 as a separate file on the PC.
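  • Scene-transition segmentation of this kind is commonly implemented by thresholding a frame-to-frame difference score. The sketch below assumes precomputed, normalized difference scores and an arbitrary threshold; real detectors typically use histogram or motion-based measures, so treat this purely as an illustration of the splitting step:

```python
def segment_by_scene(frame_diffs, threshold=0.5):
    """Split a video into segments at detected scene transitions.

    frame_diffs[i] is a normalized difference score (0..1) between frame
    i and frame i+1; a score above `threshold` is treated as a scene cut.
    Returns a list of (start_frame, end_frame_exclusive) pairs, one per
    segment, which would each be saved as a separate file.
    """
    segments, start = [], 0
    for i, diff in enumerate(frame_diffs):
        if diff > threshold:
            segments.append((start, i + 1))
            start = i + 1
    segments.append((start, len(frame_diffs) + 1))  # final segment
    return segments
```

A six-frame clip with strong differences after frames 1 and 3 would split into three two-frame segments.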
  • a video would be captured on any number of devices at the video source block 518. Once the user captured the video (i.e., on their camcorder, cellular phone, etc.), it would be transferred to a local computing device, such as the hard drive of a client computer with Internet access.
  • Alternatively, videos can be transferred to a local computing device whereby an intelligent uploader can be deployed. In some cases, the video can be sent directly from the video source block 518 over a wireless network (not shown), then over the Internet, and finally to the online video platform 206. This alternative bypasses the need to involve a local computing device or a client computer. However, this example is most useful when the video, clip, or mix is either very short, or highly compressed, or both.
  • When the video is uncompressed, lengthy, or both, and therefore relatively large, it is typically transferred first to a client computer, where an intelligent uploader is useful.
  • an upload process is initiated from a local computing device using the video segment upload module 516, which facilitates the input of lengthy video material.
  • the user would be provided with the ability to interact with the user interface module 506.
  • the control module 504 controls the video segmentation module 512 and the video compression module 514, wherein the video material is segmented and compressed into the video segment files 520.
  • a lengthy production may be segmented into 100 upload segments, which are in turn compressed into 100 segmented and compressed upload segments.
  • Each of the compressed video segment files 520 begins to be uploaded separately via the video segment upload module 516 under the direction of the control module 504. This may occur, for example, by each of the upload segments being uploaded in parallel. Alternatively, the upload segments may be uploaded in order: largest segment first, smallest segment first, or in any other manner.
  • the online video editor 202 is presented to the user.
  • thumbnails representing the video segments in the process of being uploaded are made available to the user.
  • the user would proceed to edit the video material via an interaction with the thumbnails.
  • the user may be provided with the ability to drag and drop the thumbnails into and out of a timeline or a storyline, to modify the order of the segments that will appear in the final edited video material.
  • the system is configured to behave as if all of the video represented by the thumbnails is currently in one location (i.e., on the user's local computer) despite the fact that the material is still in the process of being uploaded by the video segment upload module 516.
  • the upload process may be changed. For example, if the upload process was uploading all of the compressed upload segments in sequential order and the user dropped an upload segment representing the last sequential portion of the production into the storyline, the upload process may immediately begin to upload the last sequential portion of the production, thereby lowering the priority of the segments that were currently being uploaded prior to the user's editing action.
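  • The reprioritization described above can be sketched as a mutable upload queue, where promote() models the effect of the user dragging a not-yet-uploaded segment into the storyline; the class and method names are illustrative, not from the patent:

```python
class UploadScheduler:
    """Sketch of an upload queue that reacts to the user's edits.

    Segments upload in queue order (sequential by default). When the
    user drops a segment into the storyline before it has uploaded,
    promote() moves it to the front, lowering the effective priority
    of whatever was queued ahead of it.
    """

    def __init__(self, segment_ids):
        self.queue = list(segment_ids)

    def promote(self, segment_id):
        """Move a segment to the front of the queue (user edited it)."""
        if segment_id in self.queue:
            self.queue.remove(segment_id)
            self.queue.insert(0, segment_id)

    def next_segment(self):
        """Return the next segment to upload, or None when done."""
        return self.queue.pop(0) if self.queue else None
```

For example, dropping the last sequential segment into the storyline causes it to upload next, ahead of the segments that were previously in front of it.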
  • All of the user's editing actions are saved by the online video editor 202.
  • the saved editing actions are applied to the completely uploaded segments.
  • the user may have already finished the editing process and logged off or the user may still be logged on.
  • the process of applying the edits only when the material is finished uploading saves the user from having to wait for the upload process to finish before editing the material.
  • various capabilities exist to share, forward, publish, browse, and otherwise use the uploaded video in a number of ways.
  • FIG. 6 is a diagram illustrating an example process for automatically segmenting a video file. This process can be carried out by the preprocessing application 204 previously described with respect to Figure 2.
  • the video segmentation module 512 of the preprocessing application 204 may be used to carry out one or more of the steps described in Figure 6.
  • step 600 scene transitions within the video material are automatically detected.
  • step 602 the material is segmented into separate files.
  • Step 602 may include the preprocessing application 204 providing for the application of metadata tags by the user for the purpose of defining the subject matter.
  • Additional steps may allow the user to apply one or more descriptive names to each file segment ("segment tags”) at step 604, and further to preview the content of each file segment and to provide additional descriptive names ("deep tags”) defining specific points-in-time within the file segment at step 606.
  • Both segment tags and deep tags at steps 604 and 606 can later be used as metadata references in search and retrieval operations by the user on video material stored within a remote computing device, such as a server.
  • a remote computing device such as a server.
  • any subsequent viewer searching on either of these tags will retrieve the file segment, and the segment will be positioned for viewing at the appropriate point: at the start of the segment if the search term was "harbor” or at the one-minute mark if the search term was "sailboat.”
  • the drag-and-drop editor will automatically extract the segment beginning at the sailboat scene, rather than requiring the user to manually edit or clip the segment.
  • the deep tags 606 can be used to dynamically serve up advertisements at appropriate times of viewing based on an association between time and the deep tags 606.
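  • The automatic extraction at a deep tag could look like the following sketch, in which a tagged segment dragged into a production yields a clip starting at the tagged offset rather than at the segment start; the function and its signature are hypothetical:

```python
def extract_span(segment_duration_s, deep_tags, term):
    """Span of video, in seconds, that the editor extracts for `term`.

    A deep-tag match starts the clip at the tagged point in time; any
    other term behaves like a segment tag and starts at 0.0. The clip
    runs to the end of the segment, so the user need not trim manually.
    """
    start = deep_tags.get(term, 0.0)
    return (start, segment_duration_s)
```

With a 180-second segment tagged "harbor" and deep-tagged "sailboat" at the one-minute mark, searching "sailboat" extracts the span from 60 s to the end, while "harbor" extracts the whole segment.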
  • the separate files may be ready for uploading to a server at this stage, for example.
  • a thumbnail image is created for each file segment.
  • the set of thumbnail images representing all of the video file segments is initially uploaded to the server.
  • the thumbnail images may be selected by copying the first non-blank image in each video file segment, for example, and then uploading them to a remote computing device using the video segment upload module 516.
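Selecting "the first non-blank image" in each segment might be sketched as follows, treating a frame as a list of luminance samples and calling a frame blank when its values barely vary. The frame representation and threshold are assumptions made for this sketch.

```python
def is_blank(frame, threshold=8):
    """A frame (list of 0-255 luma samples) is blank if its values barely vary."""
    return max(frame) - min(frame) < threshold

def first_nonblank_frame(frames):
    """Return the index of the first non-blank frame, to use as the thumbnail."""
    for i, frame in enumerate(frames):
        if not is_blank(frame):
            return i
    return 0  # fall back to the first frame if every frame looks blank

frames = [
    [0, 0, 0, 0],          # black leader
    [12, 13, 12, 14],      # still nearly uniform
    [10, 200, 45, 90],     # first frame with real content
]
assert first_nonblank_frame(frames) == 2
```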
  • FIG. 7 is a diagram illustrating an example process for automatically compressing a video file. This process can be carried out by the preprocessing application 204 previously described with respect to Figure 2.
  • the video compression module 514 of the preprocessing application 204 may be used to carry out one or more of the steps described in Figure 7.
  • the format and resolution of the subject video material is automatically detected.
  • the appropriate decode software module to handle the detected input format is selected.
  • the video material is decoded from the input format using the selected decode codec.
  • the video material is encoded into a base format using a base codec.
  • a DivX codec can be used as the base codec to encode the video material into the DivX format, although other base codecs can be used.
  • the video compression module 514 may use DivX because it is an emerging industry-standard format for digital video compression, which typically achieves a space reduction of 15:1 over raw video material.
  • By using the DivX video compression technology, user and equipment productivity may be greatly enhanced by dramatically shortening the subsequent upload time for the video. (A typical 30-minute sequence of uncompressed digital camcorder material would take approximately 30 hours to upload over a standard DSL line, whereas the compressed form would take approximately 2 hours.)
  • a local copy of the compressed video material is stored on the user's local PC at step 708.
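The detect/select/decode/encode flow described above can be sketched with a codec registry keyed by input format. The registry contents and codec names here are illustrative stubs, not a real codec implementation.

```python
# Hypothetical registry mapping detected input formats to decode modules.
DECODERS = {"avi": "mpeg4-decoder", "mov": "qt-decoder", "mpg": "mpeg2-decoder"}

def compress(path, base_codec="divx"):
    fmt = path.rsplit(".", 1)[-1].lower()     # detect the input format
    decoder = DECODERS.get(fmt)               # select the matching decode module
    if decoder is None:
        raise ValueError(f"unsupported input format: {fmt}")
    raw = f"raw({path} via {decoder})"        # decode from the input format (stubbed)
    return f"{base_codec}({raw})"             # encode into the base format (stubbed)

out = compress("vacation.avi")
assert out == "divx(raw(vacation.avi via mpeg4-decoder))"
```

A real pipeline would replace the string stubs with calls into the selected codec libraries; the structure of detect, select, decode, then encode stays the same.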
  • FIG 8 is a diagram illustrating an example process for automatically uploading a video file. This process can be carried out by the control module 504 and the video segment upload module 516 of Figure 5, which typically resides in the preprocessing application 204 previously described with respect to Figure 2.
  • video segments that are subject to editing actions by the user are automatically detected.
  • segments that the user has requested to be deleted in their entirety are automatically detected and deleted.
  • the compressed video file segments are uploaded individually by the video segment upload module 516 to the remote computing device, while giving priority to those remaining segments that have been subject to user editing actions, for example.
  • the process of uploading all except deleted segments to the remote computing device is completed, without involving the user.
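The upload ordering just described — drop segments marked for deletion, then send user-edited segments ahead of the rest — can be sketched as below. The segment field names are assumptions for the sketch.

```python
def upload_order(segments):
    """Order segments for upload: deleted ones dropped, edited ones first."""
    kept = [s for s in segments if not s.get("deleted")]
    # Edited segments get priority; Python's sort is stable, so the
    # original order is preserved within each priority class.
    return sorted(kept, key=lambda s: not s.get("edited"))

segments = [
    {"name": "a"},
    {"name": "b", "deleted": True},   # never uploaded
    {"name": "c", "edited": True},    # uploaded first
]
assert [s["name"] for s in upload_order(segments)] == ["c", "a"]
```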
  • One aspect of the process described in Figure 8 is that the uploading of compressed video material is accomplished independently and asynchronously from the user, who can be offline from his or her computer during the remaining upload process, or can be engaged in other activities on his or her PC (including online editing of the video material prior to its arrival at the server).
  • the resulting material is eventually uploaded to the online video editor 202.
  • Figure 9 is a diagram illustrating an example process for allowing immediate online editing of video material, using thumbnails, while the material is being uploaded. This process can be carried out by the online video editor 202 in conjunction with the preprocessing application 204 previously described with respect to Figure 2.
  • the uploaded thumbnail images representing each video file segment that the user wishes to retain are saved.
  • the uploaded thumbnail images are visually displayed to the user as editable entities within the interface 400 (which may act as surrogate placeholders for the actual video file segments).
  • the user is allowed to perform editing actions on the thumbnail images, including, for example, dragging and dropping thumbnails into a video production timeline.
  • At step 906, all of the editing actions performed by the user are remembered and/or saved by the remote computing device. Then, at step 908, all of the editing actions are applied to the actual video material after the material has completed the uploading process. This process may occur, for example, without the continuing involvement of the user. User productivity is thereby further enhanced by not requiring the user to be online while the actual editing actions are performed on the uploaded video material.
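The remember-then-replay behavior of steps 906 and 908 amounts to recording an edit decision list against thumbnail placeholders and applying it once the upload finishes. The class and action format below are illustrative assumptions.

```python
class EditSession:
    """Records edits against thumbnails, replays them on uploaded material."""

    def __init__(self):
        self.actions = []          # edit decision list, remembered server-side
        self.uploaded = False      # flips true when the async upload completes

    def record(self, action):
        self.actions.append(action)

    def apply_when_uploaded(self, timeline):
        """Replay the recorded actions on the actual (now uploaded) material."""
        assert self.uploaded, "material must finish uploading first"
        for op, seg in self.actions:
            if op == "append":
                timeline.append(seg)
            elif op == "remove":
                timeline.remove(seg)
        return timeline

s = EditSession()
s.record(("append", "thumb-1"))    # user drags a thumbnail into the timeline
s.record(("append", "thumb-2"))
s.record(("remove", "thumb-1"))    # user changes his or her mind
s.uploaded = True                  # upload completes asynchronously, user may be offline
assert s.apply_when_uploaded([]) == ["thumb-2"]
```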
  • the system supports online editing of material in parallel with the uploading of the same material, accomplishing this by using thumbnail images representing the material, rather than requiring the presence of the actual material.
  • the system does not require the user to remain online after completing his or her editing actions.
  • Many modifications and variations are possible in the light of the above teaching. For example, although the foregoing has been described with respect to its application to digital video material, the system and methods can be applied to other forms of digital media, including files of digital photographs, digital music and digital audio files.
  • the system and methods described herein can be used to build a slideshow production by uploading a file of digital photographs and editing the photographs into a preferred sequence, removing unwanted items, and optionally adding an overlay of music or voice-over.
  • when applied to digital music or audio, the system can be used to insert deep tags at specific points in the music or audio, such that users can later retrieve the specifically tagged section of the material, either for playback or for inclusion in multimedia productions.
  • the online video editor 202 may be used to enable users to browse or preview video material in an online environment.
  • the browsing and previewing function includes several variations on the use of thumbnail images, and the use of a virtual joystick to vary the replay speed of the video.
  • users can select the browsing method that they find most effective in previewing video material presented to them.
  • the online video editor 202 provides the following ways of representing video productions using thumbnail images: as a single thumbnail image taken from the beginning of the video production; as a single thumbnail image selected by the owner of the video production through an interface provided by the online video editor 202; as a quadrant of four thumbnail images taken from the beginning of four equal sections of the video production; as a collection of thumbnail images taken from the start of each scene transition in the video production; as a collection of thumbnail images selected by the user through an interface provided by the online video editor 202; as a slideshow of thumbnail images taken from random points within the video production, where the owner of the video production specifies the number of points through an interface provided by the online video editor 202; or as a slideshow of thumbnail images taken at regular intervals within the video production, where the owner of the video production specifies the interval period through an interface provided by the online video editor 202.
  • the system provides a means of representing the images in a visual hierarchy, through which the viewer can navigate in order to see further detail.
  • the visual hierarchy is displayed in quadrant form, with the top level containing four images selected as equidistantly as possible across the entire video production. If the viewer clicks on one of the four images, the quadrant is replaced with four images selected as equidistantly as possible from the region represented by the clicked-on image. The user can click successively on individual images within quadrants until reaching the lowest level of the hierarchy, at which point the lowest-level images remain in place. The user can navigate back up the hierarchy by mechanisms such as right-clicking on the quadrant.
  • FIG. 10 is a diagram illustrating an example process for browsing a video file. This process can be carried out by the online video editor 202 previously described with respect to Figure 2.
  • a visual hierarchy is displayed in a quadrant form, the visual hierarchy including a plurality of images selected to be primarily equidistant across the video material.
  • the user is provided with the ability to select one of the images and it is determined whether the user selected one of the images. If not, the process repeats until the user selects one of the images.
  • at step 1004, a region is obtained, the region being the one that is represented by the selected one of the images.
  • at step 1006, another visual hierarchy is displayed in a quadrant form, including a plurality of images selected to be primarily equidistant across the region represented by the selected one of the images.
  • the process then repeats at step 1002 wherein the user can continue to browse material by moving further down the hierarchy until such time as the user finds the material they are browsing for or reaches the lowest possible level of granularity.
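The quadrant navigation of Figure 10 can be sketched numerically: each level samples four frames spread as evenly as possible over a region, and selecting one frame descends into the sub-region it represents. The integer-frame model below is an assumption for the sketch.

```python
def quadrant(start, end, n=4):
    """Pick n sample frames spread across the half-open region [start, end)."""
    span = end - start
    return [start + (i * span) // n for i in range(n)]

def descend(start, end, choice, n=4):
    """Return the sub-region represented by the chosen image (0-based)."""
    span = end - start
    lo = start + (choice * span) // n
    hi = start + ((choice + 1) * span) // n
    return lo, hi

# A 1600-frame production: the top level samples frames 0, 400, 800, 1200.
assert quadrant(0, 1600) == [0, 400, 800, 1200]
# Clicking the third image zooms into frames 800-1200 ...
assert descend(0, 1600, 2) == (800, 1200)
# ... whose own quadrant samples frames 800, 900, 1000, 1100.
assert quadrant(800, 1200) == [800, 900, 1000, 1100]
```

Repeating `descend` until a region spans only a few frames corresponds to reaching the lowest level of granularity described above.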
  • the online video editor 202 provides a method of varying the replay speed of a video production.
  • the replay speed is adjusted by the viewer by means of a virtual joystick, which displays a speed dial ranging from very slow to very fast and allows the user to adjust the speed by using the mouse to move a virtual needle left or right from its central position, which represents normal speed.
  • Using the virtual joystick to replay a video production at high speed creates the effect of time-lapse photography, and provides a way for the viewer to browse the production in a short period of time, and to receive a visual summary of the content that may be more effective than thumbnails, due to its use of motion.
  • Using the virtual joystick to replay a video production at low speed creates the effect of slow motion, and allows users to study sections of video to more accurately determine actions captured in them - actions that may have been missed when viewing at normal speed. For example, by replaying in slow motion a video of a bird flying, a viewer would be able to better study the ways in which the bird moves its wings.
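One way the joystick needle could map to a replay rate is an exponential curve, so the centered needle is exactly normal speed and the extremes give time-lapse and slow motion. The needle range and rate bounds are assumptions for this sketch.

```python
def replay_rate(needle, slowest=0.25, fastest=8.0):
    """Map a needle position in [-1, 1] to a replay rate; 0.0 is 1.0x."""
    needle = max(-1.0, min(1.0, needle))    # clamp to the dial's range
    if needle >= 0:
        return fastest ** needle            # rightward: 1.0x up to 8.0x
    return slowest ** -needle               # leftward: 1.0x down to 0.25x

assert replay_rate(0.0) == 1.0              # centered needle: normal speed
assert replay_rate(1.0) == 8.0              # fully right: time-lapse effect
assert replay_rate(-1.0) == 0.25            # fully left: slow motion
```

An exponential mapping is a common design choice for speed dials because equal needle movements then feel like equal proportional speed changes.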
  • An extension of the variable-speed replay method, which also applies to video material played at normal speed, is a process whereby the system partitions the video production into four equal-length segments, and plays the four segments in parallel in a quadrant format. This provides a faster means of browsing a video production in motion form.
  • Users of the online video editor 202 can select the browsing method that they find most effective in previewing video material presented to them.
  • One means of selecting a browsing method is by right-clicking on the currently displayed representation, at which point a menu appears listing the available browsing options. By clicking on a browsing option, the user causes the system to switch to the appropriate representation.
  • the online video platform 206 may be used to enable users to publish and forward video productions.
  • an automated method provides an abstraction layer that shields the user from detailed concerns regarding the distribution of the video material.
  • One automated publishing method comprises an interface whereby creators or owners of online video productions can select a production and a destination target, and then publish the production to an external location, such as an Internet site, with one click.
  • Publishing may be accomplished by a three-step process whereby: (1) from a toolbar, users navigate through their video galleries to select the video they wish to publish; (2) users then select the distribution target via an automated address book; and then (3) users invoke the automated publishing process with one click.
  • One automated forwarding method comprises an interface whereby viewers of a video production can select a destination target and forward the production to any destination with one click. Forwarding may be accomplished in a three-step process whereby: (1) the user clicks on a "Forward" button displayed with the viewed video, or available through a toolbar.
  • video productions created by the online video editor are replayed with a TV-like encasement surrounding the video image, with several control buttons located below the image, one such control button being a button which, when clicked on, invokes forwarding of a viewed production; (2) the user selects a distribution target via an automated address book; and then (3) the user invokes the automated forwarding process with one click.
  • the distribution targets may cover a variety of possible potential destinations, including websites, email recipients, Instant Messaging recipients, mobile phone users, software applications, digital set-top boxes and digital video recorders, or any combination of these. Users may pre-define destination groups, where each group may consist of any combination of possible destinations. Users may also set up any of the potential destinations or destination groups in their address books, and the system will automatically take care of all issues related to delivery of each video production to the requested destinations.
  • the one-click publishing and one- click forwarding methods enable users to automatically send their productions to multiple destinations with one click, without the need to enter individual destination targets repeatedly, each time they wish to publish or forward a video.
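The destination groups described above behave like an address book in which a name resolves either to an individual endpoint or, recursively, to a group of other names. This sketch uses assumed structures; the entries and endpoint formats are illustrative only.

```python
def expand(address_book, name, seen=None):
    """Resolve a destination or group name to a flat set of endpoints."""
    seen = seen or set()
    if name in seen:                    # guard against group cycles
        return set()
    seen.add(name)
    entry = address_book[name]
    if isinstance(entry, list):         # a group: expand each member
        out = set()
        for member in entry:
            out |= expand(address_book, member, seen)
        return out
    return {entry}                      # an individual delivery endpoint

book = {
    "mom": "mom@example.com",
    "dad": "+1-555-0100",
    "blog": "http://blog.example.com/feed",
    "family": ["mom", "dad"],
    "everyone": ["family", "blog"],     # groups may contain groups
}
assert expand(book, "everyone") == {
    "mom@example.com", "+1-555-0100", "http://blog.example.com/feed",
}
```

One-click publishing then reduces to expanding the selected name and delivering the production to each resulting endpoint.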
  • the delivery system incorporates presence detection mechanisms for target devices, whereby the current presence of the user at a device (e.g., a user active at his or her PC) is detected in real time, and the video is delivered via the most immediate channel.
  • FIG 11 is a diagram illustrating an example process for automatically transcoding video materials to the appropriate format for a video-receiving destination device.
  • possible delivery mechanisms for the destination are determined at step 1400.
  • information about each destination device may be gathered and maintained by the system at step 1402, including the specific video format that each device requires; the highest-priority destination device may then be selected. Where this information is not available, the system may use the default format that most closely matches the device type.
  • the system may use a base decode codec in association with the encode codec required for the selected destination device at steps 1404 and 1406, and may create a copy of the subject material on the server in the destination format, prior to streaming it to the destination at step 1408.
  • the base codec used in steps 1406 and 1408 may be the DivX codec. If the video material is not delivered successfully to a device, the system may provide a feedback mechanism whereby users are solicited to provide details about the device in question.
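The format selection in steps 1400-1408 can be sketched as a two-level lookup: a known device maps directly to its required format, otherwise the device type's default is used, and the stored base copy is then re-encoded. All table contents and format names here are assumptions.

```python
DEVICE_FORMATS = {"iPhone": "h264-480p"}                 # known specific devices
TYPE_DEFAULTS = {"phone": "3gp", "settop": "mpeg2", "pc": "flv"}

def target_format(device, device_type):
    """Prefer the device's known required format, else the type default."""
    fmt = DEVICE_FORMATS.get(device)
    if fmt is None:
        fmt = TYPE_DEFAULTS[device_type]
    return fmt

def transcode(base_copy, device, device_type):
    """Decode the base copy and re-encode for the destination (stubbed)."""
    fmt = target_format(device, device_type)
    return f"{fmt}<{base_copy}>"

assert transcode("divx(clip)", "iPhone", "phone") == "h264-480p<divx(clip)>"
assert transcode("divx(clip)", "NokiaN73", "phone") == "3gp<divx(clip)>"
```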
  • Distribution of video material can be accomplished either directly from an online video-sharing portal website, or indirectly from any website via a toolbar and associated browser plug-in. If a video is posted on another website (e.g., on a blogger's home page or on a Myspace user's home page), the video material is not actually exported, but remains on the video-sharing website, which acts as a proxy server that retrieves and streams the video when requested. In order to forward the video, the viewer interacts with the browser plug-in via the toolbar, which communicates with the portal to perform the actual forwarding. Thus the sharing controls established by the owner of the video material are still enforced, and all of the previously described delivery mechanisms still apply.
  • the online video editor 202 also may support the construct of a "hyper- template" - a shareable definition of how a video production was created, that can be reused by others to help them create their own derivative works.
  • Hyper-templates, therefore, are shareable versions of templates.
  • a template defines the sequence of scenes that make up a video, and the related soundtrack, transitions, filters or special effects that are used in the production.
  • FIG. 12 is a block diagram illustrating an example edit sequence.
  • four video clips (a 1104, b 1106, c 1108, and d 1110) are combined into a video production 1100.
  • the editing sequence occurs whereby first the individual clips are edited, then clips a 1104 and b 1106 are merged with sound added 1102, and then clips c 1108 and d 1110 are combined with the previously merged clips a and b to form the video production 1100.
  • Figure 13 is a block diagram illustrating example data structures that support hyper-templates.
  • data structures 1200 include an edit tree table 1202, an edit dependencies table 1204, an edit command table 1206, a sequence table 1208, and a sequence composition map 1210.
  • the sequence composition map 1210 provides pointers to the four video files (a 1104, b 1106, c 1108, and d 1110) previously described in Figure 12.
  • the edit tree table 1202 identifies a sequence of six editing actions.
  • the edit dependencies table 1204 defines dependencies between editing actions (e.g., editing action E must wait for completion of editing actions A and B).
  • the sequence composition map 1210 identifies the video clips that are used in each sequence step.
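The tables of Figure 13 can be sketched as plain data structures, with the edit dependencies determining a valid execution order for the editing actions (e.g., the merge E runs only after A and B complete). The exact schemas below are assumptions; only the relationships come from the description.

```python
edit_tree = ["A", "B", "C", "D", "E", "F"]                 # six editing actions
edit_dependencies = {"E": ["A", "B"], "F": ["C", "D", "E"]}
sequence_composition = {"E": ["a", "b"], "F": ["c", "d"]}  # clips used per step

def execution_order(actions, deps):
    """Topologically order actions so each runs after its dependencies."""
    done, order = set(), []
    while len(order) < len(actions):
        for act in actions:
            if act not in done and all(d in done for d in deps.get(act, [])):
                done.add(act)
                order.append(act)
    return order

order = execution_order(edit_tree, edit_dependencies)
assert order.index("E") > order.index("A") and order.index("E") > order.index("B")
assert order[-1] == "F"          # the final merge into the production runs last
```

Replaying this recorded structure against a different set of clips is what makes a hyper-template reusable for derivative works.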
  • the online video editor 202 may be used to provide a growing library of community hyper-templates, based on the work of its members.
  • a user can either use one of the available hyper-templates that have been designated as "shareable," or create a video and its accompanying template from scratch.
  • the user may drag and drop components from a palette of available video segments into a timeline that defines the sequence for the video production.
  • the user also may drag and drop transitions between segments, and can optionally drag and drop special transitions on to individual segments.
  • the user can also select still photos and add them into the timeline (e.g., from the Flickr website), and can select and add a soundtrack to the video production (e.g., from the Magnatune website).
  • On completion of a video production, the creator has the option of defining whether the video is shareable with other users.
  • the video can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family" within a "friends and family” group).
  • the sharing hierarchy may be implemented as a system of folders within a directory structure, similar to the structure of a UNIX file system or a Windows file system, for example. Each member who creates video productions has such a directory, and a folder is created within the directory for each group or subgroup that the member defines.
  • For each video production that the member creates, he or she has the ability to define which folders have the ability to view the video. When a member designates a person as belonging to a group, or when a person accepts a member's invitation to join a group, the person's ID is entered into the appropriate folder, and the person inherits the sharing privileges that are associated with the folder.
  • the system also provides convenient mechanisms for creators of video productions to share their creation processes. On completion of a video production, for example, the user has the option of defining whether the hyper-template used in the production is shareable with other users, and whether the content of the video is also shareable in combination with the hyper-template.
  • the hyper- template can be shared at multiple levels: at the community level (by any person viewing the video), or at one or more levels within a group hierarchy (e.g., only by people identified as "family” within a "friends and family” group). Sharing controls for hyper-templates and their content may be implemented using the same method outlined above, for sharing video productions.
  • the user can identify individual segments within the video that are shareable when reusing the hyper-template and which are not.
  • the user can identify which specific groups or subgroups of people can share specific video segments when reusing the hyper-template.
  • the system provides two methods for selecting hyper-templates for reuse: browsing and hyper-linking. Using the first method, members of the video- sharing website browse among the set of hyper-templates designated as available to them for reuse.
  • the hyper-templates may be organized in a variety of classification structures, similar to the structures by which the actual video productions are classified.
  • classification structures for hyper-templates include but are not limited to classification schemes based on categories of videos (or "channels"), styles of video production, lengths of videos, tags or titles of videos, a grouping of favorite hyper-templates (based on popularity), and a set of hyper-templates recommended by the website, organized by category.
  • the second method of selecting hyper-templates for reuse involves the use of hyperlinks, and, in particular, hypervideo links.
  • Hyperlinks are a referencing device in hypertext documents. They are used widely on the World Wide Web to act as references that, when clicked on, link dynamically from one webpage to another.
  • the hypervideo concept extends the use of the hyperlink device to provide a link out of a video production (rather than a text document) to another webpage, typically to another section of video.
  • hyper-template linking is a special case of hypervideo linking, the special case being that the system always transfers control to the online video editor 202, rather than to a destination defined by the video-creator.
  • video productions created by the online video editor 202 are replayed with a TV-like encasement surrounding the video image, with several control buttons located below the image, one such control button being a "Remix" button which, when clicked on, specifically invokes a hyper-template link into the online video editor.
  • video productions created by the online video editor 202 are discreetly watermarked with a small logo that appears in the lower left or right corner of the video, for example.
  • the watermark acts as a hyper-template link, in the sense that, if clicked on, it triggers a hyperlink that takes the viewer seamlessly into the online video editor 202, with the hyper-template of the viewed video pre-loaded and ready to be reused in creating a new video production. This is achieved by structuring the hyperlink so that a hyper-template identifier identifies the particular video that is being viewed and its hyper-template, while the website address and "editor" components identify the online editor to be linked to.
  • a hyper-template watermark may be distinguished in several possible ways, such as by having two separate watermarks placed in different areas of the video image, or, in the case of a shared watermark, by a passive appearance for a hyper-template hyperlink (as opposed to flashing, which indicates a hotspot), or by color-coding (e.g., blue indicates a hyper-template link, whereas red indicates a hotspot).
  • a hyper-template hyperlink is initially generated by the online video editor 202 during construction of a video production, and is stored as metadata with the video. The data structures supporting the metadata were described earlier in this section, and shown in Figure 13.
  • the hyperlink metadata remains associated with it. No matter where the video is viewed, on any website, it still retains the hyperlink that will link back to the original online editor if the hypervideo hyperlink is clicked on. This is because the video is never actually exported, but remains on the video-sharing website which acts as a proxy server that retrieves and streams the video when requested.
  • the hyper-template thus not only provides users with a convenient way of sharing and reusing video creation processes, but also benefits the online video sharing website by generating traffic to the website and potentially enlisting new members.
  • the user may be linked into the online video editor 202 and, in one example, is presented with a webpage showing the hyper-template of the selected video in the form of a timeline at the bottom of the screen, with the shareable segments of the related video displayed on the main palette in the center of the screen.
  • the timeline of the hyper-template is displayed vertically at the left or right side of the screen, with an additional vertical window alongside the timeline to allow insertion of text to be used as a commentary relating to the contents of the video timeline. The positioning of the text can be adjusted to appear alongside the particular video sequence that it relates to.
  • the text can then serve as a teleprompter, and the commentary can then be recorded by the user in synchronization with the video sequence, as the video is played back in a separate window, and a marker moves down the timeline and its associated commentary.
  • Upon selecting a hyper-template, users have a variety of choices regarding content that they may include in their new production. From the selected video, they can reuse any segments that the owner has designated as shareable. Users can also add or remove segments of video. They can select and include material from their own work-in-progress or their own galleries of completed productions, as well as from external sources that they have defined to be of interest and that the system has aggregated on their behalf, such as sources of photos, music, animation and other video content.
  • the online video editor 202 may provide a user interface that enables users of mobile devices on a network, such as cell phones, to issue commands directly from their cell phones to accomplish simple editing of their video material, and to distribute the resulting edited video material to individuals or to predefined distribution groups.
  • a command line interface (the "mobile video editor") that supports all of the basic functions required to edit and distribute video material.
  • the commands are entered on the cellular phone by the user in text form and are transmitted separately or in groups to the online editor using a short message service (“SMS”) or a multimedia message service (“MMS").
  • SMS messages are typically available on digital global system for mobile communications ("GSM") networks, allowing text messages of up to 160 characters to be sent and received via the network operator's message center to the cell phone, or from the Internet, using a so-called "SMS gateway" website. If the phone is powered off or out of range, messages are stored in the network and are delivered at the next opportunity.
  • MMS is a method of transmitting graphics, video clips, sound files, and text messages over wireless networks using the wireless application protocol (“WAP").
  • the entire online video editing process may be accomplished using SMS or MMS messages, thereby obviating the need for any supporting application executing on the user's cell phone handset.
  • the user may interface with a Java-based application or a binary runtime environment for wireless (“BREW") based application residing on the cell phone handset, which then uses SMS, MMS, WAP, or some other interface to transmit the editing commands to the online editing service.
  • the mobile video-editing commands can also be input in command- line form from an Internet-connected PC.
  • FIG 14 is a diagram illustrating an example process for editing video material and distributing the edited video material using a mobile device, such as a cell phone.
  • This process can be carried out by the online video editor 202 previously described with respect to Figure 2.
  • the user sets up a work-in-progress folder to receive video clips from the cell phone, or from other sources available to the user (as used herein, the term "clips" refers to video material, audio, photographs, and other content that is useful for insertion into a project).
  • the user may supply a name for the project, which is later used as the title for the video production.
  • a project is created.
  • one or more video clips are added into the work-in-progress folder, typically from the user's cell phone input folder that contains clips that the user has just sent to the system.
  • the system may maintain a cell phone input folder for each user who has requested the ability to use the mobile editor.
  • the user may select a template (or "style") to be used in the video production. Templates have options to add enhancements to a production, including but not limited to: soundtracks, captions, transitions, filters and other special effects.
  • a default template may be provided by the system.
  • the clips are combined and transformed, which may cause the editor to create a timeline/storyline for insertion of video clips, and to then insert clips into the timeline/storyline serially from the work-in-progress folder.
  • the editor may apply a template to the production, using the last template that was selected by the user. If no template has ever been specified by the user, the system applies the default template.
  • the command also may have an option to specify "No Template.”
  • the production is previewed. In one example, previewing the production includes replaying the combined set of video clips from the timeline, displaying the combined production on the user's cell phone, such that the user can preview the production before distributing it.
  • the user may optionally remove a clip from the production, for example, by specifying the sequence number of the clip within the production.
  • the user sends the production.
  • the user may distribute the video production to the addressee of the command.
  • the addressee may be the phone number or email address of an individual, or it may be a website, an Instant Messaging recipient, a software application, a digital set-top box or a digital video recorder, or it may be a pre-defined group consisting of any combination of these.
  • by using the "group" function, the user avoids the need to individually enter multiple addressees.
  • additional functions may also be included in the mobile video editor command set.
  • the mobile video editor supports a library of templates that the user may choose from. Users may supply templates that they have created into the template library, thereby sharing their creative processes with others.
  • the mobile video editor also supports a macro command whereby the user can create and distribute a video production by issuing just one command: "create production.”
  • the create production command references a previously created project (in a "using” clause), and causes the system to execute the set of commands that were previously entered for the referenced project. Prior to issuing the create production command, the user will have sent a set of clips to his or her input folder. By executing the commands from the referenced project, the editor will create a new production using the clips from the user's input folder, and send the production to the distribution defined in the referenced project.
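The "create production" macro amounts to replaying a project's previously recorded commands against whatever clips are currently in the user's input folder. The class, command names, and tuple format below are assumptions; the described command syntax is not given in this text.

```python
class MobileEditor:
    """Minimal sketch of recorded-command replay for mobile editing."""

    def __init__(self):
        self.projects = {}              # project name -> recorded command list
        self.input_folder = []          # clips the user has sent from the phone

    def record(self, project, command):
        self.projects.setdefault(project, []).append(command)

    def create_production(self, using):
        """Replay the referenced project's commands on the input folder."""
        timeline, style, sent_to = [], "default", None
        for cmd, arg in self.projects[using]:
            if cmd == "add_clips":
                timeline.extend(self.input_folder)
            elif cmd == "style":
                style = arg
            elif cmd == "send":
                sent_to = arg
        return {"timeline": timeline, "style": style, "sent_to": sent_to}

ed = MobileEditor()
ed.record("beach", ("add_clips", None))
ed.record("beach", ("style", "summer"))
ed.record("beach", ("send", "family"))
ed.input_folder = ["clip1.3gp", "clip2.3gp"]    # clips just sent to the system
prod = ed.create_production(using="beach")
assert prod == {"timeline": ["clip1.3gp", "clip2.3gp"],
                "style": "summer", "sent_to": "family"}
```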
  • the mobile video editor also provides an API to its command set.
  • the API can be used by developers of applications that reside in a cell phone handset, in order to incorporate online video editing into their feature set. This includes third- party application software providers and the cell phone handset manufacturers themselves.
  • a new type of mobile video editor is created which is a WAP-enabled subset of the PC browser-based video-editing application.
  • users with WAP-enabled cell phones can interface to the WAP-enabled video editor over the Internet, and are provided a simplified visual environment for editing their video material.
  • the simplified interface compensates for the absence of mouse input for such functions as dragging and dropping, instead providing more automated forms of video production, using pre-defined templates that the user can select from the cell phone.
  • the online video editor 202 supports the construct of a hypervideo link - a means of allowing non-linear viewing of video material.
  • Figure 15 is a diagram illustrating an example process for using a hypervideo link.
  • a hypervideo link allows the viewer to navigate among multiple possible viewing paths within or outside the video production he or she is currently viewing.
  • the user sees an unobtrusive mark in one area of the display.
  • the mark is rendered as a watermark, for example in the form of the logo of the video-sharing service or in a form selected by the video creator, and all videos produced by the service bear such a watermark.
  • at step 1500 it is determined whether a hypervideo link occurred in the video stream.
  • the mark may become "active" at step 1502, making itself more noticeable to the viewer, by techniques such as glowing brighter or flashing, for example.
  • at step 1504 it is determined whether the user selected the hypervideo mark. If the viewer does not click on the hypervideo mark, the process repeats at step 1500. When the user clicks on the active hypervideo mark at step 1504, he or she is given the option at step 1506 of switching out of the current video sequence and following one or more links to an alternative viewing destination. If the user does not switch out of the current video sequence at this step, the process repeats at step 1500. Otherwise, at step 1508, the user proceeds to an alternate viewing destination.
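The Figure 15 loop can be sketched as a simple state walk over the video stream. The step numbers follow the text above, but the event and callback shapes are assumptions made for illustration:

```python
# Minimal sketch of the hypervideo link process (Figure 15).
# "events" stands in for the video stream; the two callbacks model the
# viewer's click (step 1504) and switch-out (step 1506) decisions.

def hypervideo_loop(events, user_clicks, user_switches):
    """Walk the stream; return the alternate destination, if any."""
    for event in events:                      # step 1500: link in the stream?
        if event.get("hyperlink"):
            # step 1502: the mark becomes "active" (glows brighter or flashes)
            if user_clicks(event):            # step 1504: viewer clicked mark?
                if user_switches(event):      # step 1506: switch out?
                    return event["target"]    # step 1508: alternate destination
    return None                               # played through sequentially
```

If the viewer never clicks, or declines to switch out, the loop simply continues scanning the stream, matching the "process repeats at step 1500" behavior in the text.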
  • Video targets of a hypervideo link may be within the viewed video production, or to any video material external to the production that has been tagged by the system. External material may include any material from other users that has been marked as reusable, or any material that has been aggregated by the system.
  • Hypervideo marks may come and go during the playing of a video production. The length of time for which a mark is active on replay can be determined by the system (e.g., by a default value), or by the creator of the video production. When the viewer clicks on an active hypervideo mark, navigation options may be displayed in a menu form, listing one or more possible viewing destinations that are alternatives to continuing to view the production sequentially.
  • the system executes the hypervideo link associated with the destination description, thereby transferring control to the target webpage.
  • the target webpage may be the entry into another video production, or to any tagged segment or section of a video production (all of which are examples of temporal links); alternatively, the target may be an Internet webpage or email message (both being examples of a textual link).
  • the viewer is able to click on or select a specific area on the screen where a particular activity is occurring, and thereby link out to a different section of video that pertains to the activity.
  • the hotspot is thus not related to a mark on the screen, but to an area of the screen that makes itself noticeable to the viewer.
  • Various techniques may be used to attract the attention of the viewer, such as temporarily brightening up the area of the hotspot, or temporarily zooming in on the area.
  • the target of a spatio-temporal link may be an Internet webpage or email message (both being examples of a textual link).
  • a textual link may result from a temporal or a spatio-temporal opportunity.
  • One special case of a textual link is a mouse over. In the case of a mouse over, clicking on a hypervideo link (temporal or spatio-temporal) results in a text-box appearing on the screen, providing commentary or information about the section of video that is currently being viewed.
  • the text-box may appear on the screen outside the video viewing space, or it may appear in an area of the video viewing space (e.g., over a spatio-temporal hotspot area).
  • Various mechanisms are possible for returning control back to the original viewing point, after a hypervideo link has been executed.
  • one such method is to return control at completion of the linked-to video segment (i.e., when the first segment transition is detected in the linked-to video).
  • each video segment is stored as a separate file, making detection of the end of a video segment straightforward.
  • An alternative return method is to return control on completion of the entire linked-to video production.
  • a further method which could be used in conjunction with the prior two, and is also applicable to textual links, is to provide a means for the user to initiate the return link, for example by clicking on a "Return" button that is always displayed by the system, and that is activated (e.g., by glowing brighter) on issuance of a hypervideo link.
  • a general return mechanism that applies to all forms of hypervideo links is for the system to superimpose or overlap the linked-to window over the linked-from window, or to show both windows beside each other, in all cases in such a manner that the user may at any time close the linked-to window and reactivate the linked-from window.
  • with a textual link to an email message, a user could compose a message within his or her email system and send it, then close the email window and return to viewing the video.
  • the target of a link is defined by the creator of the video production by referring to a tag.
  • Tags identify whole productions, segments of productions, or (in the case of "deep tags") a point- in-time within a segment or a production.
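The three tag scopes just described (whole production, segment, and deep tag) can be modeled with a small record type. The field names are assumptions for illustration, not the system's actual schema:

```python
# Illustrative tag model: a tag may address a whole production, a segment
# of a production, or (as a "deep tag") a point-in-time within one.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Tag:
    name: str
    production_id: str
    segment_id: Optional[str] = None      # None => tags the whole production
    offset_secs: Optional[float] = None   # set => a deep tag (point-in-time)

    def kind(self) -> str:
        if self.offset_secs is not None:
            return "deep"
        return "segment" if self.segment_id else "production"
```

Under this model, a deep tag simply refines a segment tag with a time offset, which is why deep tags can serve as targets for hypervideo links in the same way segment tags do.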
  • the online video editor 202 provides a convenient graphical interface for users to look up tags among their own material and among material designated as shareable by their creators.
  • the system also syndicates publicly available video segments and makes them available with tags for videographers to include in their productions.
  • tagged material can either be easily embedded in the sequence of the production or easily set up as the target of a hypervideo link, using a drag-and-drop interface.
  • the system implements a means of linking video material across the Internet, making this facility available to any consumer who wishes to work in the medium of video.
  • the online video editor 202 also provides a convenient graphical interface enabling users to mark sections within their video material as hotspots carrying hypervideo links. The user can replay video material, either completed productions or work-in-progress, and stop the action at any point-in-time to define a hotspot.
  • Figure 16 is a diagram illustrating an example process for defining a hotspot.
  • the user stops the action at step 1600 by clicking on a virtual "Pause” button located with other virtual controls below the replay window, for example.
  • the user clicks on the mark on the video at step 1602 (which can be rendered as a watermark), and is provided a window providing various options for creating a hypervideo link.
  • the options may include, for example, "Start Hotspot", "End Hotspot", "Mark Spatial Hotspot", "Set Hotspot Duration", and "Select Hypervideo Destination".
  • the user clicks on the "Set Hotspot Duration” option enters a time in seconds at step 1606.
  • at step 1608 it is determined whether the user wants to include a spatio-temporal hotspot.
  • the user also clicks on the "Mark Spatial Hotspot” option, for example, and then uses an input device, such as a mouse, to outline the spatial area of the video to be associated with the hotspot (e.g., the upper righthand quadrant of the video replay window) at step 1610.
  • the user can then select one or more destination targets from a list at step 1612 of system- supplied linkage options.
  • the list at step 1612 may include, for example, the user's set of available segment or deep tags (either within the current production or in other productions created by the user), a set of system-supplied tags to other video material, or a link to any Internet webpage or email message that the user then specifies.
  • having set up the start of the hotspot, if the user has not set up a time-based duration for the hotspot, he or she can then click on a virtual "Continue" button to continue playing the video at step 1614; at step 1616 it is determined whether the user clicked on the "Pause" button to again stop the video and define the end point-in-time of the hotspot at step 1618.
  • the system automatically applies a user-definable default time for the duration of the hotspot (which, in one example, is initially set to ten seconds).
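The three ways of resolving a hotspot's end time in the Figure 16 process (explicit duration at step 1606, a second "Pause" at step 1618, or the fall-back default) can be sketched as follows; the function and parameter names are assumptions:

```python
# Sketch of hotspot end-time resolution (Figure 16). The ten-second
# default follows the example in the text; all names are illustrative.

DEFAULT_HOTSPOT_SECS = 10.0

def finish_hotspot(start_secs, end_secs=None, duration=None,
                   default=DEFAULT_HOTSPOT_SECS):
    """Resolve the hotspot's end time from whichever input the user gave."""
    if duration is not None:           # "Set Hotspot Duration" path (step 1606)
        return start_secs + duration
    if end_secs is not None:           # second "Pause" path (step 1618)
        return end_secs
    return start_secs + default       # user-definable default duration
```

The explicit duration takes precedence here because, per the text, a user who sets a time-based duration never reaches the second "Pause" step.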
  • Hypervideo links are also dynamic, in the sense that the creator can alter the targets of links at any time, even after publication. By re- entering the video editor, creators can change productions on the fly, changing the content both in terms of modifying the sequential material and inserting or modifying hypervideo links. This is achieved by deploying the two mechanisms of a proxy server and metadata.
  • video productions created by the system are served dynamically by the system acting as a proxy server to the requesting service.
  • Proxy servers cache frequently referenced material, thus improving performance for groups of users accessing similar content.
  • when a video production is posted to another website (e.g., on a blogger's home page or on a Myspace user's home page), the video may not actually be exported; it can remain on the video-sharing website, which retrieves and streams it when requested.
  • the online video editor creates metadata pertaining to the link, including such information as the tag name and the URL address of the destination.
  • the metadata is stored by the system and its association with the video production is maintained by the system. If the video is posted on another website, the hyperlink metadata remains associated with it. No matter where the video is viewed, on any website, it still retains all hyperlinks that have been defined for it.
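The reason links survive reposting can be sketched as follows: the embedded copy carries only a production identifier, and the system, acting as proxy, looks up both the stream and the current hyperlink metadata at request time. The store layout is an assumption for illustration:

```python
# Sketch of proxy serving with associated link metadata. Because the
# metadata is fetched per request, edits made after publication reach
# every embedded copy. Store layout and names are illustrative.

PRODUCTIONS = {
    "p1": {"stream": "p1.flv",
           "links": [{"tag": "surf", "url": "https://example.org"}]},
}

def serve(production_id, store=PRODUCTIONS):
    """Return the stream plus its current link metadata, wherever embedded."""
    entry = store[production_id]
    return entry["stream"], list(entry["links"])  # always the latest links
```

If the creator re-enters the editor and changes a link target, the next request from any embedding site returns the updated metadata.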
  • users may include links to external video material that the system has previously aggregated.
  • the system may have either already created a local copy of aggregated external material, or may have simply provided a link to the material. If the system has not previously stored a copy of the aggregated material locally, but has instead saved a link to the material together with the related commands for retrieving it, the system accesses the material via the API and creates copies of it in Flash and DivX formats, prior to making the material available to be referenced by hypervideo link in the user's production.
  • the system first detects the format and resolution of the subject video material, then selects the appropriate decode software module to handle the detected video format, then decodes the video material from the input format using the selected decode codec, and then encodes it into Flash format using a Flash codec and into DivX format using a Divx codec.
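The detect-decode-encode pipeline just described can be sketched as a dispatch over a codec registry. The registry contents and step names are illustrative assumptions, not a real codec API:

```python
# Sketch of the transcoding pipeline: detect the input format, select the
# matching decoder, then encode into both Flash and DivX. The registry
# and step names here are assumptions for illustration.

DECODERS = {"3gp": "decode_3gp", "avi": "decode_avi", "mov": "decode_mov"}

def transcode(input_format):
    """Return the ordered pipeline steps the system would run."""
    decoder = DECODERS.get(input_format)
    if decoder is None:
        raise ValueError(f"unsupported input format: {input_format}")
    return [decoder, "encode_flash", "encode_divx"]
```

Keeping detection and decoder selection separate from the fixed pair of output encodes matches the text's ordering: format is detected once, then one decode feeds both target formats.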
  • the online video editor 202 also handles uploading of video clips directly from a PC, or cell phone, without the need to use the preprocessing application 206.
  • Figure 17 is a diagram illustrating an example process for direct uploading and editing.
  • the online video editor 202 treats each video clip as a separate video segment, and creates a thumbnail image for each segment (based on the first non-blank image detected in the segment's data stream, for example). If the clip includes transitions, the editor detects these and splits the clip into separate segments, creating a new segment following each transition, and builds an accompanying thumbnail image for each created segment. For each segment, the editor prompts the user to supply one or more segment tags. After each segment has been uploaded, the user can review the segment and create additional deep tags defining specific points-in-time within the segment.
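The segmentation behavior above can be sketched as a split at each detected transition, taking the first non-blank frame of each resulting segment as its thumbnail. The frame representation (`""` for a blank frame, `"T"` for a transition) is an assumption made purely for illustration:

```python
# Sketch of direct-upload segmentation: split a clip at each transition
# and pick the first non-blank frame of each segment as its thumbnail.
# Frame encoding ("" = blank, "T" = transition) is illustrative only.

def split_segments(frames):
    """Return (segment_frames, thumbnail) pairs, splitting at transitions."""
    segments, current = [], []
    for frame in frames:
        if frame == "T":                 # transition: close current segment
            if current:
                segments.append(current)
            current = []
        else:
            current.append(frame)
    if current:
        segments.append(current)
    # thumbnail: first non-blank frame detected in the segment's stream
    return [(seg, next((f for f in seg if f), None)) for seg in segments]
```

A clip with no transitions yields a single segment, matching the editor's default of one segment (and one thumbnail) per uploaded clip.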
  • External content is provided for selection by tag at step 1706.
  • the user is also provided with the ability to add transitions, special effects, as well as music or voice overlays at steps 1708 and 1710 before saving the edited work as a new production at step 1712.
  • the drag-and-drop interface provides an extremely simple method of video editing, and is designed to enable the average Internet user to easily edit his or her video material.
  • the process of video editing is thus greatly simplified, by providing a single Internet-hosted source that automatically manages the processes of uploading, storing, organizing, editing, and subsequently sharing video material.
  • the video-editing process is further simplified through the mechanism of hyper-templates, which allow users to reuse video-production processes and methods that they previously created, or that other users have created, or that the system supplies.
  • because any new video production will have been constructed from separately defined segments, on completion it will inherently include segment tags for every separate clip included in the production, as well as for every scene transition.
  • the new production will exist as a separate file, but the system also retains separate files for all of the segments from which it is constructed.
  • the segments can be rearranged in any manner, or combined in a variety of ways with other tagged segments, to create new productions with tags.
  • a further extension of the tagging concept is embodied in the ability to tag external content, such as photos, music or other external video material, and to include the tagged external content into a video production.
  • a video production can include a mixture of video segments and photos from multiple sources, plus a music overlay, and all segments, photos and music start points will be automatically tagged within the production.
  • the system may also automatically tag all digital content that it has aggregated on behalf of the user. Where a file name or title is supplied with a piece of aggregated material, this may be used as the tag. Where no file name or title is supplied, the system may create a tag in the form of: "Photo mm/dd/yy nnn", “Audio mm/dd/yy nnn”, “Music mm/dd/yy nnn ", "Video mm/dd/yy nnn” or “Animation mm/dd/yy nn", for example, where “mm/dd/yy” is the date when the spidering occurred, and “nnn” is a sequential number representing the sequence in which the piece of material was aggregated by the system on the date specified.
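The automatic tag format above ("Photo mm/dd/yy nnn" and so on) can be sketched directly; the function name and parameters are assumptions for illustration:

```python
# Sketch of automatic tagging for aggregated material: use the supplied
# title when one exists, otherwise build a default tag from the media
# kind, the spidering date, and the aggregation sequence number.

import datetime

def auto_tag(kind, spidered, seq, title=None):
    """Build a tag for a piece of aggregated material."""
    if title:
        return title                           # supplied name wins
    return f"{kind} {spidered:%m/%d/%y} {seq:03d}"

auto_tag("Photo", datetime.date(2006, 1, 5), 7)   # "Photo 01/05/06 007"
```

The sequence number is zero-padded to three digits on the assumption that "nnn" in the text denotes a fixed-width counter; users can later rename any such default tag.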
  • the user can change any of the automatically aggregated material tags to a more meaningful tag name.
  • users can create entire video productions by aggregating together a set of tagged segments or sections of video from any source available within the system, including tagged material from external sources. It thus becomes extremely easy for users to create new video productions from existing material from multiple sources, without the need to introduce their own new material. Any such aggregated production will exist as a separate file, but the system also retains separate files for all of the aggregated segments from which it is constructed.
  • a further extension of the tagging concept relates to the concept of hypervideo links.
  • a hypervideo link makes its presence known by a visible change in the appearance of an area of the screen, or in the appearance of a watermark which is always present on the video. By clicking on the changed area or watermark, the viewer is given the option of switching out of the current video sequence and following one or more hypervideo links that may lead to another video, or to any tagged segment or section of a video, or to an internet webpage, or into an email message.
  • the online video editor prompts the user to supply one or more tags to be associated with the link.
  • Hypervideo tags then become another form of segment tag, which viewers can subsequently search on, just as they can search on any other form of tag.
  • searching on a hypervideo tag a viewer can gain access to any Internet- connected media source that has been referenced by a video creator.
  • a video creator can also reuse a hypervideo link and include it in a new production, either by reusing it as a non-linear hypervideo link, or by retrieving the linked-to material and including it as one or more inline video segments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)
EP07701208A 2006-01-05 2007-01-05 System and methods for storing, editing, and sharing digital video Withdrawn EP1969447A2 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US75639506P 2006-01-05 2006-01-05
US75639706P 2006-01-05 2006-01-05
US75633206P 2006-01-05 2006-01-05
US75639306P 2006-01-05 2006-01-05
US75639806P 2006-01-05 2006-01-05
US75632806P 2006-01-05 2006-01-05
US75639606P 2006-01-05 2006-01-05
PCT/US2007/060175 WO2007082167A2 (en) 2006-01-05 2007-01-05 System and methods for storing, editing, and sharing digital video

Publications (1)

Publication Number Publication Date
EP1969447A2 true EP1969447A2 (en) 2008-09-17

Family

ID=38257086

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07701208A Withdrawn EP1969447A2 (en) 2006-01-05 2007-01-05 System and methods for storing, editing, and sharing digital video

Country Status (3)

Country Link
EP (1) EP1969447A2 (ja)
JP (1) JP2009527135A (ja)
WO (1) WO2007082167A2 (ja)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074400B2 (en) 2013-06-05 2018-09-11 Snakt, Inc. Methods and systems for creating, combining, and sharing time-constrained videos

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8639714B2 (en) * 2007-08-29 2014-01-28 Yahoo! Inc. Integrating sponsored media with user-generated content
WO2009046324A2 (en) * 2007-10-05 2009-04-09 Flickbitz Corporation Online search, storage, manipulation, and delivery of video content
US7953796B2 (en) 2008-04-02 2011-05-31 Microsoft Corporation Sharing content using selection and proposal
US8346540B2 (en) * 2008-06-03 2013-01-01 International Business Machines Corporation Deep tag cloud associated with streaming media
US8171411B1 (en) 2008-08-18 2012-05-01 National CineMedia LLC System and method for delivering content in a movie trailer
US8640097B2 (en) * 2009-03-16 2014-01-28 Microsoft Corporation Hosted application platform with extensible media format
US9037986B2 (en) 2009-03-20 2015-05-19 Lara M. Sosnosky Online virtual safe deposit box user experience
US8737825B2 (en) 2009-09-10 2014-05-27 Apple Inc. Video format for digital video recorder
US8554061B2 (en) 2009-09-10 2013-10-08 Apple Inc. Video format for digital video recorder
US8583725B2 (en) 2010-04-05 2013-11-12 Microsoft Corporation Social context for inter-media objects
CN101860573A (zh) * 2010-06-25 2010-10-13 宇龙计算机通信科技(深圳)有限公司 一种更新互联网信息的方法、系统及移动终端
JP5361831B2 (ja) * 2010-09-09 2013-12-04 株式会社東芝 ビデオサーバ、管理情報キャッシュ方法及び管理情報キャッシュプログラム
JP5740128B2 (ja) * 2010-10-01 2015-06-24 株式会社東芝 チャプタ設定制御装置及びチャプタ設定制御装置によるチャプタ設定制御方法
JP5707080B2 (ja) * 2010-10-01 2015-04-22 株式会社東芝 携帯端末及び携帯端末によるタグ位置制御方法
JP4681685B1 (ja) * 2010-11-25 2011-05-11 株式会社イマジカ・ロボットホールディングス 映像編集システムおよび映像編集方法
CN102164181A (zh) * 2011-04-08 2011-08-24 传聚互动(北京)科技有限公司 基于视频播放平台的微博发布工具
US8886009B2 (en) 2011-04-26 2014-11-11 Sony Corporation Creation of video bookmarks via scripted interactivity in advanced digital television
US20130083210A1 (en) * 2011-09-30 2013-04-04 Successfactors, Inc. Screen and webcam video capture techniques
JP2013141064A (ja) * 2011-12-28 2013-07-18 Jvc Kenwood Corp 撮像装置、及び制御方法
WO2013116163A1 (en) * 2012-01-26 2013-08-08 Zaletel Michael Edward Method of creating a media composition and apparatus therefore
US9514785B2 (en) * 2012-09-07 2016-12-06 Google Inc. Providing content item manipulation actions on an upload web page of the content item
US9497276B2 (en) 2012-10-17 2016-11-15 Google Inc. Trackable sharing of on-line video content
US9570108B2 (en) 2012-11-02 2017-02-14 Apple Inc. Mapping pixels to underlying assets in computer graphics
EP2965231A1 (en) * 2013-03-08 2016-01-13 Thomson Licensing Method and apparatus for automatic video segmentation
US20160004395A1 (en) * 2013-03-08 2016-01-07 Thomson Licensing Method and apparatus for using a list driven selection process to improve video and media time based editing
CN104168508A (zh) * 2013-05-16 2014-11-26 上海斐讯数据通信技术有限公司 移动电视节目内容处理方法、移动终端及移动电视系统
US10915868B2 (en) 2013-06-17 2021-02-09 Microsoft Technology Licensing, Llc Displaying life events while navigating a calendar
US20150199994A1 (en) * 2014-01-10 2015-07-16 Sony Corporation Systems and Methods of Segmenting a Video Recording Into Different Viewing Segments
CN106537374A (zh) * 2014-05-15 2017-03-22 全球内容极点有限公司 用于管理关于电影和/或娱乐行业的媒体内容的系统
WO2017083418A1 (en) * 2015-11-09 2017-05-18 Nexvidea Inc. Methods and systems for recording, producing and transmitting video and audio content
US11087445B2 (en) 2015-12-03 2021-08-10 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
US10607328B2 (en) 2015-12-03 2020-03-31 Quasar Blu, LLC Systems and methods for three-dimensional environmental modeling of a particular location such as a commercial or residential property
US9965837B1 (en) 2015-12-03 2018-05-08 Quasar Blu, LLC Systems and methods for three dimensional environmental modeling
CN105872635A (zh) * 2015-12-16 2016-08-17 乐视云计算有限公司 视频资源分发的方法和装置
KR102462880B1 (ko) * 2018-08-30 2022-11-03 삼성전자 주식회사 디스플레이장치, 그 제어방법 및 기록매체
CN109769141B (zh) * 2019-01-31 2020-07-14 北京字节跳动网络技术有限公司 一种视频生成方法、装置、电子设备及存储介质
EP3948502A4 (en) * 2019-04-01 2022-12-28 Blackmagic Design Pty Ltd MULTIMEDIA MANAGEMENT SYSTEM
US12057141B2 (en) 2019-08-02 2024-08-06 Blackmagic Design Pty Ltd Video editing system, method and user interface
US11721365B2 (en) 2020-11-09 2023-08-08 Blackmagic Design Pty Ltd Video editing or media management system
CN113038234B (zh) * 2021-03-15 2023-07-21 北京字跳网络技术有限公司 视频的处理方法、装置、电子设备和存储介质
CN115580749A (zh) * 2021-06-17 2023-01-06 北京字跳网络技术有限公司 展示方法、装置及可读存储介质
CN116095412B (zh) * 2022-05-30 2023-11-14 荣耀终端有限公司 视频处理方法及电子设备
CN117749959A (zh) * 2022-09-14 2024-03-22 北京字跳网络技术有限公司 一种视频编辑方法、装置、设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404316A (en) * 1992-08-03 1995-04-04 Spectra Group Ltd., Inc. Desktop digital video processing system
US20030093790A1 (en) * 2000-03-28 2003-05-15 Logan James D. Audio and video program recording, editing and playback systems using metadata
US6357042B2 (en) * 1998-09-16 2002-03-12 Anand Srinivasan Method and apparatus for multiplexing separately-authored metadata for insertion into a video data stream
US6515687B1 (en) * 2000-05-25 2003-02-04 International Business Machines Corporation Virtual joystick graphical user interface control with one and two dimensional operation
US20040181545A1 (en) * 2003-03-10 2004-09-16 Yining Deng Generating and rendering annotated video files
US7349923B2 (en) * 2003-04-28 2008-03-25 Sony Corporation Support applications for rich media publishing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2007082167A3 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074400B2 (en) 2013-06-05 2018-09-11 Snakt, Inc. Methods and systems for creating, combining, and sharing time-constrained videos
US10706888B2 (en) 2013-06-05 2020-07-07 Snakt, Inc. Methods and systems for creating, combining, and sharing time-constrained videos

Also Published As

Publication number Publication date
WO2007082167A3 (en) 2008-04-17
WO2007082167A2 (en) 2007-07-19
JP2009527135A (ja) 2009-07-23

Similar Documents

Publication Publication Date Title
US11626141B2 (en) Method, system and computer program product for distributed video editing
WO2007082167A2 (en) System and methods for storing, editing, and sharing digital video
CA2600207C (en) Method and system for providing distributed editing and storage of digital media over a network
US20100169786A1 (en) system, method, and apparatus for visual browsing, deep tagging, and synchronized commenting
CN101390032A (zh) 用于存储、编辑和共享数字视频的系统和方法
US20100274820A1 (en) System and method for autogeneration of long term media data from networked time-based media
WO2007082166A2 (en) System and methods for distributed edit processing in an online video editing system
WO2007082169A2 (en) Automatic aggregation of content for use in an online video editing system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080724

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20100420