US20080137756A1 - System and method for capturing, editing, searching, and delivering multi-media content - Google Patents
- Publication number
- US20080137756A1 (application US11/634,441)
- Authority
- US
- United States
- Prior art keywords
- acquisition
- streams
- stream
- timeline
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/756—Media network packet handling adapting media to device capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8455—Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
Definitions
- the present invention is directed generally to video capture and editing systems.
- U.S. patent publication no. 2002/0018124 A1 entitled Methods and Systems for Networked Camera Control discloses methods and systems for providing fluid, real-time camera control of at least one camera to at least one network user via a network including the Internet.
- a control pad or area can be provided to camera users via an application or applet that can be calibrated to provide fluid camera control.
- Compressed or uncompressed analog, digital, or streaming video and audio data can also be provided to the users to allow real-time low latency continuous audio/visual feedback.
- Multiple camera users can obtain control of a camera using a dynamic queuing technique that can allow single user camera control for certain time intervals.
- An administrator can establish user camera control parameters including camera control intervals for subscriber users versus non-subscriber users, camera usurping by an administrator, elimination of camera control privileges for a user, and denial of camera control requests by a user.
- U.S. Pat. No. 6,785,013 entitled System for Capturing Images From a Peripheral Unit and Transferring the Captured Images to an Image Management Server discloses an image data storing system, and more particularly a video capture controller to capture raw video image data from a peripheral unit and to provide compressed video image data to a document image management server.
- the video capture controller includes a control processor, a first memory, and a second memory.
- the first memory stores raw video image data from the peripheral unit under control of the control processor.
- the raw video image data stored in the first memory is then converted into compressed video image data, and is then stored in the second memory, again under control of the control processor.
- the compressed video image data from the second memory is transferred to the image management server.
- a third memory may further be provided between the peripheral unit and the first memory.
- This third memory can be a page memory which performs a direct memory access of the raw video image data output from the peripheral unit into the page memory.
- the raw video image data may be initially output to the page memory when the page memory is set to a maximum page size. Then, the page size setting of the third memory can be changed based on subsequently received page size data.
- U.S. publication no. 2004/0240541 entitled Method and System for Direct Ingest and Storage of Digital Video Content With Immediate Access to Content for Browsing and Editing discloses a video encoder system and method for receiving uncompressed streaming video and outputting a continuous compressed video stream.
- the system uses a video encoder to compress the input video stream, and a formatter and indexer to receive the compressed video stream, apply indexing metadata and formatting metadata to the video stream, and output a formatted video stream that is capable of storage and access.
- U.S. publication no. 2005/0246725 entitled Method and System for Sharing Video Over a Network enables a user to create a video segment or employ an existing video segment, and then share it over a computer network.
- the user provides an indication that one or more particular video segments are to be shared over the network.
- the video segment(s) is/are then automatically assessed and determined to be compatible with streaming video, or not. If the video segment(s) is/are not compatible with streaming video, it/they are converted to a compatible format automatically.
- An identifier for the video segment is automatically created and the segment and the identifier are automatically uploaded to a host computer over the network such as the Internet.
- the video segment and the identifier can be stored at the direction of the host computer.
- a viewer can be sent an identifier of the video, and can request that the video be served as a streaming video to the viewer's computer.
- the viewer can be sent a location of the video such as a URL, can be served the video as an embedded portion of a Web page, or can be served the video as a consequence of being sent a link in an e-mail or as an e-mail greeting card.
- a video region for displaying a video of a presenter giving a presentation; a primary slide region for displaying slides used by the presenter during the presentation;
- a thumbnail region containing thumbnails representing slides in the presentation, the thumbnails selectable by a user via a cursor control device
- U.S. Pat. No. 5,966,121 entitled Interactive Hypervideo Editing System and Interface discloses an apparatus and method for interfacing with a hypervideo multimedia application when composing and playing same.
- a novel hypervideo control and interface provides for either user-actuated or automatic transitioning between a plurality of video, graphics, textual, animation, and other types of multimedia files.
- a hypervideo control preferably transitions through distinct lifecycle phases and events as it is presented and removed from an interface display to visually convey to a user the present availability and impending removal of the control from the interface display, thereby providing an intuitive means for navigating a hypervideo application.
- a hypervideo editing system includes a word-processing system and a separate video playback system.
- An author of a hypervideo application preferably identifies a particular frame of video displayed by the video playback system and creates an ASCII-compliant mark video file that defines the type and functional characteristics of hypervideo controls, marks, and actions using the word-processing system.
- editing and playing of a hypervideo application is facilitated by a software-implemented hypervideo editing system that provides an intuitive graphical-user-interface (GUI) to facilitate rapid, real-time hypervideo application development, as well as playback capability.
- Object-oriented design principles are preferably employed to permit efficient linking and embedding of a hypervideo application or interface into an off-the-shelf software application or other parent application.
- the present invention has the capability to handle, in a scalable manner, a wide diversity of heterogeneous information streams which may be generated by separate (or the same) computer(s), or other capture devices, which may or may not be connected to a network.
- “handle” refers to efficient, scalable, multi-stream acquisition (including after-the-fact acquisition of new information streams) and remote network-based editing of the multiple streams.
- the system also supports management for access control and delivery.
- the system has an integrated approach to assimilation and management of metadata and support for content-based searching.
- the present invention is directed to apparatus and methods for operating the apparatus according to a session acquisition mode, an editing mode, and a playback or distribution mode.
- a global timeline sync signal is supplied to various capture devices.
- acquisition streams are automatically produced which capture an analog or digital input signal.
- Those acquisition streams produced by capture devices in sync with the global timeline (i.e., online devices) carry timestamps already aligned with that timeline.
- Those capture devices which are not in sync with the global timeline (i.e., offline devices) produce acquisition streams bearing only local time indicia.
- the system may include one or more online capture devices that come online and go offline during the global timeline, one or more online capture devices operating in conjunction with one or more offline capture devices, or a plurality of offline capture devices.
- the various acquisition streams are delivered, synchronously or asynchronously, to a server. Those streams having local time indicia are synchronized with the global timeline and the various acquisition streams are then stored. Low bit rate streams corresponding to the stored acquisition streams are generated for use in subsequent editing of the acquisition streams.
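- Synchronizing a local-time stream with the global timeline amounts to shifting that stream's timestamps by its start offset on the global timeline. The sketch below is illustrative only; the event representation and function name are assumptions, not the patent's implementation:

```python
def align_to_global(local_events, stream_start_global):
    # Shift events stamped relative to a stream's own start onto the
    # shared global timeline by adding the stream's (known or manually
    # estimated) global start time.
    return [(t + stream_start_global, payload) for t, payload in local_events]

# Hypothetical example: an offline camera started 12.5 s into the
# session, so its local timestamps are offset by 12.5 s.
local = [(0.0, "record-start"), (3.25, "slide-change")]
aligned = align_to_global(local, 12.5)  # [(12.5, ...), (15.75, ...)]
```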
- the editing of the acquisition streams entails reviewing the various acquisition streams and selecting portions for replay.
- the editing may include “after the fact videography” in which one or more portions from a fixed view are selected for replay.
- the portions could include, for example, two rectangles of varying size.
- the portions selected for replay are identified through metadata which is then stored.
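- Such replay-selection metadata might be represented, for a fixed-view stream, as a list of time intervals each paired with a sub-rectangle of the frame. This is a hedged sketch; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ReplayRegion:
    # One "after the fact videography" selection: a time interval of the
    # fixed-view stream plus the sub-rectangle of the frame to replay.
    t_start: float
    t_end: float
    x: int
    y: int
    width: int
    height: int

# Two rectangles of varying size selected from one fixed camera view.
regions = [
    ReplayRegion(0.0, 30.0, 0, 0, 640, 480),       # wide view of the board
    ReplayRegion(30.0, 45.0, 200, 120, 320, 240),  # zoom on one diagram
]
```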
- the replay or distribution mode begins by editing the global timeline using the metadata generated in the editing mode so as to produce an edited timeline.
- the various acquisition streams are then rendered or played back according to the edited timeline to produce a distribution stream.
- the distribution stream may be provided to various users via the acquisition server or a separate distribution server.
- FIG. 1 is a block diagram of a system configuration which may be used to implement a session acquisition mode according to various embodiments of the invention disclosed herein;
- FIG. 2 illustrates how a wide diversity of heterogeneous acquisition streams which may be generated by separate (or the same) computer(s), or other capture devices, which may or may not be connected to a network, are integrated into a global timeline which defines a single session stream;
- FIG. 3 is a block diagram of a system configuration which may be used to implement an edit mode according to various embodiments of the invention disclosed herein;
- FIGS. 4 and 5 are screenshots illustrating operation of an editor disclosed herein;
- FIG. 6 is a block diagram of a system configuration which may be used to implement a playback or distribution mode according to various embodiments of the invention disclosed herein;
- FIGS. 7-10 are screenshots illustrating a user's experience accessing a distribution stream during the playback or distribution mode
- FIG. 11 illustrates a function of sub-rectangle selection which may be performed by the editor according to one embodiment of the present invention
- FIG. 12 illustrates one example of the steps in a session acquisition mode
- FIG. 13 illustrates one example of the steps in an edit mode
- FIG. 14 illustrates one example of the steps in a playback or distribution mode.
- FIG. 1 is a block diagram of a system 10 which may be used to implement various embodiments of the present invention.
- the system 10 is exemplary.
- the direction of information flow as illustrated by the arrows in FIG. 1 illustrates the system's operation during a session acquisition mode.
- the system 10 may be comprised of a server 12 , which provides a focal point for the reception of various acquisition streams (discussed below) through, for example, the Internet or other type of distribution/communication network 14 .
- the server 12 is capable of providing a synchronization (sync) signal 16 to various capture devices (discussed below) through the network 14 .
- a first type of capture device is a lecturer's personal or desktop computer (PC) 18 which carries a PowerPoint presentation.
- the PC 18 may receive the sync signal 16 .
- the PC 18 produces an acquisition stream 20 which in this case is a set of PowerPoint files along with time stamps associated with file/slide transitions (see FIG. 2A ). If the PC 18 received the sync signal 16 , then the time stamps will be in sync with a global timeline (see FIG. 2H ); if the PC did not receive the sync signal 16 (e.g. the network connection was terminated), then the acquisition stream 20 will have local time stamps relative to the start which will have to be aligned (as discussed below) with the global timeline.
- a video computer 22 operating in conjunction with a first camera 24 forms a second type of capture device.
- the camera 24 may be operated by a videographer and may be used to follow the lecturer although in other embodiments, discussed below, a real time videographer is not needed.
- the video computer 22 operating in conjunction with a second camera 26 forms another example of the second type of capture device. In the case of camera 26 , that camera may be fixed on a chalk board (not shown) or other type of display that may vary slowly over time.
- the video computer 22 operating in conjunction with a microphone 28 forms a third type of video capture device.
- the video computer 22 produces three acquisition streams, an uncompressed video stream 30 (used to produce a distribution session as discussed below) having timestamps shown in FIG.
- each of the acquisition streams 30 , 32 , 34 will have timestamps that will be in sync with the global timeline; if the computer 22 did not receive the sync signal 16 , then the acquisition streams 30 , 32 , 34 will have local time stamps which will have to be aligned (as discussed below) with the global timeline.
- Another type of capture device is an electronic board or tablet PC 35 of the type that can sense what has been written on the board or PC and output the sensed material as a “presenter's ink” acquisition stream 36 . If the electronic board/PC 35 received the sync signal 16 , then the acquisition stream 36 will have timestamps (see FIG. 2E ) that are in sync with the global timeline; if the electronic board/PC 35 did not receive the sync signal 16 , then the acquisition stream 36 will have local timestamps which will have to be aligned with the global timeline.
- Personal computers 38 a, 38 b . . . 38 n represent yet another type of capture device.
- the personal computers 38 a, 38 b . . . 38 n may be used by students or others listening to the presentation.
- Each of the computers 38 a, 38 b . . . 38 n may produce an acquisition stream 40 a, 40 b . . . 40 n, respectively, that is comprised of notes (annotations) and timestamps shown in FIGS. 2F, 2G and 2I, respectively.
- the acquisition streams 40 a, 40 b . . . 40 n (or any of the other acquisition streams) may be marked “public” or “private” to control access to the content in that acquisition stream.
- the timestamps will be in sync with the global timeline; if the computers 38 a, 38 b . . . 38 n did not receive the sync signal, then the timestamps will be local and will have to be aligned with the global timeline. It is anticipated, as with the other capture devices, that certain of the computers 38 a, 38 b . . . 38 n may be in sync with the global timeline and on during the entire presentation ( FIG. 2F ), certain of the computers 38 a, 38 b . . .
- This type of capture (e.g. student notes) may also be performed after the video capture is complete, for example, while a person is viewing a stored version of the presentation.
- the various acquisition streams 20 , 30 , 32 , 34 , 36 , 40 a, 40 b . . . 40 n are delivered via the network 14 to the server 12 .
- the uncompressed video acquisition stream 30 is input to a storage device 42 . If the storage device 42 is separately addressable via the network 14 , the uncompressed video acquisition stream 30 could be delivered directly to the storage device 42 .
- an editing server is provided by, for example, a computer 44 .
- the editing server 44 receives a low bit rate copy of the various acquisition streams 20 , 30 , 32 , 34 , 36 , 40 a, 40 b . . . 40 n.
- the function of the editing server is performed by the server 12 and the computer 44 is used to access the editing function.
- the acquisition process results in a single composite shared timeline seen in FIG. 2H .
- the acquisition can be synchronous, where all acquisition is occurring simultaneously ( FIGS. 2A-2G ), or asynchronous, where some acquisition streams are integrated into the global timeline after the original activity is completed ( FIG. 2I ).
- multiple-stream content acquisition is enabled by using multiple capture devices, each one capturing one or more types of content.
- content streams include but are not limited to, high-motion video, low-motion/high-resolution video, screen capture, slideshow viewer (e.g. PowerPoint or PDF) slide changes, and audio.
- the content acquisition mode has two modes of operation: network-connected (online) and network-independent (offline). The final product is identical for each mode of operation, but the method of synchronizing the various acquisition streams differs between the two modes.
- In the “online” mode, the capture devices begin individually capturing content in response to either an operator-generated “begin acquisition” command or the system-generated sync signal 16 .
- the server 12 logs the beginning and end of each acquisition interval, as well as any time-stamped events generated during the capture interval, against the global timeline ( FIG. 2H ) maintained by the server 12 . Once all the capture devices inform the server 12 that each has finished acquiring its particular content stream, the acquisition session is complete. Capture devices may upload synchronously as data is acquired or asynchronously, by buffering data on the capture device prior to sending the data to the server 12 .
- In the “offline” mode, the capture devices begin individually capturing content in response to the user-generated “begin acquisition” command. This case differs from the “online” mode primarily in that none of the capture devices can be assumed to be able to interact with the server 12 , and that any content acquired by such capture devices is done on a local timeline which must be aligned with the global timeline through an external process.
- An example of this process might be human manipulation of a particular acquisition stream's start time relative to the global timeline, thus synchronizing that stream with the other streams sharing the global timeline. Whether that manipulation is considered part of the acquisition process or part of the editing process is not of significance.
- the outcome of this process should be an interval of multi-stream content in which the same event (for example, a lecturer changing the current slide) happens simultaneously in all the acquisition streams.
- a low-fidelity, low-bit-rate mirror copy of each acquisition stream is conveyed to and stored on the editing server 44 .
- In the online mode, that transfer happens at acquisition time.
- In the offline mode, it happens when a network connection is made available to the capture device storing a previously acquired content stream, and the low-fidelity mirror copy of that content stream is transferred to the editing server 44 .
- certain of the capture devices may be PCs. These PCs can be ordinary commodity laptop or desktop computers. The clocks on the computers do not necessarily need to be synchronized.
- content is captured and may be stored locally on the capture device and sent to the server 12 , possibly asynchronously (as noted above).
- if the connection to the server 12 is broken due to a network failure, server failure, or localized failure on the capture device, no content data is lost because the content data is also buffered on the capture device (see, for example, buffer 46 on the electronic board/PC 35 ) and can be uploaded (as noted above) after the fact.
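- The buffer-first, upload-later behavior described above can be sketched as follows. This is a minimal in-memory stand-in; the class and its lists are illustrative assumptions, not the patent's implementation:

```python
class BufferedCapture:
    # Content is always appended to a local buffer first and only
    # flushed to the server when a connection exists, so a network or
    # server failure loses no data.
    def __init__(self):
        self.buffer = []    # stands in for on-device storage (e.g. buffer 46)
        self.uploaded = []  # stands in for data received by the server

    def capture(self, chunk):
        self.buffer.append(chunk)

    def flush(self, online):
        if not online:
            return 0  # connection down: keep buffering, retry later
        sent = len(self.buffer)
        self.uploaded.extend(self.buffer)
        self.buffer.clear()
        return sent

# Capture continues through an outage; everything uploads afterwards.
device = BufferedCapture()
device.capture("frame-1")
device.flush(online=False)  # nothing lost, nothing sent
device.capture("frame-2")
device.flush(online=True)   # both chunks reach the server
```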
- the original, archival-quality versions of each stream are transferred to the editing server 44 and stored there.
- the acquisition streams encompassed by this disclosure include both media streams and event streams.
- a given set of streams can include zero or more representatives of each kind of stream.
- an acquisition session could include two video media streams and two PowerPoint event streams.
- Media streams include, but are not limited to the following:
- Video at any resolution (including low resolution such as 320×240 and high resolution such as 1920×1080 HDTV) and at any frame rate (such as high-speed 60 Hz, slow speed, and time-lapse).
- Video can be captured either in archival digital video (DV) form or in a streaming format.
- Audio either as an integral part of a video source stream or as a separate stream.
- Event streams include, but are not limited to the following:
- Microsoft PowerPoint presentations represented as a pair of a set of PowerPoint files and a sequence of timestamps for transition points in the presentation (builds or slide changes).
- a PowerPoint presentation stream can include multiple PowerPoint files and navigation among the slides in them, including switching from one file to another;
- ink event streams from a tablet computer or other source. This could include, for example, ink from one or more presenters or from other sources, such as viewer annotations or from a software application that generates ink;
- Text can be captured in relatively small segments each of which receives a timestamp when it is captured.
- the representation is thus a set of pairs of text segments and timestamps.
- Text segments may also include formatting information such as outline indentation, type faces, fonts, highlighting, and so on.
- One way to capture text is from a text-capture client, which presents a human user experience similar to an instant messaging client. Text may also be captured using conventional text editing tools such as Microsoft Word that are instrumented to acquire timestamp information or by software development tools (“IDEs”) such as Eclipse that are instrumented to identify visible lines of software code;
- tag events which associate a particular meaning with a timestamp in the capture session.
- a presenter could create a tag event (e.g., by pushing a special button on a lectern console or a function key on a computer) to identify for later use (say, in editing) a time position in the overall capture session. This could signal a change of topic, or the start of a break, or some other information that can assist a person editing the streams at a later time. It could also itself be incorporated automatically by the rendering tool in a rendering as a switch, for example, from one stream to another in a presentation tool.
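- As a concrete illustration of one such event stream, a PowerPoint stream could be modeled as the pair described above: a set of slide files plus time-stamped transition events, from which the slide visible at any global time can be recovered. The class and method names below are assumptions for illustration, not the patent's representation:

```python
from dataclasses import dataclass, field

@dataclass
class SlideEvent:
    timestamp: float  # seconds on the global timeline
    file: str         # which PowerPoint file is showing
    slide: int        # slide (or build step) shown from that instant

@dataclass
class PowerPointStream:
    # The pair described above: a set of PowerPoint files plus a
    # sequence of time-stamped transitions, including file switches.
    files: list
    events: list = field(default_factory=list)

    def slide_at(self, t):
        # Return the (file, slide) visible at global time t,
        # assuming events are sorted by timestamp.
        current = None
        for e in self.events:
            if e.timestamp <= t:
                current = (e.file, e.slide)
        return current

stream = PowerPointStream(files=["intro.ppt", "demo.ppt"])
stream.events = [
    SlideEvent(0.0, "intro.ppt", 1),
    SlideEvent(60.0, "intro.ppt", 2),
    SlideEvent(90.0, "demo.ppt", 1),  # switching from one file to another
]
```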
- FIG. 3 illustrates a portion of the network shown in FIG. 1 which may be used to implement an edit mode or perform an editing process on the acquired content.
- the arrow in FIG. 3 shows the flow of metadata 50 from the computer 44 having the editor to the server 12 . It is assumed that all of the acquisition streams are aligned with the global timeline and that the low resolution versions of all of the acquisition streams have already been stored in computer 44 .
- the goal of the editing process is to produce modifications of the absolute timeline, which are stored by the server 12 as metadata 50 , for use in asynchronously generating a representation of, or derivative of, the acquired multi-stream content for delivery to end users.
- the editor can simultaneously edit the multiple streams to generate the metadata 50 that describes a resulting distribution stream that can include shifts of focus from one data stream to another, realignment of segments of the original timeline, etc.
- the user interacts with an editing client application, running on, for example, computer 44 .
- in an embodiment where the editor resides on server 12 , the computer 44 would need a moderate-bandwidth network connection so as to access the low bit-rate copies stored on server 12 .
- Visualization windows, examples of which are seen in FIGS. 4 and 5 , are shown within the editor for the low bit-rate mirror copies of each acquired stream in a given multi-stream acquisition session.
- the user can play, pause, seek, or scan all streams simultaneously by interacting, via the client application/editor, with the absolute timeline for the acquisition session.
- the user may choose to do several of the following operations (not a comprehensive list): eliminate portions of the absolute timeline altogether; alter the order of segments of the absolute timeline; merge and alter the timelines of two or more multi-stream acquisition sessions; and determine which subset of the multiple acquired streams will be visible to the end user in the end product (distribution stream).
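- The first of those operations, eliminating portions of the absolute timeline, can be sketched as computing the segments that survive between the cuts (reordering and merging sessions would extend the same segment-list structure). The function below is an illustrative assumption:

```python
def apply_cuts(duration, cut_intervals):
    # Return the edited timeline: the portions of the absolute timeline
    # that survive the cuts, as (start, end) segments in order.
    kept, cursor = [], 0.0
    for start, end in sorted(cut_intervals):
        if start > cursor:
            kept.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        kept.append((cursor, duration))
    return kept

# Cut 90 s of set-up and a five-minute break from a one-hour session.
segments = apply_cuts(3600.0, [(0.0, 90.0), (1800.0, 2100.0)])
# segments == [(90.0, 1800.0), (2100.0, 3600.0)]
```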
- the editor is designed to be used by videographers as well as presenters.
- the editor is fast. There is no waiting during editing, no waiting for a review, and no waiting for a save. Segmentation is done with precise cuts. Slide boundaries and identified events can be quickly selected. Fine tuning is done very rapidly. The “seek time” is effectively zero.
- the audio histogram can be used to define cuts on sentence boundaries.
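- One plausible way to derive such cut points is to scan the audio for quiet windows, taking low-amplitude stretches as candidate sentence boundaries. The window length and threshold below are illustrative assumptions:

```python
def silence_boundaries(samples, rate, window=0.25, threshold=0.05):
    # samples: mono audio as floats in [-1, 1]; rate: samples per second.
    # Returns start times of fixed-size windows whose mean absolute
    # amplitude falls below the threshold (i.e., candidate cut points).
    step = int(rate * window)
    cuts = []
    for i in range(0, len(samples) - step + 1, step):
        frame = samples[i:i + step]
        if sum(abs(s) for s in frame) / step < threshold:
            cuts.append(i / rate)
    return cuts
```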
- Typical use cases for the editor are as follows.
- the videographer will want to eliminate start-up and tear-down. That means the videographer can turn on the camera whenever they want and edit out the set-up and tear-down later.
- the instructor/presenter may want to break long lectures into shorter topic-focused segments.
- the instructor/presenter may want to edit out segments such as bad jokes, breaks, or off-topic digressions.
- the instructor/presenter may want to edit out short clips for frequently asked questions on homework assignments or other topics.
- the edited out clips could be posted separately. Multiple edit results (deliveries) may be created from the same global timeline.
- the server 12 acts as both the acquisition and the distribution server, although a separate server (not shown) could be used as the distribution server.
- the playback or distribution mode is the overall activity of taking the captured acquisition streams and presenting them in accordance with the edited global timeline.
- the distribution process takes as input one or more source media streams (the stream input process) from storage device 42 , and the edited global timeline (the edit input process), and produces as an output one or more distribution streams 52 aligned according to the edited global timeline.
- the rendered session is a self-contained object encapsulating binary data for each distribution stream that (ideally but not exclusively) contains only the segments (for example, video frames) of the source streams specified in the edited global timeline. In the preferred embodiment, it is anticipated that new distribution streams will be created.
- a compressed video output stream might be created from the archival-quality source stream by compressing only the source video frames that fall within the segments specified in the edited timeline. That output stream may be redistributed along with other components of the rendered session, independently of the source video stream from which it is generated. In the case of all other media streams (audio, etc), only the stream data corresponding to the segments specified in the edited timeline would be present in the distribution stream.
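The frame-selection step described above might be sketched as follows: only source frames whose timestamps fall inside a segment of the edited timeline are kept, and each surviving frame is restamped onto the contiguous distribution timeline. Names here are illustrative assumptions:

```python
def render_frames(frames, segments):
    """frames: list of (timestamp, frame_data) from a source stream.
    segments: ordered (start, end) pairs from the edited timeline.
    Returns only the frames inside a segment, restamped so the
    distribution stream plays the segments back-to-back."""
    out, offset = [], 0.0
    for start, end in segments:
        for t, data in frames:
            if start <= t < end:
                out.append((offset + (t - start), data))
        offset += end - start        # next segment starts where this ended
    return out
```

The same filter-and-restamp logic applies to discrete event streams (slide changes, instant messages): only events whose timestamps fall inside a segment survive into the distribution stream.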
- event streams which are composed of a series of discrete time-stamped events corresponding to units of data (for example, a PowerPoint slide or a single instant message)
- only events whose timestamps fall within the segments specified in the edited timeline would be present in the distribution stream.
- the rendering or playback process always results in a timeline in which all streams are synchronized precisely as they were in the global timeline.
- the distribution stream may be played back as a continuous unit of media in which one or more source streams are displayed. It may be seeked (i.e., random access to time points in the stream), paused, fast-forwarded, rewound, etc. and played for its duration with no apparent breaks or discontinuities.
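Random access into the distribution stream implies a mapping from a requested playback time back to a time in the source streams. A sketch of that lookup over the edited timeline's segment list (illustrative names):

```python
import bisect

def seek_to_source(segments, playback_t):
    """Map a playback time in the distribution stream to the
    corresponding (segment_index, source_time) in the original streams.
    `segments` is the ordered (start, end) list of the edited timeline."""
    starts, acc = [], 0.0
    for s, e in segments:
        starts.append(acc)           # playback time at which segment begins
        acc += e - s
    if not 0 <= playback_t < acc:
        raise ValueError("seek outside stream duration")
    i = bisect.bisect_right(starts, playback_t) - 1
    return i, segments[i][0] + (playback_t - starts[i])
```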
- the distribution stream 52 may be distributed via network 14 using network streaming, physical media, etc. Viewing a distribution stream does not require that the content consumer have access to the original source streams.
- the distribution stream 52 is rendered to a viewer using a presentation tool.
- the presentation tool could be a commodity web browser with plug-ins to support media streams and other kinds of rendering particular to the kinds of source streams, edited forms of which are incorporated into the presentation.
- the presentation tool could also be a custom software application that runs on a standard desktop or laptop computer or on an appliance such as a television set.
- the exact layout, including dimensions, that is offered by the presentation tool to a person viewing the distribution stream is determined by a combination of the presentation tool design, the editor, and the configuration of the viewer.
- FIGS. 7-10 are screenshots illustrating a user's experience accessing a distribution stream during the playback or distribution mode.
- FIG. 7 illustrates an overall view of what the user, in this case a student, would see. The student has the ability to speed up or slow down the distribution stream.
- Index tools are available to allow access by slide (e.g., slide selector, slide sorter view), by bookmark (e.g., explicit bookmark in sequence, URL link), by time (e.g., time select in video stream), or by content (e.g., search PowerPoint slide text across one or more lectures).
- FIG. 8 illustrates the slide sorter view in detail. Selecting (clicking on) a slide immediately takes the user to that part of the presentation.
- FIG. 9 illustrates the ability to capture content on a chalkboard or other stationary live action with very high resolution and, in the case of a chalkboard or the like, possibly a slow framerate.
- FIG. 10 illustrates how users may index the images on the chalkboard which scroll across the bottom. Clicking on an image immediately takes the user to that part of the presentation.
- the camera 60 produces an acquisition stream 62 , including timestamps, which is an image of the entire field of view of interest (e.g. the entire front of a classroom) in high definition (e.g. 4096×2560, 1920×1080, etc.).
- the editing client user might choose to select a 320×240 pixel viewing window 64 of interest that is moved within the original video stream 62 to maintain focus on the head and torso of the human speaker as he walks around the “frame” of the original video stream 62 .
- when the distribution stream is produced during the rendering phase, it will appear to contain (in videographic terms) a “tight shot” of the speaker's head and torso.
- the editing client user may select, for example, a 1024×1024 pixel viewing window 66 from the original video stream 62 .
- the distribution stream produced during the rendering phase will appear to contain a tight shot of the region of interest.
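Done per frame, this after-the-fact cropping is a pixel-window copy whose origin tracks the region of interest over time. A sketch, assuming frames are represented as lists of pixel rows; the tracking data itself would come from the editing metadata:

```python
def crop_window(frame, x, y, w, h):
    """Extract a w-by-h viewing window at (x, y) from a frame given as a
    list of pixel rows. Only these pixels reach the distribution stream,
    producing the 'tight shot' effect."""
    return [row[x:x + w] for row in frame[y:y + h]]

def render_tight_shot(frames, window_track, w, h):
    """window_track gives, for each frame index, the (x, y) origin chosen
    by the editing client user (or by a tracking device)."""
    return [crop_window(f, *window_track[i], w, h)
            for i, f in enumerate(frames)]
```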
- One example of a possible human interface device for selecting rectangles 64 , 66 in real-time or in post production, e.g., faster than real-time, is a common console type of game controller.
- the game controller should have two analog thumb-sticks, where each analog thumb-stick corresponds to a rectangular region within the original video stream 62 , and two analog triggers, one each for controlling the zoom for one of the rectangles.
- superimposed on the display window of the original video stream 62 are the sub-rectangles whose dimensions are controlled by the analog triggers and whose positions are controlled by the analog thumb-sticks. In this manner, two or more possibly overlapping sub-selection rectangles can be created in the distribution (output) stream, each with its own pixel dimensionality and frame rate.
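Assuming a normalized controller API (stick axes in [-1, 1], triggers in [0, 1] — these ranges, and all names below, are assumptions for illustration), the mapping from one thumb-stick and one trigger to a sub-rectangle might look like:

```python
def controller_to_rect(stick_x, stick_y, trigger, src_w, src_h,
                       min_w=320, min_h=240):
    """Map one analog thumb-stick (position) and one analog trigger
    (zoom) to a sub-rectangle inside the original video frame."""
    # The trigger interpolates between the tightest window and full frame.
    w = round(min_w + (src_w - min_w) * (1.0 - trigger))
    h = round(min_h + (src_h - min_h) * (1.0 - trigger))
    # The stick moves the window centre; clamp so it stays inside the frame.
    cx = (stick_x + 1) / 2 * src_w
    cy = (stick_y + 1) / 2 * src_h
    x = min(max(0, round(cx - w / 2)), src_w - w)
    y = min(max(0, round(cy - h / 2)), src_h - h)
    return x, y, w, h
```

With two sticks and two triggers, the same mapping is applied twice to produce the two independent sub-rectangles.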
- the resulting distribution video stream can have pixel dimensions different from one point in time to the next resulting in the appearance of zoom or widening.
- the selection/positioning/sizing of the sub-rectangles in the distribution stream yields a metadata stream like any of the other editing functions disclosed herein.
- the metadata stream instructs the server 12 to transform the original global timeline to produce an edited timeline from which the distribution stream is created. That type of editing function can be integrated into the editor or done separately.
- the selection/positioning/sizing of any of the delivery streams can later be revised, creating new metadata and thus additional distribution streams. Additional benefits include the elimination of gimbal mounts, zoom lenses, and the need for a videographer in the presentation room.
- the system 10 disclosed herein has several advantages.
- the computer on which the editor runs need not store copies of any of the multiple video streams in an acquisition session, nor does it need to have direct network access to the high bit-rate original copies.
- This is different from traditional client-side editing systems, in which local copies of the original high bit-rate files must be present on the computer running the editing software for any selection or manipulation operations to take place.
- the editor can be run on multiple computers simultaneously, each one editing the original global timeline to produce a different result.
- the network connection between the editing client computer and the editing server can be of far lower bandwidth than would be required if this operation were performed using the original high-fidelity video streams.
- the low-fidelity mirror streams and the audio histogram can be generated and transferred to the editing server in real-time. This allows an editing client user to begin an editing operation immediately after the multi-stream acquisition session terminates, and before the high-fidelity original streams have even been transferred from the acquisition systems to the editing server.
- Edit inputs or selection guidance inputs are gathered for the rendering process at multiple times, including the time of capture and zero or more times after capture. All the inputs contribute to the edited timeline.
- Selection guidance can come from different kinds of sources, including: (1) use of the editing client application, (2) video selection human interface device used by videographer during the capture session, (3) video selection human interface device used after the capture session, and (4) automated video focus device such as a commercial presenter tracking device (“necklace”). Those kinds of editing functions can be integrated into the editor or done separately.
- Selection guidance can be modified at any time prior to the rendering process. For example, if an automated video focus device gives erroneous tracking information, the selection rectangle it produces can later be adjusted using any of the means mentioned above. Selection guidance, whether automatically captured or manually recorded, can be used to create an edited timeline.
- Metadata contained in the edited timeline can include rectangle selection within a given video stream, which may include (1) locating and sizing the rectangle, and (2) adjusting scale from source pixel dimensions (e.g., 640×480) to destination pixel dimensions (e.g., 320×240). Note: Where a selected rectangle is present, only the pixels specified by (1) the dimensions of the selected rectangle, and (2) its position within the source video, are written to the image buffer to be compressed. Metadata contained in the edited timeline can also include timeline adjustments for an individual source stream, which may include (1) time shifting and (2) time expansion or contraction.
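The two metadata operations named above can be sketched directly. The helper names are illustrative, not from the patent:

```python
def adjust_stream_time(timestamps, shift=0.0, scale=1.0):
    """Apply the per-stream timeline adjustments carried in the edited
    timeline metadata: a time shift, and a time expansion (scale > 1)
    or contraction (scale < 1)."""
    return [shift + t * scale for t in timestamps]

def scale_rect(x, y, w, h, src_dims, dst_dims):
    """Scale a selected rectangle from source pixel dimensions
    (e.g. 640x480) to destination dimensions (e.g. 320x240)."""
    fx = dst_dims[0] / src_dims[0]
    fy = dst_dims[1] / src_dims[1]
    return round(x * fx), round(y * fy), round(w * fx), round(h * fy)
```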
- FIG. 12 illustrates one example of the steps in a session acquisition mode for the system illustrated in FIG. 1
- FIG. 13 illustrates one example of the steps in an edit mode for the hardware of FIG. 3
- FIG. 14 illustrates one example of the steps in a playback or distribution mode for the hardware illustrated in FIG. 6
- the first step 70 in the method is to provide the global timeline sync signal 16 to the various capture devices.
- acquisition streams 20 , 30 , 32 , 34 , 36 , 40 a, 40 b, . . . 40 n, 62 are automatically produced as shown by 72 .
- the various acquisition streams are delivered to the server at 74 . It should be noted that, as discussed above, the acquisition streams may be delivered at various times. Certain of the acquisition streams may be delivered synchronously, while other acquisition streams are delivered asynchronously. Additionally, certain acquisition streams may be produced at a much later point in time, such as when a user is viewing a distribution stream and creating notes based on viewing of the distribution stream. Under those circumstances, an acquisition stream is produced and delivered substantially later than the acquisition streams representing the initial presentation.
- server 12 may synchronize those streams having local time indicia so that those acquisition streams will be in sync with the global timeline.
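A common way to perform that synchronization is to compute a constant offset from a single event observed in both clocks (for example, a slide change visible in two streams) and apply it to every local timestamp. A minimal sketch under that assumption:

```python
def align_to_global(local_timestamps, event_local_t, event_global_t):
    """Align an offline stream's local timestamps to the global timeline
    using one event whose time is known in both clocks (e.g. a slide
    change seen in both an online and an offline stream)."""
    offset = event_global_t - event_local_t
    return [t + offset for t in local_timestamps]
```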
- the various acquisition streams are then stored at 78 .
- Low bit rate streams corresponding to the stored acquisition streams are generated at 80 for use in subsequent editing of the acquisition streams.
- the editing of the acquisition streams is illustrated in FIG. 13 .
- the various acquisition streams may be reviewed and portions selected for replay as shown by 82 .
- the portions selected for replay are identified through metadata which is then stored at 84 .
- the replay or distribution mode begins by editing the global timeline using the metadata generated in the editing mode so as to produce an edited timeline as shown by 86 .
- the various acquisition streams are then rendered or played back according to the edited timeline to produce a distribution stream as shown by 88 .
- the distribution stream may be provided to various users via the server 12 , or a separate distribution server (not shown) may be used.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Abstract
Description
- Not Applicable
- The present invention is directed generally to video capture and editing systems.
- A number of systems are known or have been proposed that are directed to various aspects of the present disclosure. For example, U.S. patent publication no. 2002/0018124 A1 entitled Methods and Systems for Networked Camera Control discloses methods and systems for providing fluid, real-time camera control of at least one camera to at least one network user via a network including the Internet. A control pad or area can be provided to camera users via an application or applet that can be calibrated to provide fluid camera control. Compressed or uncompressed analog, digital, or streaming video and audio data can also be provided to the users to allow real-time low latency continuous audio/visual feedback. Multiple camera users can obtain control of a camera using a dynamic queuing technique that can allow single user camera control for certain time intervals. An administrator can establish user camera control parameters including camera control intervals for subscriber users versus non-subscriber users, camera usurping by an administrator, elimination of camera control privileges for a user, and denial of camera control requests by a user.
- U.S. Pat. No. 6,785,013 entitled System for Capturing Images From a Peripheral Unit and Transferring the Captured Images to an Image Management Server discloses an image data storing system, and more particularly a video capture controller to capture raw video image data from a peripheral unit and to provide compressed video image data to a document image management server. The video capture controller includes a control processor, a first memory, and a second memory. The first memory stores raw video image data from the peripheral unit under control of the control processor. The raw video image data stored in the first memory is then converted into compressed video image data, and is then stored in the second memory, again under control of the control processor. Then, the compressed video image data from the second memory is transferred to the image management server. A third memory may further be provided between the peripheral unit and the first memory. This third memory can be a page memory which performs a direct memory access of the raw video image data output from the peripheral unit into the page memory. The raw video image data may be initially output to the page memory when the page memory is set to a maximum page size. Then, the page size setting of the third memory can be changed based on subsequently received page size data. These operations ensure that the raw video image data can be quickly stored in a first memory, which may prevent having to reduce a speed of generating the raw video image data in the peripheral unit. Further, if the raw video image data is output prior to outputting accompanying commands indicating a page size of the raw video image data, no raw video image data will be lost.
- U.S. publication no. 2004/0240541 entitled Method and System for Direct Ingest and Storage of Digital Video Content With Immediate Access to Content for Browsing and Editing discloses a video encoder system and method for receiving uncompressed streaming video and outputting a continuous compressed video stream. The system uses a video encoder to compress the input video stream, and a formatter and indexer to receive the compressed video stream, apply indexing metadata and formatting metadata to the video stream, and output a formatted video stream that is capable of storage and access.
- U.S. publication no. 2005/0246725 entitled Method and System for Sharing Video Over a Network enables a user to create a video segment or employ an existing video segment, and then share it over a computer network. The user provides an indication that one or more particular video segments are to be shared over the network. The video segment(s) is/are then automatically assessed and determined to be compatible with streaming video, or not. If the video segment(s) is/are not compatible with streaming video, it/they are converted to a compatible format automatically. An identifier for the video segment is automatically created and the segment and the identifier are automatically uploaded to a host computer over the network such as the Internet. The video segment and the identifier (optionally with other identifying material such as an identity of the sender, an access authorization for the video, a number of accesses permitted, and a duration for the availability of the video) can be stored at the direction of the host computer. A viewer can be sent an identifier of the video, and can request that the video be served as a streaming video to the viewer's computer. Alternatively, the viewer can be sent a location of the video such as a URL, can be served the video as an embedded portion of a Web page, or can be served the video as a consequence of being sent a link in an e-mail or as an e-mail greeting card.
- U.S. RE38,609 E is entitled On-Demand Presentation Graphical User Interface. Disclosed therein is a graphical user interface (“GUI”) comprising: a video region for displaying a video of a presenter giving a presentation; a primary slide region for displaying slides used by the presenter during the presentation; and a thumbnail region containing thumbnails representing slides in the presentation, the thumbnails selectable by a user via a cursor control device.
- Finally, U.S. Pat. No. 5,966,121 entitled Interactive Hypervideo Editing System and Interface discloses an apparatus and method for interfacing with a hypervideo multimedia application when composing and playing same. A novel hypervideo control and interface provides for either user-actuated or automatic transitioning between a plurality of video, graphics, textual, animation, and other types of multimedia files. A hypervideo control preferably transitions through distinct lifecycle phases and events as it is presented and removed from an interface display to visually convey to a user the present availability and impending removal of the control from the interface display, thereby providing an intuitive means for navigating a hypervideo application. In one embodiment, a hypervideo editing system includes a wordprocessing system and a separate video playback system. An author of a hypervideo application preferably identifies a particular frame of video displayed by the video playback system and creates an ASCII compliant mark video file that defines the type and functional characteristics of hypervideo controls, marks, and actions using the wordprocessing system. In a preferred embodiment, editing and playing of a hypervideo application is facilitated by a software-implemented hypervideo editing system that provides an intuitive graphical-user-interface (GUI) to facilitate rapid, real-time hypervideo application development, as well as playback capability. Object-oriented design principles are preferably employed to permit efficient linking and embedding of a hypervideo application or interface into an off-the-shelf software application or other parent application.
- The present invention has the capability to handle, in a scalable manner, a wide diversity of heterogeneous information streams which may be generated by separate (or the same) computer(s), or other capture devices, which may or may not be connected to a network. Here, “handle” refers to efficient, scalable, multi-stream acquisition (including after-the-fact acquisition of new information streams) and remote network-based editing of the multiple streams. The system also supports management for access control and delivery. In addition, the system has an integrated approach to assimilation and management of metadata and support for content-based searching.
- The present invention is directed to apparatus and methods for operating the apparatus according to a session acquisition mode, an editing mode, and a playback or distribution mode. In the session acquisition mode, a global timeline sync signal is supplied to various capture devices. In response to the global timeline sync signal or a locally generated start signal, acquisition streams are automatically produced which capture an analog or digital input signal. Those acquisition streams produced by capture devices in sync with the global timeline (i.e., online devices) will have time indicia in sync with the global timeline. Those capture devices which are not in sync with the global timeline (i.e., offline devices) will have local time indicia. The system may include one or more online capture devices that come online and go offline during the global timeline, one or more online capture devices operating in conjunction with one or more offline capture devices, or a plurality of offline capture devices. The various acquisition streams are delivered, synchronously or asynchronously, to a server. Those streams having local time indicia are synchronized with the global timeline and the various acquisition streams are then stored. Low bit rate streams corresponding to the stored acquisition streams are generated for use in subsequent editing of the acquisition streams.
- The editing of the acquisition streams entails reviewing the various acquisition streams and selecting portions for replay. The editing may include “after the fact videography” in which one or more portions from a fixed view are selected for replay. The portions could include, for example, two rectangles of varying size. The portions selected for replay are identified through metadata which is then stored.
- The replay or distribution mode begins by editing the global timeline using the metadata generated in the editing mode so as to produce an edited timeline. The various acquisition streams are then rendered or played back according to the edited timeline to produce a distribution stream. The distribution stream may be provided to various users via the acquisition server or a separate distribution server.
- For the present disclosure to be easily understood and readily practiced, the present disclosure will now be described, in conjunction with preferred embodiments thereof, for purposes of illustration and not limitation, in connection with the following figures wherein:
-
FIG. 1 is a block diagram of a system configuration which may be used to implement a session acquisition mode according to various embodiments of the invention disclosed herein; -
FIG. 2 illustrates how a wide diversity of heterogeneous acquisition streams which may be generated by separate (or the same) computer(s), or other capture devices, which may or may not be connected to a network, are integrated into a global timeline which defines a single session stream; -
FIG. 3 is a block diagram of a system configuration which may be used to implement an edit mode according to various embodiments of the invention disclosed herein; -
FIGS. 4 and 5 are screenshots illustrating operation of an editor disclosed herein; -
FIG. 6 is a block diagram of a system configuration which may be used to implement a playback or distribution mode according to various embodiments of the invention disclosed herein; -
FIGS. 7-10 are screenshots illustrating a user's experience accessing a distribution stream during the playback or distribution mode; -
FIG. 11 illustrates a function of sub-rectangle selection which may be performed by the editor according to one embodiment of the present invention; -
FIG. 12 illustrates one example of the steps in a session acquisition mode; -
FIG. 13 illustrates one example of the steps in an edit mode; and -
FIG. 14 illustrates one example of the steps in a playback or distribution mode. -
FIG. 1 is a block diagram of a system 10 which may be used to implement various embodiments of the present invention. The system 10 is exemplary. The arrows in FIG. 1 illustrate the direction of information flow during the session acquisition mode. - The
system 10 may be comprised of a server 12, which provides a focal point for the reception of various acquisition streams (discussed below) through, for example, the Internet or other type of distribution/communication network 14. The server 12 is capable of providing a synchronization (sync) signal 16 to various capture devices (discussed below) through the network 14. - In
FIG. 1 , a first type of capture device is a lecturer's personal or desktop computer (PC) 18 which carries a PowerPoint presentation. The PC 18 may receive the sync signal 16. The PC 18 produces an acquisition stream 20 which in this case is a set of PowerPoint files along with time stamps associated with file/slide transitions (see FIG. 2A ). If the PC 18 received the sync signal 16, then the time stamps will be in sync with a global time line (see FIG. 2H ); if the PC did not receive the sync signal 16 (e.g. the network connection was terminated), then the acquisition stream 20 will have local time stamps relative to the start which will have to be aligned (as discussed below) with the global time line. - A
video computer 22 operating in conjunction with a first camera 24 forms a second type of capture device. The camera 24 may be operated by a videographer and may be used to follow the lecturer, although in other embodiments, discussed below, a real time videographer is not needed. The video computer 22 operating in conjunction with a second camera 26 forms another example of the second type of capture device. In the case of camera 26, that camera may be fixed on a chalk board (not shown) or other type of display that may vary slowly over time. The video computer 22 operating in conjunction with a microphone 28 forms a third type of video capture device. The video computer 22 produces three acquisition streams: an uncompressed video stream 30 (used to produce a distribution session as discussed below) having timestamps shown in FIG. 2B , a compressed video stream 32 (used for editing as discussed below) having timestamps shown in FIG. 2C , and an audio histogram 34 (from which the audio portion of the presentation can be reproduced as is known in the art) having timestamps shown in FIG. 2D . If the computer 22 received the sync signal 16, then each of the acquisition streams 30, 32, 34 will have timestamps that will be in sync with the global time line; if the computer 22 did not receive the sync signal 16, then the acquisition streams 30, 32, 34 will have local time stamps which will have to be aligned (as discussed below) with the global time line. - Another type of capture device is an electronic board or
tablet PC 35 of the type that can sense what has been written on the board or PC and output the sensed material as a “presenter's ink” acquisition stream 36. If the electronic board/PC 35 received the sync signal 16, then the acquisition stream 36 will have timestamps (see FIG. 2E ) that are in sync with the global time line; if the electronic board/PC 35 did not receive the sync signal 16, then the acquisition stream 36 will have local timestamps which will have to be aligned with the global time line. -
Personal computers operated by, for example, students taking notes form yet another type of capture device. Each of the personal computers produces an acquisition stream 40 a, 40 b . . . 40 n having timestamps shown in FIGS. 2F , 2G and 2I, respectively. The acquisition streams 40 a, 40 b . . . 40 n (or any of the other acquisition streams) may be marked “public” or “private” to control access to the content in that acquisition stream. If the personal computers received the sync signal 16, the timestamps will be in sync with the global time line; if the computers did not, the timestamps will be local and will have to be aligned with the global time line. Certain of the computers may capture content during the live session ( FIGS. 2F and 2G ), while certain of the computers may capture content after the fact ( FIG. 2I ). This type of capture (e.g. student notes) may also be performed after the video capture is complete, for example, while a person is viewing a stored version of the presentation. - Completing the description of
FIG. 1 , the various acquisition streams 20 , 30 , 32 , 34 , 36 , 40 a, 40 b . . . 40 n are delivered via the network 14 to the server 12. The uncompressed video acquisition stream 30 is input to a storage device 42. If the storage device 42 is separately addressable via the network 14, the uncompressed video acquisition stream 30 could be delivered directly to the storage device 42. Finally, an editing server is provided by, for example, a computer 44. The editing server 44 receives a low bit rate copy of the various acquisition streams 20, 30, 32, 34, 36, 40 a, 40 b . . . 40 n. In another embodiment, the function of the editing server is performed by the server 12 and the computer 44 is used to access the editing function. - The acquisition process results in a single composite shared timeline seen in
FIG. 2H . The acquisition can be synchronous, where all acquisition is occurring simultaneously (FIGS. 2A-2G ), or asynchronous, where some acquisition streams are integrated into the global timeline after the original activity is completed (FIG. 2I ). - Not all (or any) of the capture devices need necessarily be on the network 14 (i.e. online or in sync with the global timeline) during the acquisition mode. Indeed, data can be transferred from any capture device via the
network 14 synchronously during the activity, over the network after-the-fact (for example, when the particular capture device is offline (i.e. not on the network)), or physically from a capture device to the server 12. - As discussed above, multiple-stream content acquisition is enabled by using multiple capture devices, each one capturing one or more types of content. Examples of content streams include, but are not limited to, high-motion video, low-motion/high-resolution video, screen capture, slideshow viewer (e.g. PowerPoint or PDF) slide changes, and audio. The content acquisition mode has two modes of operation: network-connected (online) and network-independent (offline). The final product is identical for each mode of operation, but the method of synchronizing the various acquisition streams differs between the two modes.
- In the “online” mode, the capture devices begin individually capturing content in response to either an operator-generated “begin acquisition” command or the system generated
sync signal 16. The server 12 logs the beginning and end of each acquisition interval, as well as any time-stamped events generated during the capture interval, against the global timeline (FIG. 2H ) maintained by the server 12. Once all the capture devices inform the server 12 that each has finished acquiring its particular content stream, the acquisition session is complete. Capture devices may upload synchronously as data is acquired or asynchronously, by buffering data on the capture device prior to sending the data to the server 12. - In the “offline” mode, the capture devices begin individually capturing content in response to the user-generated “begin acquisition” command. This case differs from the “online” mode primarily in that none of the capture devices can be assumed to be able to interact with the
server 12, and that any content acquired by such capture devices is recorded against a local timeline which must be aligned with the global timeline through an external process. An example of this process might be human manipulation of a particular acquisition stream's start time relative to the global timeline, thus synchronizing that stream with the other streams sharing the global timeline. Whether that manipulation is considered part of the acquisition process or part of the editing process is not significant. The outcome of this process should be an interval of multi-stream content in which the same event (for example, a lecturer changing the current slide) happens simultaneously in all the acquisition streams. - In both modes, a low-fidelity, low-bit-rate mirror copy of each acquisition stream is conveyed to and stored on the
editing server 44. In the online mode, that happens at acquisition time. In the offline mode, that happens when a network connection is made available to the capture device storing a previously acquired content stream, and the low-fidelity mirror copy of that content stream is transferred to the editing server 44. - As discussed above with
FIG. 1, certain of the capture devices may be PCs. These PCs can be ordinary commodity laptop or desktop computers. The clocks on the computers do not necessarily need to be synchronized. - When one (or more) of the capture devices is attached to a network, content is captured and may be stored locally on the capture device and sent to the
server 12, possibly asynchronously (as noted above). In particular, should the network connection be broken due to a network failure, server failure, or localized failure on the capture device, no content data is lost because the content data is also buffered on the capture device (see, for example, buffer 46 on the electronic board/PC 35) and can be uploaded (as noted above) after the fact. - At some time following the multi-stream acquisition session, the original, archival-quality versions of each stream are transferred to the
editing server 44 and stored there. - The acquisition streams encompassed by this disclosure include both media streams and event streams. A given set of streams can include zero or more representatives of each kind of stream. For example, an acquisition session could include two video media streams and two PowerPoint event streams.
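The stream model just described — an acquisition session holding zero or more media streams and zero or more event streams, all referenced to one global timeline — can be pictured as a small data structure. This is an illustrative sketch only; the class and field names are assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class MediaStream:
    kind: str        # e.g., "video", "audio", "screen-capture"
    start: float     # position on the global timeline, in seconds
    duration: float  # length of the captured interval, in seconds

@dataclass
class EventStream:
    kind: str  # e.g., "powerpoint", "ink", "text", "tag"
    events: list = field(default_factory=list)  # (global_time, payload) pairs

@dataclass
class AcquisitionSession:
    media: list = field(default_factory=list)
    events: list = field(default_factory=list)

# The example from the text: two video media streams and two
# PowerPoint event streams in one session.
session = AcquisitionSession(
    media=[MediaStream("video", 0.0, 3600.0),
           MediaStream("video", 0.0, 3600.0)],
    events=[EventStream("powerpoint", [(12.0, "slide-2"), (95.0, "slide-3")]),
            EventStream("powerpoint", [(12.0, "slide-2")])],
)
```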
- Media streams include, but are not limited to the following:
- Video at any resolution (including low-resolution such as 320×240 and high resolution such as 1920×1080 HDTV) and at any frame rate (such as high-
speed 60 Hz, slow speed, and time-lapse). Video can be captured either in archival digital video (DV) form or in a streaming format. - Audio, either as an integral part of a video source stream or as a separate stream.
- Screen capture on a presenter computer, possibly represented as a video stream.
- Event streams include, but are not limited to the following:
- Microsoft PowerPoint presentations, represented as a pair of a set of PowerPoint files and a sequence of timestamps for transition points in the presentation (builds or slide changes). A PowerPoint presentation stream can include multiple PowerPoint files and navigation among the slides in them, including switching from one file to another;
- screen capture on a presenter's computer, possibly represented as a set of pairs of images and timestamps;
- ink event streams from a tablet computer or other source. This could include, for example, ink from one or more presenters or from other sources, such as viewer annotations or from a software application that generates ink;
- other slide presentations, such as sequences of Adobe PDF images, Apple Keynote presentations, etc. These are managed in a manner analogous to PowerPoint as described above;
- text, either as rich text or as plain text. Text can be captured in relatively small segments each of which receives a timestamp when it is captured. The representation is thus a set of pairs of text segments and timestamps. Text segments may also include formatting information such as outline indentation, type faces, fonts, highlighting, and so on. One way to capture text is from a text-capture client, which presents a human user experience similar to an instant messaging client. Text may also be captured using conventional text editing tools such as Microsoft Word that are instrumented to acquire timestamp information or by software development tools (“IDEs”) such as Eclipse that are instrumented to identify visible lines of software code;
- tag events, which associate a particular meaning with a timestamp in the capture session. For example, a presenter could create a tag event (e.g., by pushing a special button on a lectern console or a function key on a computer) to identify for later use (say, in editing) a time position in the overall capture session. This could signal a change of topic, or the start of a break, or some other information that can assist a person editing the streams at a later time. It could also itself be incorporated automatically by the rendering tool in a rendering as a switch, for example, from one stream to another in a presentation tool.
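Because every event stream above is simply a sequence of time-stamped events, a question such as “which slide is showing at time t” reduces to finding the last event at or before t on the global timeline. A minimal sketch of that lookup (the function name and the sorted-pair representation are assumptions for illustration):

```python
import bisect

def event_at(events, t):
    """events: list of (timestamp, payload) pairs sorted by timestamp.
    Return the payload in effect at global time t, or None if no
    event has occurred yet."""
    times = [ts for ts, _ in events]
    i = bisect.bisect_right(times, t)  # events with timestamp <= t
    return events[i - 1][1] if i else None

# A toy PowerPoint event stream (timestamps in seconds):
slides = [(0.0, "title"), (42.0, "agenda"), (130.0, "results")]
```

A presentation tool can use the same lookup while rendering to keep the visible slide synchronized with the video stream.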
- Turning now to
FIG. 3, which illustrates a portion of the network shown in FIG. 1 that may be used to implement an edit mode or perform an editing process on the acquired content. The arrow in FIG. 3 shows the flow of metadata 50 from the computer 44 having the editor to the server 12. It is assumed that all of the acquisition streams are aligned with the global timeline and that the low-resolution versions of all of the acquisition streams have already been stored in computer 44. - The goal of the editing process is to produce modifications of the absolute timeline, which are stored by the
server 12 as metadata 50, for use in asynchronously generating a representation of, or derivative of, the acquired multi-stream content for delivery to end users. The editor can simultaneously edit the multiple streams to generate the metadata 50 that describes a resulting distribution stream, which can include shifts of focus from one data stream to another, realignment of segments of the original timeline, etc. - During the editing process, the user interacts with an editing client application, running on, for example,
computer 44. If the editor resides on server 12, then the computer 44 would need a moderate-bandwidth network connection so as to access the low-bit-rate copies stored on server 12. Visualization windows, examples of which are seen in FIGS. 4 and 5, are shown within the editor for the low-bit-rate mirror copies of each acquired stream in a given multi-stream acquisition session. The user can play, pause, seek, or scan all streams simultaneously by interacting, via the client application/editor, with the absolute timeline for the acquisition session. In doing so, the user may choose to do several of the following operations (not a comprehensive list): eliminate portions of the absolute timeline altogether; alter the order of segments of the absolute timeline; merge and alter the timelines of two or more multi-stream acquisition sessions; and determine which subset of the multiple acquired streams will be visible to the end user in the end product (distribution stream). - The editor is designed to be used by videographers as well as presenters. The editor is fast: there is no waiting during editing, no waiting for a review, and no waiting for a save. Segmentation is done with precise cuts. Slide boundaries and identified events can be quickly selected. Fine-tuning is done very rapidly; the “seek time” is effectively zero. The audio histogram can be used to define cuts on sentence boundaries.
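Two of the timeline operations listed above — eliminating portions of the absolute timeline and altering the order of its segments — can be sketched as pure functions over a list of intervals, which is one plausible shape for the metadata 50. All names and the interval representation are assumptions for illustration:

```python
def delete_portion(segments, cut_start, cut_end):
    """segments: ordered [(start, end)] intervals of the absolute
    timeline. Remove [cut_start, cut_end) from every segment,
    splitting a segment in two when the cut falls inside it."""
    out = []
    for s, e in segments:
        if cut_end <= s or cut_start >= e:  # no overlap: keep as-is
            out.append((s, e))
            continue
        if s < cut_start:                   # keep the left remainder
            out.append((s, cut_start))
        if cut_end < e:                     # keep the right remainder
            out.append((cut_end, e))
    return out

def reorder(segments, order):
    """Alter the playback order of segments of the absolute timeline."""
    return [segments[i] for i in order]

# A 10-minute session: trim set-up, then cut a one-minute break.
timeline = [(0.0, 600.0)]
timeline = delete_portion(timeline, 0.0, 30.0)
timeline = delete_portion(timeline, 300.0, 360.0)
```

The media itself is never touched; only this interval list (the metadata) changes, which is what makes the editing operations effectively instantaneous.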
- Typical use cases for the editor are as follows. The videographer will want to eliminate start-up and tear-down. That means the videographer can turn on the camera whenever they want and edit out the set-up and tear-down later. The instructor/presenter may want to break long lectures into shorter topic-focused segments. The instructor/presenter may want to edit out segments such as bad jokes, breaks, or off-topic digressions. The instructor/presenter may want to edit out short clips for frequently asked questions on homework assignments or other topics. The edited out clips could be posted separately. Multiple edit results (deliveries) may be created from the same global timeline.
- Once the
metadata 50 is used to edit the global timeline to produce an edited global timeline, it is possible to render the sections of the acquisition streams according to the edited global timeline in a playback or distribution mode. The portion of the system 10 of FIG. 1 used in the playback or distribution mode is illustrated in FIG. 6. In FIG. 6, the server 12 acts as both the acquisition and the distribution server, although a separate server (not shown) could be used as the distribution server. The playback or distribution mode is the overall activity of taking the captured acquisition streams and presenting them in accordance with the edited global timeline. More specifically, the distribution process takes as input one or more source media streams (the stream input process) from storage device 42, and the edited global timeline (the edit input process), and produces as an output one or more distribution streams 52 aligned according to the edited global timeline. The rendered session is a self-contained object encapsulating binary data for each distribution stream that (ideally but not exclusively) contains only the segments (for example, video frames) of the source streams specified in the edited global timeline. In the preferred embodiment, it is anticipated that new distribution streams will be created. - In the case of video streams, a compressed video output stream might be created from the archival-quality source stream by compressing only the source video frames that fall within the segments specified in the edited timeline. That output stream may be redistributed along with other components of the rendered session, independently of the source video stream from which it is generated. In the case of all other media streams (audio, etc.), only the stream data corresponding to the segments specified in the edited timeline would be present in the distribution stream.
- In the case of event streams, which are composed of a series of discrete time-stamped events corresponding to units of data (for example, a PowerPoint slide or a single instant message), only events whose timestamps fall within the segments specified in the edited timeline would be present in the distribution stream.
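Producing the event portion of a distribution stream is therefore a pure filter: keep only those time-stamped events falling inside the edited timeline's segments. A minimal sketch, assuming events as (timestamp, payload) pairs and the edited timeline as half-open [start, end) intervals (both representations are assumptions for illustration):

```python
def filter_events(events, segments):
    """Keep only the (timestamp, payload) events whose timestamps fall
    inside the edited timeline's [start, end) segments."""
    return [(t, p) for t, p in events
            if any(s <= t < e for s, e in segments)]

# Slide changes at 10 s, 45 s, and 400 s; the edited timeline keeps
# [30, 300) and [360, 600), so the first event is dropped.
events = [(10.0, "slide-1"), (45.0, "slide-2"), (400.0, "slide-3")]
kept = filter_events(events, [(30.0, 300.0), (360.0, 600.0)])
```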
- The rendering or playback process always results in a timeline in which all streams are synchronized precisely as they were in the global timeline. For example, this means that a given PowerPoint slide event in all rendered sessions always occurs at exactly the same instant relative to a source video stream as it did in the global timeline. Adjustments to the timestamps are possible.
- The distribution stream may be played back as a continuous unit of media in which one or more source streams are displayed. It may be seeked (i.e., random access to time points in the stream), paused, fast-forwarded, rewound, etc. and played for its duration with no apparent breaks or discontinuities.
- The
distribution stream 52 may be distributed via network 14 using network streaming, physical media, etc. Viewing a distribution stream does not require that the content consumer have access to the original source streams. - The
distribution stream 52 is rendered to a viewer using a presentation tool. The presentation tool could be a commodity web browser with plug-ins to support media streams and other kinds of rendering particular to the kinds of source streams, edited forms of which are incorporated into the presentation. The presentation tool could also be a custom software application that runs on a standard desktop or laptop computer or on an appliance such as a television set. The exact layout, including dimensions, that is offered by the presentation tool to a person viewing the distribution stream is determined by a combination of the presentation tool design, the editor, and the configuration of the viewer. -
FIGS. 7-10 are screenshots illustrating a user's experience accessing a distribution stream during the playback or distribution mode. In FIG. 7, an overall view of what the user, in this case a student, would see is illustrated. The student has the ability to speed up or slow down the distribution stream. Index tools are available to allow access by slide (e.g., slide selector, slide sorter view), by bookmark (e.g., explicit bookmark in sequence, URL link), by time (e.g., time select in video stream), or by content (e.g., search PowerPoint slide text across one or more lectures). FIG. 8 illustrates the slide sorter view in detail. Selecting (clicking on) a slide immediately takes the user to that part of the presentation. FIG. 9 illustrates the ability to capture content on a chalkboard or other stationary live action with very high resolution and, in the case of a chalkboard or the like, possibly a slow frame rate. FIG. 10 illustrates how users may index the images on the chalkboard, which scroll across the bottom. Clicking on an image immediately takes the user to that part of the presentation. - Returning to
FIG. 1, an alternative embodiment will now be described. Recall that in the original system 10, two cameras were used; in this embodiment, they are replaced by a single high-resolution camera, illustrated by box 60. The camera 60 produces an acquisition stream 62, including timestamps, which is an image of the entire field of view of interest (e.g., the entire front of a classroom) in high definition (e.g., 4096×2560, 1920×1080, etc.). Once the acquisition stream 62 has been captured, and the low-bit-rate version stored by the editor, selection of one or more rectangles (portions of the entire field of view) can be easily accomplished in the editing mode. - For example, as shown in
FIG. 10, it is possible to select one or more sub-rectangles (i.e., viewing windows) of arbitrary dimensions from the original video stream 62 and include only the contents of the selected sub-rectangles in a distribution stream. In our example case, the editing client user might choose to select a 320×240 pixel viewing window 64 of interest that is moved within the original video stream 62 to maintain focus on the head and torso of the human speaker as he walks around the “frame” of the original video stream 62. When the distribution stream is produced during the rendering phase, it will appear to contain (in videographic terms) a “tight shot” of the speaker's head and torso. - Continuing with the example shown in
FIG. 10, the editing client user may select, for example, a 1024×1024 pixel viewing window 66 from the original video stream 62. Again, the distribution stream produced during the rendering phase will appear to contain a tight shot of the region of interest. - One example of a possible human interface device for selecting
rectangles 64 and 66 is a game controller with two analog thumb-sticks, one each for controlling the position of one of the rectangles within the original video stream 62, and two analog triggers, one each for controlling the zoom for one of the rectangles. Superimposed on the display window of the original video stream 62 are the sub-rectangles whose dimensions are controlled by the analog triggers and whose positions are controlled by the analog thumb-sticks. In this manner, we can create two or more sub-selection, possibly overlapping, rectangles in the distribution (output) stream, each with its own pixel dimensionality and frame rate. The resulting distribution video stream can have pixel dimensions that differ from one point in time to the next, resulting in the appearance of zooming or widening. The selection/positioning/sizing of the sub-rectangles in the distribution stream yields a metadata stream like any of the other editing functions disclosed herein. The metadata stream instructs the server 12 to transform the original global timeline to produce an edited timeline from which the distribution stream is created. That type of editing function can be integrated into the editor or done separately. - The selection/positioning/sizing of any of the delivery streams can later be revised, creating new metadata and thus additional distribution streams. Additional benefits include the elimination of gimbal mounts, zoom lenses, and the need for a videographer in the presentation room.
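Per frame, the “tight shot” described above is an ordinary crop of the high-resolution source to the selected sub-rectangle, with the rectangle clamped so it never leaves the frame. A sketch using plain nested lists for pixels; the function name and the clamping policy are assumptions, not part of the disclosure:

```python
def crop(frame, x, y, w, h):
    """frame: rows of pixels, row-major. Return the w-by-h viewing
    window whose top-left corner is (x, y), clamping the window so it
    stays entirely within the frame."""
    fh, fw = len(frame), len(frame[0])
    x = max(0, min(x, fw - w))
    y = max(0, min(y, fh - h))
    return [row[x:x + w] for row in frame[y:y + h]]

# A toy 4x4 "high-resolution" frame; a 2x2 window tracking a subject:
frame = [[0,  1,  2,  3],
         [4,  5,  6,  7],
         [8,  9, 10, 11],
         [12, 13, 14, 15]]
window = crop(frame, 1, 1, 2, 2)
```

Re-running the same crop with new (x, y) positions per frame, driven by the thumb-stick metadata, is what makes the window appear to follow the speaker.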
- The
system 10 disclosed herein has several advantages. For example, the computer on which the editor runs need not store copies of any of the multiple video streams in an acquisition session, nor does it need to have direct network access to the high bit-rate original copies. This is different from traditional client-side editing systems, in which local copies of the original high bit-rate files must be present on the computer running the editing software for any selection or manipulation operations to take place. - The editor can be run on multiple computers simultaneously, each one editing the original global timeline to produce a different result.
- Because low-fidelity representations of the original streams are used for selection and manipulation of content, the network connection between the editing client computer and the editing server can be of far lower bandwidth than would be required if this operation were performed using the original high-fidelity video streams.
- In the online acquisition case, the low-fidelity mirror streams and the audio histogram can be generated and transferred to the editing server in real-time. This allows an editing client user to begin an editing operation immediately after the multi-stream acquisition session terminates, and before the high-fidelity original streams have even been transferred from the acquisition systems to the editing server.
- Edit inputs or selection guidance inputs are gathered for the rendering process at multiple times, including the time of capture and zero or more times after capture. All the inputs contribute to the edited timeline. Selection guidance can come from different kinds of sources, including: (1) use of the editing client application, (2) video selection human interface device used by videographer during the capture session, (3) video selection human interface device used after the capture session, and (4) automated video focus device such as a commercial presenter tracking device (“necklace”). Those kinds of editing functions can be integrated into the editor or done separately.
- Selection guidance can be modified at any time prior to the rendering process. For example, if an automated video focus device gives erroneous tracking information, the selection rectangle it produces can later be adjusted using any of the means mentioned above. Selection guidance, whether automatically captured or manually recorded, can be used to create an edited timeline.
- Metadata contained in the edited timeline can include rectangle selection within a given video stream, which may include (1) locating and sizing the rectangle, and (2) adjusting scale from source pixel dimensions (e.g., 640×480) to destination pixel dimensions (e.g., 320×240). Note: where a selected rectangle is present, only the pixels specified by (1) the dimensions of the selected rectangle, and (2) its position within the source video, are written to the image buffer to be compressed. Metadata contained in the edited timeline can also include timeline adjustments for an individual source stream, which may include (1) time shifting and (2) time expansion or contraction.
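The per-stream timeline adjustments just mentioned — time shifting plus expansion or contraction — can be modeled as an affine map from edited-timeline time to source-stream time. The linear model and the names below are illustrative assumptions:

```python
def source_time(t_edited, shift=0.0, scale=1.0):
    """Map a time on the edited timeline to a time within one source
    stream. 'shift' is the time shift in seconds; 'scale' > 1 means
    source time advances faster than edited time (contraction of the
    stream in the output), while 'scale' < 1 expands it."""
    return shift + scale * t_edited

# A stream shifted 2 s later and contracted 2x:
t = source_time(10.0, shift=2.0, scale=2.0)
```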
- Turning now to the remaining figures of the application,
FIG. 12 illustrates one example of the steps in a session acquisition mode for the system illustrated in FIG. 1, FIG. 13 illustrates one example of the steps in an edit mode for the hardware of FIG. 3, and FIG. 14 illustrates one example of the steps in a playback or distribution mode for the hardware illustrated in FIG. 6. Turning first to FIG. 12, the first step 70 in the method is to provide the global timeline sync signal 16 to the various capture devices. In response to the global timeline sync signal 16, or a locally generated start signal, acquisition streams 20, 30, 32, 34, 36, 40 a, 40 b, . . . 40 n, 62 are automatically produced as shown by 72. Those acquisition streams produced by capture devices in sync with the global timeline will have time indicia in sync with the global timeline. Those capture devices which are not in sync with the global timeline will have local time indicia. The various acquisition streams are delivered to the server at 74. It should be noted that, as discussed above, the acquisition streams may be delivered at various times. Certain of the acquisition streams may be delivered synchronously, while other acquisition streams are delivered asynchronously. Additionally, certain acquisition streams may be produced at a much later point in time, such as when a user is viewing a distribution stream and creating notes based on viewing of the distribution stream. Under those circumstances, an acquisition stream is produced and delivered substantially later than the acquisition streams representing the initial presentation. - At 76,
server 12 may synchronize those streams having local time indicia so that those acquisition streams will be in sync with the global timeline. The various acquisition streams are then stored at 78. Low bit rate streams corresponding to the stored acquisition streams are generated at 80 for use in subsequent editing of the acquisition streams. - The editing of the acquisition streams is illustrated in
FIG. 13. Basically, the various acquisition streams may be reviewed and portions selected for replay, as shown by 82. The portions selected for replay are identified through metadata, which is then stored at 84. - Turning now to
FIG. 14, the replay or distribution mode begins by editing the global timeline using the metadata generated in the editing mode so as to produce an edited timeline, as shown by 86. The various acquisition streams are then rendered or played back according to the edited timeline to produce a distribution stream, as shown by 88. The distribution stream may be provided to various users via the server 12, or a separate distribution server (not shown) may be used. - While the present disclosure has been described in connection with preferred embodiments thereof, those of ordinary skill in the art will recognize that many modifications and variations are possible. All such modifications and variations are intended to fall within the scope of the following claims.
Claims (25)
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/634,441 US8437409B2 (en) | 2006-12-06 | 2006-12-06 | System and method for capturing, editing, searching, and delivering multi-media content |
PCT/US2007/024880 WO2008070105A2 (en) | 2006-12-06 | 2007-12-05 | System and method for capturing, editing, searching, and delivering multi-media content |
US13/868,276 US8910225B2 (en) | 2006-12-06 | 2013-04-23 | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
US13/868,301 US9031381B2 (en) | 2006-07-20 | 2013-04-23 | Systems and methods for generation of composite video from multiple asynchronously recorded input streams |
US14/075,134 US9251852B2 (en) | 2006-07-20 | 2013-11-08 | Systems and methods for generation of composite video |
US14/559,375 US9584571B2 (en) | 2006-12-06 | 2014-12-03 | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
US14/685,774 US9473756B2 (en) | 2006-07-20 | 2015-04-14 | Systems and methods for generation of composite video from multiple asynchronously recorded input streams |
US15/012,640 US20160293215A1 (en) | 2006-07-20 | 2016-02-01 | Systems and methods for generation of composite video |
US15/486,544 US10043549B2 (en) | 2006-07-20 | 2017-04-13 | Systems and methods for generation of composite video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/634,441 US8437409B2 (en) | 2006-12-06 | 2006-12-06 | System and method for capturing, editing, searching, and delivering multi-media content |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/868,276 Continuation US8910225B2 (en) | 2006-12-06 | 2013-04-23 | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
Publications (2)
Publication Number | Publication Date |
---|---|
US20080137756A1 true US20080137756A1 (en) | 2008-06-12 |
US8437409B2 US8437409B2 (en) | 2013-05-07 |
Family
ID=39312929
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/634,441 Active 2031-03-08 US8437409B2 (en) | 2006-07-20 | 2006-12-06 | System and method for capturing, editing, searching, and delivering multi-media content |
US13/868,276 Active US8910225B2 (en) | 2006-12-06 | 2013-04-23 | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
US14/559,375 Active 2027-03-12 US9584571B2 (en) | 2006-12-06 | 2014-12-03 | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
Country Status (2)
Country | Link |
---|---|
US (3) | US8437409B2 (en) |
WO (1) | WO2008070105A2 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090063994A1 (en) * | 2007-01-23 | 2009-03-05 | Cox Communications, Inc. | Providing a Content Mark |
US20090125589A1 (en) * | 2007-11-09 | 2009-05-14 | International Business Machines Corporation | Reconnection to and migration of electronic collaboration sessions |
US20100042741A1 (en) * | 2007-03-08 | 2010-02-18 | Telefonaktiebolaget L M Ericssson (Publ) | Seeking and Synchronization Using Global Scene Time |
US20110246892A1 (en) * | 2010-03-30 | 2011-10-06 | Hedges Carl | Navigable Content Source Identification for Multimedia Editing Systems and Methods Therefor |
CN102594860A (en) * | 2010-12-02 | 2012-07-18 | 微软公司 | Mixing synchronous and asynchronous data streams |
US20120188350A1 (en) * | 2011-01-25 | 2012-07-26 | Asa Hammond | System and method for improved video motion control |
US20120320013A1 (en) * | 2011-06-16 | 2012-12-20 | Microsoft Corporation | Sharing of event media streams |
US20140229436A1 (en) * | 2013-02-08 | 2014-08-14 | Wistron Corporation | Method of File Synchronization and Electronic Device Thereof |
CN104066007A (en) * | 2013-03-19 | 2014-09-24 | 鸿富锦精密工业(深圳)有限公司 | Cloud service device, video playback multi-screen preview method and system |
US8909661B2 (en) | 2012-09-19 | 2014-12-09 | Nokia Corporation | Methods and apparatuses for time-stamping media for multi-user content rendering |
US20150215497A1 (en) * | 2014-01-24 | 2015-07-30 | Hiperwall, Inc. | Methods and systems for synchronizing media stream presentations |
US9135334B2 (en) | 2007-01-23 | 2015-09-15 | Cox Communications, Inc. | Providing a social network |
US9167302B2 (en) | 2010-08-26 | 2015-10-20 | Cox Communications, Inc. | Playlist bookmarking |
US9170704B2 (en) | 2011-01-04 | 2015-10-27 | Thomson Licensing | Sequencing content |
US9472011B2 (en) | 2011-11-16 | 2016-10-18 | Google Inc. | System and method for 3D projection mapping with robotically controlled objects |
US9584572B2 (en) | 2013-03-19 | 2017-02-28 | Hon Hai Precision Industry Co., Ltd. | Cloud service device, multi-image preview method and cloud service system |
US20170068424A1 (en) * | 2015-09-07 | 2017-03-09 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US20170078370A1 (en) * | 2014-04-03 | 2017-03-16 | Facebook, Inc. | Systems and methods for interactive media content exchange |
WO2017157155A1 (en) * | 2016-03-14 | 2017-09-21 | 阿里巴巴集团控股有限公司 | Method and device for capturing video during playback |
US9832352B2 (en) | 2011-11-16 | 2017-11-28 | Autofuss | System and method for 3D projection mapping with robotically controlled objects |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8719288B2 (en) * | 2008-04-15 | 2014-05-06 | Alexander Bronstein | Universal lookup of video-related data |
FR2940481B1 (en) * | 2008-12-23 | 2011-07-29 | Thales Sa | METHOD, DEVICE AND SYSTEM FOR EDITING ENRICHED MEDIA |
US8407596B2 (en) | 2009-04-22 | 2013-03-26 | Microsoft Corporation | Media timeline interaction |
US20130182183A1 (en) * | 2012-01-15 | 2013-07-18 | Panopto, Inc. | Hardware-Based, Client-Side, Video Compositing System |
WO2014078805A1 (en) | 2012-11-19 | 2014-05-22 | John Douglas Steinberg | System and method for creating customized, multi-platform video programming |
US9472238B2 (en) * | 2014-03-13 | 2016-10-18 | Panopto, Inc. | Systems and methods for linked mobile device content generation |
US10791356B2 (en) * | 2015-06-15 | 2020-09-29 | Piksel, Inc. | Synchronisation of streamed content |
CN106953892B (en) * | 2017-02-14 | 2020-08-07 | 北京时间股份有限公司 | Method, device and system for acquiring file |
US20190129591A1 (en) | 2017-10-26 | 2019-05-02 | International Business Machines Corporation | Dynamic system and method for content and topic based synchronization during presentations |
US11113322B2 (en) | 2020-01-07 | 2021-09-07 | Bank Of America Corporation | Dynamically generating strategic planning datasets based on collecting, aggregating, and filtering distributed data collections |
US20220374585A1 (en) * | 2021-05-19 | 2022-11-24 | Google Llc | User interfaces and tools for facilitating interactions with video content |
US20240143539A1 (en) * | 2022-10-31 | 2024-05-02 | Mellanox Technologies, Ltd. | Remote direct memory access operations with integrated data arrival indication |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5784099A (en) * | 1994-09-13 | 1998-07-21 | Intel Corporation | Video camera and method for generating time varying video images in response to a capture signal |
US5786814A (en) * | 1995-11-03 | 1998-07-28 | Xerox Corporation | Computer controlled display system activities using correlated graphical and timeline interfaces for controlling replay of temporal data representing collaborative activities |
US5966121A (en) * | 1995-10-12 | 1999-10-12 | Andersen Consulting Llp | Interactive hypervideo editing system and interface |
US20020018124A1 (en) * | 2000-07-26 | 2002-02-14 | Mottur Peter A. | Methods and systems for networked camera control |
US6452615B1 (en) * | 1999-03-24 | 2002-09-17 | Fuji Xerox Co., Ltd. | System and apparatus for notetaking with digital video and ink |
US20020170068A1 (en) * | 2001-03-19 | 2002-11-14 | Rafey Richter A. | Virtual and condensed television programs |
US6785013B1 (en) * | 1999-05-14 | 2004-08-31 | Ricoh Company, Ltd. | System for capturing images from a peripheral unit and transferring the captured images to an image management server |
USRE38609E1 (en) * | 2000-02-28 | 2004-10-05 | Webex Communications, Inc. | On-demand presentation graphical user interface |
US20040240541A1 (en) * | 2003-05-29 | 2004-12-02 | International Business Machines Corporation | Method and system for direct ingest and storage of digital video content with immediate access to content for browsing and editing |
US20050246725A1 (en) * | 2004-05-03 | 2005-11-03 | Microsoft Corporation | Generic user interface command architecture |
US20080256463A1 (en) * | 2003-05-16 | 2008-10-16 | Seiko Epson Corporation | Method and System for Media Playback Architecture |
US7580612B2 (en) * | 2004-10-15 | 2009-08-25 | Hitachi, Ltd. | Digital broadcast sending apparatus, receiving apparatus and digital broadcast system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5613032A (en) | 1994-09-02 | 1997-03-18 | Bell Communications Research, Inc. | System and method for recording, playing back and searching multimedia events wherein video, audio and text can be searched and retrieved |
US6622171B2 (en) * | 1998-09-15 | 2003-09-16 | Microsoft Corporation | Multimedia timeline modification in networked client/server systems |
US7383508B2 (en) | 2002-06-19 | 2008-06-03 | Microsoft Corporation | Computer user interface for interacting with video cliplets generated from digital video |
US8069466B2 (en) * | 2005-08-04 | 2011-11-29 | Nds Limited | Advanced digital TV system |
US20080122986A1 (en) * | 2006-09-19 | 2008-05-29 | Florian Diederichsen | Method and system for live video production over a packeted network |
- 2006-12-06: US application US11/634,441 filed; granted as US8437409B2 (Active)
- 2007-12-05: PCT application PCT/US2007/024880 filed; published as WO2008070105A2 (Application Filing)
- 2013-04-23: US application US13/868,276 filed; granted as US8910225B2 (Active)
- 2014-12-03: US application US14/559,375 filed; granted as US9584571B2 (Active)
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090063994A1 (en) * | 2007-01-23 | 2009-03-05 | Cox Communications, Inc. | Providing a Content Mark |
US9135334B2 (en) | 2007-01-23 | 2015-09-15 | Cox Communications, Inc. | Providing a social network |
US20100042741A1 (en) * | 2007-03-08 | 2010-02-18 | Telefonaktiebolaget L M Ericssson (Publ) | Seeking and Synchronization Using Global Scene Time |
US8190761B2 (en) * | 2007-03-08 | 2012-05-29 | Telefonaktiebolaget L M Ericsson (Publ) | Seeking and synchronization using global scene time |
US20090125589A1 (en) * | 2007-11-09 | 2009-05-14 | International Business Machines Corporation | Reconnection to and migration of electronic collaboration sessions |
US8386609B2 (en) * | 2007-11-09 | 2013-02-26 | International Business Machines Corporation | Reconnection to and migration of electronic collaboration sessions |
US8788941B2 (en) * | 2010-03-30 | 2014-07-22 | Itxc Ip Holdings S.A.R.L. | Navigable content source identification for multimedia editing systems and methods therefor |
US20110246892A1 (en) * | 2010-03-30 | 2011-10-06 | Hedges Carl | Navigable Content Source Identification for Multimedia Editing Systems and Methods Therefor |
US9167302B2 (en) | 2010-08-26 | 2015-10-20 | Cox Communications, Inc. | Playlist bookmarking |
CN102594860A (en) * | 2010-12-02 | 2012-07-18 | 微软公司 | Mixing synchronous and asynchronous data streams |
US9251284B2 (en) | 2010-12-02 | 2016-02-02 | Microsoft Technology Licensing, Llc | Mixing synchronous and asynchronous data streams |
CN106911790A (en) * | 2010-12-02 | 2017-06-30 | 微软技术许可有限责任公司 | Mixed synchronization and asynchronous flow |
US9170704B2 (en) | 2011-01-04 | 2015-10-27 | Thomson Licensing | Sequencing content |
US9160898B2 (en) * | 2011-01-25 | 2015-10-13 | Autofuss | System and method for improved video motion control |
US20120188350A1 (en) * | 2011-01-25 | 2012-07-26 | Asa Hammond | System and method for improved video motion control |
US9026596B2 (en) * | 2011-06-16 | 2015-05-05 | Microsoft Technology Licensing, Llc | Sharing of event media streams |
US20120320013A1 (en) * | 2011-06-16 | 2012-12-20 | Microsoft Corporation | Sharing of event media streams |
US10447899B2 (en) | 2011-11-16 | 2019-10-15 | X Development Llc | System and method for 3D projection mapping with robotically controlled objects |
US9832352B2 (en) | 2011-11-16 | 2017-11-28 | Autofuss | System and method for 3D projection mapping with robotically controlled objects |
US9472011B2 (en) | 2011-11-16 | 2016-10-18 | Google Inc. | System and method for 3D projection mapping with robotically controlled objects |
US8909661B2 (en) | 2012-09-19 | 2014-12-09 | Nokia Corporation | Methods and apparatuses for time-stamping media for multi-user content rendering |
US20140229436A1 (en) * | 2013-02-08 | 2014-08-14 | Wistron Corporation | Method of File Synchronization and Electronic Device Thereof |
US9584572B2 (en) | 2013-03-19 | 2017-02-28 | Hon Hai Precision Industry Co., Ltd. | Cloud service device, multi-image preview method and cloud service system |
CN104066007A (en) * | 2013-03-19 | 2014-09-24 | 鸿富锦精密工业(深圳)有限公司 | Cloud service device, video playback multi-screen preview method and system |
US20150215497A1 (en) * | 2014-01-24 | 2015-07-30 | Hiperwall, Inc. | Methods and systems for synchronizing media stream presentations |
US9942622B2 (en) * | 2014-01-24 | 2018-04-10 | Hiperwall, Inc. | Methods and systems for synchronizing media stream presentations |
US20170078370A1 (en) * | 2014-04-03 | 2017-03-16 | Facebook, Inc. | Systems and methods for interactive media content exchange |
US10110666B2 (en) * | 2014-04-03 | 2018-10-23 | Facebook, Inc. | Systems and methods for interactive media content exchange |
US20170068424A1 (en) * | 2015-09-07 | 2017-03-09 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US10048843B2 (en) * | 2015-09-07 | 2018-08-14 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
WO2017157155A1 (en) * | 2016-03-14 | 2017-09-21 | 阿里巴巴集团控股有限公司 | Method and device for capturing video during playback |
Also Published As
Publication number | Publication date |
---|---|
WO2008070105A3 (en) | 2009-04-09 |
US8910225B2 (en) | 2014-12-09 |
US20150381685A1 (en) | 2015-12-31 |
US9584571B2 (en) | 2017-02-28 |
WO2008070105A2 (en) | 2008-06-12 |
US8437409B2 (en) | 2013-05-07 |
US20130339539A1 (en) | 2013-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9584571B2 (en) | System and method for capturing, editing, searching, and delivering multi-media content with local and global time |
US7458013B2 (en) | Concurrent voice to text and sketch processing with synchronized replay | |
US9251852B2 (en) | Systems and methods for generation of composite video | |
US20030124502A1 (en) | Computer method and apparatus to digitize and simulate the classroom lecturing | |
KR20210069711A (en) | Courseware recording and playback methods, devices, smart interactive tablets and storage media | |
US11310463B2 (en) | System and method for providing and interacting with coordinated presentations | |
US20050154679A1 (en) | System for inserting interactive media within a presentation | |
US20050038877A1 (en) | Multi-level skimming of multimedia content using playlists | |
US20110123972A1 (en) | System for automatic production of lectures and presentations for live or on-demand publishing and sharing | |
JP2008172582A (en) | Minutes generating and reproducing apparatus | |
US20190199763A1 (en) | Systems and methods for previewing content | |
WO2001019088A1 (en) | Client presentation page content synchronized to a streaming data signal | |
US20190019533A1 (en) | Methods for efficient annotation of audiovisual media | |
US20030086682A1 (en) | System and method for creating synchronized multimedia presentations | |
JP2005277847A (en) | Image reproduction system, image transmission apparatus, image receiving apparatus, image reproduction method, image reproduction program, and recording medium | |
Chunwijitra et al. | Advanced content authoring and viewing tools using aggregated video and slide synchronization by key marking for web-based e-learning system in higher education | |
JP4129162B2 (en) | Content creation demonstration system and content creation demonstration method | |
JP2004266578A (en) | Moving image editing method and apparatus | |
JP4686990B2 (en) | Content processing system, content processing method, and computer program | |
JP2000267639A (en) | Information processor | |
US20130182183A1 (en) | Hardware-Based, Client-Side, Video Compositing System | |
KR100886149B1 (en) | Method for forming moving image by inserting image into original image and recording media | |
KR100714409B1 (en) | Apparutus for making video lecture coupled with lecture scenario and teaching materials and Method thereof | |
KR100459668B1 (en) | Index-based authoring and editing system for video contents | |
Chunwijitra et al. | Authoring tool for video-based content on WebELS learning system to support higher education |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CARNEGIE MELLON UNIVERSITY, PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHERLIS, WILLIAM L.;BURNS, ERIC;REEL/FRAME:018799/0300;SIGNING DATES FROM 20061220 TO 20070107 Owner name: CARNEGIE MELLON UNIVERSITY, PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHERLIS, WILLIAM L.;BURNS, ERIC;SIGNING DATES FROM 20061220 TO 20070107;REEL/FRAME:018799/0300 |
|
AS | Assignment |
Owner name: CARNEGIE MELLON UNIVERSITY, PENNSYLVANIA Free format text: RE-RECORD TO CORRECT THE DATE OF EXECUTION ON A DOCUMENT PREVIOUSLY RECORDED AT REEL 018799, FRAME 0300. (ASSIGNMENT OF ASSIGNOR'S INTEREST);ASSIGNORS:SCHERLIS, WILLIAM L.;BURNS, ERIC;REEL/FRAME:018897/0677;SIGNING DATES FROM 20061220 TO 20070103 Owner name: CARNEGIE MELLON UNIVERSITY, PENNSYLVANIA Free format text: RE-RECORD TO CORRECT THE DATE OF EXECUTION ON A DOCUMENT PREVIOUSLY RECORDED AT REEL 018799, FRAME 0300. (ASSIGNMENT OF ASSIGNOR'S INTEREST);ASSIGNORS:SCHERLIS, WILLIAM L.;BURNS, ERIC;SIGNING DATES FROM 20061220 TO 20070103;REEL/FRAME:018897/0677 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction |
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 8 |