EP0609352A4 - On-line video editing system - Google Patents

On-line video editing system.

Info

Publication number
EP0609352A4
Authority
EP
European Patent Office
Prior art keywords
video
edit
user
editor
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP92922558A
Other languages
German (de)
English (en)
Other versions
EP0609352A1 (fr)
Inventor
Ian Craven
Bruce Logan Hill
Lance E Kelson
Robert Rose
Stephen J Rentmeesters
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ACCOM Inc
Original Assignee
ACCOM Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ACCOM Inc filed Critical ACCOM Inc
Publication of EP0609352A1
Publication of EP0609352A4
Legal status: Withdrawn


Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/022Electronic editing of analogue information signals, e.g. audio or video signals
    • G11B27/028Electronic editing of analogue information signals, e.g. audio or video signals with computer assistance
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/36Monitoring, i.e. supervising the progress of recording or reproducing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00Record carriers by type
    • G11B2220/90Tape-like record carriers
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/022Electronic editing of analogue information signals, e.g. audio or video signals
    • G11B27/024Electronic editing of analogue information signals, e.g. audio or video signals on tapes
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/032Electronic editing of digitised analogue information signals, e.g. audio or video signals on tapes

Definitions

  • the invention relates to systems for processing video tape, and more particularly to on-line systems for editing video tape.
  • a video editor (hereafter "editor") communicates with and synchronizes one or more video tape recorders ("VTRs") and peripheral devices to allow editing accurate to within a single video field or frame.
  • a user communicates with the editor using a keyboard, and the editor communicates with the user via a monitor that displays information.
  • Off-line editing systems are relatively unsophisticated, and are most suitable for reviewing source tapes and creating relatively straightforward editing effects such as "cuts" and "dissolves".
  • Off-line editors generate an intermediate work tape whose frames are marked according to an accompanying edit decision list ("EDL") that documents what future video changes are desired.
  • on-line editing systems are sophisticated, and are used to make post-production changes, including those based upon the work tape and EDL from an off-line editor.
  • On-line editing systems must provide a video editor interface to a wide variety of interface accessories, and the cost charged for the use of such a facility (or "suite") often far exceeds what is charged for using an off-line system.
  • the output from an on-line editing system is a final video master tape and an EDL documenting, at a minimum, the most recent generation of changes made to the master tape.
  • a switcher is a peripheral device having multiple input and output signal ports and one or more command ports. Video signals at the various input ports are fed to various output ports depending upon the commands presented to the command ports by the editor.
  • a "cut" is the simplest editing task and is accomplished with an editor and two VTRs: VTR A holds video scenes to be cut into the video tape on VTR B. The editor starts each VTR in the playback mode and at precisely the correct frame, commands VTR B to enter the record mode, thereby recording the desired material from VTR A.
  • VTRs A and B contain video scenes to be dissolved one to the other.
  • the video outputs of VTRs A and B are connected to inputs on the production switcher, with the switcher output being connected to the record input of VTR C.
  • the editor synchronizes all three VTRs and, at precisely the proper frame, activates the switcher, allowing VTR C to record the desired visual effect. Troublesome in perfecting the dissolve effect was the fact that the command port of the production switcher did not "look like" a VTR to the editor.
  • a GPI trigger pulse was transmitted from the editor to command a given function within a newer device.
  • the GPI pulse performed a relay closure function for the remote device.
  • a special effects box might have three GPI input ports: a pulse (or relay closure) at the first port would "start” whatever the effect was, a pulse (or relay closure) provided to the second port would “stop” the effect, while a pulse (or relay closure) provided to the third port would "reverse" the effect.
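The three-port GPI arrangement described above can be sketched as a simple mapping from input port to hard-wired function; the class, port numbers and function names below are illustrative, not taken from the patent.

```python
# Sketch of a GPI-controlled effects box: each input port acts as a relay
# closure that triggers exactly one hard-wired function. Port assignments
# here are illustrative only.
class EffectsBox:
    def __init__(self):
        self.state = "stopped"
        self.direction = "forward"
        # Map GPI input port number -> the function a pulse triggers.
        self.gpi_ports = {1: self.start, 2: self.stop, 3: self.reverse}

    def pulse(self, port):
        """Simulate a GPI trigger pulse (relay closure) on a port."""
        self.gpi_ports[port]()

    def start(self):
        self.state = "running"

    def stop(self):
        self.state = "stopped"

    def reverse(self):
        self.direction = "reverse" if self.direction == "forward" else "forward"

box = EffectsBox()
box.pulse(1)  # "start" the effect
box.pulse(3)  # "reverse" it
print(box.state, box.direction)  # running reverse
```

Note that the pulse itself carries no information: all meaning comes from which port it arrives on, which is exactly why GPI control is so inflexible compared with a serial protocol.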
  • the manufacturer of a VTR, switcher or other peripheral device provides a protocol instruction manual telling the user what editor control signals command what device functions.
  • the protocol for one manufacturer's VTR or device might be totally unlike the protocol for the same function on another manufacturer's VTR or device.
  • published protocol commands usually do not make full use of the peripheral device's capabilities, and frequently the VTR or device hardware might be updated by the manufacturer, thus making the published protocol partially obsolete.
  • the video industry has attempted to ameliorate the interface problem by adopting a few common protocols as standards, often with one peripheral device "emulating" the protocol requirements of another device.
  • creating customized interface software is extremely time consuming and requires considerable expertise.
  • a customized software interface might be written for a VTR, an established device whose capabilities are well understood.
  • a new peripheral device came to market its manufacturer typically chose to adopt a standardized emulation rather than to bear the burden of writing a customized interface.
  • the EDL is a complex collection of timecode numbers and cryptic designations for keying and dissolving operations.
  • the timecode numbers give the precise time and frame number where events occur on the finished tape, as well as "in” and “out” times at which a given video source was put onto the finished tape.
  • the operation designations simply state that at a given time frame the editor issued a given command, "RECORD" for example; however, the visual effect resulting from the command cannot generally be ascertained.
  • Existing editing systems are also deficient in at least two other aspects: they do not allow for the simultaneous control of two or more edits or edit layers, and they do not allow multiple users on remote editing consoles to simultaneously edit on a single editing machine. While the video monitor used with existing systems can display a single set of parameters advising of the status of the editor, applicants are aware of but one existing system capable of a windowed display showing the current edit and a zoom image of a portion of that edit. However at present no system provides the capability to display two windows simultaneously, each displaying a separate edit or, if desired, one window being under control of a first on-line editor while the second window is under control of a second on-line editor.
  • an object of the invention to provide an on-line editing system capable of interfacing in a universal manner with VTRs and peripheral accessories such that user instructions to the editor are independent of the manufacturer and model number of the accessories.
  • an on-line video editing system capable of receiving as first input a conventional EDL from an off-line editor, as a second input picture information from said off-line editor's picture storage medium, and capable of creating therefrom an EDL allowing the user of the on-line system to transfer all timecode, picture and audio from the off-line edit session into the on-line edit session.
  • It is a further object of the invention to provide an on-line editor capable of operation with one or more similar on-line editors, such that each editor may control substantially any device in an editing suite without rewiring the suite.
  • It is a further object of the invention to provide an on-line editor with the capability to control on a field-by-field basis essentially any device parameter for a controlled device within the edit suite, and further to maintain proper time synchronization when editing despite the passage of audio and/or video signals through devices contributing varying amounts of time delay.
  • the editor includes a video subsystem having a framestore, and also includes an audio board.
  • Applicants' interface software permits the editor to simultaneously interface, in a universal fashion, using serial communications ports with up to 48 controlled video devices such as recorders, switchers, etc. In addition to controlling these 48 devices (all of which may be switchers), applicants' editor can control an additional 48 devices requiring GPI control pulses.
  • the hardware and software comprising an editor according to the present invention permit substantially all devices at a site with one or more edit suites to remain permanently hardwired to one or more editors, each editor retaining the ability to control any such device. This flexibility minimizes the need for expensive routers and substantially eliminates the time otherwise needed to rewire the editing suite to accommodate various effects and the like.
  • the CPU board preferably includes a software library having a data or text file for each peripheral device with which the editor may be used. (Additional files may be added at a later date for devices for which a file does not already exist within the library.)
  • a given text file contains information relating to the specific commands for a controlled video device, and information relating to the internal functions (or internal architecture) of that device.
  • the text file structure is such that the internal workings of the device are mapped into the data file.
  • a text file format is used, permitting the contents of each file to be readily modified, even by a user unfamiliar with computer programming.
  • a user, by means of a keyboard for example, need only instruct the editor to issue a command to the device, for example the command PLAY to a video tape recorder, whereupon the text file will specify how to automatically translate the PLAY command into a command signal having the proper format for the device.
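The translation step described above can be sketched as follows; the text-file syntax and byte values are hypothetical stand-ins for whatever per-device format an implementation would choose, not the patent's actual file format.

```python
# Sketch of text-file-driven protocol translation: the editor keeps one
# human-readable text file per device model, and a generic routine maps a
# high-level command like PLAY into that device's byte format.
# The 'COMMAND = hex bytes' syntax below is hypothetical.
TEXT_FILE_VTR_A = """
PLAY   = 20 01
STOP   = 20 00
RECORD = 20 02
"""

def load_text_file(text):
    """Parse 'COMMAND = hex bytes' lines into a command table."""
    table = {}
    for line in text.strip().splitlines():
        name, _, payload = line.partition("=")
        table[name.strip()] = bytes(int(b, 16) for b in payload.split())
    return table

def translate(command, table):
    """Translate a high-level user command into the device's byte format."""
    return table[command]

table = load_text_file(TEXT_FILE_VTR_A)
play_bytes = translate("PLAY", table)  # the device-specific PLAY command
```

Because the table is rebuilt from the text file at load time, supporting a new device (or a new function on an existing device) means editing a text file rather than reprogramming the editor.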
  • the interface software further allows a user to ask the editor, preferably by means of user understandable text files, to perform a desired edit, to "WIPE A/B" for example.
  • each video controlled or other peripheral device will contain its own imbedded text file. This procedure would allow the editor to issue a "REQUEST FOR TEXT FILE" command to the device whereupon, upon identifying the device, the proper commands to operate the device could be downloaded into the CPU board.
  • Applicants' EDL software provides for a scheduling table available on each communications processor to allow the prioritizing of commands, and the execution of commands at a later time and/or repeatedly.
  • the scheduling of commands and related housekeeping provided by the scheduling table results in a smoother flow of data from the editor, minimizing the likelihood of data overflow (or bottlenecks) at any given time.
  • the present invention provides the CPU board with a tree-like hierarchical EDL database that is unlimited in size and is virtual with respect to time.
  • Applicants' EDL software creates a unique "node” for every edit step, and provides a complete historical record enabling a user to later determine exactly what occurred at every frame during an editing session.
  • the EDL generates and stores identification of each source media, offset within the media, the number of edit revisions, the nature of the edit, and so forth.
  • the EDL allows a user to "un- layer” or “re-layer” layered video effects, or to undo any other effect, in essence, to turn back the clock and recreate the original video before editing, or before a particular editing stage.
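The "turn back the clock" capability follows naturally from the node-per-edit-step structure: if each node keeps a reference to its parent stage, any earlier state remains reachable. A minimal sketch, with illustrative field names (the patent's actual node record holds more, e.g. source media identification and offsets):

```python
# Sketch of a tree-like EDL: every edit step creates a new node pointing at
# its parent, so the full history survives and earlier stages can be
# recovered ("un-layered"). Field names are illustrative.
class EditNode:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent  # previous edit stage; None for original source

def history(node):
    """Walk back from a node to the original source, newest step first."""
    steps = []
    while node is not None:
        steps.append(node.description)
        node = node.parent
    return steps

original = EditNode("source reel 004")
layer1 = EditNode("key title over source", parent=original)
layer2 = EditNode("dissolve to reel 007", parent=layer1)

# "Turning back the clock" is just re-pointing at an earlier node:
assert layer2.parent.parent is original
```

Because nodes are never overwritten, undoing a layer does not destroy it; re-layering is simply a matter of following the stored parent links forward again.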
  • applicants' system stores EDL information and permits it to be presented in a variety of formats, including graphic and visual presentations.
  • the editor is able to retrieve and display a user identified framestored head or tail image from a desired edit clip.
  • a motorized control allows the user to move to a desired frame within a source media, either relatively or absolutely, simply by moving an indicator bar on the editor control panel. If the bar is moved to the left, for example, the source media rewinds, if moved to the right, the tape winds.
  • the left extreme of the sliding indicator can be made to correspond to the beginning of the entire tape or the beginning of a clip therein, with the right extreme of the indicator corresponding to the tape end or to the segment end, as the user desires.
  • the indicator bar moves correspondingly, to provide a visual representation of where within the tape or clip the user is at any given moment.
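The slider behaviour described in the bullets above amounts to a linear mapping between indicator position and frame number, applied in both directions (user moves the bar, or the motor moves it to track the tape). A sketch, with illustrative frame counts:

```python
# Sketch of the motorized slider mapping: the left extreme corresponds to the
# first frame of the tape or clip, the right extreme to the last frame.
def slider_to_frame(position, first_frame, last_frame):
    """Map slider position in [0.0, 1.0] (left..right) to a frame number."""
    position = max(0.0, min(1.0, position))  # clamp to the physical travel
    return first_frame + round(position * (last_frame - first_frame))

def frame_to_slider(frame, first_frame, last_frame):
    """Inverse mapping: drive the motorized indicator to track the tape."""
    return (frame - first_frame) / (last_frame - first_frame)

# A 30-second NTSC clip: 900 frames starting at frame 1800 (illustrative).
assert slider_to_frame(0.0, 1800, 2700) == 1800  # left extreme = clip start
assert slider_to_frame(1.0, 1800, 2700) == 2700  # right extreme = clip end
assert slider_to_frame(0.5, 1800, 2700) == 2250
```

Switching the slider between "whole tape" and "single clip" modes, as the text describes, only changes the `first_frame`/`last_frame` endpoints; the mapping itself is unchanged.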
  • the hardware and software structure of the editor system provides the capability to simultaneously control two or more edits or edit layers, and allows multiple users on remote consoles to simultaneously edit on a single on-line editor.
  • Applicants' system is also capable of receiving, as input, a conventional EDL from an off-line editor and receiving the video information commonly dumped to disk storage by an off-line editor, and generating therefrom an EDL capable of presenting video images, thereby allowing an off-line editor to communicate picture information to applicants' on-line editor.
  • FIG. 1 shows a generalized editing system according to the present invention
  • Fig. 2A shows a prior art editor interfaced with a VTR
  • Fig. 2B is a block diagram of the interface between the prior art editor and VTR shown in Fig. 2A;
  • Fig. 3A shows an editor according to the present invention interfaced with a VTR
  • Fig. 3B is a block diagram of the interface between the editor and VTR shown in Fig. 3A;
  • Fig. 4 is a scheduling table according to the present invention.
  • Fig. 5 shows schematically the steps required to layer three images
  • Fig. 6 is an example of an EDL generated by a prior art editor
  • Fig. 7 is an example of the types of visual presentations available from applicants' EDL;
  • Fig. 8 is a generalized perspective view of a motorized slider control according to the present invention.
  • Fig. 9 is a block diagram of an on-line editing system according to the present invention.
  • Fig. 10 is a block diagram of a prototype CPU board 102
  • Figs. 11A, 11B, 11C are an information model showing the hierarchical structure of applicants' EDL software
  • Figs. 12A-12C schematically illustrate how applicants' EDL forms a hierarchical structure in the presence of layering
  • Fig. 13 is an information model showing the hierarchical structure of applicants' universal interface software
  • Fig. 14A is an informational model of applicants' keyboard configuration, while Fig. 14B is a data flow representation of Fig. 14A; Fig. 15 depicts multiple editors according to the present invention coupled to control any device in an editing suite;
  • Figures 16-18 are a representation of a display screen generated in use of the present system for precise selection of video images for inclusion in, or exclusion from, an edited videotape;
  • Figure 19 is a representation of another display screen generated in use of the present system after the precise selection of the video images to prepare the edited video tape;
  • Figure 20 is a representation of a display screen generated in use of the system for editing of video images for inclusion in, or exclusion from, an edited videotape.
  • Fig. 1 shows a typical on-line editing system according to the present invention as including an on-line editor 2, a universal interface 4 (which includes a host processor 6, a communications processor 8, and related software 10) , one or more devices for storing video signals such as video tape recorders (VTRs) 12, and assorted peripheral accessory devices ("devices") such as video switchers 14, 14', and a special effects box 16.
  • an editing system according to the present invention may include other controlled video devices in addition to or in lieu of the VTRs 12, switchers 14, and box 16 shown.
  • Other controlled video devices could include a digital disk recorder, a character generator, a timebase corrector, a still store, and even an audio switcher, and may include more than one of each device.
  • the video editor disclosed herein is capable of simultaneously controlling up to 48 devices through serial communication ports, all of which devices may be video switchers 14, 14'.
  • the editor disclosed herein can control an additional 48 devices using GPI trigger pulses.
  • the VTRs and devices are each capable of performing various functions, and each contains its own interface 18, which in turn includes electronic circuitry and software that allow the various VTRs and devices to perform functions upon command from the editor 2.
  • the editor 2 also communicates with a control panel 20 that allows a user to issue commands to the editor.
  • Control panel 20 preferably includes user input devices such as a keyboard 22, a trackball 24, a shuttle control 26, optical encoders 28 and applicants' new motorized control indicators 30.
  • the editor 2 includes a disk drive 32 allowing an edit decision list (EDL) to be input or output using, for example, a diskette 34.
  • a monitor 36 connects to the editor 2 for displaying video images and other graphic information for the user. The multitasking capabilities of the present invention permit monitor 36 to display two or more edits simultaneously, and even allow one of the two edits to be under control of another editor 2' networked with editor 2.
  • editor 2 is capable of generating a virtual EDL 38 having unlimited length and containing video as well as device control information.
  • the manufacturer of each device (e.g., VTRs 12, switchers 14, 14', etc.) publishes a manual that states what protocol commands must be presented to the interface internal to that device to command the performance of a given function.
  • if VTR 12 is a Sony model BVH 2000, the Sony protocol manual will give the command signal that must be presented to the interface 18 of that VTR 12 to cause the VTR to enter the RECORD mode.
  • a VTR from a different manufacturer, an Ampex VPR-3 for example, will typically have a different protocol command sequence and require a different signal at the VTR interface 18 for a given function, such as RECORD.
  • Fig. 2A shows a prior art editor 40 interfacing with a device such as a VTR 12.
  • the editor 40 has an interface 42 that includes a host processor 44, a communications processor 46, and related software 48.
  • the prior art interface 42 is essentially a device-specific interface, e.g., an interface that is specific for a given brand and model of VTR or other device.
  • Fig. 2B is a block diagram showing details of the interfacing elements.
  • the host and communications processor 44, 46 must each contain every protocol command required to command the various functions published for the VTR 12 (or other device to be controlled) .
  • the communications processor 46 includes a hardwired look-up table 50 that provides a one- to-one mapping providing a protocol format translation for each command function of the VTR 12.
  • the lookup table 50 is essentially dedicated to the particular brand and model of VTR 12. In essence, lookup table 50 translates a "high level" command ("PLAY" for example) issued to the editor into a "low level” machine understandable command going through the host processor 44 in the particular format required for the interface 18 within the VTR 12.
  • the lookup table 50 will have been constructed to perform the necessary (2) to (23) translation.
  • the protocol output from the communications processor 46 is shown in Fig. 2B as being a 4-byte wide command 51.
  • This command 51 includes the byte count (BC), start message (STX), and transport (Tran) information in addition to the specific command ordering VTR 12 to enter the PLAY mode (Play 23).
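The prior-art framing described above can be sketched as a hardwired lookup table feeding a fixed command framer; all byte values below are illustrative, chosen only so the result is the 4-byte-wide command of Fig. 2B.

```python
# Sketch of the prior art's hardwired lookup table 50: one device-specific
# entry per function, framed with byte count (BC), start-of-message (STX)
# and transport (Tran) bytes. All byte values here are illustrative.
STX, TRAN = 0x02, 0x20

LOOKUP_TABLE_SONY_BVH2000 = {"PLAY": 0x23, "STOP": 0x20, "RECORD": 0x21}

def build_command(function, table):
    """Frame a low-level command: [BC, STX, Tran, function code]."""
    payload = [TRAN, table[function]]
    bc = len(payload) + 1  # byte count covers itself plus the payload
    return bytes([bc, STX] + payload)

cmd = build_command("PLAY", LOOKUP_TABLE_SONY_BVH2000)
assert len(cmd) == 4  # the 4-byte-wide command 51 of Fig. 2B
```

The weakness the patent identifies is visible here: `LOOKUP_TABLE_SONY_BVH2000` is baked into the editor, so swapping in a different brand or model of VTR means replacing the table itself.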
  • the protocol information contained in the host and communications processors 44, 46 would each have to be updated, a time-consuming and technically demanding task. It must also be emphasized that if a different brand or model of VTR 12 is connected to the prior art editor 40, the look-up table 50 would have to be changed.
  • FIGs. 3A and 3B show the interface 52 used in the present invention.
  • an editor 2 according to the present invention includes a host processor 52 (an Intel 960CA, for example), a communications processor 54 (such as a Z80), and a software data file (shown as 56) whose contents may be readily modified by user input (shown as 60).
  • the data file 56 contains two types of information: data relating to specific commands for the VTR 12 (or other device under command of the editor 2) , and data relating to the internal functions of the VTR 12 (or other device) , such as how signals within the VTR are routed.
  • while FIGs. 3A and 3B illustrate a data or text file 56 embedded within the editor 2, alternatively the text file 56 could be downloaded from the various peripheral devices (if they were so equipped), or could even be entered manually into the editor 2 by the user.
  • the data file 56 is a high level text file that is understandable to a non-technical user.
  • the term "text file" will be used, although it is for the convenience of the user (and not a prerequisite of the present invention) that data file 56 may be written to be user understandable.
  • a text file 56 accessible to the host processor 54 will then be used to specify how to automatically translate the user command (“PLAY") into a command 61 having the proper format for the VTR 12.
  • Fig. 3B provides many advantages over the prior art method of interface shown in Fig. 2B.
  • a user by means of input 60 (a keyboard, for example) can modify the text file 58 providing for the newly added function Foo.
  • the text file 58 is preferably in a format understandable to a human, no special expertise is required to make the modification.
  • a user wants to create the video effect wipe from composite 1 to composite 2, i.e., to create a multiple layer image that transitions from a layered image (composite 1) to another layered image (composite 2) for example in a left-to-right direction.
  • a user would have to first figure out how the effect should be accomplished (i.e., program a video switcher to perform each key and the wipe, and then trigger the switcher to perform just that effect) .
  • the editing system in essence is "frozen” and can only perform that effect in that fashion.
  • the user knows one way and one way only to accomplish the desired effect.
  • Such a user may be thwarted if a single piece of equipment necessary is unavailable, notwithstanding that the remaining equipment might still be capable of creating the effect using a different procedure, using multiple tape passes for example.
  • the present invention allows a user to create the above effect simply by issuing the user understandable high level command "WIPE FROM COMPOSITE 1 TO COMPOSITE 2" to the editor 2, the user having previously defined composite 1 and composite 2, e.g., in the edit decision list 38 (EDL) contained within editor 2.
  • the editor 2 will examine a preferably stored library of text files 58 defining the characteristics and protocol requirements of the various peripheral devices 12, 14, etc., and will cause the proper commands to issue from the editor 2 to the necessary devices at the appropriate times. It is significant to note that the single command "WIPE FROM COMPOSITE 1 TO COMPOSITE 2" simply does not exist for the various pieces of equipment being controlled, yet the editor 2, because of its text file capability, allows even a lay user to create this complicated visual effect.
  • a user can instruct the editor 2 to "WIPE FROM COMPOSITE 1 TO COMPOSITE 2" whereupon the editor 2 will advise the user what combinations and sequences of available equipment (VTRs, switchers, etc.) should be used to create the effect.
  • the editor 2 can make technical decisions for a user, allowing the user to be artistically rather than technically creative.
  • each piece of peripheral equipment (VTRs, switchers , etc.) interfaced to an editor 2 could include imbedded text file information 64 within its interface 18 automatically identifying that piece of equipment and its full capabilities to the editor 2.
  • suppose VTR 12 is replaced with a different brand machine, VTR 12', whose interface 18' includes an imbedded text file 64'.
  • the user would merely issue a high level REQUEST FOR TEXT FILE command whereupon editor 2 and data file 58 would automatically cause the proper commands to be downloaded to properly operate VTR 12'.
  • the imbedded data 64' in the peripheral device 12' would in essence allow a "handshaking" identification function, somewhat similar to the manner in which two different modems initially identify themselves to one another to determine baud rate and any other common protocol. If the VTR 12' did not include an imbedded data file, using text files the user could identify the brand and model for VTR B to the host processor 54, whereupon the processor 54 would know how to communicate with VTR 12' by virtue of a stored library of text files for various devices. Alternatively, the processor 54 might be programmed to attempt to communicate with VTR 12' (or other device) using a variety of protocols until sufficient "handshaking" occurred, whereupon the device would have been identified to the editor 2. If no previously written text file concerning the device was available, the user could simply add an appropriate text file into the processor 54 manually.
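The identification sequence described above (embedded text file first, then a stored library keyed by the device's identity, then manual entry) can be sketched as a simple fallback chain; the dictionary representation of devices and the library keys are illustrative, not from the patent.

```python
# Sketch of the device-identification fallback chain. A device is modelled
# here as a plain dict; real devices would answer over a serial port.
def identify_device(device, library, manual_entry=None):
    # 1. Preferred: the device carries its own imbedded text file, which it
    #    returns in response to a REQUEST FOR TEXT FILE command.
    embedded = device.get("embedded_text_file")
    if embedded:
        return embedded
    # 2. Otherwise, match the device's identity against a stored library of
    #    text files (here a name match stands in for a successful handshake).
    for name, text_file in library.items():
        if name == device.get("protocol"):
            return text_file
    # 3. Last resort: the user types in an appropriate text file manually.
    return manual_entry

library = {"sony_bvh2000": "sony commands...", "ampex_vpr3": "ampex commands..."}
new_vtr = {"protocol": "ampex_vpr3"}  # no embedded file
assert identify_device(new_vtr, library) == "ampex commands..."
```

The analogy in the text to modem handshaking is apt: the editor and device negotiate until one of these three routes yields a usable protocol description.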
  • the text file and device modeling aspect of the present invention allows field-by-field control of essentially any device parameter for a device under control of editor 2.
  • a triggered input causes an image to move as dictated by the device's internal programming, typically according to an algorithm within the device. For example, commanding "run" normally causes a digital effects device to perform the currently set-up effect. In essence, the device's internal structure and programming can produce a new image for every video field.
  • a prior art editor basically accepts the effect parameter transitions that were programmed into controlled devices at time of manufacture.
  • the present invention permits modification of device parameters, including parameters that determine the position of an image within a device, on a field-by-field basis.
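The contrast between the two approaches can be sketched as follows: instead of firing a single "run" trigger and accepting the device's canned transition, the editor computes and sends an explicit parameter value for every field. The parameter (image x-position) and frame counts are illustrative.

```python
# Sketch of field-by-field parameter control: the editor generates one
# parameter value per video field for a controlled device, rather than
# triggering the device's built-in effect algorithm.
def horizontal_move(start_x, end_x, fields):
    """Generate one x-position per field for a linear image move."""
    step = (end_x - start_x) / (fields - 1)
    return [start_x + step * f for f in range(fields)]

# Move an image from x=0 to x=100 over 5 fields: the editor issues one
# position command per field instead of a single "run" trigger.
positions = horizontal_move(0, 100, 5)
assert positions == [0.0, 25.0, 50.0, 75.0, 100.0]
```

Any other trajectory (ease-in, a curve, a user-drawn path) is obtained the same way, by substituting a different per-field generator; the controlled device never needs to know the motion law.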
  • the image coming from a controlled device can be completely controlled in different ways under command of editor 2.
  • the communications between the host and communications processors 54, 56 permit the creation of scheduling tables (shown as 62) which, among other tasks, are capable of layering protocol commands according to priority.
  • the editor 2 preferably includes four communications processor boards 104, each board preferably being able to simultaneously control 12 serial ports. (This gives the present system the ability to simultaneously serially control 48 devices in addition to 48 GPI controlled devices.)
  • Each communications processor board 104, 104' includes a scheduling table 62 applicable to devices interfacing to that particular board 104, 104'.
  • protocol commands can be assigned priorities, with some commands considered more important than others, and with protocol commands of equal priority being held within a common buffer within the editor 2.
  • the PLAY command is given the highest priority ("0") and PLAY and any other layer 1 commands are grouped (or buffered) together.
  • the present invention recognizes that relative priorities may exist.
  • the PLAY command will issue before any other command of lower priority.
  • Commands from the CPU board 102 which enter an individual communications processor 104 are sorted into the schedule table 62 according first to delay, and then to priority.
  • the command leaves the scheduler table to be grouped together with other commands at the same protocol layer into the appropriate protocol output buffer (preferably 8 such buffers being available), such as buffer 1 in Fig. 3B.
  • the buffers are assembled by the communication processor boards 104 into the correct protocol layer order and sent to the device.
  • Fig. 4 shows the contents of a typical scheduling table 62 prepared by an editor 2 according to the present invention.
  • the first table column shows by how many fields (if any) execution of a given command should be deferred. For example, while editor 2 is shown as having issued the PLAY command "now", execution of this command is to be deferred for 99 fields hence.
  • the third and fourth columns demonstrate the prioritizing ability of the present invention.
  • the fifth column represents "housekeeping" data used by the editor 2 to supervise the interface process, while the sixth column provides confirmation that commands were actually sent from the editor 2 to the VTR 12 or other device under control.
  • scheduling table 62 has no knowledge of specific commands, all information pertaining to specific commands coming from the text file 58 within the host processor 54 in the editor 2. This ability to schedule commands minimizes problems relating to data overflow, or "bottlenecks" that can occur in prior art systems.
  • the present invention spreads out the amount of data, allowing the host processor 54 to send commands to a communications processor channel 104, where the command resides until the proper time for its execution. This flexibility allows low priority status messages, for example, to reside in the communications processor board 104 until the relative absence of more pressing commands permits the communications channel to automatically make inquiry, for example, as to the status of a device (e.g., is VTR #12 turned on and rewound) .
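The delay-then-priority sorting and per-layer buffering described above can be modeled in a short sketch. This is purely illustrative Python with invented names (ScheduledCommand, SchedulingTable); the patent discloses no source code, and the actual communications processors are Z80-based firmware.

```python
from dataclasses import dataclass

@dataclass
class ScheduledCommand:
    name: str
    delay_fields: int   # first table column: defer execution this many fields
    priority: int       # 0 = highest, as for PLAY
    layer: int = 1      # protocol layer / output buffer number

class SchedulingTable:
    """Toy model of scheduling table 62: commands are sorted first by
    delay, then by priority, as described for Fig. 4."""

    def __init__(self):
        self.entries = []

    def insert(self, cmd):
        self.entries.append(cmd)
        self.entries.sort(key=lambda c: (c.delay_fields, c.priority))

    def tick(self, num_buffers=8):
        """Advance one reference video field: commands that are due are
        grouped into per-layer protocol output buffers; the rest wait."""
        buffers = {n: [] for n in range(1, num_buffers + 1)}
        waiting = []
        for cmd in self.entries:
            if cmd.delay_fields <= 0:
                buffers[cmd.layer].append(cmd.name)
            else:
                cmd.delay_fields -= 1
                waiting.append(cmd)
        self.entries = waiting
        return buffers
```

Because entries are kept sorted by (delay, priority), a PLAY at priority 0 always leaves the table ahead of lower-priority commands due in the same field, matching the behavior described for layer 1 commands.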
  • the scheduling flexibility demonstrated in Fig. 4 was simply not available because of the requirement for dedicated hardwired protocol mapping for each specific function.
  • EDLs are especially deficient in providing an historical record where layered images have been produced.
  • VTR A has source tape showing a mountain landscape
  • VTR B has source tape showing an automobile
  • VTR C has source tape showing a telephone number.
  • The "overlap" is between the onset of the material on VTR C (occurring at time t2) and the tail end of the material on VTR B (ending at time t3).
  • the telephone number on VTR C begins at time t2, which is before the end of the video on VTR B.
  • the information available on a prior art EDL will show events occurring at t1, t2 and t4.
  • the overlap which occurs between time t2 and time t3 is not readily apparent from the EDL, and once the EDL is "cleaned" this information is not historically available for later use.
  • applicants' system is capable of making a historical record of all events occurring in the production of the final tape.
  • Applicants' EDL is able to provide a rich source of information to a user, including, for example, at what frame into the material on VTR B and at what time t1 occurred, the duration of the recording from VTR B (i.e., when t3 occurred), at what frame and at what time the material from VTR C was recorded onto the final tape (i.e., when t2 occurred), and at what frame and time recording from VTR C ceased (i.e., when t4 occurred).
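The overlap information applicants' EDL retains, and which a "cleaned" prior art EDL discards, amounts to comparing record-in/record-out ranges pairwise. The following Python sketch (illustrative only; names and frame numbers are invented) shows the kind of check involved:

```python
def find_overlaps(events):
    """events: (source, record_in, record_out) frame ranges on the output
    timeline. Returns every overlapping pair together with the overlap
    extent, so layering history is never silently lost."""
    overlaps = []
    for i, (src_a, a_in, a_out) in enumerate(events):
        for src_b, b_in, b_out in events[i + 1:]:
            lo, hi = max(a_in, b_in), min(a_out, b_out)
            if lo < hi:  # ranges genuinely intersect
                overlaps.append(((src_a, src_b), lo, hi))
    return overlaps
```

With VTR B recorded over frames 100-300 and VTR C over 250-400 (the t2-before-t3 situation of Fig. 5), the function reports the 250-300 overlap that a cleaned EDL would erase.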
  • Traditionally, editors have used ripple to insert a new edit between segments in an existing edit decision list.
  • the ripple slides the following edits down the timeline by the duration of the new edit.
  • When inserting an edit into the list, the user must indicate "insert with ripple" or "insert without ripple". Where the list is really a set of discrete subprograms, e.g., acts in a movie, only a part of the list needs to be rippled on insertion.
  • Some prior art editing systems permit the user to insert the edit and then select a range of edits and ripple only these. While serviceable, this prior art mechanism is awkward. Further, the mechanism breaks down with an edit decision list representing a multilayered program.
  • A further deficiency of ripple relates to edits that are layers, e.g., keys. Often the program in-point of the layer is marked relative to some underlying material. If this underlying material itself moves as a result of a ripple, the user must take care to ripple all of the layers that were marked over this material. At best this prior art approach is cumbersome, and it completely fails to represent the relationship between the layer and the background that the user had in mind.
  • Another example of the deficiency of prior art systems involves overlaying or "dropping in" segments over existing material, e.g., skiing scenes interspersed into an interview with a skier.
  • the material to be dropped in does not affect all of the channels on the program, and in fact is marked against channels that it does not affect.
  • the skiing images are video and are marked against the voice audio channel to coordinate with specific comments in the interview.
  • the new edit for the drop-in must ripple with the voice-audio channel but may be independent of any other video in the program.
  • Applicants' hierarchical EDL structure permits the user to specify the relationship of the edit to the rest of the program.
  • the EDL actually contains the relationships that the user has set up, which relationships are automatically maintained by editor 2.
  • the user specifies the timing relationship of the in- point of the edit, and the behavior of any edits that will follow this edit. These relationships may be graphically indicated on monitor 36, for example, with an arrow pair 400 as depicted in Figure 21.
  • An out-point arrow 402 represents the most common case of "insert with or without ripple”. When drawn, arrow 402 indicates that following edits will ripple after this edit, and when blanked that this edit will be dropped in over any underlying material. In the preferred embodiment, the arrow 402 defaults on, but may be toggled on and off by a user of editor 2.
  • the in-point arrow 404 is preferably always drawn and has three forms to indicate the in-point timing relationship.
  • a left pointing arrow (e.g., arrow 404) indicates that this edit will ripple against any preceding edit. In the preferred embodiment this is the default mode and represents the most common case for simple cuts-and- dissolves editing.
  • a downward-pointing arrow 406 indicates that this edit was marked against some background context and must ripple with that context whenever the context moves within the EDL. Preferably this is implemented as the default case for layers and drop-ins, marked against some point on an existing program timeline.
  • An upward-pointing arrow 408 indicates that this edit was marked at an absolute time on the program timeline, independent of other edits in the EDL, and should never ripple. While less commonly used, this is useful, for example, for titles that must come in at a particular point in a program, regardless of what video is underneath.
  • a channel name 410 is shown to indicate more specifically how the relationship is constructed.
  • the default mode is for an edit to be marked against the same channels it uses and affects: a video edit is marked relative to video material.
  • applicants' system permits the user to set this channel to describe the sort of relationship required for the above-described skier interview drop-in example.
  • the present invention permits the user to describe complex and powerful timing relationships using simple graphical tools.
  • the present invention performs the work of maintaining these timing relationships for the user, eliminating the need for after-the-fact ripple tools.
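The three in-point modes (arrows 404, 406 and 408) determine how an edit responds when material is inserted ahead of it. A minimal Python sketch, with invented names and frame numbers (the patent describes behavior, not code):

```python
from dataclasses import dataclass
from typing import Optional

# In-point modes mirroring the three arrow forms:
RIPPLE_PRECEDING = "ripple"    # left-pointing arrow 404: slides with preceding edits
RIPPLE_CONTEXT = "context"     # downward arrow 406: moves with its background material
ABSOLUTE = "absolute"          # upward arrow 408: fixed program time, never ripples

@dataclass
class Edit:
    name: str
    in_point: int                        # program-timeline frame
    mode: str = RIPPLE_PRECEDING
    context: Optional["Edit"] = None     # background an arrow-406 edit was marked against

def insert_with_ripple(edits, at_point, duration):
    """Slide later edits down the timeline by the new edit's duration,
    honoring each edit's in-point mode."""
    moved = set()
    for e in edits:
        if e.mode == RIPPLE_PRECEDING and e.in_point >= at_point:
            e.in_point += duration
            moved.add(e.name)
    for e in edits:
        # context-marked layers follow their background if it moved
        if e.mode == RIPPLE_CONTEXT and e.context is not None and e.context.name in moved:
            e.in_point += duration
    # ABSOLUTE edits (arrow 408) never move
```

In the skier-interview example, the skiing drop-in marked against the interview moves whenever the interview itself ripples, while an absolute-timed title stays put, which is exactly the relationship maintenance described above.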
  • Fig. 6 shows a typical prior art EDL, presented in the commonly used CMX format, although other formats are also available.
  • the EDL is commonly stored on a computer floppy diskette for possible later use as input should further revisions to the image be required.
  • the EDL may also be displayed on a video monitor connected to the editor.
  • all of the edit information is displayed as timecode numbers. For example, the rather cryptic second line of Fig. 6 advises that a GPI trigger pulse was sent to a device (labelled by the user as DFX and perhaps referring to an effects box) at the record in point (RI) plus 15:16, i.e., 15 seconds plus 16 frames (+00:00:15:16).
  • the next line advises that a GPI trigger pulse was sent to a Grass Valley Company switcher capable of fading to black (FTB) , the pulse being sent at the record in point plus 22:00, i.e., 22 seconds plus zero frames.
  • the various timecode numbers are themselves recorded into the various video tapes.
  • the first line in the EDL provides timecode information showing, for example, that the source tape in and out times for these events were 1 hour, 16 minutes, 45 seconds and 20 frames, and 1 hour, 17 minutes, 9 seconds and 4 frames respectively. Further, the first line advises that the output record tape in and out points were at 1 hour, 4 minutes, 45 seconds and 21 frames, and 1 hour, 5 minutes, 9 seconds and 5 frames.
  • the EDL shown in Fig. 6 is commonly referred to as a "dirty EDL" because it does not reflect contiguous in and out timecodes. Since even a prior art EDL might be used at a later time to try to reconstruct the final visual effect, there was no point in retaining overlap information in such an EDL. For example, it would be pointless for a user to spend time trying to recreate an event occurring at the beginning of an EDL only to discover later that the effect was recorded over by something else. Therefore "dirty EDLs" are routinely processed with off-the-shelf software to produce a "clean EDL", namely an EDL permitting no timecode overlaps. For example, with reference to Fig. 5, a t2-t3 "overlap" results because the onset of recording from VTR C at time t2 occurred before the time t3 that recording from VTR B ceased.
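The HH:MM:SS:FF timecodes in the first EDL line can be checked with a few lines of arithmetic. Assuming 30 frames per second (non-drop-frame) purely for illustration; function names are invented:

```python
FPS = 30  # assumed NTSC non-drop rate for this illustration

def tc_to_frames(tc):
    """'HH:MM:SS:FF' -> absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n):
    """Absolute frame count -> 'HH:MM:SS:FF'."""
    f = n % FPS; n //= FPS
    s = n % 60; n //= 60
    m = n % 60; h = n // 60
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# duration of the first line's source material: in 01:16:45:20, out 01:17:09:04
duration = tc_to_frames("01:17:09:04") - tc_to_frames("01:16:45:20")
```

The source in/out pair above spans 704 frames, i.e., 23 seconds and 14 frames, matching the roughly 23-second record-side span (01:04:45:21 to 01:05:09:05) in the same line.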
  • a prior art EDL such as shown in Fig. 6 reflects only the timecodes for the various VTRs. If the effect being "documented" was a multi-layer effect, most of the information needed to know what was recorded when and atop what is simply not present.
  • the prior art EDL reflects the issuance of general purpose interface ("GPI") triggers to various devices, but the EDL neither knows nor documents what the triggered device was actually doing, what the device actually contributed to the edit, or what data is actually relevant to the action being performed.
  • the software within the present invention creates a tree-like EDL database containing a full historical record documenting every step of the editing process.
  • Applicants' EDL data base allows a user to later know exactly what occurred during an editing, and (if desired) to undo editing effects to recover earlier stages of the editing process.
  • a user of the present system can, for example, be provided with all information needed to recreate the visual effect shown graphically in Fig. 5, including the ability to unlayer one or more video layers.
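The tree-like EDL database described above can be pictured as nodes whose parent links record what each edit was built upon, so any stage can be recovered by walking back toward the root. An illustrative Python sketch (names are invented; the patent's actual data structures accompany Figs. 11A-11C):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EditNode:
    """One editing step; children are later edits layered on this result."""
    description: str
    parent: Optional["EditNode"] = None
    children: list = field(default_factory=list)

    def __post_init__(self):
        if self.parent is not None:
            self.parent.children.append(self)

def history(node):
    """Walk back to the root, recovering every step that produced this state;
    'undo' is simply returning to node.parent."""
    steps = []
    while node is not None:
        steps.append(node.description)
        node = node.parent
    return list(reversed(steps))
```

For the Fig. 5 example, layering VTR B over VTR A and then VTR C over that result yields a three-node chain whose full history remains available, unlike a cleaned linear EDL.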
  • Fig. 7 shows by way of illustration some of the displays available (for example on monitor 36 in Fig. 9) to a user of the present system.
  • a user might direct the present invention (using commands issued from the control panel 20 for example) to display a timecode oriented presentation 63 which presentation the user may elect to display in the somewhat limited format of a prior art EDL.
  • the present invention allows a user to call upon the EDL data base to present a still image 65 corresponding to a given point in an edit sequence.
  • Applicants' motorized control 30 enables a user to advance or reverse a video source until the desired image 65 is displayed using information available from the video sub-system within editor 2.
  • a time line presentation 67 similar to what is shown in Fig. 5.
  • applicants' EDL database allows a tree-like representation 69 to be presented, visually depicting in block form the various editing stages in question. (A more detailed description concerning the tree-like EDL database structure accompanies the description of Figs. 11A-11C herein.)
  • Device 30 is preferably used in the present invention to display and to control absolute and relative positions within a video tape reel.
  • VTR 12 holds a source tape 68 containing 30 minutes of video of which 10 seconds, or 240 frames, occurring somewhere in, let us say, the first third of the tape, are of special interest to the user, who perhaps wishes to use the 10 second segment for a special video effect.
  • the user would like to rapidly view the source tape 68 and having identified where therein the 10 second segment lies, be able to literally "zero" in on that segment.
  • the source tape 68 is displayed and digital information identifying the frames or absolute time corresponding to the beginning and end of the 10 second segment is noted. The user must then enter this numerical information into the editor, commanding the VTR 12 to rewind to the beginning of the 10 second segment.
  • the simple control 30 of Fig. 8 allows a user to both control and display positional information as to the segment of tape 68 passing over the heads (not shown) of the VTR 12.
  • the control 30 includes a sliding indicator bar 70 equipped with a sensor 72 that signals physical contact with a user's hand 74.
  • the control panel 20 includes a slot 76 through which the indicator bar 70 protrudes such that it is capable of sliding left and right (or up and down if the slot is rotated) within the slot 76.
  • the position of the indicator bar 70 can be determined by a drive motor 78 or by the user's hand 74.
  • the sensor 72 and a sense logic means 79 operate such that if the motor 78 is causing the bar 70 to slide when the user's hand 74 touches the bar 70, the motor 78 is disabled, allowing the user to slide the bar 70 left or right as desired.
  • a servo loop, shown generally as 80, provides feedback between the motor 78 and the optical encoder 88 such that unintended vibrations of the control panel 20 do not cause the indicator bar 70 to command movement of the tape 68.
  • Such unintended vibratory motions typically would be characterized by the absence of the user's hand 74 from the bar 70, and often exhibit a rapid left-right-left-right type motion.
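The vibration rejection just described amounts to ignoring encoder motion that alternates direction rapidly while no hand is sensed on bar 70. The patent gives no algorithm; the following is a hypothetical sketch of such sense logic, with invented names:

```python
def is_vibration(deltas, hand_present):
    """Treat encoder motion as vibration (to be ignored) when no hand is
    sensed and the recent position deltas alternate sign nearly every
    sample, i.e., a rapid left-right-left-right pattern."""
    if hand_present or len(deltas) < 4:
        return False  # deliberate motion, or too little data to judge
    signs = [d > 0 for d in deltas if d != 0]
    flips = sum(a != b for a, b in zip(signs, signs[1:]))
    return flips >= len(signs) - 1  # direction reversed on (almost) every sample
```

Motion that passes this filter, or any motion with the hand on the bar, would be allowed to command tape movement; everything else is suppressed, approximating the servo loop 80 behavior.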
  • a pulley cable 81 passes through a rear portion 82 of the bar 70 and loops about the shaft 84 of the drive motor 78, and about the shaft 86 of a rotation encoder such as optical encoder 88.
  • Encoder 88 includes a vaned disk 90 that rotates with rotation of shaft 86.
  • a light emitting diode (LED) 92 and a light detector 94 are located on opposite sides of the disk 90.
  • As the disk shaft 86 rotates, intermittent pulses of light are received by detector 94 corresponding to the direction and rate of rotation of the disk 90.
  • Such encoders 88 are commercially available, with an HP HEDS 5500 unit being used in the preferred embodiment.
  • Rotation of disk 90 results either when the user slides the bar 70 left or right within the slot 76, or when the bar 70 is moved left or right by the drive motor 78. Movement caused by the drive motor 78 results when positional signals from the VTR 12 pass through circuit means 96 commanding the drive motor 78 to rotate clockwise, or counterclockwise to reposition the indicator bar 70 according to whether the tape 68 is being transported forward or in reverse.
  • the output of the encoder 88 is also connected to circuit means 96 which digitally determines the relative movement of the indicator bar 70, regardless of whether such movement was directed by the control motor 78 in response to movement of the tape 68, or in response to a sliding motion from the user's hand 74.
  • a sense logic means 79 gives priority to repositioning from the user's hand 74 over repositioning from the drive motor 78, preventing a tug-of-war situation wherein the user is fighting the drive motor 78.
  • a user can command circuit means 96 to scale the encoder output to provide absolute or relative positional information as to the tape 68 in the VTR 12.
  • the user can direct that when the indicator 70 is at the left-most position within the slot 76, the source tape 68 is either at the absolute start of the entire reel of tape, or is at the relative start of a segment of any length therein, such as a 10 second segment.
  • the right-most position of the indicator bar 70 can be made to correspond to the absolute end of the source tape 68 or to the end of a segment of any desired length therein, such as the 10 second segment desired.
  • when the indicator bar 70 is, for example, 25% of the way along the slot 76, the tape 68 is either 25% of the way from its absolute start in the reel, or 25% of the way through a given segment (depending upon the user's election, shown by input 98 to the circuit means 96).
  • the indicator bar 70 moves left or right depending upon whether the tape 68 is travelling in a forward or reverse direction. While Fig. 8 presents a slot 76 permitting a straight-line motion of the bar 70, it will be appreciated that the path traversed by the bar 70 could, if desired, be other than a straight line (semi-circular, for example). Further, while the motor shaft 84 and the encoder shaft 86 are shown in relatively close proximity to panel 20, it is understood that they may be located away from the panel 20 if means are provided to transmit rotational movement to or from the motor 78 and encoder 88 to the pulley belt 81.
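The absolute/relative scaling performed by circuit means 96 reduces to a linear mapping from bar position to a frame within the selected range: the whole reel in absolute mode, or any chosen segment in relative mode. A sketch in Python (names and the 30 fps frame count are assumptions for illustration):

```python
def bar_to_tape_position(bar_pos, bar_max, range_start, range_end):
    """Map slider position (0..bar_max) linearly onto a frame in
    [range_start, range_end]. Absolute mode passes the whole reel's
    frame range; relative mode passes just the segment of interest."""
    frac = bar_pos / bar_max
    return round(range_start + frac * (range_end - range_start))
```

For the 30-minute reel (54000 frames at an assumed 30 fps), a bar one quarter of the way along the slot maps to frame 13500 in absolute mode, or to one quarter of the way through the 240-frame segment in relative mode, matching the 25% example above.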
  • the present system includes a control panel 20 (previously described with respect to Fig. 1) , a preferably high resolution monitor 36, and a main chassis containing the elements comprising the editor 2.
  • the editor 2 preferably includes a VME bus system 100 permitting communication between a CPU board 102, a plurality of communications boards 104, 104', a timing board 106, a video input board 108, an imaging processing board 110, a video output board 112, a graphics board 114, and an audio board 116.
  • the CPU board 102 communicates with a memory storage system (disk storage 118 for example) , with a network 120, and also provides serial and parallel interface capability 122 for printers and the like.
  • the network 120 permits additional on-line editors 2', 2", etc. to communicate with the present invention.
  • the editor 2 includes a number of communications boards 104, four for example, each board including the communications processor 56 referred to in Fig. 3B and providing 12 channels 134 capable of simultaneous serial control of 12 devices, in addition to providing 12 GPI outputs 137.
  • the various devices being controlled, e.g., VTRs 12, 12', video switchers 14, 14', effects boxes 16, and the like
  • the individual communications processor boards 104 have the capability to provide either SMPTE (Society of Motion Picture and Television Engineers) communications or RS-232 communications, depending upon the requirements of the peripheral devices communicating with the board 104.
  • the CPU board 102 was an off-the-shelf VME processor card, namely a Heurikon HKO/V960E with an Intel 960CA RISC based processor, although other CPU board designs capable of providing similar functions could also be used.
  • A copy of the block diagram from the Heurikon specification sheet for this CPU board appears as Fig. 10, with further functional details available from Heurikon Corp., whose address is 3201 Lanham Drive, Madison, WI 53713.
  • CPU board 102 provides the central controlling processor for the on-line editing system, although other CPU chips and board configurations could be used instead.
  • Timing board 106 receives reference video 124 from a source generator (not shown) , and GPI 126 inputs from devices that must provide their timing information through a GPI port, and generates therefrom a bus interrupt for every reference field, thus establishing a processing reference timebase 128.
  • the timing board 106 also provides the CPU 102 with information 130 as to what reference video color field is current.
  • the communications processor boards 104 receive the field timebase 128 and use this signal to synchronize communications processor channel activity to the video fields.
  • the CPU board 102 also communicates with various storage media (hard, floppy and optical disks, tape drives, etc. shown generally as 118) , with the network 120 used to talk to other editors 2', 2", etc. and other network compatible devices.
  • the CPU board 102 preferably includes an ethernet chip permitting network 120 to function, and permitting editor 2 to operate with two or more windows, one of which windows may be used to operate a device or software compatible with the MIT X11 protocol.
  • the ethernet ability is also available for any peripheral device requiring ethernet communications.
  • the CPU board 102 also provides various parallel and serial printer and terminal interfaces 122.
  • software (shown generally as element 132) running on the CPU board 102 provides a user interface, storage and network interfaces, as well as high level edit functionality and EDL database management. Portions of this software are applicants' own invention, namely the universal interface software (including device hierarchy) , the editor control mapping software, and the EDL. These three software components are described more fully in this application.
  • the present invention also uses commercially available software components including VxWorks version 4.0.2 real time operating system, available from Wind River Systems, Inc. whose address is 1351 Ocean Avenue, Emeryville, CA 94068.
  • VxWorks provides the real time kernel for the present system, and provides a basic operating system, networking support, intertask communications, and includes a library of generic subroutines.
  • the present system also uses a software package entitled X11-R4, available from M.I.T., Cambridge, MA which provides the basic window system employed, and a software package entitled OSF/MOTIF, available from Open Software Foundation, located at 11 Cambridge Center, Cambridge, MA 02142, is layered atop the X11-R4 to provide additional windowing functions.
  • Applicants' support subroutines are generic by nature, and provide, among other functions, generic list support, file input/output, file scanning, etc.
  • Fig. 9 will now be described in more detail.
  • four communications processor boards 104, 104' may be present in the preferred system, with each board providing 12 channels 134, 134' capable of simultaneous serial control of peripheral devices, and each board also providing 12 GPI outputs 137 for devices requiring such control.
  • Z80 processor chips are employed, although other processor devices capable of performing similarly may be used instead.
  • the communications processor boards 104 also include GPI circuitry for controlling external devices that require a contact closure rather than a serial interface.
  • software 132 for each communications processor channel is downloaded from the disk 118 via the CPU board 102.
  • External devices normally are controlled either from their own panel controls (push buttons, for example) or by an editor via their remote control port.
  • the device control functions available at the control port are usually a subset of the functions available at the device front panel controls.
  • the device commands available at the remote control port have throughput limitations and are treated by the device as having lesser priority than the commands issued from the device panel. For example, if a device received a command from its panel control and also received a command remotely through its control port, the panel-issued command would govern.
  • Applicants' electrical interconnections from the communications processor boards 104 to the remote devices circumvent the above-described command limitations.
  • Applicants' communications processor channels are preferably electrically connected to allow for either point-to-point control of the external devices, or to allow control in a loop-through mode permitting transparent control by breaking into an external machine's keyboard control cable.
  • the above-described preferred method of connection is realized by cutting the connections from the device's own panel controls and routing the connections through editor 2 in a manner transparent to the device. In such a loop-through connection, the device responds to commands regardless of whether they are issued from the device's own panel controls, or are issued by editor 2. In fact, the present invention allows all devices in an editing suite to remain permanently hardwired to one or more editors 2.
  • Figure 15 depicts the above-described interconnect capability, wherein two editors 2, 2' (each according to the present invention) are connected to two different devices 13, 13'.
  • Figure 15 also depicts a prior art editor 3 that may also be coupled in loop-through mode to control a device 13. Either device 13, 13' could of course be a recorder, a switcher, an effects box, or the like, and while two devices 13, 13' are shown, as many as 48 devices may be connected to editors 2, 2'.
  • Each device typically provides one or more SMPTE input control ports 300, 302, 300', 302', and at least one keyboard input control port 304, 304'.
  • Device 13 for example, is shown coupled to a control output port 306 on editor 2, to a control output port 306' on editor 2', and to a control output port 308 on a prior art editor 3.
  • Relay contacts, indicated by K1, are shown shorting together the input 310 and output 312 pins on editor 2's port 306, and similarly relay contacts K1' are connected across port 306' on editor 2'. With contacts K1, K1' closed (as depicted), a straight loop-through connection is made allowing, for example, device 13 to be controlled by editor 2, by editor 2', or by editor 3.
  • editors 2, 2' include resource management software 314, 314' that prevents editor 2, for example, from opening contacts K1 when editor 2' is controlling device 13. Essentially, when a relay contact Kj is closed, the editor wherein contact Kj is located releases control over the resource, or device, coupled to the relevant control port. Thus, when editor 2' controls device 13, contacts K1' in editor 2' are open (shown in phantom in Figure 15), but contacts K1 in editor 2 are closed. As shown by Figure 15, the keyboard 15' for a device 13' may also be looped through output control ports 316, 316' (across which relay contacts K2, K2' respectively are coupled) on editors 2, 2'.
  • Because the present invention can efficiently manage all resources within the edit suite, simultaneous edits are made possible.
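The resource management software 314 behaves as mutual exclusion over devices: an editor may open its relay to take control only if no other editor currently holds that device. A minimal Python sketch of this policy (class and method names are invented; the patent describes the relay behavior, not an API):

```python
class EditSuite:
    """Shared resource table for an edit suite: at most one editor may
    control (open its relay Kj toward) a given device at a time."""

    def __init__(self):
        self.owners = {}  # device name -> controlling editor name

    def acquire(self, editor, device):
        """Editor requests control; corresponds to opening its relay."""
        if self.owners.get(device, editor) != editor:
            return False  # another editor holds the device; relay stays closed
        self.owners[device] = editor
        return True

    def release(self, editor, device):
        """Editor closes its relay, restoring the straight loop-through."""
        if self.owners.get(device) == editor:
            del self.owners[device]
```

Under this policy, editor 2 cannot seize device 13 while editor 2' controls it, but acquires it immediately once editor 2' releases, which is what makes simultaneous edits on different devices safe.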
  • a prior art editor typically uses a switcher device both to accomplish special effects and to accomplish simple cuts. While other devices might also accomplish these tasks, prior art editors typically are too inflexible to use other devices.
  • a router can be used to select source material
  • editor 2 can simultaneously perform one effect with a router while using a switcher for another effect.
  • the present invention makes more flexible use of the resources at hand, permitting more than one resource to be used simultaneously as opposed to being inflexibly committed to using several resources to accomplish a task in a conventional fashion.
  • the ability to permanently hardwire a plurality of devices to one or more editors 2 according to the present invention is advantageous. Further, because the present invention includes a hierarchically structured information model that allows editor 2 to access and control the full capabilities of each device, maximum device usage is readily attained without wasting time reconnecting the devices within the editing suite.
  • the CPU board 102 controls the video subsystem 135 which includes the video input board 108, the image processing (or “crunching") board 110, the video output board 112, and the graphics board 114.
  • the video subsystem 135 enables the present invention to provide user text display, video storage, recall and display.
  • the video input board 108 receives the video input channels in either composite 136 or component 137 format.
  • the video board 108 then decodes and digitizes this input information 136, 137 to provide two digital video streams 140, 142 to the image processor board 110.
  • the video input board 108 can select from eight composite inputs 136, or two component inputs 137.
  • the digitized video 140, 142 is fed from the video input board 108 to the image processor board 110 where it is re-sized to a number of possible output size formats.
  • the image board 110 can store images in Y, R-Y, B-Y component format using on-board RAM, which RAM can be uploaded or downloaded to the disk 118 via the CPU board 102.
  • the re-sized video 144 from the image board 110 goes via the video output board 112 to be displayed on the user monitor 36.
  • the display on monitor 36 will assist the user in selection of video timecodes, source selections, etc.
  • the video output board 112 permits placement of the re-sized pictures anywhere on the monitor 36, under control of the CPU board 102.
  • the video output board 112 includes RAM for storage of pictures in the RGB format, and is capable of uploading and downloading these pictures via the RGB format
  • a second input 146 to the video output board 112 receives display data from the graphics board 114.
  • the video output board 112 combines the two video inputs 144, 146 to produce the full user display seen, for example, on the user monitor 36. This allows an integrated user display with both text and live or still video pictures driven by a RAM digital to analog converter (RAMDAC) , such as a Brooktree BT463 RAMDAC, located on the video output board 112.
  • RAMDAC RAM digital to analog converter
  • the graphics board 114 produces the text display data in an on-card framestore under control of a local processor, such as an Intel 80960, closely coupled to the framestore.
  • the graphics processor card 114 and framestore 148 together comprise a complete display processor 150 that receives display commands from the CPU board 102 via the VME bus 100.
  • the graphics board 114 includes a RAMDAC (such as the Brooktree BT458) which permits the board to be used as a text-only display. This capability allows the present system to be configured without a video subsystem live channel capability by removing the video input board 108, the image processor board 110, and the video output board 112.
  • Applicants' EDL permits the use of a preferably fast storage medium, such as a video RAM disk, to be used as a temporary "scratch" or "cache" device.
  • Because cache storage is typically much faster than, for example, a tape recorder, faster image access is available.
  • a user of the present invention can specify "cache A" (using, for example, keyboard 22), with the result that a desired segment of video from a source A will be recorded to a cache. (Typically each cache can hold 50 to 100 seconds of video.)
  • Applicants' EDL permits any subsequent reference to "A" to invoke, automatically and transparently to the user, the images now stored on the cache.
  • Applicants' EDL does not burden the user with keeping track of where within a cache an image may be stored, this information being tracked automatically by the EDL.
  • applicants' tree-like EDL structure permits the edit segments to be virtual, with editor 2 managing the media for the user, allowing the user to concentrate on less mechanical chores.
  • applicants' invention permits the desired segments of video to be copied to one or more caches.
  • if A, B, C are on the same reel of video tape, a single video tape recorder could be used to copy each segment into cache.
  • any user EDL references to the relevant segments of A, B, C automatically invoke the cache(s) that store the images.
  • the user need not be concerned with where within a cache or caches the images are recorded.
  • applicants' system 2 knows what devices are present in the suite and can access these devices by software command, typically no suite rewiring is needed. The end result is that the user quickly and relatively effortlessly and with little likelihood for human error can create a desired effect using cache devices.
  • a prior art EDL would not recognize that references to the relevant segments of A, B, C should automatically and transparently invoke a cache (assuming that a cache were used). Instead, the user (rather than the prior art editing system) must manage the media. The user must laboriously and painstakingly create an EDL stating what segments of A, B, C were recorded in what segments of cache, a chore requiring many keystrokes at a keyboard. Further, a prior art system would typically require rewiring, or at least the presence of several, typically expensive, router devices.
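The cache indirection described above, in which user references to source "A" transparently resolve to cached copies, can be sketched as a simple span lookup. The sketch below is illustrative only; all names (CacheMap, resolve, and so on) are invented for this example and are not taken from applicants' system.

```python
# Hypothetical sketch of the EDL's transparent cache indirection.
# Names and structure are illustrative, not the patent's actual code.

class CacheMap:
    """Maps spans of a source reel to locations on a cache device."""

    def __init__(self):
        # each entry: (reel, start frame, length, cache id, cache offset)
        self._spans = []

    def record_to_cache(self, reel, start, length, cache_id, cache_offset):
        """Note that frames [start, start+length) of `reel` now live in cache."""
        self._spans.append((reel, start, length, cache_id, cache_offset))

    def resolve(self, reel, frame):
        """Return (device, frame) to play: a cache hit if one exists,
        otherwise the original reel -- the user never sees the difference."""
        for r, s, n, cid, off in self._spans:
            if r == reel and s <= frame < s + n:
                return (cid, off + frame - s)  # transparent redirect
        return (reel, frame)

cm = CacheMap()
cm.record_to_cache("A", start=1000, length=1500, cache_id="cache0", cache_offset=0)
print(cm.resolve("A", 1200))  # -> ('cache0', 200)
print(cm.resolve("A", 5000))  # -> ('A', 5000)
```

The point of the design is that the mapping lives inside the EDL, so subsequent references to "A" need no user bookkeeping about where within a cache the images were stored.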
  • the resource management capability inherent in the present invention permits a user to specify a device as a layer backup recording medium, e.g., a device 12 in Figure 1 or 3A. Every time an edit is made, editor 2 will cause the edit to be recorded upon the layer backup recording medium, in addition to the system primary recording medium (e.g., another device 12). Thus, each edit onto the layer backup device is recorded to a new location on the backup medium.
  • This advantageously provides a user with a ready source of intermediate layers should, for example, further work on an intermediate layer be required. Applicants' EDL automatically knows where to find these intermediate layer images (e.g., on the layer backup recording device, and where thereon).
  • an audio board 116 provides the ability to record stereo audio in an on-board RAM for later playback upon control of the CPU card 102.
  • the audio board 116 preferably includes circuitry minimizing the intervention required from the CPU board 102 when in record or play mode. If the required audio storage exceeds the on-board RAM capacity, audio may be recorded to disk 118 via the CPU board 102, while allowing the on-board RAM to operate as a flow-through buffer.
  • the present invention allows a user to save, for later recall, a "picture" of a given set-up, for example a wipe or key, a switch transition, and the like.
  • the user can record data representing the contents of the internal memory within the controlled device, and can also record the video output from the controlled device. For example, if the controlled device is a VTR 14, its video output can be connected as input 136, 137 to the video input board 108 to allow creation of a framestored image within the video sub-system 135.
  • Applicants' system in essence attaches the set-up data to the framestored image to provide the user with a palette of images for possible later use.
  • these parameters are scanned and the data provided as VALUES 209 parameters (to be described) which describe the current parameters of the controlled device.
  • a user may easily modify these data, whereupon applicants' software modifies the parameters in question and issues the proper command message to the device.
  • the set-up data within a controlled device was in a format that was totally unintelligible to the user and did not allow user parameter changes to be easily accomplished. The user of a prior art system could, however, blindly feed the data back into the editor to reproduce whatever effect the data represented.
  • a user of a prior art system would label the diskette or other storage record of the set-up data with a name (i.e., set-up #4, wipe-effect #17), and hopefully remember at some later date what visual image was represented by, say, set-up #4.
  • the set-up data is made available to the user in an understandable format allowing parameter changes to be easily made.
  • the set-up data is associated not merely with a name, but with a visual representation of the actual video effect. A user can actually see, for example on monitor 36, what the visual effect created by a given set-up produced. There is no need to remember what "set-up #17" was.
  • applicants' system is capable of receiving as a first input a conventional EDL from an off-line editor, and receiving as a second input picture information from an off-line editor's picture storage medium, and producing therefrom a visual image corresponding to the EDL.
  • picture information (and audio information) is available, which information is often dumped to a diskette and stored for later re-use on the same editor. This information is not available as input to a conventional on-line editor, or even as input to another off-line editor not substantially the same as the producing off-line editor.
  • the EDL with its timecode information, and picture and audio information available as outputs from some off-line editors may be transported, on a diskette for example, to be used as input to an on-line editor according to the present system.
  • the present system is able to receive as input all timecode, picture and audio information available from the off-line edit session.
  • an on-line editor 2 is capable of creating a virtual EDL of unlimited length, which is able to maintain a complete history of the edit session permitting, for example, a user to "un-layer" edits that include multiple layers.
  • Figs. 11A-11C depict an information model, wherein the various boxes indicate data objects (referred to herein also as nodes or nodal lists), the arrows indicate relationships between data objects (with multiple arrow heads meaning one or more in the relationship), and "c" means conditional, i.e., 0 or more.
  • An "*" before an entry in Fig. 11A-11B means the entry is an identifying attribute, i.e., information useful to locate the object.
  • a "o" signifies an attribute, e.g., a data value for the object.
  • Figs. 11A-11C then represent the hierarchical database-like structure and analysis of such an EDL. The nature of this structure is such that attribute information within a box will pass down and be inherited by the information in descending boxes that are lower in the hierarchy.
  • an EDL is a list of nodes, each of which itself can contain a list of nodes.
  • an EDL there might be acts, and within acts there might be scenes, and within the scenes there might be edits.
  • One node list may include scenes and edits, while another node list may have edits only (i.e., a group of edits may be combined to produce a scene).
  • the EDL will assign and track a unique nodal identification for each event: e.g., each act, each scene, each edit.
  • the EDL consisted only of a flat (e.g., one-dimensional) list of edits showing video source in and out times and a timecode. There could be no higher or lower levels because prior art EDLs dealt with only one tape and therefore had no need to store information relating to media source, or where within media information was recorded. There was no concept of assigning node designations, and no revision identifying data was logged.
  • Applicants' EDL software is capable of an arbitrarily complex hierarchy of edits, including edits of higher and lower levels.
  • the highest, lowest, and only level in the prior art is an EDL, because a prior art EDL consisted only of a "flat" list of edits.
  • E_NODE 164 contains, via a tree-like hierarchical structure, all information needed to reconstruct a node in an editing session.
  • Box 164 contains, for example, the identification number of the node in question, identification number of the revision (if any) and also advises which node (if any) is a parent to this node (i.e., whether this node is a subset or member of a higher level list) .
  • This box 164 also contains the previous revision number (i.e., revision 2) and byte size and type of the node (where type is a number advising what type of E_NODE we are dealing with).
  • Applicants' EDL software can locate every E_NODE, whether it is in memory or on disk, upon specification of an E_NODE number and revision number.
  • the E_NODE 164 is the base class from which all other E_NODE objects inherit their attributes.
  • the first level of this inheritance consists of E_COMMENT 168 and E_TIME 170.
  • E_COMMENT 168 descends from and therefore inherits the attributes of the E_NODE 164.
  • E_COMMENT 168 thus contains an E_NODE number, a revision number, a parent, a previous revision, a size, a type, and a comment text the user might wish to insert, typically as a reminder label for the benefit of the user (e.g., "Coke commercial, take ...").
  • the E_TIME 170 node inherits the attributes of the E_NODE 164 and adds time information thereto.
  • Time information in applicants' EDL is specified in terms of offset relative to other nodes in the EDL.
  • the arrow from E_TIME 170 to itself indicates the E_NODE to which a particular E_NODE is relative in time.
  • the principal time relationships used are relative to a parent and relative to prior sibling(s) .
  • all time references were required to be absolute and be referenced to the timecode. This rigid requirement in the prior art created numerous headaches when an edit was deleted from a tape, because the next following edit on the tape was required to be advanced in time to fill the hole.
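The relative-time scheme described above can be illustrated with a small sketch, assuming (hypothetically) that each E_TIME node stores an offset against its parent or a prior sibling; absolute time is then resolved by chaining offsets, and deleting a node only requires re-pointing one link rather than renumbering everything after it. Class and method names here are invented for illustration.

```python
# Illustrative sketch (not the patent's code) of the E_NODE inheritance
# chain of Figs. 11A-11C: E_NODE -> E_TIME, with time stored as an
# offset relative to another node rather than as absolute timecode.

class ENode:
    _next_id = 0

    def __init__(self, parent=None, node_type="node"):
        self.node_id = ENode._next_id   # unique nodal identification
        ENode._next_id += 1
        self.revision = 1
        self.parent = parent            # None if top-level
        self.node_type = node_type

class ETime(ENode):
    def __init__(self, parent=None, offset=0, relative_to=None):
        super().__init__(parent, "time")
        self.offset = offset            # frames relative to...
        self.relative_to = relative_to  # ...this node (parent or prior sibling)

    def absolute(self):
        """Resolve absolute time by chaining relative offsets."""
        base = self.relative_to.absolute() if self.relative_to else 0
        return base + self.offset

scene = ETime(offset=100)                                   # scene starts at frame 100
edit1 = ETime(parent=scene, offset=0, relative_to=scene)    # relative to parent
edit2 = ETime(parent=scene, offset=250, relative_to=edit1)  # relative to prior sibling
print(edit2.absolute())  # -> 350
```

If edit1 were deleted, edit2 would merely be re-pointed at the scene, with no absolute-timecode reshuffling of the kind the prior art required.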
  • the E_PARENT 172 node supports the concept of having sub-nodes. Not all nodes, however, will be a parent because some nodes will always be at the lowest hierarchical level.
  • the connection between node lists 164 and 172 reflects that the E_Parent node 172 inherits all the attributes of the E_Time 170 node and the E_Node 164.
  • an E_GROUP box 173 (and nodes dependent therefrom) depends from E_PARENT 172. The E_GROUP box 173 will be described shortly with reference to Figure 11B.
  • an E_CHANNELS node 174 and CH_NODE 173 enable the present invention to track different video and audio channel data by providing information as to what editing event occurred previously and what editing event follows. For each channel of information, node 174 tracks the channel number, and whether audio or video information is involved. If a user creates an edit by first cutting to reel #7, then dissolving to reel #2, etc., E_CHANNELS node 174 provides the directional information enabling the EDL to know what is occurring and what has occurred. As indicated in Figure 20A by element 175 (shown in Figure 11C) there are nodes dependent from node 174, which nodes will be described shortly with reference to Figure 11C.
  • a video wall is a plurality of video monitors, typically arranged in a matrix, where all monitors can display in a synchronous relationship the same image, or can be combined to display subportions of a larger image that appears when the matrix is viewed as a whole, or can be combined to depict, for example, a single or multiple scrolling image. Other effects are also possible.
  • a video wall is depicted generally in Figure 1 by the plurality of user monitors 37, 37', 37", etc. Editor 2 controls several audio and/or video channels to devices 12, 14, etc. to display such a video wall.
  • the E_GROUP node 173 depends from node 172 and exists only if the E_PARENT node 172 has been named (e.g., "Edit 1").
  • depending from E_PARENT 172 are nodes E_SEGMENT 192, E_BIN 194, E_MEDIA 156 and E_DEVICE 158.
  • the E_SEGMENT node 192 descends from the E_PARENT box 172 and permits the user to expand or contract a view of the EDL.
  • the tree-like depiction 69 could be contracted or expanded. If the tree 69 depicted say three scenes and the user now wished to concentrate on but one of the scenes, node 192 permits the user to contract the structure accordingly.
  • Figure 11B shows nodes E_EDIT_SEG 193 and E_CLIP 186 depending from node 192.
  • Node 193 denotes the lowest level of video in a product, e.g., an edit segment containing a single effect and specifying a single video source used in that effect.
  • Node 193 is analogous to a prior art edit line, but is more useful because it coordinates with applicants' hierarchical EDL.
  • information at node 193 is available to a user familiar with prior art EDLs, and as such provides a familiar interface to such users.
  • Node 186 E_CLIP contains all information needed to reconstruct the output video tape after an on-line editing session, and includes a timecode anchor to which all offsets on E_CLIP 186 are referred. Thus each E_CLIP 186 is assigned and retains its own unique timecode which allows identification of specific fields or frames of video tape. As shown in Figure 11B, node 194 E_BIN contains zero or more E_CLIP 186, as a mechanism for the user to organize the user's clips.
  • Node E_BIN 194 also depends from E_PARENT 172 and is provided to the editor user, much the way a directory is available to a computer user, to allow more than one clip at a time to be dealt with in an organized fashion.
  • contents of E_BIN 194 may be selectively viewed by the user in a window on the display monitor 36 on a clip-by-clip basis. Different views of each clip are available, including preferably the "head" and/or the "tail" frames of a clip (i.e., the first and last frames).
  • the clip frame displayed on the monitor 36 will change as the user slides the indicator bar 70 left or right (i.e., causes the source media to be transported backward or forward) .
  • E_BIN 194 and E_CLIP 186 replicate the structure at the top of the hierarchical database tree represented in Figs. 11A-11C, and allow applicants' EDL database to provide the user with a representation of this tree-like structure, depicted as element 69 in Fig. 7.
  • Media and device information is available to E_NODE 164 via boxes E_MEDIA 156 and E_DEVICE 158. Boxes 156 and 158 might reflect that E_CLIP 186 was recorded on a cassette (cassette #2, as opposed to, say, a reel of tape or a framestore), which cassette #2 is mounted on cassette recorder #17.
  • the diamond shaped box 160 indicates that the E_RECORD box 154 (see Figure 11C for detail) is capable of functioning much like a correlation table. Given the identifying attributes for an E_CLIP 186 object and given the identifying attributes for an E_MEDIA 156, applicants' virtual EDL software can correlate this information for a particular E_RECORD 154.
  • information in node E_RECORD 154 is used by applicants' EDL to identify and track any recordings made of the E_CLIP 186, or any portion thereof (i.e., a recording might be one frame, many frames, etc.).
  • the type of information retained in box 154 includes the identifying attributes for the E_CLIP and the media that the E_CLIP was recorded on. Because, as Figure 11B depicts, E_RECORD 154 depends from E_GROUP 173, E_RECORD inherits the former's attributes and may be virtual, containing history revisions as well as timing and channel information which are attributes of a recording.
  • Other information within box 154 includes the clip offset from the timecode anchor, the media offset, and the duration, which together identify the portion of the clip that was recorded.
  • the media might be a one hour tape which has two different half-hour clips recorded thereon. The media offset advises where within the one hour tape each recording took place, while the duration identifies the portion of the clip that was recorded.
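A minimal sketch of the E_RECORD correlation, using invented field names: each record ties a span of a clip (clip offset plus duration) to a location on a medium (media offset), letting the EDL find where a given clip frame was recorded, or report that it was never recorded at all.

```python
# Hypothetical sketch of the E_RECORD correlation of Fig. 11B.
# Field and function names are invented for illustration.

from dataclasses import dataclass

@dataclass
class ERecord:
    clip_id: int
    media_id: str
    clip_offset: int    # frames from the clip's timecode anchor
    media_offset: int   # frames from the start of the medium
    duration: int       # number of frames recorded

def find_frame(records, clip_id, frame):
    """Given a clip frame, return (media_id, media_frame) if recorded."""
    for r in records:
        if r.clip_id == clip_id and r.clip_offset <= frame < r.clip_offset + r.duration:
            return (r.media_id, r.media_offset + frame - r.clip_offset)
    return None  # never recorded; the clip must be built from its subnodes

# e.g., a half-hour clip recorded starting at the one-hour mark of "tape7"
recs = [ERecord(clip_id=1, media_id="tape7", clip_offset=0,
                media_offset=90_000, duration=54_000)]
print(find_frame(recs, 1, 30_000))  # -> ('tape7', 120000)
```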
  • E_SYNC_PT 190 depends from E_CHANNELS 174 and provides timing information relating to the action, including speed and frame offset. Co-equal with box 190 is E_ACTION 180.
  • the E_TRAN_TO box 184 provides the EDL database with information identifying the transition image and where the image went.
  • if the transition edit calls for a key, the E_TRAN_TO box 184 will so note in the E_KEY box 188, placing a record in the E_CLIP box 186 enabling a user to later know what image was used to create the hole for the key and what clip (E_CLIP 186) represents the edit after the transition.
  • the E_RAW_MSG 178 box provides the facility to send an arbitrary byte sequence to a device, and as such is primarily a debugging feature of applicants' software.
  • Co-equal E_TRIGGER 179 inherits the attributes of E_CHANNELS 174.
  • When a trigger occurs on a specific channel (e.g., a GPI trigger on a specific audio or video channel), E_TRIGGER 179 lets the user specify what effect the trigger should accomplish during the edit. For example, if the trigger is needed for a video effect, box 179 permits the trigger information to be kept with the video information in the event the video and audio information are split during an edit session. Where the trigger is, for example, to an effects device, E_TRIGGER 179 also provides parameter information detailing the type of effect, the rotation, the perspective, the rate of image size change, and so forth.
  • the E_LAYER 176 box descends from and thus inherits the attributes of the E_CHANNEL node 174.
  • E_LAYER node 176 will either cover or uncover another layer of video, with full nodal information as to every layer being retained.
  • applicants' EDL software is able to track and retain a record of this information, allowing a user to later "un-layer” or “re-layer” an image, using the historical node information retained in the CPU board 102.
  • the ability to store this E_LAYER 176 information allows a user to later go back and strip off layers to recreate intermediate or original visual effects.
  • Fig. 12A is a time bar representation wherein video from three sources VTR A, VTR B, VTR C is shown.
  • VTR A contains the background video, perhaps mountain scenery, the material from VTR A to run from time t0 to time t5. This background represents one layer of image.
  • at time t1, material from source VTR B (perhaps an image of a car) will be keyed (e.g., superimposed) over, such that the car appears in front of the background.
  • the car image, which represents an additional layer, will be recorded on the final output tape (not represented) from time t1 to time t3, e.g., the key extending until time t3.
  • an image of a telephone number contained on source VTR C is to appear superimposed (or keyed) over the car (from time t2 to time t3) and then superimposed over the background from time t3 to t4.
  • the telephone number image represents yet another layer of video. Note that at time t3 the key of VTR B ends, but the layer above (e.g., the key to VTR C) continues until time t4.
  • the present invention advantageously permits a determination from context as to what was the source of prior material involved in, say, a transition involving a key, dissolve, or wipe.
  • Applicants' editor 2 is capable of examining the recorded entry time for the current edit, and learning from the hierarchical EDL what material was active at that time point. Further, this may be done even if multiple layers were active at that time point.
  • a prior art editor at best can provide a "tagging" function for a simple source, but not for multiple sources.
  • References in Fig. 12B and Fig. 12C track the nomenclature used in Figs. 11A-11C, with the various boxes indicating E_NODES such as element 164 in Fig. 11A.
  • the "next" links of E_CHANNELS 174 are shown by the arrows indicating the sequence of nodes for a given channel.
  • box 174 in Figs. 12B and 12C is denoted as an EXIT box rather than an E_CHANNELS box because in the present example box 174 represents a dead end, with no further links to other boxes.
  • Boxes labelled Lc and LE are E_LAYER 176 nodes, and include an extra linking arrow.
  • a KEY box 188 denotes a key effect
  • an EXIT 174 box denotes the exit of a layer
  • a CUT 184 box denotes a cut to a source
  • an Lc box 176 denotes a cover layer
  • an LE box 176 denotes an expose layer.
  • for an Lc box, the arrow denotes the starting node of a layer that will cover the current layer
  • for an LE box, the arrow denotes the end node of the layer which, upon its end, will expose the current layer.
  • the actual model implemented by applicants' software includes "previous" node links for backward traversal, these links are not shown in Figs. 12B and 12C in the interest of presenting readable figures.
  • These E_LAYER 176 nodes must be linked to the layer that is either being covered or being exposed, and will always have an extra link which connects a lower E_LAYER 176 to the layer that either covers or exposes it. This extra link will be referred to as a layer link herein.
  • At any point within an E_CLIP 152, there may be one or more active layers.
  • Each E_LAYER 176 "cover” node denotes the beginning of a new upper layer
  • each E_LAYER 176 "expose” node denotes the end of an upper layer.
  • the traditional concept of "current node in a linked list" must be extended to current node per layer because applicants' "current node” is actually a list of nodes, one per layer.
  • the E_CLIP of Figs. 12A-12C has a parent node (not shown), and a first child that is the CUT 184 to VTR A which occurs at time t0.
  • EXIT node 174 at time t5 denotes the termination of this initial layer (layer 0).
  • a new layer 1 is then created by inserting a layer cover node Lc 176 between the CUT 184 and the EXIT 174, the insertion occurring at time t1.
  • the cover node Lc 176 has two outgoing links: a first link 175 to the next event on the same layer (shown as LE 176), and a second link 177 to the KEY B 188 which begins the next layer.
  • the EXIT 174' of the key layer (layer 1) points to a layer expose LE 176 which is inserted between Lc 176 and EXIT 174.
  • Fig. 12C shows the resultant adjustment to the channel and level links made by applicants' software.
  • the node LE 176' has moved from a position between nodes LE 176 and EXIT 174, to a position between nodes Lc 176" and EXIT 174'. This repositioning is accomplished by readjusting channel and layer links whenever a node's time is changed.
  • EXIT node 174" will be the first node checked as it is the node whose timing is changed in our example.
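The link readjustment described for Figs. 12B-12C can be approximated, very loosely, by keeping each channel's nodes ordered by time and re-ordering when a node's time changes. This sketch substitutes a sorted list for the patent's actual linked-node structure with layer links; all names are hypothetical.

```python
# Loose sketch of channel-link adjustment: nodes on a channel are kept
# ordered by time, and changing a node's time re-links (re-orders) the
# sequence, as when a node's timing is changed in Fig. 12C.
# Purely illustrative; not the patent's implementation.

class ChannelNode:
    def __init__(self, kind, time):
        self.kind, self.time = kind, time

class Channel:
    def __init__(self):
        self.nodes = []   # stands in for the "next" links in the figures

    def insert(self, node):
        self.nodes.append(node)
        self.nodes.sort(key=lambda n: n.time)   # keep channel order by time
        return node

    def retime(self, node, new_time):
        node.time = new_time
        self.nodes.sort(key=lambda n: n.time)   # readjust the links

ch = Channel()
cut   = ch.insert(ChannelNode("CUT A", 0))  # layer 0 begins
lc    = ch.insert(ChannelNode("Lc", 1))     # cover: key layer begins
le    = ch.insert(ChannelNode("Le", 3))     # expose: key layer ends
exit_ = ch.insert(ChannelNode("EXIT", 5))   # layer 0 ends
ch.retime(le, 6)                            # move the expose past the EXIT
print([n.kind for n in ch.nodes])  # -> ['CUT A', 'Lc', 'EXIT', 'Le']
```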
  • the action nodes of Fig. 11C provide the E_CHANNELS box 174 with information as to the type of edit transition action in question, including when it began, its duration, and when the edit ceases.
  • the E_SYNC_POINT box 190 provides timing information relating to the action, including speed and frame offset.
  • the E_SEGMENT box 192 descends from the E_PARENT box 172 and will contain information as to the name of the node.
  • the co-equal level BIN box 194 and E_CLIP box 186 replicate the structure at the top of the hierarchical database tree represented in Fig. 11B by BIN 156 and E_CLIP 160.
  • Applicants' EDL database is in fact capable of providing a user with a representation of this tree-like structure, this representation being depicted as element 69 in Fig. 7.
  • the EDL software will create and assign a unique edit node or E_NODE 164 reference number and will store identifying information within the CPU board 102.
  • This additional E_NODE will contain information that at offset 10 an edit was made, constituting a first revision, which edit lasted say 2 minutes. Anytime any edit is made, the EDL software creates a new historical E_NODE, while still retaining the earlier parent node and all information contained therein.
  • the user inputs data to the editor 2 (using console 20 for example) identifying the desired clip, whereupon the RECORD node 154 correlates all information available to it and determines whether in fact the desired clip has been recorded (it may perhaps never have been recorded) . If the clip has been recorded, the EDL software will send the appropriate device commands to display a desired frame for the user, on monitor 36, for example. If the clip has not been recorded, the EDL software will determine how to build the clip based upon the information in the subnodes of the clip, and will send the appropriate device commands to cause the desired clip to be built and displayed.
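The revision-history behavior described above, in which every edit creates a new E_NODE revision while the earlier revision is retained, can be sketched as follows; the History class and its payloads are invented stand-ins, not the patent's data layout.

```python
# Sketch of the revision-history idea: each edit commits a new revision
# of a node while all prior revisions remain recallable, which is what
# makes "un-layering" back to intermediate states possible.
# Purely illustrative names and structure.

class History:
    def __init__(self):
        self.revisions = {}   # (node_id, revision) -> payload

    def commit(self, node_id, payload):
        """Store a new revision; earlier revisions are never overwritten."""
        rev = 1 + max((r for (n, r) in self.revisions if n == node_id), default=0)
        self.revisions[(node_id, rev)] = payload
        return rev

    def recall(self, node_id, revision):
        """Recover any earlier state of the node."""
        return self.revisions[(node_id, revision)]

h = History()
h.commit(164, "cut to reel 7")
h.commit(164, "dissolve to reel 2")   # new revision; the old one is retained
print(h.recall(164, 1))  # -> 'cut to reel 7'
```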
  • Applicants' above-described hierarchical multi-level EDL structure maintains a history of recordings that can be used as "submasters" in later building more complex layered effects.
  • a submaster is an EDL representation of an intermediate visual product or effect, that may be used as an element or building block in constructing an even more complex visual product or effect. Because applicants' EDL provides a full record of how a video segment was built, submasters are automatically generated which permit a user to reproduce a previous session or image that would normally require more than a single pass of the relevant source material.
  • the present invention permits the user to specify a desired effect exactly (e.g., "dissolve A to A”), whereupon editor 2 will calculate the steps required to produce that effect.
  • applicants' EDL will recognize the command "dissolve A to A” even though building the effect in a single pass may be physically impossible because both scenes may be on one medium.
  • the information within applicants' EDL specifies the effect, and editor 2 translates that specification into actions that are physically possible with the devices at hand, for example with an A64 disk recorder.
  • the described effect in applicants' EDL is a virtual edit segment EDL describing what the user requires the end result to be (e.g., "dissolve A to A").
  • nor need the EDL command be physically capable of single-command execution (e.g., "dissolve A to A").
  • applicants' EDL allows a user to "trace" the origin of source material used to create a multi-generational audio and/or video program.
  • This tracing ability permits the user to generate an EDL that describes not merely how to recreate a multi-generational program, but how to generate a first generation version of the program.
  • the present invention further permits viewing and performing (e.g., executing) such first generational version of the program. More specifically, it will be appreciated that a second generation program (e.g., a program including a copy of original source material) will be of lower quality than a first generation program, and that a third generation program (e.g., a program including a copy of a copy of original source material) will be of even lower quality.
  • the present invention can provide the user with an option of viewing a first generation version of the clip, or actually performing (e.g., reconstructing) the first generation version.
  • a prior art EDL might allow (assuming the EDL could be deciphered) reconstruction, but using multi-generational images, for example, perhaps an image of scene 2 recorded atop an image of scene 1 (scene 1 now being second generation).
  • a prior art system might also employ a software program called "TRACE", available from the Grass Valley Group associated with Tektronix of Beaverton, Oregon, to try to track back a multigenerational prior art EDL to earlier generation material.
  • TRACE must typically be executed outside the editing system.
  • the present invention, entirely within the system, can present the program using original source material for scene 1 and for scene 2.
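Because each recording's sources are logged in the hierarchical EDL, tracing back to first-generation material reduces to walking the recording graph. A hedged sketch, with an invented sources mapping standing in for the EDL's actual records:

```python
# Hedged sketch of generation tracing: the EDL can be walked back from
# a multi-generation clip to original (first-generation) source material.
# All names and the mapping format are invented for illustration.

def trace_to_source(clip, sources):
    """Recursively resolve a clip to first-generation source names.
    `sources` maps a clip name to the list of clips it was copied from;
    a clip absent from the map is original source material."""
    if clip not in sources:
        return [clip]
    result = []
    for parent in sources[clip]:
        result.extend(trace_to_source(parent, sources))
    return result

# a program built from copies of copies (second/third generation material)
sources = {
    "program":      ["scene2_copy", "scene1_copy2"],
    "scene1_copy2": ["scene1_copy"],
    "scene1_copy":  ["scene1_master"],
    "scene2_copy":  ["scene2_master"],
}
print(trace_to_source("program", sources))
# -> ['scene2_master', 'scene1_master']
```

With the first-generation sources known, the editor can then rebuild the program directly from the masters rather than from degraded copies.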
  • applicants' described method of creating a unique and complete hierarchical database of every edit made during an edit session is applicable to systems other than video editors.
  • Applicants' method could, for example, be implemented to track and log every edit change made within a word processing computer program, permitting a user to "un-edit” (or "unlayer") and recover earlier, intermediate versions of a document.
  • applicants' method can recreate not simply a list of keystrokes, but the actual intermediate documents that existed at various stages of drafting or editing.
  • Fig. 13 presents an information model showing the hierarchical database-like structure and analysis used in applicants' universal interface software. It is the function of applicants' interface software to permit editor 2 to interface in a universal fashion with any peripheral or video control device.
  • an informational box DEVICE 200 represents whatever external peripheral device 12, 12' , 14, 14', 16, 16' etc. is to be controlled by the editor 2.
  • DEVICE 200 contains attributes of the device such as device name (disk recorder 1, for example), device model (A-62, for example), manufacturer (Abekas), and type of communications required by the device to establish "handshaking" at a lowest level of communications (e.g., the device manufacturer typically specifies SMPTE, ethernet, or RS-232).
  • DEVICE 200 also contains a single digit identifying the type of protocol required for the device (i.e., identifying the method used for establishing the communication link, and how messages are transmitted, including byte counts, checksums, etc.) and also contains a code identifying the type of device (i.e., whether it is a transporter such as a VTR, or a switcher, signal router, special effects box, etc) .
  • DEVICE 200 also includes information as to each such device's machine address (the address being somewhat analogous to an identifying telephone number on a party-line system) .
  • information for the box DEVICE 200 is preferably input by the user from the editor control panel 20 via specification text or data files, or will already reside in memory 118 or on an external diskette which the user will read into the CPU processor board 102.
  • a unique specification file will have been created by the manufacturer of the present invention for each known accessory device.
  • the preferably text file nature of this file will allow a user, even a user with minimal knowledge of computer programming, to create a text file from scratch.
  • Very significant in the concept of box DEVICE 200 is the ability of applicants' interface logic to model a higher level physical device in terms of combinations of lower level virtual devices.
  • assume the peripheral device desired to be controlled is an Abekas A-62 digital disk recorder. This recently developed device includes two separately controllable video disks and one keyer (a keyer is a type of switcher used to key video signals on top of video signals, as in creating an image of a weatherman standing before a map).
  • A-62 really has the attributes of two device types: on one hand it "looks like” a transport device (a recorder) but on the other hand it also "looks like” a switcher.
  • a prior art on-line editor attempting to control the Abekas A-62 will ignore the switcher aspects of the device, and interface to the A-62 as though it were a transport device only. This compromise interface deprives a prior art editor of being able to control all the capabilities within the A-62.
  • applicants' interface software models the A-62 as being three virtual sub-devices (two separate recorders and one switcher) within an upper level (the A-62). As a result, editor 2 is able to control all functions of which the A-62 is capable.
  • DEVICE box 200 containing a software "model" of the new device, namely a device consisting of five sub-devices: three recorders and two switchers.
  • a data file providing this information as input to DEVICE box 200 is preferably in text file form, e.g., understandable to a user. A user will thus be able to create his or her own model, or can readily modify one or more existing models.
  • the manufacturer of the present invention will analyze the device function capabilities and protocol requirements, and in a short time will promulgate an appropriate text file.
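The virtual sub-device modeling described above can be sketched as follows. This is an illustrative sketch only: the class and field names are assumptions, not applicants' actual software; only the A-62 facts (two recorders, one keyer, SMPTE communications) come from the text.

```python
# Sketch: one physical device modeled as several virtual
# sub-devices, as described for the Abekas A-62.
# Names and structure are hypothetical.

class SubDevice:
    def __init__(self, name, kind):
        self.name = name          # e.g. "disk 1"
        self.kind = kind          # "transport" or "switcher"

class Device:
    def __init__(self, name, comms, sub_devices):
        self.name = name
        self.comms = comms        # e.g. "SMPTE"
        self.sub_devices = sub_devices

    def capabilities(self):
        # Because every sub-device is visible to the editor,
        # no capability of the physical device is lost.
        return sorted({s.kind for s in self.sub_devices})

a62 = Device("Abekas A-62", "SMPTE", [
    SubDevice("disk 1", "transport"),
    SubDevice("disk 2", "transport"),
    SubDevice("keyer", "switcher"),
])
print(a62.capabilities())  # ['switcher', 'transport']
```

A prior art editor, by contrast, would model the A-62 as a single transport and lose the switcher capability entirely.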
  • the DEVICE 200 box communicates with an INPUT PORT box 202, an OUTPUT PORT box 204 and a COM PORT box 206.
  • the multiple arrowhead notation means "one or more".
  • the box DEVICE 200 may receive and contain information from many INPUT PORTS 202, and may send information to many OUTPUT PORTS 204.
  • the INPUT/OUTPUT PORTS 202, 204 contain information pertaining to the physical cable connections between editor 2 and the various peripheral devices being controlled. Such information includes the type and number of ports (e.g., how many audio and video ports) and channel assignment thereto.
  • the COM PORT box 206 contains information pertaining to the state of the communications port in use (e.g., which port from which processor card 104, 104' in Fig. 9).
  • the COM PORT box 206 also has communications protocol information (for example, whether we are dealing with a Sony, an Ampex, a Grass Valley, etc. protocol) and communications port machine address information.
  • a single communications port can control more than one device.
  • a first signal leaving a first recorder might be in perfect time synchronization, but upon then passing through an effects device and then through a second recorder, the signal might be delayed by several frames. If this delayed first signal is then to be combined with a second signal that also has passed through several devices, each of which may contribute a time delay, it becomes increasingly difficult to track and maintain proper synchronization. Further, the delay introduced by a device can vary with the device's mode of operation. In addition, delays associated with an audio signal might be very different from delays associated with an accompanying video signal.
  • Editing suites typically use router boxes for controllably making interconnections to devices.
  • a router box typically has many input ports and fewer output ports. Devices connected to a given input port can be controllably directed (or "routed") to a desired output port. Once devices are cabled into a router, the editor is able to command combinations of router interconnections to allow different devices to be connected to other devices.
  • each transport device (e.g., VTR) is assigned a unique "cross point" number which acts as a reference to the input port of a switcher to which prior art systems assume the transport is connected. This assumption made in the prior art, that a cross point can involve but one transport and one switcher, represents a rather inflexible model of the potential device connections within an editing suite.
  • Applicants' interface software will handle the necessary details, knowing what commands in what format must be issued at what time to achieve the desired results.
  • the software can read a file containing details of the configuration of the editing suite, a description of how the suite is physically cabled.
  • the box DEVICE 200 provides a model that knows, for example, that the output of a given VTR is routed to a given input of a special effects box, and that the output of the special effects box is routed to a given input of a switcher.
  • applicants' interface software allows editor 2 to control the routing of video and audio within the editing suite, without requiring any outside action (such as pushing a device control button) .
  • applicants' point-to-point model is software based, the model is flexible since it is readily modified with software commands.
  • the present system dynamically controls the routers, keeping track of what transports are currently being routed to which switcher inputs. Any re-routing is accomplished by software command; no reconnecting of cables is required.
  • the present system electronically commands the router to bring in a desired transport, and assign the transport to a given cross point.
  • Such flexible reconfiguration is simply not available with prior art systems. Reconfiguration would require a user to reconfigure the router. However, since the prior art editor had no knowledge of what new cabling was accomplished, a user would have to manually control devices (e.g., select a cross point on a router) to bring the devices into the suite.
  • the present invention flexibly allows reconfiguration using software in a manner that allows the editor 2 to be aware of the configuration existing at any moment. All suite control is centralized, for example, at the control panel 20, with no necessity for the user to manually engage device functions. In the present system, if a user has an EDL requiring, for example, a certain effect possible with devices obtainable via a router, applicants' configuration software will dynamically reconfigure the required router to bring in whatever devices might be required to build the desired visual program. Because the editor 2 "knows" the configuration in the editing suite, and because DEVICE 200 allows virtual modeling of any device, the present system can create configuration models in terms of what devices should be brought in to create a given effect.
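The dynamic router control described above amounts to keeping a software table of cross point assignments and changing it by command. A minimal sketch, with hypothetical names (the patent does not specify an API):

```python
# Sketch: the editor tracks which transport is routed to which
# router cross point, so re-routing is a software command rather
# than a recabling job.  All names here are illustrative.

class Router:
    def __init__(self, n_cross_points):
        self.n = n_cross_points
        self.cross_points = {}            # cross point -> device name

    def assign(self, cross_point, device):
        if not 0 <= cross_point < self.n:
            raise ValueError("no such cross point")
        self.cross_points[cross_point] = device

    def source_for(self, cross_point):
        return self.cross_points.get(cross_point)

router = Router(8)
router.assign(3, "VTR 1")
router.assign(3, "disk recorder 1")   # re-route by command alone
print(router.source_for(3))           # disk recorder 1
```

Because the editor holds this table itself, it always "knows" the suite configuration and can rebuild it to suit a requested effect.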
  • the user can program a desired effect, say WIPE A/B, into the CPU board 102, preferably from the console 20 with a data or text file.
  • a video switcher capable of ten different simultaneous effects (e.g., "dissolve”, “wipe”, “key”) can be modelled as ten virtual effects boxes, each capable of one effect (e.g., a "dissolve” box, a "key” box) .
  • the software model may be "layered" such that the software can decide from modeling that the output of an effects box (including the output from a virtual effects box) should be routed to the input of a second effects box (or second virtual effects box), and so forth, to produce whatever effect a user has requested.
  • the present system is able to actually make design decisions for the user.
  • the user can request the editor 2 to produce a certain visual result, whereupon the editor 2, armed with knowledge from the DEVICE box 200 as to what devices are available, can create software models of whatever configurations (if any) will accomplish the desired effect, given the devices present.
  • the user is free to be artistically creative, rather than technically creative.
  • a user who traditionally produces a given effect with the same hardware and procedure need not be thwarted upon arriving at the editing suite and learning that a necessary piece of equipment for the procedure is not working.
  • the user would simply input the desired effect whereupon applicants' interface software would advise what procedures and available hardware are capable of producing the effect.
  • the present system can take an effect having, say, three keys with a wipe underneath, and permit the user to add another key, and simply map the effect onto the switcher. This capability simply does not exist in prior art on-line editing systems.
  • a prior art editor will have a dedicated table of commands to be sent to different effects boxes. However the user has no way to simply re-layout the desired effects, e.g., to specify which effects are to be performed on which devices.
  • Each DEVICE 200 can support zero or more VALUES 207.
  • the present system creates a VALUE box 207 for each VALUE entry in the device specification file.
  • VALUE parameters include gain, pattern, horizontal position, etc. These values may be “set” by the user and “fetched” when the present system builds device command messages.
  • the DEV_CMD box 208 in Fig. 13 retains the information read into the CPU board 102 from the data or (preferably) text file, indicated by element 162 in Fig. 9. It is the text file 162 that informs the interface software and thus the editor 2 what commands a device or virtual sub-device will support.
  • the DEV_CMD box 208 attributes include the name of the command (e.g., PLAY, STOP, WIPE, DISSOLVE, etc.), the type of command (a method of grouping commands within a protocol, as required by the device manufacturer), and delay (how long after the editor 2 sends the command the command takes effect).
  • the contents of the DEV_CMD box 208 are provided to each communications processor board 104 to load a scheduling table contained within the Z80 processor found on board 104.
  • Fig. 4 shows the contents of a scheduling table.
  • the DEV_CMD box 208 consists of one or more CMD_ITEM boxes 210. It is the contents of the CMD_ITEM box 210 that describe how to actually build the command in question for a device, i.e., the precise byte structure and format required. For example, the contents of the CMD_ITEM box 210 might pertain to a Sony VCR. If the command PLAY is to be issued to the Sony VCR, there will be two CMD_ITEM boxes 210: the first box containing 20 (hex), and the second box containing 01 (hex).
  • Each CMD_ITEM box 210 has a type.
  • Simple types include HEX_NUMBER (as in the Sony VCR example above) .
  • Other types include FETCH ("values") which will find a VALUE 207 and load it on the stack.
  • Other types support arithmetic and logical stack operations, and a number of special purpose types have been created, such as MSB_LSB, which pops the top stack item and pushes the most significant bit followed by the least significant bit.
  • Applicants' interface software advantageously provides the CMD_ITEM box 210 contents as input to a stack calculator.
  • the user via text or data files, is able to create complex and arbitrary commands "on the fly".
  • suppose the command pertains to the controllable video gain of a switcher; the user can issue the command "GAIN" from the keyboard 22 on the control panel 20.
  • the GAIN command is built by the stack calculator. Since the stack calculator supports conditional testing, looping, jumping, arithmetic operations and the like, great flexibility is available.
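The stack-calculator command builder can be sketched as below. The CMD_ITEM types HEX_NUMBER, FETCH, and MSB_LSB come from the text; here MSB_LSB is interpreted as splitting a 16-bit value into its most and least significant bytes, and the 0x30 opcode for a GAIN command is an invented placeholder, not a real protocol byte.

```python
# Sketch: a stack calculator that evaluates CMD_ITEM entries to
# build the byte string actually sent to a device.
#   HEX_NUMBER -> push a literal byte
#   FETCH      -> push a named VALUE (e.g. "gain")
#   MSB_LSB    -> pop a 16-bit value, push its high then low byte

def build_command(items, values):
    stack = []
    for kind, arg in items:
        if kind == "HEX_NUMBER":
            stack.append(arg)
        elif kind == "FETCH":
            stack.append(values[arg])
        elif kind == "MSB_LSB":
            v = stack.pop()
            stack.append((v >> 8) & 0xFF)
            stack.append(v & 0xFF)
    return bytes(stack)

# PLAY for a Sony VCR: two literal bytes, 20 01 (hex), per the text.
play = build_command([("HEX_NUMBER", 0x20), ("HEX_NUMBER", 0x01)], {})

# A hypothetical GAIN command: an opcode followed by the current
# gain VALUE split into two bytes.
gain = build_command(
    [("HEX_NUMBER", 0x30), ("FETCH", "gain"), ("MSB_LSB", None)],
    {"gain": 0x01FF},
)
print(play.hex(), gain.hex())  # 2001 3001ff
```

Because the items come from a text file, a user can define new commands without touching the editor's software, which is exactly the flexibility the text claims over prior art.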
  • in a prior art editor, the GAIN command (not unlike many other commands) would be coded in software to access a specific data value. New values and commands could not be added without changing the software, a task not readily accomplished by a user. A lay user could not readily modify these bytes, certainly not within the few seconds it would take someone using the present invention.
  • the ACTION box 212 corresponds to the E_ACTION 180 box appearing in Fig. 11C, and describes the characteristics of a given action.
  • An action might be CHANGE KEY GAIN, RECORD, WIPE PATTERN, DISSOLVE, etc., and the name of the action is an identifying character string.
  • the ACTION box 212 also identifies the type of action, i.e., a switcher transition, a wipe, a dissolve, and further contains maximum and minimum values where applicable (e.g., maximum gain, minimum gain).
  • the diamond shaped box 214 indicates a function that here correlates an action with a device, i.e., what action does a given device support.
  • the TIMELINE TASK box 218 contains information as to what must be done to accomplish a command. For example, if a transport device is to be used, the TIMELINE TASK box 218 will provide information as to the name of a function requiring some physical movement at the device end to accomplish a given task. With reference to the E_SYNC_POINT box 190 shown in Fig. 13, this box 190 advises the editor system to issue a TIMELINE TASK 218 command to prepare the device. For example, before a transport device can be ordered to RECORD, TIMELINE TASK 218 ensures that the servo within the transport has moved to bring the tape into position for recording.
  • variations of the configuration shown in Fig. 13 may be used without departing from the spirit of the present invention.
  • Table 1 is an example of an actual text file, namely a text file defining a VTR, a SONY model BVW-75.
  • the relationship between the contents of this text file and the box elements in Fig. 13 is readily apparent.
  • the text file provides the information contained in the DEVICE box 200 in Fig. 13: we are dealing with a TRANSPORT device, a SONY model BVW-75.
  • This device requires SONY protocol in an SMPTE communications format.
  • the device provides one channel of video and four channels of audio.
  • entries preceded by a hash mark (#) are comments and are not required.
  • the "# Device Types" commentary refers to the device codes returned by the Sony protocol.
  • the text file shown in Table 1 also includes data listed under COMMANDS, which data relates to information provided to DEV_CMD box 208, CMD_ITEM box 210, and TIMELINE TASK box 218 in Fig. 13.
  • the COMMANDS are internally generic to editor 2 in the present invention, and may in fact be customized or compound commands. Information for COMMANDS is originally taken from the protocol manual for the device in question. For example, the command RECORD to editor 2 will be issued as a hex code 20 02 as required by the Sony BVW-75 manual. The editor 2 treats the RECORD command as having zero delay and being of message type zero. While this Sony transport device does not require a message type, message type is needed for some protocols such as Ampex.
  • the text file also provides information required by the INPUT PORT box 202, the OUTPUT PORT box 204, and the COM PORT box 206 in Fig. 13.
  • the text file advises editor 2, for example, that this tape recorder has one video input (VI), the video coming from an external source, and four externally supplied audio inputs (A1-A4). Further, this device has two video outputs, each apparently providing the same video signal VI, and four audio outputs (A1-A4).
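The BVW-75 facts described above (device type, protocol, ports, and the RECORD command bytes) can be gathered into one record mirroring the DEVICE, DEV_CMD, and port boxes of Fig. 13. The dictionary layout below is an illustrative assumption, not the actual Table 1 file format:

```python
# Sketch: the Fig. 13 information for the Sony BVW-75, as described
# in the text, collected into a single structure.

bvw75 = {
    "type": "TRANSPORT",
    "manufacturer": "SONY",
    "model": "BVW-75",
    "protocol": ("SONY", "SMPTE"),
    "inputs": {"video": 1, "audio": 4},
    "outputs": {"video": 2, "audio": 4},
    "commands": {
        # name: (command bytes, delay, message type)
        "RECORD": (bytes([0x20, 0x02]), 0, 0),
    },
}

record_bytes, delay, msg_type = bvw75["commands"]["RECORD"]
print(record_bytes.hex(), delay, msg_type)  # 2002 0 0
```

A text-file parser would populate such a structure for each device, giving the editor everything it needs to command the device.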
  • a text file may be created for any device by analyzing the device and the accompanying protocol and technical manuals, and expressing the device in terms of the parameters set forth in Fig. 13.
  • a text file for an Abekas switcher model A-82 may be treated by the present invention as comprising virtual sub-devices.
  • VALUES data may be easily specified and then manipulated by a user.
  • referring to Figs. 14A and 14B, software within applicants' configuration file permits a user to configure the keyboard 22 to suit the user's needs.
  • the layout of the keys on the keyboard 22 may be changed by moving the keycaps and changing the raw key map within the keyboard configuration file.
  • Applicants intend to provide automated means for changing this file at a later date.
  • the Raw_Key_Map 220 maps raw keyboard scan codes into key symbols.
  • the key symbols are essentially character strings which correspond to the label on the key cap.
  • the user can further configure the mapping between a key symbol and the state of the key (e.g., UP, DOWN, SHIFTED or not, CTRL or not, etc.) with the system command to be bound to that key symbol and state.
  • Table 2 attached hereto and incorporated by reference herein is a listing of function calls presently available from applicants' keyboard mapping.
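The two-stage mapping described above (raw scan code to key symbol, then key symbol plus state to system command) can be sketched as below. All scan codes, states, and command names here are invented for illustration; the real bindings live in the keyboard configuration file and Table 2.

```python
# Sketch: Raw_Key_Map-style keyboard dispatch.
# Stage 1: raw scan code -> key symbol (the key cap label).
# Stage 2: (key symbol, key state) -> bound system command.

raw_key_map = {0x1C: "MARK IN", 0x1D: "MARK OUT"}   # hypothetical codes

bindings = {
    ("MARK IN", "DOWN"): "mark_in",
    ("MARK IN", "SHIFT+DOWN"): "clear_mark_in",
    ("MARK OUT", "DOWN"): "mark_out",
}

def dispatch(scan_code, state):
    symbol = raw_key_map.get(scan_code)
    return bindings.get((symbol, state))

print(dispatch(0x1C, "DOWN"))        # mark_in
print(dispatch(0x1C, "SHIFT+DOWN"))  # clear_mark_in
```

Because both tables come from a configuration file, moving a keycap only requires editing the raw key map, not the editor software.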
  • text files for configuration information, device specifications, keyboard mapping and other functions are handled through a library of routines which are loadable into the main CPU board 102. These routines are able to handle buffered input and token recognition.
  • the text file is opened and the first token is scanned into the system.
  • the first token will specify what information follows. For example, consider a text file stating:
  • MENU menu_name { MENU_ITEM [ MENU_ITEM ... ] } — the system then scans for a menu name; in this case the name is "Test". Next the system looks for a "{" followed by one or more MENU_ITEMs. Each MENU_ITEM is signified by the keyword ITEM and has its own syntax. The character "}" finishes the MENU.
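A minimal tokenizer/parser for the MENU grammar just quoted might look like the sketch below. The item syntax is simplified here to "ITEM <name>"; the real MENU_ITEM syntax is richer, and the whitespace tokenizer is an assumption.

```python
# Sketch: scan tokens from a MENU definition of the form
#   MENU menu_name { MENU_ITEM [ MENU_ITEM ... ] }
# where each MENU_ITEM starts with the keyword ITEM.

def parse_menu(text):
    tokens = text.split()              # simple whitespace tokenizer
    assert tokens[0] == "MENU"
    name = tokens[1]                   # e.g. "Test"
    assert tokens[2] == "{"
    items, i = [], 3
    while tokens[i] != "}":            # "}" finishes the MENU
        assert tokens[i] == "ITEM"     # each item begins with ITEM
        items.append(tokens[i + 1])
        i += 2
    return name, items

print(parse_menu("MENU Test { ITEM trim ITEM splice }"))
```

The same buffered-input, first-token-tells-what-follows approach serves the configuration, device specification, and keyboard mapping files alike.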
  • the trim editing function of the system 2 provides an immediate and accurate means of visually searching for a specific edit point, called a "mark.”
  • the trim editing function works in conjunction with MARK keys on keyboard 22 by capturing live video on either side of the MARK point.
  • a horizontal "clip" 250 of video frames 252 is then displayed, preferably on the display monitor 36, that may be scrolled to the left and right to allow quick trimming.
  • Time codes 253 for each frame 252 are displayed below the frame 252. Approximately 16 frames 252 either side of the mark point will be captured for display. Because the video is captured and stored inside the editing system, trimming with the editing function of the system 2 does not require the source video tape recorder to be continuously jogged forward and back.
  • the editing function acquires and caches high quality images, which are timed accurately and stored with an associated time code 253 for each image. This operation gathers an accurate strip of video around any given mark point, which the user then slides back and forth in electronic form like a strip of film for fine tuning of the mark points.
  • live video is fed into image processor board 110 ( Figure 9) on channels A or B (lines 140 or 142) from video input board 108 when either a MARK IN or MARK OUT key is pressed.
  • live video is meant a video feed suitable for showing as a part of a television program, whether the video feed is captured in real time by a video camera, comes from a video tape, is a broadcast feed or a satellite feed.
  • live is used to describe such a feed to distinguish it from the clips used in the trim editing function for establishing precise edit points.
  • the image processor 110 stores the video images in a memory location that operates like a recirculating shift register.
  • a set 250 of seven frames comprising mark point frame 254 and three frames 252 on either side of the mark point 254 are displayed for each mark point, out of a total of 35 frames stored for each mark point.
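The "recirculating shift register" frame store described above behaves like a fixed-size ring buffer: frames are written continuously, the oldest frame falls out, and after the MARK key press enough further frames arrive to center the mark in the 35-frame clip. A sketch under those assumptions (the buffer mechanics are inferred, not taken from the patent's hardware description):

```python
# Sketch: a 35-frame recirculating store; with 17 frames captured
# on each side, the mark frame sits at index 17 of the clip.

from collections import deque

class FrameStore:
    CLIP = 35                       # frames kept per mark point

    def __init__(self):
        self.ring = deque(maxlen=self.CLIP)

    def push(self, frame):
        self.ring.append(frame)     # oldest frame recirculates out

    def clip(self):
        return list(self.ring)

store = FrameStore()
for n in range(100):                # frame numbers stand in for images
    store.push(n)

frames = store.clip()
print(len(frames), frames[17])      # 35 frames, frame 82 centered
```

Scrolling with the trackball or PREVIOUS/NEXT keys then just moves a window over this cached clip; the source VTR never has to be jogged.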
  • An indicator border 256 surrounds each mark point frame 254.
  • program material can be viewed in one set 250 of seven frames 252 and 254 and source material viewed in the other set 250 of seven frames.
  • the two sets 250 can then be moved within the 35 frames for their respective mark points to adjust the mark points relative to one another, using the trackball or the PREVIOUS and NEXT keys to scroll along the 35 frames for the mark points.
  • a line 258 of six source images is also shown in the display of Figure 16. Five images 260 are inactive, i.e., a frozen source video frame appears in them, and one image 262, highlighted by border 264, is active, i.e., the source video appearing in it is live.
  • the source represented in the active image 262 is the source from which the frames 252 and 254 in source set 250 originate.
  • Program video is live in master/switcher window 266.
  • the program video in window 266 is the origin of the program set 250 of program frames 252 and 254.
  • An edit workspace window 268 shows edit commands that have been entered in the system 2 during the trim edit before their execution.
  • a single line of a set 250 of seven frames including a mark point frame 254 and three frames 252 on either side of the mark point frame 254 from one video segment can also be displayed, preferably using monitor 36, to allow selection of a frame 252 from the segment that shows, for example, a particular position in a pitcher's windup as the MARK IN point.
  • the line 258 of source images 260 and 262, the master/switcher window 266 and the edit workspace window 268 have the same significance as in Figure 16.
  • the set 250 of source frames 252 and 254 is replaced by a set 250 of program frames 252 and 254.
  • an edit decision list window 270 which shows edit commands after they have been executed, is available. Either the Figure 16 or Figure 17 versions of the display could be used to select these MARK IN and MARK OUT points.
  • Figure 18 shows a third trim edit display option, in which the two sets 250 of frames 252 and 254 show the beginning and the end of a source video 262 segment. Because the MARK IN and MARK OUT frames 254 are the beginning and the end of the segment, they are shown at the beginning and the end of their respective frame sets 250. As in Figures 16 and 17, different MARK IN and MARK OUT frames 254 can be selected with the Figure 18 display.
  • Figure 19 shows the display, as seen for example on monitor 36, after a proposed trim edit has been established. The seven frames 252 and 254 of a set 250 are divided into, for example, four frames 252 and 254 of program and three frames 252 of source in a single line.
  • This display allows careful examination and adjustment, if necessary, of a proposed splice between source video 262 and program video 266. If adjustment is necessary, the source and program video frames can be scrolled as in the Figure 26 display.
  • when the edit command to execute the splice as shown in edit workspace window 268 is executed, the edit decision list window 270 is updated to show the resulting edit decision list.
  • the following keys are used to provide quick positioning of the sets 250:
  • START Selects the first or earliest frame 252 in the set 250.
  • END Selects the last or latest frame 252 in the set 250.
  • NEXT Steps one frame 252 forward or later in time.
  • PREV Steps one frame 252 reverse or earlier in time.
  • the user presses the CANCL key while in the function.
  • the trackball or position keys are used to view a different frame 252 contained in the set 250.
  • the SELECT key (just above the trackball) is then pressed to select the new MARK point.
  • the original set 250 will still be in memory, i.e., the clip is not recaptured centered around the new MARK point.
  • the MARK point is thus no longer centered in the clip.
  • the MARK point is stored in memory by its time code identification along with the corresponding video frame.
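Trimming within the cached clip, as described in the lines above, can be sketched as follows: the clip is never recaptured around the new mark, so the selected mark frame may end up off-center, and the mark is remembered by its time code together with the frame. The class and method names are illustrative assumptions.

```python
# Sketch: trimming a MARK point inside an already-captured clip.
# NEXT/PREV step the selection; SELECT commits the new mark by
# time code.  The clip itself stays as originally captured.

class Clip:
    def __init__(self, timecodes):
        self.timecodes = timecodes          # one per cached frame
        self.mark = len(timecodes) // 2     # mark initially centered

    def step(self, delta):                  # NEXT (+1) / PREV (-1)
        self.mark = max(0, min(len(self.timecodes) - 1,
                               self.mark + delta))

    def select(self):                       # SELECT key
        return self.timecodes[self.mark]    # new mark's time code

clip = Clip([f"01:00:00:{f:02d}" for f in range(35)])
clip.step(+3)                               # slide three frames later
print(clip.select())                        # 01:00:00:20
```

After SELECT, the mark is three frames past center, and the clip is not recaptured around it, matching the behavior described above.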
  • # JogFields number of fields to jog using JogFields command
  • DmcSpeed { INT 65536 -65536 131072 }
  • # 0x00000400 timecode (never enabled during insert)
  • # ChTimecode channel timecode is recorded on
  • # 0x00000200 Audio 2
  • # 0x00000400 timecode # Audio 3 on other device families
  • # VarFwdLimit Maximum speed at which to use VarPlay
  • VarFwdLimit { INT 131072 65536 131072 } # 2X, 1X, 2X
  • # VarRevLimit Maximum speed (-) at which to use VarPlay
  • VarRevLimit { INT -65536 -65536 0 } # -1X, -1X, 0
  • # TSOLimit Maximum deviation from play allowed in TSO
  • TSOLimit { UINT 16712 66 65535 } # 25.5%, 0.1%, 99.9%

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

An on-line video editing system includes interface software (132) that contains, preferably in text file format, protocol information and a mapping of the architectural functions of the peripheral devices the user can control through the editor. The system permits simultaneous control of up to 48 serially controlled devices as well as 8 systems (12) controlled via a general purpose interface (GPI). The interface software (132) further allows a user to introduce an effect of his or her choosing into the edit operation. Applicants' hardware (2) and software (132) make it possible to build a virtual, hierarchical history of unlimited length containing every step of every edit operation performed with the system. In addition to traditional time code data, the edit decision list (EDL) system (38) offers the user graphical or visual output, allowing the user to actually see the frozen frame corresponding to the desired video edit point.
EP92922558A 1991-10-21 1992-10-20 Systeme d'edition video en ligne. Withdrawn EP0609352A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US78148191A 1991-10-21 1991-10-21
US781481 1991-10-21
PCT/US1992/008952 WO1993008664A1 (fr) 1991-10-21 1992-10-20 Systeme d'edition video en ligne

Publications (2)

Publication Number Publication Date
EP0609352A1 EP0609352A1 (fr) 1994-08-10
EP0609352A4 true EP0609352A4 (fr) 1994-12-14

Family

ID=25122893

Family Applications (1)

Application Number Title Priority Date Filing Date
EP92922558A Withdrawn EP0609352A4 (fr) 1991-10-21 1992-10-20 Systeme d'edition video en ligne.

Country Status (3)

Country Link
EP (1) EP0609352A4 (fr)
CA (1) CA2121682A1 (fr)
WO (1) WO1993008664A1 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339393A (en) * 1993-04-15 1994-08-16 Sony Electronics, Inc. Graphical user interface for displaying available source material for editing
WO1995033335A1 (fr) * 1994-05-30 1995-12-07 Sony Corporation Processeur d'images et systeme d'edition d'images
JP2747251B2 (ja) * 1995-07-24 1998-05-06 日本電気ホームエレクトロニクス株式会社 画像・音声編集システム
TW332293B (en) * 1996-04-23 1998-05-21 Matsushita Electric Ind Co Ltd Editing control apparatus and editing control method
US5969716A (en) * 1996-08-06 1999-10-19 Interval Research Corporation Time-based media processing system
US6154600A (en) * 1996-08-06 2000-11-28 Applied Magic, Inc. Media editor for non-linear editing system
WO1998012702A1 (fr) * 1996-09-20 1998-03-26 Sony Corporation Systeme et procede d'edition, et dispositif et procede de gestion de sequences video
JP3211679B2 (ja) * 1996-09-25 2001-09-25 松下電器産業株式会社 編集装置および編集方法
GB2323734B (en) * 1997-03-27 2001-04-11 Quantel Ltd A video processing system
US6166731A (en) * 1997-09-24 2000-12-26 Sony Corporation Editing digitized audio/video data across a network
KR100258119B1 (ko) 1997-11-29 2000-06-01 전주범 대화형 멀티미디어시스템에 있어서 유저정보 편집 및 편집된정보 재생방법
US20020118954A1 (en) 2001-12-07 2002-08-29 Barton James M. Data storage management and scheduling system
US7543325B2 (en) 1999-03-30 2009-06-02 Tivo Inc. System for remotely controlling client recording and storage behavior
US6757906B1 (en) 1999-03-30 2004-06-29 Tivo, Inc. Television viewer interface system
WO2000062298A1 (fr) 1999-03-30 2000-10-19 Tivo, Inc. Systeme de correction de position de lecture automatique apres avance ou recul rapide
EP1166270A1 (fr) * 1999-03-30 2002-01-02 Tivo, Inc. Systeme d'indication visuelle de progression dans des dispositifs multimedias
US6847778B1 (en) 1999-03-30 2005-01-25 Tivo, Inc. Multimedia visual progress indication system
US7665111B1 (en) 1999-10-20 2010-02-16 Tivo Inc. Data storage management and scheduling system
US8689265B2 (en) 1999-03-30 2014-04-01 Tivo Inc. Multimedia mobile personalization system
US6868225B1 (en) 1999-03-30 2005-03-15 Tivo, Inc. Multimedia program bookmarking system
US20030182567A1 (en) 1999-10-20 2003-09-25 Tivo Inc. Client-side multimedia content targeting system
US7383508B2 (en) * 2002-06-19 2008-06-03 Microsoft Corporation Computer user interface for interacting with video cliplets generated from digital video
US11348155B2 (en) * 2020-05-28 2022-05-31 Diamonds Direct, LC Step through process of generating custom jewelry

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2225685A (en) * 1988-11-30 1990-06-06 Sony Corp Television signal editing
EP0489301A1 (fr) * 1990-11-30 1992-06-10 Kabushiki Kaisha Toshiba Appareil de gestion des images animées

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5897737U (ja) * 1981-12-22 1983-07-02 日本ビクター株式会社 磁気記録再生装置
US4600989A (en) * 1982-03-03 1986-07-15 Cavri Systems, Inc. Universal computer, recorded video interface
JPS59135680A (ja) * 1983-01-24 1984-08-03 Asaka:Kk ビデオ編集用ビユワ−
US4729044A (en) * 1985-02-05 1988-03-01 Lex Computing & Management Corporation Method and apparatus for playing serially stored segments in an arbitrary sequence
US4746994A (en) * 1985-08-22 1988-05-24 Cinedco, California Limited Partnership Computer-based video editing system
US5051845A (en) * 1989-04-27 1991-09-24 Gardner Larry J Closed-loop post production process
US5012334B1 (en) * 1990-01-29 1997-05-13 Grass Valley Group Video image bank for storing and retrieving video image sequences

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2225685A (en) * 1988-11-30 1990-06-06 Sony Corp Television signal editing
EP0489301A1 (fr) * 1990-11-30 1992-06-10 Kabushiki Kaisha Toshiba Appareil de gestion des images animées

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GARDNER L.J. ET AL.: "A Closed-Loop Digital Video Editing System", SMPTE JOURNAL, vol. 99, no. 8, August 1990 (1990-08-01), SCARSDALE, NY US, pages 634 - 638, XP000142532 *
See also references of WO9308664A1 *

Also Published As

Publication number Publication date
EP0609352A1 (fr) 1994-08-10
WO1993008664A1 (fr) 1993-04-29
CA2121682A1 (fr) 1993-04-29

Similar Documents

Publication Publication Date Title
US5649171A (en) On-line video editing system
WO1993008664A1 (fr) Systeme d'edition video en ligne
KR100551459B1 (ko) 편집시스템,편집방법,클립관리장치,및클립관리방법
US6430355B1 (en) Editing device with display of program ID code and images of the program
EP0625783B1 (fr) Methode et appareil pour l'affichage de matériau source de montage disponible
US5206929A (en) Offline editing system
US5760767A (en) Method and apparatus for displaying in and out points during video editing
US5307456A (en) Integrated multi-media production and authoring system
US6400378B1 (en) Home movie maker
US6327420B1 (en) Image displaying method and editing apparatus to efficiently edit recorded materials on a medium
WO1998047146A1 (fr) Dispositif d'edition et procede d'edition
JP2000100129A (ja) 編集システム及び編集方法
CA2553481C (fr) Procede de production televisee
GB2329750A (en) Editing digitized audio/video data
CA2553603C (fr) Technique de production de television
Rosenberg Adobe Premiere Pro 2.0: Studio Techniques
Gardner et al. A closed-loop digital video editing system
Eagle Vegas Pro 9 Editing Workshop
KR20000016596A (ko) 편집 장치 및 편집 방법
KIND et al. Copyright and Disclaimer
Grisetti et al. Adobe Premiere Elements 2 in a Snap
JP2000100128A (ja) 編集システム及び編集方法
JPH11232281A (ja) 動画像編集方法
JP2000100131A (ja) 編集システム及び編集方法
JP2000100133A (ja) 編集システム及び編集方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19940419

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE DK ES FR GB IT NL SE

A4 Supplementary search report drawn up and despatched

Effective date: 19941025

AK Designated contracting states

Kind code of ref document: A4

Designated state(s): DE DK ES FR GB IT NL SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

17Q First examination report despatched

Effective date: 19961216

18W Application withdrawn

Withdrawal date: 19961230