GB2246933A - Production of multi-layered video composite


Info

Publication number
GB2246933A
GB2246933A (Application GB9108172A)
Authority
GB
United Kingdom
Prior art keywords
frame
layer
store
image
video
Prior art date
Legal status
Withdrawn
Application number
GB9108172A
Other versions
GB9108172D0 (en)
Inventor
Michael Seymour
Matthew Raymond Starr
Current Assignee
Rank Cintel Ltd
Original Assignee
Rank Cintel Ltd
Priority date
Filing date
Publication date
Application filed by Rank Cintel Ltd filed Critical Rank Cintel Ltd
Publication of GB9108172D0
Publication of GB2246933A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/90 Tape-like record carriers

Abstract

A multi-layer video composite is formed on a frame-by-frame basis by loading a background frame into a background frame store (14), loading a frame of the next layer into a foreground frame store (12) and supplying the outputs to a compositor (18). The composited layers are looped back to the background frame store and a frame of the next layer is loaded into the foreground frame store (12). The layers may be buffered in a RAM (22) to increase processing speed. Once the final layer in the foreground frame store has been composited with the previous layers in the background frame store, the composite is output. The foreground frame store includes a digital video effects generator to enable interactive editing of each layer. The source of each frame and the edit and composite sequence is stored so that after composition any layer can be edited interactively by separating that layer from the composite and editing it in the DVE.

Description

PRODUCTION OF MULTI-LAYERED VIDEO COMPOSITE

FIELD OF THE INVENTION

This invention relates to computer graphics systems, and in particular to the compositing of multiple graphics and/or pixel planes.
BACKGROUND TO THE INVENTION

Most current systems for creating multi-layered video images operate either with a fixed large RAM memory or with very fast, high-bandwidth disc drives, allowing the operator to lay out an entire sequence of frames one layer at a time. During each pass the next layer is electronically combined with the result of the previous combination and re-stored. Thus, the video image is built up one layer at a time, with the full number of frames being included in each pass.
As an example of this technique, a simple scene may comprise a background layer of sky, an intermediate layer of buildings and a top layer of human figures. For a scene length of 100 frames, the background sky will be laid down first for all 100 frames, the intermediate layer will then be laid down for the 100 frames and combined with the background layer, and the top layer will then be laid down and combined with the combination of the background and intermediate layers. Thus, the system must have sufficient memory to hold 100 frames of both the background layer and the layer being added.
In addition to requiring a large amount of memory, this technique suffers from a number of further problems. In particular, the final image cannot be seen until all the work is complete; there is no interactive preview. This is highly inconvenient for the operator and greatly reduces the control he has when compiling images. Moreover, when the final work has been completed, any problems with the relative positioning of a given layer with respect to another layer cannot be corrected without starting from scratch. Again, this is highly inconvenient and extremely wasteful of operator time.
Some systems employ digital 4:2:2 tape, which removes the problem of fixed RAM or disc drives. However, these systems operate on essentially the same principles and do not address either of the two main problems mentioned.
As the composition of multi-layered video images requires very high quality professional equipment, all existing processes are expensive. They are also lengthy. Lower-cost solutions lose vital quality through 'generation loss' and still do not address the two fundamental problems referred to.
Similar problems arise with editing systems which create video images which are formed from a combination of sources.
It is often desirable to be able to have one of these sources as computer generated graphics. Electronic "painting" allows a video image to be touched up, and complex titles and other such features to be added. The existing methods are restrictive, expensive and time consuming.
Current systems allow the operator to record an entire sequence of frames and then touch up those frames, progressively replacing the existing frames. During the touch-up operation, a single frame appears on a work screen and the user of the system can "paint" over this frame by the use of a pen on a digitising tablet or a similar A/D device. The painting process is a pixel process and thus cannot be modified once completed. The new frame is then returned to the storage medium. The video output is created a frame at a time.
The existing techniques suffer from the following disadvantages:

a) The process is quite slow, as each frame needs to be individually modified.

b) The touch-up work cannot be previewed. While the work completed can be viewed, existing systems do not allow an automated preview of a series of changes, nor do they allow the changes to be edited if the changes introduce new problems.

c) The amount of video that can be worked on is strictly limited to the storage capacity at hand; as this storage is extremely expensive, only a fixed number of seconds is normally available.
As with multi-layering systems, some systems employ digital 4:2:2 tape. Again, this method is essentially the same and does not address any of the three main problems mentioned. Similarly, the use of such storage solutions does not give an acceptable level of quality.
One aspect of the invention aims to overcome these disadvantages through provision of an apparatus and method of forming lengthy video touch-up sequences which neither requires large amounts of internal storage nor limits the ability to edit the result.
In view of the foregoing, it would be desirable to be able to create multi-layered video images such that the final picture is the combination of multiple sources, without losing image quality either through use of an analog step-down or through successive outputting/inputting of the same image. Furthermore, it is desirable to be able to build the layers up interactively in a way that allows preview throughout image creation and enables composites to be re-worked or modified at any stage without having to undo unaffected areas of work.
The present invention in its various aspects and embodiments aims to overcome the disadvantages with the prior art systems identified above.
SUMMARY OF THE INVENTION

The invention is defined by independent claims 1, 12, 22, 26 and 28, to which reference should be made. Broadly, a first aspect of the invention composites a multi-layer video image frame by frame rather than layer by layer.
In one embodiment of the invention the layers of an image are provided from a number of sources: externally stored video frames, internally generated graphics, and internally stored stills. Using a combination of frame stores and digital compositors the image is built up for a single frame layer by layer. The source of each layer is recorded and the sequence of edits and composites is also recorded, so that even after composition work can be 'undone' by recreating the frame and editing any given layer.
Editing of layers is performed in a DVE frame store which acts as a priority foreground frame store. The image is built up for a frame by loading a frame of the new layer into the DVE frame store. Editing is performed as desired and the result is composited with the contents of the background frame store. The output of the compositor is then looped back into the background frame store and written over the previous contents. A frame of the next layer is then written into the DVE frame store and the process repeated until all layers have been added.
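This loop can be summarised in software terms. The following is a minimal sketch in Python; the names (composite_over, composite_frame, dve_edit) and the simple matte-weighted arithmetic are illustrative assumptions, since the patent implements these steps in hardware frame stores (12, 14) and a hardware compositor (18):

    # A minimal sketch of the frame-by-frame compositing loop (assumed names).
    def composite_over(fg, bg):
        """Simplified matte-weighted 'over' of two (pixels, matte) layers."""
        fg_px, fg_m = fg
        bg_px, bg_m = bg
        px = [f * m + b * (1.0 - m) for f, b, m in zip(fg_px, bg_px, fg_m)]
        matte = [m + bm * (1.0 - m) for m, bm in zip(fg_m, bg_m)]
        return px, matte

    def composite_frame(layers, dve_edit=lambda layer: layer):
        """Build one frame from its layers, bottom layer first.

        Each layer is a (pixels, matte) pair; dve_edit stands in for the
        operator's interactive manipulation in the DVE frame store.
        """
        background = layers[0]               # -> background frame store 14
        for layer in layers[1:]:
            foreground = dve_edit(layer)     # -> DVE/foreground frame store 12
            # compositor 18, looped back over the old background:
            background = composite_over(foreground, background)
        return background

The key point the sketch illustrates is that only the current frame of each layer is ever held, and the old background is overwritten on each pass.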
To edit an intermediate layer after composition a third frame store is required. This may either be the system menu frame store or an independent frame store. Layers above the layer to be edited are composited using the stored reference, edit and composite instructions and stored in the menu frame store; the layers below the layer to be edited are treated similarly and stored in the background frame store. The layer to be edited is placed in the DVE frame store and edited interactively, appearing to the editor to be correctly placed within the layer structure.
This embodiment has the advantage that multi-layer composites may be built up in such a way that post-production editing can easily be performed. Moreover, large amounts of memory are not required, and editing both pre-production and post-production is interactive; the editor can edit any layer in the context of the remainder of the image.
In a second aspect of the invention the foreground and background frame stores are used to edit internally generated video data onto externally supplied video images. In one embodiment video frames from e.g. a tape recorder are stored frame by frame in the background frame store and internally generated graphics such as text are loaded into the foreground frame store, edited and composited in the compositor. The output can then be displayed and/or stored as desired.
BRIEF DESCRIPTION OF DRAWINGS

Embodiments of the invention will now be described, by way of example, and with reference to the accompanying drawings, in which:

Figures 1a) - c) show examples of three sources of layers for the multi-layer video composite;

Figure 2 is a schematic diagram showing how one aspect of the invention may be put into practice;

Figure 3a) is a schematic representation of an image having a number of layers;

Figure 3b) illustrates the appearance to the viewer of the composite image of figure 3a);

Figure 4 is a schematic diagram showing how the apparatus of figure 2 may be reconfigured in a second aspect of the invention;

Figure 5 illustrates an editing technique;

Figure 6 illustrates the output of the apparatus using the figure 4 configuration; and

Figure 7 illustrates how an external source can be edited selectively with one of a number of internal sources.
DESCRIPTION OF PREFERRED EMBODIMENT

Each layer of an image to be composited is called a 'Cel' layer.
A Cel can either be an internally generated graphic (with a Matte signal) or an externally tagged source, also with a Matte signal.
The externally provided source can be of two types: a 'still' frame with or without matte, or a moving source (with or without matte) on tape. Stills are normally transferred to internal storage and the moving video maintained on external tape. These three variants are shown in Figures 1a), b) and c) respectively.
GRAPHICS (FIGURE 1a))

Graphics are computer generated images which are stored mathematically. As such they can be rendered or reproduced at any time. They are stored in a compressed mathematical form until they are used. Graphics could be stored as pixels - but then they would be equivalent to stills and would be handled as such. The primary difference between stills and graphics is that graphics can be animated; thus, while being stored in less space than a single frame, a graphic may represent a large number of frames when rendered. Clearly, if one needs to modify a graphic, the master is always available to reproduce the graphics and change them in any way, without loss of image quality.
STILLS (FIGURE 1b))

A still is a frame of video or a collection of pixels (less than a frame). As one frame is approximately 1 Megabyte, stills can be stored on the internal disc drives of the machine for quick and random access.
VIDEO SOURCE (FIGURE 1c))

A video source can be live or supplied from tape. However, as the source enters the system it is best thought of as a series of frames or stills, played at 25 or 30 frames per second (PAL or NTSC standards respectively).
As such, a video signal requires a high bandwidth and large data storage (25 Megabytes per second). For these reasons any video sequences used in the composite picture are left on tape (external to the machine) and are not re-stored internally, except temporarily during the actual compositing process.
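As a rough sanity check of the quoted figure (the patent states the rate without derivation, so the sampling parameters below are assumptions based on common 8-bit 4:2:2 digital video):

    # Approximate data rate of 8-bit 4:2:2 digital video (assumed parameters).
    width, height, fps = 720, 576, 25   # active picture at the PAL frame rate
    bytes_per_pixel = 2                 # 4:2:2 averages two bytes per pixel
    rate_mb_per_s = width * height * bytes_per_pixel * fps / 1e6
    print(rate_mb_per_s)  # ~20.7 MB/s for the active picture; with blanking
                          # and overheads this approaches the quoted 25 MB/s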
Figure 2 shows a block diagram of a system for putting the invention into practice.
The system relies on the fact that compositing an image frame by frame requires very much less storage space. The storage required is a frame for each of the layers making up an individual frame, together with a small amount of additional memory for recording compositing instructions.
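A back-of-envelope comparison, using figures quoted elsewhere in this document (approximately 1 Megabyte per frame, the 100-frame scene of the background example, and the 10-layer image of Figure 3):

    # Storage for the prior-art layer-by-layer method versus the
    # frame-by-frame method of the invention (illustrative arithmetic only).
    frame_mb, scene_frames, layers = 1, 100, 10
    layer_by_layer = scene_frames * frame_mb * 2   # whole scene held twice:
                                                   # running composite + new layer
    frame_by_frame = layers * frame_mb             # one frame of each layer
    print(layer_by_layer, frame_by_frame)          # 200 MB versus 10 MB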
The system relies on a general microprocessor and graphics production unit 28 which can read data from and write data to disc drives and tape machines 20 and 23. These machines comprise the stills and video sources. The processor 28 is accessed by the user from an interface 26 which may comprise a keyboard and other controls as desired.
The system has three frame stores, 10, 12, 14. The first of these is a user menu frame store 10 for viewing and controlling the system. The contents of this frame store may be supplied to a monitor (not shown) for viewing purposes. In some circumstances, to be described, the menu frame store may also be used for storage of video data.
Frame stores 12 and 14 are, respectively, foreground and background frame stores. The actual priority of the two frame stores can be specified. However, priority is normally given to the foreground frame store, which also includes a DVE (digital video effects) device. The DVE frame store 12 has the ability to operate on stored data to transform the actual still/frame and its matte signal so as to re-position the image relative to the background or to its original position.
The outputs of foreground and background frame stores 12 and 14 may be composited by compositor 18. Similarly, the outputs of menu frame store 10 and the foreground frame store 12 may be composited by compositor 16. Although the compositors are implemented as dedicated hardware units, their operation is controlled by the processor 28.
The processor itself may be a microprocessor. However, it is preferred to use a series of transputers with associated 1 Mb DRAMs and a 16 Mb DRAM.
The bus arrangement of Figure 2 allows great flexibility. The output of each frame store can be:

a) Output directly;

b) Combined with another frame store and output;

c) Combined and then looped back through the system;

d) Not combined and looped back; or

e) Ignored (written over).

In addition, the system comprises a RAM memory 22 which can be used to hold frames, graphics and stills which are being composited for a particular frame, and to store details of the compositing, as will be described later.
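These five options could be expressed in software as an enumeration (a paraphrase for clarity; in the patent the routing is performed over video busses under control of processor 28, and the names below are hypothetical):

    # The five routing options for a frame store's output (hypothetical names).
    from enum import Enum, auto

    class OutputRouting(Enum):
        OUTPUT_DIRECT = auto()       # a) output directly
        COMBINE_AND_OUTPUT = auto()  # b) combined with another store, then output
        COMBINE_AND_LOOP = auto()    # c) combined, then looped back
        LOOP_UNCOMBINED = auto()     # d) looped back without combination
        IGNORE = auto()              # e) ignored (written over)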
The system operates as follows. A typical multi-layer image is shown in Figure 3. In this instance the image has a background layer (layer 1) of sky with subsequent layers of cloud, sun, mountains etc. and a top layer which in this case is a thin black strip. In the Figure 3 example there are 10 separate layers.
In the first instance the first cel is put down. This is the background sky cel. At this time, the system tags and records its source - whether it is a graphic, a referenced still, or an SMPTE timecode reference on an input video tape.
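One way to record such a tag is shown below. The record type and field names are hypothetical; the patent specifies only that the source - a graphic, a referenced still, or an SMPTE timecode on tape - is logged so the composite can later be recreated:

    # Hypothetical record of a cel's origin, one of the three source types.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CelSource:
        kind: str                             # 'graphic', 'still' or 'tape'
        still_ref: Optional[str] = None       # internal disc reference for a still
        graphic_ref: Optional[str] = None     # master reference for a graphic
        smpte_timecode: Optional[str] = None  # tape frame, e.g. '01:02:03:04'

    sky = CelSource(kind='tape', smpte_timecode='01:02:03:04')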
Frames added after the first, background, frame are tagged in the same manner and are combined on top of the current frame or frames by placing the frame of the new layer in the foreground/DVE frame store 12, manipulating it as desired using the DVE and then combining it with the frame in the background frame store using compositor 18. The output of compositor 18 is then looped back and input into the background frame store 14, writing over the existing frame, and becomes the new background.
The compositing is irreversible, and the background frame cannot be modified. However, the way it was created is stored and can be repeated at any time. Thus, both the layer references and the sequences of moves applied by the DVE have been stored. Moreover, the user can view the existing background and the frame in the foreground frame store while he is operating on the foreground layer before compositing. Thus, a single cycle produces a new layer, assuming a single frame delay and buffering with the RAM store 22. The system can build the composite picture at a rate of 25 layers per second.
Subsequent layers are manipulated in the same way, with the background frame store 14 always representing the composite background up to the new layer to be added. Thus, when layer 8 in Figure 3a) is to be added, layer 8 is loaded into the foreground frame store 12 and the composite of layers 1 to 7 is present in the background frame store 14.
Finally, the top layer, layer 10, is composited with the background layer, which comprises the composition of layers 1 to 9, to produce the final image as shown in figure 3b).
Once a multi-layer image has been produced for one frame, the producer may want to edit various layers. For example, in figure 3b) the editor may wish to operate on either the background or foreground layers or one of the intermediate layers - moving one of the clouds for example.
To edit the top layer, the remaining lower layers are combined by retrieving the layers using the already stored references and repeating the composition according to the stored instruction set. This combination is stored in the background frame store 14. The cel to be edited, the top layer, is placed in the foreground frame store and can then be modified interactively over the background. Once editing has been completed the rest of the cel frames are re-layered as before.
The background layer can be edited in a similar manner. However, the reverse operation is performed. All layers above the cel to be edited are combined and placed in the background frame store.
The cel to be edited is again placed in the foreground frame store. However, the priority of the frame stores is reversed so that the DVE frame store 12 is now the background priority in the mix. Thus, editing of the cel, which may consist of movement and correction and which must take place in the DVE, is performed in context behind all the other frames. The frame is viewed as background by the user while editing.
The third, more usual case is where one of the intermediate layers is to be edited - for example, if the cloud layer 5 is to be moved with respect to the mountains of layers 4, 6 and 7. In this case, a third frame store is required and the menu frame store 10 is used for non-menu functions. Alternatively, an additional frame store could be added.
The method operates as follows; assume that cel x is incorrect and requires modification. In a first cycle cels 1...(x - 1) are combined and placed in the menu frame store 10. In a second cycle cels (x + 1)...n are combined and placed in the non-DVE frame store 14. The cel to be edited, cel x, is then placed in the DVE frame store and edited as required. Cel x can be seen in front of cels [1...(x - 1)] and behind cels [(x + 1)...n]. Even with the doubling of cycle time, and after the set-up time, it will be possible to modify images at the rate of 12 or 6 frames per second.
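In software terms, and reusing the composite_frame sketch given earlier, the two set-up cycles and the recomposite might look as follows (illustrative only; the store names mirror the hardware menu frame store 10, DVE frame store 12 and background frame store 14):

    # Editing an intermediate cel x (1-based, with 1 < x < n) in context.
    def edit_intermediate(cels, x, dve_edit):
        menu_store = composite_frame(cels[:x - 1])    # cycle 1: cels 1..(x-1)
        background_store = composite_frame(cels[x:])  # cycle 2: cels (x+1)..n
        edited = dve_edit(cels[x - 1])                # interactive edit of cel x
        # Cel x appears in front of the menu-store layers and behind the
        # background-store layers, as described above.
        return composite_frame([menu_store, edited, background_store])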
The embodiment described can also be used to vary the order of layers. For example, in figure 3, if it were desired to swap layers 4 and 5 such that the cloud layer 4 appeared in front of the mountain layer 5, layers 1 to 3 would be combined and placed in the menu frame store. Layers 6 to 10 would be combined and placed in the non-DVE frame store, and one of layers 4 and 5 is then added by loading into the foreground frame store. The image, without one of the layers, say layer 4, is then composited and the process repeated. This time, layers 1 to 5 are placed in the menu frame store, layers 6 to 10 in the background frame store and layer 4 in the foreground frame store. On compositing, layers 4 and 5 will have been transposed.
The embodiment also allows the user to preview forthcoming images several frames in advance. When the relationships between the various layers are correct at a given time x, the user may wish to preview the composite at some future time unit y. The system can automatically move forward, adjusting all the cels as necessary.
All the stills are moved to their correct position for time y, all tape sequences are found and the relevant frames 'grabbed', and all internal graphics are rendered. Should the number of layers exceed the RAM store size, they can be accessed during the process. This will slow down the process, depending on image complexity, but in practice the process will work normally after set-up.
After all new modifications at time y are complete and the operator believes the project to be finished, the final processing can take place. Successive frames can be buffered in the RAM store 22 so as to reduce time by reducing tape pre-roll. This can be done up to the limit of the RAM store available.
The final process, as well as the preview composition process, forms the image frame by frame. However, should the resulting master output be incorrect, the cel list and creation process are still stored and can be modified and re-worked. In this manner the two problems with the prior art methods, lack of interactive editing and the inability to correct mistakes, can be overcome.
The final process may be performed on a layer by layer basis as with the prior art. The preview composition of the various layers is still performed layer by layer as described and the advantages of interactive editing and error correction are still available. Layer by layer final creation is not favoured in view of the large memory requirements.
It will be appreciated that during final image creation each frame is created in a single pass, that is, in a single set of operations.
Figure 4 shows how the hardware of figure 2 can be reconfigured for use in performing other editing techniques. In particular, the following description relates to the mixing of several sources of data.
Figure 5 shows a typical segment of video which is "tagged" by the system. As in the previous embodiment, a start and finish SMPTE timecode (known per se in the art) is logged. In this instance the content of the clip is the output of an animation.
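A logged clip might be represented as below. The record type and field names are assumptions; the patent states only that start and finish SMPTE timecodes are recorded against the clip:

    # Hypothetical record of a tagged clip.
    from dataclasses import dataclass

    @dataclass
    class ClipLog:
        source: str    # tape machine, internal memory or digital disc
        start_tc: str  # SMPTE timecode of the first frame
        end_tc: str    # SMPTE timecode of the last frame
        content: str   # what the clip holds, e.g. 'animation output'

    clip = ClipLog(source='tape machine', start_tc='00:00:00:00',
                   end_tc='00:00:36:14', content='animation output')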
The actual recording of the animation may not necessarily be performed in real time (real time being a precise video rate of 25 frames, or 50 fields, per second in the PAL standard, or 30 frames, or 59.94 fields, in the NTSC standard). Animations often take several seconds or minutes per frame to render due to image complexity, and thus real time production is not possible. The clip is then played back for review. If at any point during the playback the clip is stopped, the on-screen frame is automatically re-rendered and an identical frame replaces the video frame. However, as the new frame represents the stored animation and is an active part of the system, the user can edit the new frame. The painting software and the picture database have all been updated and activated to the relevant point in the animation.
The 1clip" can be logged anywhere, it may be logged on a tape machine, in internal memory or on digital discs. The system transparently, and as a background task, monitors the clips use and constantly updates the animation.
In the example of figure 5, the animation 100 is rendered to tape and then played back; the user stops the sequence, thereby selecting a unique frame, 00914. The animation is updated and the user is able to edit the animation.
For the process to be truly useful, a non-pixel paint system would be used for the touch-up work; thus when the user stops at a point in the animation, the contents of the frame can be manipulated in an intelligent, object-based way, rather than just dealing with a simple pixel plane. When a pixel system is used, it is impossible to separate any external "real" video images from computer generated images such as brush strokes or colourings, as they are both permanently written into a single frame store.
The process described, whilst being useful, has a number of problems.
Firstly, when an edit is completed, the system is required either to replace the selected frame in the sequence or to completely re-render the animation. Secondly, large amounts of internal memory are required if the system is not to use video tape. Thirdly, the system does not allow any flexibility to go back and review the original material; there is no preview function.
It is possible to store part of the image externally and part of the image internally. It is proposed that the background of a picture, being normally recorded video images, is logged as a clip on the tape, and is touched up, as before, but that all the graphical elements are recorded separately to internal memory.
The respective matte signal is also stored to allow later real time compositing. Both the touch up graphics additions and the matte signals are logged with references back to the external clip.
The internal touch up work can then be played simultaneously with the external source. The images are all combined in real time and previewed without actually performing any irreversible operations. An example of this is shown in figure 6.
As before, if the output is stopped the system updates the internal animation to the same point as the preview, so that work can continue. The problem of previewing changes before they are made is thus solved.
This solution reduces the amount of memory needed, as the external original source images that will be touched up are not themselves stored. It is proposed to further reduce the memory required by encoding the remaining internal touch-up images. If a scheme such as delta encoding, run-length encoding, or both is employed by the system, large amounts of video can be stored in a greatly reduced memory space. The touch-up images stored internally are prime examples of images suitable for encoding, as they typically occupy only a relatively small proportion of the screen and are highly correlated between adjacent frames.
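As a concrete illustration of why such images compress well, here is a minimal run-length encoder and decoder (a sketch only; the patent does not prescribe an implementation, and delta encoding against the previous frame could be layered on top of this):

    # Minimal run-length coding of a flat pixel list (illustrative).
    def rle_encode(pixels):
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([p, 1])       # start a new run
        return runs

    def rle_decode(runs):
        return [p for p, n in runs for _ in range(n)]

    # A touch-up scanline is mostly transparent, so it collapses to a few runs.
    line = [0] * 600 + [255] * 40 + [0] * 80
    assert rle_decode(rle_encode(line)) == line
    print(len(rle_encode(line)))          # 3 runs instead of 720 samples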
This method dramatically increases the length of touch-up work that can be performed in a single session. Typically, 30 frames, or one second, of video using traditional methods requires approximately 30 Megabytes of RAM storage; an equally sized system using the methods described herein would allow minutes of separately combined touch-up for the same cost.
The methods so far presented here reduce the amount of memory required, but still allow real time preview of any change. A further enhancement is possible. Multiple modules can be established, allowing the comparison of existing graphics with an altered set of modifications. Selection between modules is interactive, allowing the user easily to compare two or more alternative ideas. An example of this possibility is shown in figure 7, where two alternative sources are provided.
In figure 7 the external video source would have a simple 2-dimensional transform performed on it in the frame store, and the internal graphics would be encoded and thus not stored as indicated. However, the internal graphics can be played in real time and would thus appear to be stored as shown.
Regardless of the enhancements described with reference to figures 6 and 7, if the internal images were played through an internal digital video effects device (a DVE frame store), the images in that DVE frame store could be manipulated in real time with a series of 2- or 3-dimensional transforms. Thus, if the internal images were animated text, a string of letters could be "flown" in over the background external source. As the internal and external elements are all now operating in real time, the system is useful for live events such as sports telecasts. The system shown in figure 2 is ideal for this task when modified as shown in figure 4.
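For illustration, a real-time 2-dimensional DVE move on such a text layer amounts to applying a transform to the layer each frame. A minimal sketch (assumed names, covering only the translate/scale case of the transforms mentioned):

    # A simple 2-D transform of the kind a DVE applies per frame (sketch).
    def transform_2d(x, y, dx=0.0, dy=0.0, sx=1.0, sy=1.0):
        """Scale a point about the origin, then translate it."""
        return x * sx + dx, y * sy + dy

    # 'Flying' text in: interpolate the translation over successive frames.
    for frame in range(5):
        t = frame / 4.0
        print(transform_2d(100.0, 50.0, dx=(1.0 - t) * 600.0))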
It is important to note that the DVE frame store may need to consist of more than one "actual" frame store. For example, graphics may need to be buffered before they are written into the DVE engine itself.
In figure 4, the background is supplied from tape directly to the background frame store, whereas the images to be edited are supplied to the frame store as described. Compositing of the edited images is performed by compositor 18 and the output supplied to the output tape machine. Thus, the hardware shown in figures 2 and 4 can be used to generate multi-layer images and to allow interactive editing and mixing of images from a number of sources. In particular, it is possible to edit an image using the background and foreground frame stores without affecting the original stored image.

Claims (28)

1. A method of compositing a video image having a plurality of layers, comprising: (a) loading a frame of a background layer; (b) loading a frame of a further layer; (c) interactively editing the further layer with respect to the background layer; (d) compositing the background layer and the further layer; (e) storing the composited layers as the new background layer; (f) loading a frame of a fresh further layer and repeating steps (c) to (e) for the fresh further layer; and (g) repeating step (f) until a frame of all layers of the image has been composited.
2. A method according to Claim 1, wherein the loading of a frame of any given layer comprises storing the frame in a frame store and recording a frame origin reference.
3. A method according to Claim 2, wherein the compositing of the background and further layers comprises recording edit and composite instructions for future reference.
4. A method according to any preceding claim, wherein the background layer is loaded into a first frame store and each further layer is loaded successively into a second frame store, the composited background and further layers being stored in the first frame store in place of the previous background layer.
5. A method according to Claim 4, wherein the second frame store includes a video editing device and the interactive editing of the further layer is performed by that device.
6. A method according to any of Claims 1 to 5, wherein a foreground layer of a composited multi-layer image is edited by recreating the image layers from the stored frame references and edit and composite instructions, loading the foreground layer into the second frame store, loading the remaining, composited, layers into the first frame store, editing the foreground layer and recompositing the edited foreground layer with the remaining layers stored in the first frame store.
7. A method according to any of Claims 1 to 5, wherein a background layer of a composited multi-layer image is edited by recreating the image layers from the stored frame references and edit and composite instructions, loading the background layer into the second frame store, compositing the remaining layers and loading them into the first frame store, editing the background layer and recompositing the edited background layer with the remaining layers stored in the first frame store.
8. A method according to any of Claims 1 to 5, wherein an intermediate layer of a composited multi-layer image is edited by recreating the image layers from the stored frame references and edit and composite instructions, compositing the layers above the layer to be edited and loading the composite into one of the first and a third frame stores, compositing the layers below the layer to be edited and loading the composite into the other of the third and first frame stores, loading the layer to be edited into the second frame store and editing the layer, and recompositing the image by compositing the contents of the first, second and third frame stores.
9. A method according to any of Claims 1 to 8, comprising compositing a multi-layer image according to the method of any preceding claim at a time x and interactively previewing the composite at a later time y by performing the same series of edits and composites on the layers of the image at time y as were performed on the layers of the image at time x.
10. A method of compositing a video image having a plurality of frames, comprising creating a first preview frame at a first time x by applying the method of any of Claims 1 to 8 to a frame of each layer of the image, creating a second preview frame at a second, later time y by applying the method of Claim 9 to a frame of each layer of the image occurring at a time (y - x) after the first preview frame, processing the video sequence by compositing frame-by-frame according to the stored edit and composite instructions, and providing an output of the composited sequence for display and/or recordal.
11. A method according to Claim 10, wherein frames to be processed are stored in a RAM buffer.
12. Apparatus for editing video images comprising a first frame store, a second frame store including means for editing at least one video frame, a compositor for compositing the outputs of the first and second frame stores, means for supplying video data from more than one source to the frame stores and means for controlling the supply means, the frame stores and the compositor.
13. Apparatus according to Claim 12, wherein the supply means supplies video data from external sources and the control means comprises means for tagging the supplied video data with time and source references.
14. Apparatus according to Claim 12 or 13, wherein the supply means supplies video data from internal sources.
15. Apparatus according to Claim 12, 13 or 14, wherein the supply means comprises a network of video busses connecting the video data sources with each of the control means and the first and second frame stores.
16. Apparatus according to Claim 15, wherein the video busses connect the output of each of the first and second frame stores to the input of both first and second frame stores.
17. Apparatus according to any of Claims 12 to 16, comprising a third frame store and a further compositor, the further compositor being arranged to composite the output of the second and third frame stores.
18. Apparatus according to Claims 16 and 17, wherein the video busses connect the output of the third frame store with the input of the first and second frame stores and the output of each of the first and second frame stores to the input of the third frame store.
19. Apparatus according to any of Claims 16, 17 or 18, wherein the control means controls the routing of data from the output of each frame store and determines whether the output is output directly, combined with the output of another frame store and output, combined with the output of another frame store and stored in one of the frame stores or elsewhere, stored in one of the frame stores or elsewhere without combination, or ignored.
20. Apparatus according to any of Claims 12 to 19, comprising a memory for storing frames to be edited or composited.
21. Apparatus according to any of Claims 12 to 20, wherein the means for editing a video frame associated with the second frame store comprises a digital video effects device.
22. Apparatus for compositing a multi-layer video image, comprising at least two sources of video data, first and second frame stores, an editing device associated with the second frame store, means for supplying layers of the image to the frame stores frame-by-frame, compositing means coupled to the outputs of the first and second frame stores for compositing a frame of a layer of the image stored in the second frame store with a frame of one or more layers of the image stored in the first frame store, means for storing the output of the compositing means, and means for controlling the supply of video data, the frame stores and the compositing means.
23. A method of editing video data received from a plurality of sources, comprising supplying video data from a first source to a first frame store, supplying video data from a second source to a second frame store, and compositing the contents of the first and second frame stores to produce an output image.
24. A method according to Claim 23, wherein the first image source is an external source supplied frame by frame and the second source is generated from stored stills or graphics.
25. A method according to Claim 23 or 24, wherein the second frame store is a digital video effects frame store and the video data stored therein can be edited before composition with the video data stored in the first frame store.
26. A method of editing computer generated animation sequences, comprising storing the sequences on a reviewable medium, playing back the stored sequences, stopping the sequence at a desired edit point and forming an identical video frame to the video frame at the edit point, editing the frame, assigning the edited frame a reference identical to that of the frame at the edit point, and storing the edited frame.
27. A method according to Claim 26, wherein the reviewable medium is video tape.
28. A method of interactively editing video graphics, comprising providing at least one source of video information, having at least one frame, digitally logged and played in sync with a concurrent animation device, each said logged series of frames corresponding to frames of an identical animation, the said logged frames and the animation being combined in real time to form the resultant composite frames and automatically tracking and updating the animation if any frame of the composite is to be individually selected, thereby allowing editing of the selected frame.
GB9108172A 1990-06-13 1991-04-17 Production of multi-layered video composite Withdrawn GB2246933A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPK061890 1990-06-13
AUPK089990 1990-06-28

Publications (2)

Publication Number Publication Date
GB9108172D0 (en) 1991-06-05
GB2246933A (en) 1992-02-12

Family

ID=25643885

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9108172A Withdrawn GB2246933A (en) 1990-06-13 1991-04-17 Production of multi-layered video composite

Country Status (1)

Country Link
GB (1) GB2246933A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2144607A (en) * 1983-07-28 1985-03-06 Quantel Ltd Improvements relating to video graphic simulator systems
US4689616A (en) * 1984-08-10 1987-08-25 U.S. Philips Corporation Method of producing and modifying a synthetic picture
GB2171875A (en) * 1985-02-28 1986-09-03 Rca Corp Superimposing video images

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6091446A (en) 1992-01-21 2000-07-18 Walker; Bradley William Consecutive frame scanning of cinematographic film
EP0664527A1 (en) * 1993-12-30 1995-07-26 Eastman Kodak Company Method and apparatus for standardizing facial images for personalized video entertainment
EP0664526A2 (en) * 1994-01-19 1995-07-26 Eastman Kodak Company Method and apparatus for three-dimensional personalized video games using 3-D models and depth measuring apparatus
EP0664526A3 (en) * 1994-01-19 1995-12-27 Eastman Kodak Co Method and apparatus for three-dimensional personalized video games using 3-D models and depth measuring apparatus.
EP0828232A2 (en) * 1996-08-07 1998-03-11 Adobe Systems, Inc. Method for concatenated rendering of digital images
EP0828232A3 (en) * 1996-08-07 1999-06-02 Adobe Systems, Inc. Method for concatenated rendering of digital images
EP0827112A3 (en) * 1996-08-26 1999-06-30 Adobe Systems, Inc. Adjustment layers for composited image manipulation
EP0827112A2 (en) * 1996-08-26 1998-03-04 Adobe Systems, Inc. Adjustment layers for composited image manipulation
US6185342B1 (en) 1996-08-26 2001-02-06 Adobe Systems Incorporated Adjustment layers for composited image manipulation
EP1107605A2 (en) * 1999-12-02 2001-06-13 Canon Kabushiki Kaisha A method for encoding animation in an image file
EP1107605A3 (en) * 1999-12-02 2004-03-10 Canon Kabushiki Kaisha A method for encoding animation in an image file
GB2427531A (en) * 2005-06-16 2006-12-27 Gamze Ltd Image composition and animation using layers
CN1960490B (en) * 2005-11-04 2010-08-18 腾讯科技(深圳)有限公司 Method for converting GIF file to SWF file
IT202000003707A1 (en) * 2020-02-21 2021-08-21 Toma Francesca Augmented paper photo album
EP3869778A1 (en) * 2020-02-21 2021-08-25 Toma, Francesca Augmented paper photo album

Also Published As

Publication number Publication date
GB9108172D0 (en) 1991-06-05

Similar Documents

Publication Publication Date Title
US5982350A (en) Compositer interface for arranging the components of special effects for a motion picture production
US6532043B1 (en) Media pipeline with multichannel video processing and playback with compressed data streams
JP3492392B2 (en) Electronic video storage device and electronic video processing system
US7020381B1 (en) Video editing apparatus and editing method for combining a plurality of image data to generate a series of edited motion video image data
US5528310A (en) Method and apparatus for creating motion picture transitions according to non-linear light response
EP1872268B1 (en) Icon bar display for video editing system
US4258385A (en) Interactive video production system and method
US20010036356A1 (en) Non-linear video editing system
US9959905B1 (en) Methods and systems for 360-degree video post-production
US20100026782A1 (en) System and method for interactive visual effects compositing
US5526132A (en) Image editing device with special effects using a recording medium in which two-channel reproduction and single-channel recording are simultaneously possible
US10554948B2 (en) Methods and systems for 360-degree video post-production
US5315390A (en) Simple compositing system which processes one frame of each sequence of frames in turn and combines them in parallel to create the final composite sequence
GB2246933A (en) Production of multi-layered video composite
JPH10285527A (en) Video processing system, device and method
US6272279B1 (en) Editing method of moving images, editing apparatus and storage medium storing its editing method program
US10582135B2 (en) Computer generated imagery compositors
US20050034076A1 (en) Combining clips of image data
CA2160477C (en) Media pipeline with multichannel video processing and playback
US6449009B1 (en) Image transfer method for telecine
Bancroft Recent advances in the transfer and manipulation of film images in the data and HDTV domains
JPH05103233A (en) Video manufacturing device for data video
Carson The Integrated Digital Production Suite
CA1155953A (en) Interactive video production system and method
GB2091515A (en) Interactive Video System

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)