GB2311681A - Video editing systems - Google Patents


Info

Publication number
GB2311681A
Authority
GB
United Kingdom
Prior art keywords
resolution
image
quality
handling means
data handling
Prior art date
Legal status
Granted
Application number
GB9713716A
Other versions
GB2311681B (en)
GB9713716D0 (en)
Inventor
Paul Bamborough
Current Assignee
LIGHTWORKS EDITING SYSTEMS Ltd
Original Assignee
LIGHTWORKS EDITING SYSTEMS Ltd
Priority date
Filing date
Publication date
Priority claimed from GB939311991A external-priority patent/GB9311991D0/en
Priority claimed from GB939320386A external-priority patent/GB9320386D0/en
Application filed by LIGHTWORKS EDITING SYSTEMS Ltd filed Critical LIGHTWORKS EDITING SYSTEMS Ltd
Priority to GB9713716A priority Critical patent/GB2311681B/en
Publication of GB9713716D0 publication Critical patent/GB9713716D0/en
Publication of GB2311681A publication Critical patent/GB2311681A/en
Application granted granted Critical
Publication of GB2311681B publication Critical patent/GB2311681B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/36 - Monitoring, i.e. supervising the progress of recording or reproducing
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 2220/00 - Record carriers by type
    • G11B 2220/20 - Disc-shaped record carriers
    • G11B 2220/25 - Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B 2220/2508 - Magnetic discs
    • G11B 2220/2512 - Floppy disks

Description

Video Editing Systems

This specification relates to video editing systems.
In the production of a work recorded on video media, such as cinematographic films, video tapes, video discs and so forth, editing is required. For example, shots from several cameras will be cut and pasted together, there will be fades from one shot to another, and so forth.
In many current systems an edit decision list is produced. This is a list setting out which clips from which source are to be put together in what order to make the final product. There will also be instructions as to fading and so forth. In the case of films, an editor will work from a cut-list, cutting specified frames from appropriate reels of material and splicing them together as required. Separate information will be produced for "opticals" where optical printing is needed, say, to combine frames for a fade. In the case of video tapes it is known to supply the edit decision list in the form of computer software to equipment which will assemble a final video tape from various source tapes.
The end result is an artistic work on cinematographic film, video tape or other media, which is of "broadcast" quality; that is, the quality and resolution of the images is of the high standard appropriate to the ultimate playback system, be it a projector in a cinema, a tape to be broadcast by a television company, or a tape to be used as a master from which domestic quality tapes or video discs are produced.
It is inconvenient to work with broadcast quality media in the production of the edit list itself and indeed one object of using an edit list is that this is subsequently applied to the broadcast quality material.
Accordingly, various "off line" systems have been developed. In these, lower resolution versions of the images are used by an editor to produce the edit decision list. In one early proposal, an editor would view and work with images from several video machines of nearer domestic than broadcasting quality. The tapes on these contained the same material - in terms of frames, time codes etc. as the original broadcast quality sources, but at a lower quality.
More recently, digital systems have been developed.
Such systems produce low resolution, often data-compressed, digitised representations of the original source material. The digitised representations are stored in fast access storage media such as magnetic disks, optical disks and so forth, and are manipulated in random access memory. In the case of cinematographic film, it is first transcribed into suitable video media by a telecine machine such as the Rank Cintel URSA or BTS FDL 90, before the low resolution images are made.
Once the decisions have been made, the edit decision list is produced and typically this is in the form of a listing on a diskette. The diskette is used with an online video editing system to produce the final broadcast quality product. This process is known as conforming.
Such off-line systems have been produced by EMC, Avid and OLE Limited (Lightworks).
It will be appreciated that the advantage of using lower resolution images, perhaps compressed up to 100 times, is that the power of the equipment (and thus its cost) can be less, in terms of processing speed, memory, storage requirements and so forth. Nevertheless, the image quality is still good enough for editors to work with.
Whilst such a two stage process is to some extent inconvenient, in terms of having to produce e.g. a diskette for an on line editing machine, it would be prohibitively expensive for all editing to be carried out on currently available on line editors. Time is also a factor, and high resolution images take longer to manipulate than the lower resolution images used in off line editing systems.
The problems are increasing as there are moves towards even higher resolution media. For example, High Definition Television ("HDTV") is planned. By way of example, a frame of material in an off line system - of low resolution and compressed - may occupy 20 to 50 Kb.
At conventional broadcast quality resolution the frame may occupy 1 Mb of storage space. With HDTV material, it would occupy 5 Mb of space.
It is also proposed to use digitised film images at full cine resolution of eg. 3000 lines by 4000 pixels. Such images can be input into a digital system, manipulated, and then output at full resolution.
Such high resolution equipment is considerably more expensive than existing on line editing equipment and it is not cost effective to use it for routine editing and making edit decisions.
There is disclosed herein a system for editing video material comprising:
(a) first digital data handling means for receiving and storing a sequence of image frames of relatively high quality;
(b) means for converting the sequence of image frames into a corresponding sequence of image frames of relatively low quality;
(c) second digital data handling means for receiving and storing the sequence of relatively low quality image frames;
(d) operator input means associated with the second data handling means to enable manipulation of the sequence of relatively low quality image frames for edit decisions to be made;
(e) communication means connected between the first and second data handling means, for communicating to the first data handling means edit decisions after manipulation of the frames in the second data handling means, substantially as such decisions are made;
(f) means within the first data handling means for processing a first edit decision, and for manipulating the relatively high quality image frames accordingly, whilst the operator manipulates the relatively low quality frames in the second data handling means to make a second edit decision for communication to the first data handling means; and
(g) means within the first data handling means for producing an output sequence of relatively high quality image frames in accordance with the first, second and any subsequent edit decisions communicated from the second data handling means.
Thus, whilst the operator is spending time manipulating frames and e.g. assessing artistic possibilities before making an edit decision, the results of earlier decisions have been transmitted to the first data handling means. This then processes the decisions and applies them to the high quality images. Processing time in the first data handling means is used more effectively, and the intention is that shortly after the last edit decision has been made, the high quality output material will be available.
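By way of illustration only, this pipelining might be sketched as follows in Python. The queue, thread and function names here are hypothetical and the conforming work is simulated with a delay; the point is simply that decisions made on the low quality proxies are acted upon by the first data handling means while the operator is still deliberating over the next one.

```python
# Minimal sketch of the pipelined edit flow; an in-process queue stands in
# for the communication link between the two data handling means.
import queue
import threading
import time

edit_decisions = queue.Queue()          # Lightworks -> Heavyworks link (stand-in)
SENTINEL = None                         # marks "last decision made"

def heavyworks_conformer():
    """Consume edit decisions and apply them to the high-quality material
    while the operator is still working on the next decision."""
    while True:
        decision = edit_decisions.get()
        if decision is SENTINEL:
            break
        time.sleep(0.5)                 # placeholder for conforming full-resolution frames
        print(f"Heavyworks: conformed {decision}")

def lightworks_operator(decisions):
    """The operator deliberates over low-quality proxies, then sends each
    decision over the link as soon as it is made."""
    for decision in decisions:
        time.sleep(1.0)                 # placeholder for the operator's deliberation
        print(f"Lightworks: decided {decision}")
        edit_decisions.put(decision)
    edit_decisions.put(SENTINEL)

conformer = threading.Thread(target=heavyworks_conformer)
conformer.start()
lightworks_operator([("cut", "clip_A", 120, 300), ("dissolve", "clip_B", 300, 325)])
conformer.join()                        # high-quality output ready soon after the last decision
```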
The material which the operator views and works with in the second data handling means is, in terms of artistic content, identical to that in the first data handling means. All of the same frames are available and coded to enable correlation with the frames in the first data handling means. The difference is that the images are of lower quality.
The relatively high quality image frames which the first data handling means processes, may be of conventional broadcast quality or of higher HDTV quality, or even of film resolution. In practice they may be slightly less than full broadcast quality for some applications.
In practice, in accordance with normal video editing procedures, a number of sequences will be held in the data handling means, and decisions made concerning the selection of frames. Generally, selection will be of a number of frames, i.e. a shorter clip from a complete sequence loaded in.
Depending upon the nature of the editing carried out by the operator, and upon the speed of the first data handling means, it is possible that the latter may be in an idle state between acting upon edit decisions.
It is therefore proposed that the first data handling means be in communication with a plurality of second data handling means. These may be operating on the same project, or on completely separate projects.
In preferred arrangements, the first data handling means receives high quality source material, and produces the lower quality material which is provided to the second data handling means. Thus the first data handling means preferably includes means for reducing the resolution of the images and/or compressing the data, for supply to the second data handling means.
The first data handling means must itself process images of high quality. Nevertheless a certain amount of compression is possible without adverse degradation of image quality, say 8 or 10:1. The resolution should be maintained.
Typically, therefore, images will be reduced in resolution - for example, by omitting alternate lines or by more sophisticated techniques - and compressed by say 20:1 or more for supply to the second data handling means. In the first data handling means the resolution of the source material will be maintained but compression of 8 to 10:1 may be used. This will assist operation but without visible degradation of picture quality.
The first and second data handling means will typically differ in terms of e.g. the memory, storage space, processor speed and so forth. The first data handling means will have increased bandwidth, typically by a factor of four. However, a common processor could be used.
Indeed, the two data handling means could be combined in the same physical unit. The first data handling means could be used as a stand alone system, having for example sufficient power to handle four pictures at once, at full speed. These will be of lower resolution, perhaps similar or slightly better quality than those of the second data handling means.
They appear on the graphics screen as usual, but also on 4 separate video outputs. This opens the way to on-the-fly editing of multicamera material, as if working live or with multiple tape machines, but with all the advantages of random-access. One possibility is to take an EDL based on a live cut, import it, and then quickly improve it until it is perfect. This will be for sports and the like. The machine may have a fifth video output, so that the edit can be watched as it is built.
The machine can also work with much more sound simultaneously. It may have at least 8 real channels of production-quality sound, all playable at once. This can be upgraded to 16 real channels. There is a distinction between real channels and the number of tracks that can be laid - so-called virtual tracks.
These, of course, have no real limits.
As discussed above, the machine can play pictures that are much less compressed and therefore of much higher quality. These pictures are of full horizontal resolution, and 50/60 fields. In practice they will be of a quality good enough for some on-line uses, such as news and some corporate work. They may not be full broadcast quality.
It is necessary to understand that the way in which large amounts of picture can be put into a computer is to compress them. Beyond 2:1 compression, the process is no longer lossless. That is, one cannot go back to exactly where one started. It then becomes a question of how far to go without noticing the losses. The current consensus seems to be that beyond 3:1 compression the losses no longer remain unnoticeable through repeated round-trips, such as will be experienced when adding effects etc. For pure delivery of the finished product one can probably go up to 6 or 8:1 before noticing much at all, and standards are evolving for doing that. However, editing is not delivery, and does require round-trips.
For full broadcast quality, one cannot compress more than, say, 4:1, which is about 6 Mb/second or 3-minute quality. This is 20 Gb/hour (7 disks). It would be very expensive to construct such a machine in terms of storage.
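A rough arithmetic check of these figures, assuming 'Mb' and 'Gb' denote megabytes and gigabytes and, purely for illustration, drives of about 3 Gb each:

```python
# Back-of-envelope check of the 4:1 broadcast-quality storage figures above.
rate_mb_per_s = 6                            # roughly 6 Mb/second at 4:1 compression
gb_per_hour = rate_mb_per_s * 3600 / 1000
print(gb_per_hour)                           # ~21.6, i.e. about "20 Gb/hour"
assumed_disk_gb = 3                          # assumed capacity of one drive of the period
print(round(gb_per_hour / assumed_disk_gb))  # ~7 disks per hour of material
```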
A more practical machine will sustain a single picture or two pictures at 2 Mb/second, plus a great deal of sound. This is equivalent to about 9-minute quality, and looks good.
All of this requires extremely large amounts of disk storage. About 150 Gb may be required on-line, which equates to 15-250 hours depending on quality.
There are also disclosed herein improvements in data compression techniques. Whilst these are of particular use in the context of the video editing system described above, they will be of use in other contexts also.
Techniques are known for compressing the digital files that describe images. These are well documented in such sources as 'Digital Image Processing' by William K Pratt, published by John Wiley & Sons, ISBN 0-471-01888-0.
These techniques include the simpler methods such as run length coding, which will give a compression ratio of typically around 3:1. Whilst such methods are of some use, it is often desirable to have a system that will give a compression ratio which is much higher. One such method is the scheme referred to as 'JPEG', being the method established by the Joint Photographic Experts Group. This method is a highly mathematical technique involving the two-dimensional discrete cosine transform (DCT), as explained in "Digital Image Processing", pages 242-247. The JPEG technique is currently a Draft International Standard for the coding of Continuous Tone Still Images, ISO DIS 10918, and offers users compression ratios of up to 50:1 with little image degradation. The principle of JPEG is as follows (a sketch of the block transform stages is given after the list):
1. Divide the image up into regular tiles. Each tile is either 8x8 pixels or 16x16 pixels.
2. Take the two-dimensional DCT of each block.
Mathematically this is implemented by taking the DCT of each of the lines of eight (or sixteen) pixels in each of the eight (or sixteen) lines per block, writing the answer back into that address of the block, and repeating the operation on each of the columns. This is possible because the 2-D DCT has a mathematical property known as separability.
At this point no compression has taken place, but the data has been decorrelated.
3. Each block then has its energy assessed. This can be done in many ways, such as summing the squares of the 2nd to 64th coefficients in the block.
4. Classification then takes place dependent on energy. Blocks are labelled as High, Medium or Low energy, with various classes in between.
5. Dependent on which class each block is in, it is thresholded (i.e. an offset is removed from each element) and quantised (i.e. each element is represented as the nearest multiple of a constant).
6. The remaining data (which will contain a large number of zeroes) is then run-length or Huffman coded to reduce the data even further.
7. The remaining bits are then 'packeted' with a header and information on the energy class. Error correction is quite often added at this stage.
8. Decoding is the reverse of coding.
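Purely as an illustration of steps 1 to 5, the following Python sketch tiles an image, applies the separable two-dimensional DCT, classifies each block by the energy of its AC coefficients and quantises accordingly. The energy thresholds and quantisation steps are arbitrary assumptions, and the entropy coding and packeting stages (steps 6 and 7) are omitted; this is not the JPEG standard itself.

```python
# Sketch of JPEG-style blockwise DCT, energy classification and quantisation.
import numpy as np
from scipy.fft import dct

def forward_blocks(image, block=8):
    """Tile the image, take the separable 2-D DCT of each tile, classify it
    by AC energy and quantise it. Returns quantised blocks and their classes."""
    h, w = image.shape
    out, classes = [], []
    # Illustrative quantisation step per energy class (coarser for low energy).
    step_for_class = {"high": 4, "medium": 8, "low": 16}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y+block, x:x+block].astype(float)
            # Separability: 1-D DCT along columns, then along rows, gives the 2-D DCT.
            coeffs = dct(dct(tile, axis=0, norm="ortho"), axis=1, norm="ortho")
            # Energy of the AC coefficients (everything except the DC term).
            energy = np.sum(coeffs**2) - coeffs[0, 0]**2
            cls = "high" if energy > 5e4 else "medium" if energy > 5e3 else "low"
            q = step_for_class[cls]
            # Quantise: coefficients stored in units of q (decode multiplies back by q).
            out.append(np.round(coeffs / q).astype(int))
            classes.append(cls)
    return out, classes

blocks, classes = forward_blocks(np.random.randint(0, 256, (64, 64)))
print(classes[:4], blocks[0][0, 0])
```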
A further scheme is MPEG, from the Moving Picture Experts Group. This is described in Draft International Standard ISO/IEC DIS 11172. It offers potentially higher compression ratios, by comparing successive frames of a motion picture sequence.
For most digital imaging, it is usual to have 'scan lines'. The image is represented as a sequence of values describing the intensity (or brightness) of picture elements (pixels). Thus the data structure consists of the brightness value of the first pixel in the first line, followed by the second pixel of the first line, and then subsequent pixels of the first line through to the last pixel of the first line. Then the data structure contains the brightness value of the first pixel of the second line through to the last pixel of the second line. This is continued through to the last line of the image. This is a 'normal' image data structure.
It is, however, sometimes necessary to describe a given image in more than one resolution. Such an example may be where a very high resolution image is to be eventually created, such as a full page colour picture for a high quality magazine. This picture may consist of, say 5000 lines of 4000 picture elements each. This resolution is far higher than can easily be displayed on a colour monitor, such devices being typically less than 1000 lines by 1500 pixels. Thus at a 'one-to-one' resolution only a small part of the final image can be displayed. A typical solution to this dilemma is to store a reduced resolution or subsampled version of the final image which is the same resolution as the colour monitor.
There is disclosed herein a method for compressing digitised image data in data handling means, wherein: (a) a plurality of different image resolutions are provided from given source material; and (b) data in respect of different respective image resolutions is compressed by different respective amounts.
In the preferred arrangement, higher resolution images may be compressed by lower amounts.
Thus, for example, there may be a "pyramidal" or "Laplacian" structure. At the bottom may be a high resolution image, for example corresponding to broadcast quality source material. This will be compressed by a relatively small amount, such as 8 or 10:1, to minimise picture degradation. Higher up the pyramid there will be a picture at lower resolution, for example for use in the video editing system described above. This will be compressed by a greater amount, for example 20:1.
At the top of the "theoretical" data structure there is one single value, describing the overall average brightness of the picture. The resolution level below this would be, say, 2x2 rather than 1x1. Below that, the resolution level will be 4x4 and so forth. Instead of describing a complete level, the difference between that level and a prediction of it, obtained by linear interpolation from the level above, could be stored.
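A minimal sketch of such a pyramid follows, assuming a square image whose side is a power of two. Each coarser level is the 2x2 average of the one below, and each finer level is stored only as its difference from a prediction made from the level above (nearest-neighbour upsampling is used as the predictor here, a simplification of the linear interpolation mentioned above).

```python
# Sketch of a pyramidal/Laplacian-style structure: coarse levels by 2x2
# averaging, finer levels stored only as differences from a prediction.
import numpy as np

def build_pyramid(image):
    """Return (top_value, residuals): the 1x1 average at the apex plus, for
    each finer level, its difference from the upsampled coarser level."""
    levels = [image.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        coarse = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
        levels.append(coarse)
    residuals = []
    for fine, coarse in zip(levels[:-1], levels[1:]):
        predicted = np.kron(coarse, np.ones((2, 2)))   # nearest-neighbour prediction
        residuals.append(fine - predicted)             # small values, compress well
    return levels[-1][0, 0], residuals[::-1]           # apex value, coarse-to-fine residuals

def rebuild(top, residuals):
    """Invert build_pyramid exactly."""
    level = np.array([[top]])
    for residual in residuals:
        level = np.kron(level, np.ones((2, 2))) + residual
    return level

img = np.random.randint(0, 256, (8, 8))
top, residuals = build_pyramid(img)
assert np.allclose(rebuild(top, residuals), img)       # reconstruction is exact
```

Because the residuals are typically small, they compress well at a later entropy-coding stage; the final assertion checks that reconstruction is exact.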
Different techniques could be used at different levels.
Thus, for example, at a level for use as the relatively low quality image, used in the editing system described earlier, the resolution may be set by omitting alternate lines. By contrast, at a level for producing HDTV quality images the resolution may be set by averaging out groups of, say, four pixels.
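The two reduction techniques just mentioned might look like this (the function names are illustrative, and a PAL-sized frame is assumed only for the example shapes):

```python
# Two ways of producing a lower-resolution level from the same source frame.
import numpy as np

def drop_alternate_lines(frame):
    """Crude reduction for the low-quality editing proxy: keep every other line."""
    return frame[::2, :]

def average_2x2(frame):
    """Gentler reduction, e.g. towards HDTV-quality levels: average 2x2 groups of pixels."""
    f = frame.astype(float)
    return (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0

frame = np.random.randint(0, 256, (576, 720))
print(drop_alternate_lines(frame).shape)   # (288, 720)
print(average_2x2(frame).shape)            # (288, 360)
```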
An object of the present invention is the provision of a data compression technique which is versatile and of particular, but not exclusive, benefit in a video editing system as described above.
There has now also been developed an improved system for generating the levels of different image resolutions.
Thus, according to the present invention, there is provided a method of storing data relating to an image, comprising the steps of:
(a) Providing an image as a series of pixels, at a first resolution;
(b) Storing data relating to a second, lower resolution version of the image in which the value of each pixel is calculated as an average of a number of associated pixels at the first resolution; and
(c) Storing data from which the values of the associated pixels can be calculated by a simultaneous equation technique.
Starting at the highest resolution, it is necessary to store data in respect of that resolution, and in respect of the next level - i.e. the lower resolution image to be processed in the "Lightworks" unit. This lower resolution image may have one quarter of the pixels of the first level, this being achieved by averaging out the value of four pixels in the first level. Thus, if four adjacent pixels forming a square in the first level have values A, B, C, D, then the corresponding single pixel in the level above will have the value (A+B+C+D)/4.
Each value requires 8 bits and thus 40 bits are required to store data for this part of the image, in both resolutions.
According to an improved method, less data needs to be stored. The value of the single pixel of the second level is still stored as (A+B+C+D)/4 and requires 8 bits. There are then stored three difference expressions, namely:
(A+B+C-D)/4
(A+B-C+D)/4
(A-B+C+D)/4
From these it is possible, using simultaneous equation techniques, to derive the values of A, B, C and D.
The three difference expressions each require 9 bits (8 for the magnitude and 1 for the sign). Thus the total data to be stored is 8 bits plus 3 x 9 bits, i.e. 35 bits. This is a saving of 5 bits over storing the pixel values separately.
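The arithmetic can be verified with a short sketch; the function names are illustrative, and the rounding that a real 8-bit-plus-sign representation would introduce is ignored here.

```python
# Sketch of the stored-average-plus-three-differences scheme and its inverse.
def encode_quad(a, b, c, d):
    """Store the second-level pixel (the average) plus three difference terms."""
    s  = (a + b + c + d) / 4.0     # the lower-resolution pixel, 8 bits
    d1 = (a + b + c - d) / 4.0     # each difference: 8 bits plus a sign bit
    d2 = (a + b - c + d) / 4.0
    d3 = (a - b + c + d) / 4.0
    return s, d1, d2, d3

def decode_quad(s, d1, d2, d3):
    """Solve the four simultaneous equations for the original pixel values."""
    d = 2.0 * (s - d1)             # s - d1 = D/2
    c = 2.0 * (s - d2)             # s - d2 = C/2
    b = 2.0 * (s - d3)             # s - d3 = B/2
    a = 4.0 * s - b - c - d        # A + B + C + D = 4s
    return a, b, c, d

stored = encode_quad(10, 20, 30, 40)
print(stored)                      # (25.0, 5.0, 10.0, 15.0)
print(decode_quad(*stored))        # (10.0, 20.0, 30.0, 40.0)
```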
Over an entire image there is a significant saving in the amount of data to be stored.
There is still immediate access to the lower resolution values. There is a delay in access to the higher resolution values as these have to be derived. However, that is acceptable in the context of the video editing systems described herein.
Preferably, a plurality of compression operations are carried out by parallel compressors. It is known, for example, to use parallel compressors to compress different parts of a particular image. For example, four compressors could be used, each for a quarter of the image. By contrast, it is preferable here for parallel processing to be carried out on different respective resolution levels. Thus one processor may handle certain low resolution levels whilst one or more others simultaneously handle higher resolution levels. At the lower resolutions, a single processor may be able to handle a number of levels in the time that it takes one, two or more processors to handle a single high resolution level.
An advantage of this arrangement is that when a compressed image is required, all desired resolutions can be obtained simultaneously.
Thus, in the context of the video editing system described earlier, source material may be taken into the first data handling means. There, parallel processors will work on (a) high resolution, mildly compressed files for use in the first data handling means; and (b) lower resolution, more highly compressed files for use in the second data handling means.
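One possible arrangement is sketched below with a thread pool. The level names, sizes and intended ratios are illustrative, and zlib merely stands in for the real image compressor; the point is that each worker handles a different resolution level, so all desired resolutions become available together.

```python
# Sketch: each worker compresses a different resolution level, with heavier
# compression intended for the lower-resolution levels.
from concurrent.futures import ThreadPoolExecutor
import zlib
import numpy as np

def compress_level(level_name, pixels, intended_ratio):
    """Placeholder compressor: zlib stands in for the real image codec;
    'intended_ratio' records the compression the level is meant to receive."""
    payload = zlib.compress(pixels.tobytes())
    return level_name, intended_ratio, len(payload)

levels = {
    "full_resolution": (np.random.randint(0, 256, (576, 720), dtype=np.uint8), "8-10:1"),
    "edit_proxy":      (np.random.randint(0, 256, (288, 360), dtype=np.uint8), "20:1"),
    "coarse_levels":   (np.random.randint(0, 256, (36, 45), dtype=np.uint8),   "heavier"),
}

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(compress_level, name, pix, ratio)
               for name, (pix, ratio) in levels.items()]
    for f in futures:
        print(f.result())    # all desired resolutions become available together
```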
It will be appreciated that the compression and other techniques described herein may be used in systems other than the editing of films, video tapes and the like.
Figure 1 shows diagrammatically apparatus in accordance with the invention. The "Heavyworks" machine represents the first data handling means which may for example be an EISA system with a 66 MHz Intel 486 DX processor or a 586 processor, and say 16 Mb or more of RAM. The "Lightworks" machine represents the second data handling means. This may be an ISA system with a 33 MHz Intel 486 DX processor, again with 16 Mb or more of RAM. This unit is provided with a control panel and a monitor.
The "Heavyworks" unit takes in full resolution digitally stored files, and provides compressed image data for the "Lightworks" unit where edit decisions are made.
Control data is then passed back to enable processing of full resolution files.
The communication flow can be understood by considering the process that the operator would go through in a real job. It is as follows (a sketch of the message exchange is given after the list):
1. All material that may be required in a given job will be 'digitised' into the Heavyworks unit.
2. At the Lightworks terminals, the operator will request a 'clip list' from the Heavyworks unit.
This will be the list of all clips available at the Heavyworks unit.
3. From this list, the operator will request that several of these clips are made available for him at his Lightworks terminal.
4. This request will be passed over the link between the two machines, and the Heavyworks unit will process the requested clips into 'Lightworks' form.
This is a lower quality, more highly compressed form. The clips will then be transmitted over the link to the Lightworks terminal.
5. Having received the clips, the operator will now perform the non-linear editing functions necessary to form a complete work. As he makes each decision, this decision is communicated over the link to the Heavyworks unit.
6. During the process of the construction of a low quality version of the final work on the Lightworks, the Heavyworks unit will be continually creating a high quality representation of this work with its own high quality digitised files. It is acknowledged that this process is not 100% efficient, as the operator is likely to change his mind several times in the artistic process of creating a completed work, and this will involve several unnecessary conforming steps. However, it has the advantage of minimising the time between the operator finishing the work on the Lightworks and the work being available on the Heavyworks station in 'on-line' quality.
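The exchange in steps 1 to 6 might be sketched as follows, with a simple in-process stand-in for the link; the class names, method names and message contents are illustrative, not the actual protocol.

```python
# Sketch of the clip-list request, clip request and edit-decision messages
# exchanged over the link between the two units.
class Heavyworks:
    def __init__(self, clips):
        self.clips = clips                        # full-quality digitised material (step 1)
        self.conformed = []                       # high-quality edit built as decisions arrive

    def clip_list(self):
        return sorted(self.clips)                 # step 2: list of all available clips

    def make_proxies(self, names):
        # Steps 3-4: lower quality, more highly compressed versions for the Lightworks unit.
        return {n: f"proxy({n})" for n in names if n in self.clips}

    def apply_decision(self, decision):
        # Step 6: conform the high-quality material as each decision arrives.
        self.conformed.append(decision)

class Lightworks:
    def __init__(self, link):
        self.link = link

    def edit(self, wanted, decisions):
        available = self.link.clip_list()                                         # step 2
        proxies = self.link.make_proxies([c for c in wanted if c in available])   # steps 3-4
        for decision in decisions:                                                # step 5
            self.link.apply_decision(decision)    # sent over the link as soon as it is made
        return proxies

hw = Heavyworks({"clip_A", "clip_B", "clip_C"})
lw = Lightworks(hw)
lw.edit(["clip_A", "clip_B"], [("cut", "clip_A", 0, 100), ("cut", "clip_B", 40, 90)])
print(hw.conformed)
```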
As regards the hardware of the link, there are several suitable communication media. The first is the bit parallel form of Recommendation 601 of the CCIR. The second is the bit serial form of the above. The third is the SCSI protocol (Small Computer Systems Interface). This is usually the protocol that small computers use for talking to their discs, but a 'dual ported' version exists in which there can be two 'masters', which could be the Lightworks and Heavyworks units respectively. A fourth form of communication medium could be FDDI (Fibre Distributed Data Interface), the standard for fibre optic systems. Yet another option could be the 'Ethernet' protocol for transmitting computer-to-computer communications over co-axial cable.
Figure 2(a) shows how conventional parallel data compression works. Each quarter of an image is compressed by its own compressor. Figure 2(b) shows a pyramidal data structure, with different levels covering different resolutions. At the top is a single pixel, whose value is the average of the entire image. Lower down are higher resolution images. Figure 2(c) shows how compressors can be used in parallel on these different levels. Thus three compressors work in parallel. One handles the top two layers, one the next two, and one the bottom. As noted earlier, the degree of compression for the lower layers - which are at the highest resolution - may be more gentle than for the higher levels.
Nevertheless, the same degrees of compression could be applied and the concept of using parallel compressors in this way is advantageous.
The present application is a divisional application of United Kingdom Patent Application No. 9525301.9.

Claims (2)

Claims
1. A method of storing data relating to an image, comprising the steps of:
(a) Providing an image as a series of pixels, at a first resolution;
(b) Storing data relating to a second, lower resolution version of the image in which the value of each pixel is calculated as an average of a number of associated pixels at the first resolution; and
(c) Storing data from which the values of the associated pixels can be calculated by a simultaneous equation technique.
2. A method as claimed in claim 1 substantially as hereinbefore described.
GB9713716A 1993-06-10 1994-06-10 Video editing systems Expired - Fee Related GB2311681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9713716A GB2311681B (en) 1993-06-10 1994-06-10 Video editing systems

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB939311991A GB9311991D0 (en) 1993-06-10 1993-06-10 Video editing systems
GB939320386A GB9320386D0 (en) 1993-10-04 1993-10-04 Video editing systems
GB9713716A GB2311681B (en) 1993-06-10 1994-06-10 Video editing systems
GB9525301A GB2295482B (en) 1993-06-10 1994-06-10 Video editing systems

Publications (3)

Publication Number Publication Date
GB9713716D0 GB9713716D0 (en) 1997-09-03
GB2311681A true GB2311681A (en) 1997-10-01
GB2311681B GB2311681B (en) 1997-12-10

Family

ID=27266717

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9525301A Expired - Fee Related GB2295482B (en) 1993-06-10 1994-06-10 Video editing systems
GB9713716A Expired - Fee Related GB2311681B (en) 1993-06-10 1994-06-10 Video editing systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB9525301A Expired - Fee Related GB2295482B (en) 1993-06-10 1994-06-10 Video editing systems

Country Status (1)

Country Link
GB (2) GB2295482B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1255251A1 (en) * 2001-05-02 2002-11-06 ClicknShoot Method of remote processing of an article, particularly method of selective digitizing of video tapes
US20210314647A1 (en) * 2017-02-03 2021-10-07 Tv One Limited Method of video transmission and display

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2296600B (en) 1994-08-12 1999-04-07 Sony Corp Portable AV editing apparatus
GB2329752B (en) * 1994-08-12 1999-05-12 Sony Corp Video Editing Method
GB2312078B (en) * 1996-04-12 1999-12-15 Sony Corp Cataloguing video information
GB9716248D0 (en) 1997-08-01 1997-10-08 Discreet Logic Inc Editing image data
GB2351629A (en) * 1999-04-28 2001-01-03 Snell & Wilcox Ltd Video processing apparatus for showing encoder effects

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1604501A (en) * 1977-05-16 1981-12-09 Matra Reconstitution or restoration of images
EP0320755A2 (en) * 1987-12-18 1989-06-21 International Business Machines Corporation Image processing system and method employing combined black and white and gray scale image data

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1080226A (en) * 1964-08-13 1967-08-23 Bernard Lippel Improvements relating to binary picture transmission method and system
US4672444A (en) * 1985-11-14 1987-06-09 Rca Corporation Method for transmitting a high-resolution image over a narrow-band communication channel
GB8630887D0 (en) * 1986-12-24 1987-02-04 Philips Electronic Associated Encoding & displaying pictures
US5057932A (en) * 1988-12-27 1991-10-15 Explore Technology, Inc. Audio/video transceiver apparatus including compression means, random access storage means, and microwave transceiver means
JP2756301B2 (en) * 1989-04-10 1998-05-25 キヤノン株式会社 Image editing method and apparatus
US5267351A (en) * 1989-12-22 1993-11-30 Avid Technology, Inc. Media storage and retrieval system
US5218672A (en) * 1990-01-19 1993-06-08 Sony Corporation Of America Offline editing system with user interface for controlling edit list generation
GB9022761D0 (en) * 1990-10-19 1990-12-05 Eidos Plc Improvements in or relating to video editing systems
JPH04178074A (en) * 1990-11-13 1992-06-25 Nec Corp Coding decoding system for picture signal and its device
DE69222102T2 (en) * 1991-08-02 1998-03-26 Grass Valley Group Operator interface for video editing system for the display and interactive control of video material
GB2287849B (en) * 1994-03-19 1998-03-11 Sony Corp Video signal editing apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1604501A (en) * 1977-05-16 1981-12-09 Matra Reconstitution or restoration of images
EP0320755A2 (en) * 1987-12-18 1989-06-21 International Business Machines Corporation Image processing system and method employing combined black and white and gray scale image data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1255251A1 (en) * 2001-05-02 2002-11-06 ClicknShoot Method of remote processing of an article, particularly method of selective digitizing of video tapes
US20210314647A1 (en) * 2017-02-03 2021-10-07 Tv One Limited Method of video transmission and display
US11792463B2 (en) * 2017-02-03 2023-10-17 Tv One Limited Method of video transmission and display

Also Published As

Publication number Publication date
GB2295482A (en) 1996-05-29
GB2295482B (en) 1997-12-10
GB2311681B (en) 1997-12-10
GB9525301D0 (en) 1996-02-21
GB9713716D0 (en) 1997-09-03

Similar Documents

Publication Publication Date Title
EP0702832B1 (en) Video editing systems
US8774274B2 (en) Compressing and decompressing multiple, layered, video streams employing multi-directional spatial encoding
US6006276A (en) Enhanced video data compression in intelligent video information management system
US5815604A (en) Interactive image manipulation
EP1237370B1 (en) A frame-interpolated variable-rate motion imaging system
Ng et al. Data compression and transmission aspects of panoramic videos
US8644690B2 (en) Large format video archival, storage, and retrieval system
US5301018A (en) Method and apparatus for shuffling image data into statistically averaged data groups and for deshuffling the data
CN1981522A (en) Stereoscopic television signal processing method, transmission system and viewer enhancements
US5905846A (en) Image decoding apparatus and process thereof and image reproduction apparatus
US5729294A (en) Motion video compression system with novel adaptive quantization
US5999657A (en) Recording and reproducing apparatus for digital image information
EP0796013B1 (en) Video image processing apparatus and the method of the same
GB2311681A (en) Video editing systems
US7724964B2 (en) Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
US5841935A (en) Coding method and recording and reproducing apparatus
US6137920A (en) Method and system for generating image frame sequences using morphing transformations
Ng et al. On the data compression and transmission aspects of panoramic video
CA2326674C (en) Video compression in information system
JP2947581B2 (en) Recording / playback method of DCT compressed video data
US6032242A (en) Methods and systems for generating alternate and zigzag address scans based on feedback addresses of alternate and zigzag access patterns
JPH07298195A (en) Image information compressing/expanding device
Plotkin The digital compression facility/spl minus/a solution to today's compression needs
MXPA06009734A (en) Method and system for digital decoding 3d stereoscopic video images.

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20000610

728V Application for restoration filed (sect. 28/1977)
7281 Application for restoration withdrawn (sect. 28/1977)