GB2269293A - Apparatus for performing video effects manipulations upon image data

Apparatus for performing video effects manipulations upon image data

Info

Publication number
GB2269293A
GB2269293A
Authority
GB
United Kingdom
Prior art keywords
data
manipulation
image
input image
sections
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9216164A
Other versions
GB2269293B (en)
GB9216164D0 (en)
Inventor
Stephen Mark Keating
John William Richards
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Broadcast and Communications Ltd
Sony Europe BV United Kingdom Branch
Original Assignee
Sony Broadcast and Communications Ltd
Sony United Kingdom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Broadcast and Communications Ltd, Sony United Kingdom Ltd filed Critical Sony Broadcast and Communications Ltd
Priority to GB9216164A priority Critical patent/GB2269293B/en
Publication of GB9216164D0 publication Critical patent/GB9216164D0/en
Publication of GB2269293A publication Critical patent/GB2269293A/en
Application granted granted Critical
Publication of GB2269293B publication Critical patent/GB2269293B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G06T3/02
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628: Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

A system is described for performing video effects manipulations upon image data that has been split into N tiles A1, A2, A3, A4. Each of the N tiles undergoes N manipulations by a video effects unit to determine its contribution towards each of the output tiles A1*, A2*, A3*, A4*. These contributions are collected in an assembly store (62, Fig 11) from where they can be directly read as the full image or read as post-manipulation tiles for subsequent processing in split form.

Description

APPARATUS AND METHOD FOR PROCESSING IMAGE DATA

This invention relates to the processing of image data.
The processing of image data is a well established technical field. Image data can be captured in many different forms. For example, image data may be captured by still/moving image photographic cameras or still/moving image electronic cameras. Once the image data has been captured, it is typically transformed into a stream of data according to a known image data format, such as PAL, NTSC, SECAM or SMPTE 240M.
There exist many different pieces of equipment for performing manipulations upon image data in the aforementioned formats. The manipulations can take many different forms. For example, the manipulation may be recording onto magnetic tape, processing through a digital video effects system, motion compensated standards conversion or spatial filtering.
As the technical field of the capture, manipulation and reproduction of image data has advanced, the use of increasingly high resolution systems has become possible. At the present time this technical field is at the start of a transitional period between the use of formats such as PAL, NTSC and SECAM and the use of a new generation of high definition video formats. In future, it is probable that systems having further increases in resolution beyond the current high definition standards will be introduced (e.g. super high resolution systems may evolve).
A major obstacle that stands in the way of such evolution is the vast amount of investment and development effort that must be expended to produce apparatus for manipulating data in these new higher resolution formats. As spatial and temporal resolution increases, the rate at which image data must be handled increases to an extent that the sophistication of the equipment used must be significantly increased so as to cope. This is a hugely expensive undertaking, since sophisticated new equipment will have to be developed and purchased and the investment in existing equipment will be lost.
One possible way in which the use of higher resolution formats may be facilitated is to split the source stream of data into a number of separate split streams of data each having only part of the information content of the source stream of data. The lowering of the information content of the split streams of data may be sufficient to allow existing or slightly modified equipment, originally designed for use with lower resolution formats, to be employed. This technique can be thought of as providing a hierarchical philosophy in which systems of increased resolution can be built by combining pieces of equipment that were originally produced for lower resolution operation.
Whilst the above is a superficially attractive approach, it brings with it its own set of problems which must be solved if its use is to be practical. In particular, when it is desired to use a video effects unit, the unit must produce a coordinated output in the super high resolution domain, whereas the effect must actually be carried out upon the separate streams of data each carrying only part of the information content of the super high resolution format. Furthermore, the type of sophisticated manipulation that is possible with modern video effects units is such that the spatial frequency content of the split streams may be altered and the way that the information is divided between the different streams may also be altered.
The invention provides apparatus for processing image data, said apparatus comprising: an image data source for generating source data representing an input image; a data splitter for splitting said source data into N pre-manipulation data sections each representing a different area within said input image; a video effects unit for separately performing N manipulations upon each of said N pre-manipulation data sections to determine a contribution of each of said N pre-manipulation data sections to a different one of N post-manipulation data sections each representing a different area within a manipulated image; and an assembly store for receiving and concatenating said contributions of each of said N pre-manipulation data sections to different ones of said N post-manipulation data sections.
The invention solves the problem of allowing a largely standard video effects unit to operate within such a hierarchical system for supporting higher resolution images than it was originally designed for. In particular, the source data is split into data sections each representing a different area within the input image (split by tiling).
This splitting technique is the one most suited to subsequent processing by a video effects unit, since the maximum spatial resolution can be accessed during the manipulation, and effects such as scaling, which modify the spatial frequency content of the split streams, may be more straightforwardly dealt with.
In order to cope with the possibility that a given pre-manipulation data section (input image tile) may be mapped to points within other areas in the post-manipulation data sections, the video effects unit carries out a separate manipulation for each possible mapping from an input data section to an output data section.
Furthermore, it will be appreciated that, since the largely standard video effects unit does not have a sufficient address range to cope with the full super high resolution input image, the sequence of manipulations performed allows for this by each performing the relevant mapping for a different part of the address space of the super high resolution image.
The need to perform N separate manipulations for each of the N pre-manipulation data sections reduces the speed at which the system is able to operate. Splitting the source data into N parts that are then separately and serially processed reduces the speed of operation by a factor of N. If each of these parts must then be subject to N separate and serial manipulations, the speed is reduced by a further factor of N, giving an overall rate reduction of N². One way of alleviating this problem would be to use a video effects unit comprising a plurality of parallel connected video effects units, each performing a different one of the N manipulations.
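By way of illustration only, a minimal Python sketch of this serial N x N structure, assuming a hypothetical callable `effect` standing in for the video effects unit and single-channel image data; none of the helper names are taken from the specification:

```python
import numpy as np

def process_frame(frame, effect, n_rows=2, n_cols=2):
    """Serially determine the contribution of each of the N pre-manipulation
    sections to each of the N post-manipulation sections: N * N passes in
    total. 'effect' is a hypothetical callable standing in for the video
    effects unit; it returns (contribution, key) for a src -> dst mapping."""
    h, w = frame.shape
    th, tw = h // n_rows, w // n_cols
    assembly = np.zeros_like(frame)              # the "assembly store"
    tiles = [(r, c) for r in range(n_rows) for c in range(n_cols)]
    for src in tiles:                            # N pre-manipulation sections
        for dst in tiles:                        # N manipulations per section
            contribution, key = effect(frame, src, dst, (th, tw))
            r, c = dst
            region = assembly[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            np.copyto(region, contribution, where=key)   # keyed concatenation
    return assembly
```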
Whilst this approach increases the speed of operation, it also increases the hardware costs.
Another possibility is that the system may be able to reduce the time it takes to perform certain of the N manipulations. If the system can identify at an early stage that a given pre-manipulation data section will make no contribution to a particular post-manipulation data section under that particular manipulation, then that manipulation can be terminated early. Calculating whether any of the boundary points of a pre-manipulation data section can be found within a post-manipulation data section might be one test that could form part of such an early identification scheme.
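A sketch of one possible form of such a test, assuming the manipulation is specified as a 4x4 homogeneous matrix (as described later in the specification); being a bounding-box test on the transformed corner points it is conservative and would only form part of a complete scheme:

```python
import numpy as np

def may_contribute(corners_uv, M, dst_min, dst_max):
    """Early-rejection test: transform the boundary (corner) points of a
    pre-manipulation section by the 4x4 matrix M and check whether their
    bounding box can intersect the post-manipulation section spanning
    dst_min..dst_max (each a 2-vector of x, y limits)."""
    pts = np.array([[u, v, 0.0, 1.0] for u, v in corners_uv]).T  # 4 x n
    out = M @ pts
    xy = out[:2] / out[3]                    # homogeneous divide
    lo, hi = xy.min(axis=1), xy.max(axis=1)
    # No overlap if the transformed box lies wholly to one side of the target.
    return bool(np.all(lo <= dst_max) and np.all(hi >= dst_min))
```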
Whilst the arrangement of the video effects unit to perform these N² manipulations does mean that all the relevant contributions are generated, these contributions require collecting together so that the full post-manipulation data sections can be arrived at. The assembly store provides this facility.
In some embodiments the manipulated image in its full resolution form could be directly read from the assembly store, e.g. the system could be used to produce film special effects with the film images being converted into the pre-manipulation data sections, manipulated by a high definition video effects unit and then read directly from the assembly store to an electron beam recorder to create the output film images. However, in preferred embodiments of the invention the video effects unit will only be providing some of the necessary manipulations and in order to facilitate subsequent processing such embodiments comprise means for sequentially reading said N post-manipulation data sections from said assembly store; and a data combiner for combining said N post-manipulation data sections to form output data representing said manipulated image.
A convenient way to specify manipulations to be performed by a video effects unit is to use a transformation matrix. Such transformation matrices provide a unique mathematical definition of a particular manipulation.
In order to facilitate the continued use of transformation matrices to specify manipulations whilst recognizing that different address spaces are being utilized, in preferred embodiments each transformation includes a pre-manipulation translation to render concentric the respective co-ordinate spaces of said input image and said video effects unit prior to applying transformations specified in said input image co-ordinate space, and a post-manipulation translation reversing said pre-manipulation translation after application of transformations specified in said input image co-ordinate space.
The use of transformation matrices may be extended to also provide the splitting between respective post-manipulation address spaces by providing that each transformation includes an inter-section translation dependent upon what position said pre-manipulation data section has in said input image relative to said post-manipulation data section to which said contribution is being determined.
The use of pre-manipulation data sections corresponding to tiles of the input image facilitates the subsequent video effects operations, but has the disadvantage of potentially introducing edge effects into the centre of the recombined image due to processing artifacts. In order to assist in overcoming this problem, within preferred embodiments said areas within said input image have overlapping edges.
This overlapping provides a redundancy that can be used to reduce edge effects upon recombination.
When such overlapping edges are present, the system needs some way of identifying where the "true" non-overlapping edges of each area lie and, to this end, in preferred embodiments area key data is associated with each area within said input image and defines borders at which said areas within said input image touch without overlapping, said area key data also being subject to said N manipulations to form manipulated area key data.
The provision of such area key data may be further exploited to control the concatenation within the assembly store in embodiments in which said assembly store is responsive to said manipulated area key data to concatenate said N post-manipulation data sections to form areas within said manipulated image with borders that touch without overlapping.
It will be appreciated from the above that the non-real time nature of the video effects processing that can be performed could be a disadvantage. Expensive studio time and the time of skilled operators may be wasted in the experimentation and adjustment necessary to perfect a particular effect being carried out in non-real time.
Accordingly, in preferred embodiments the invention comprises means for translating data defining a video effect obtained during off-line editing of a lower resolution version of said input image into data for controlling operation of said video effects unit to perform a sequence of N manipulations upon each of said N pre-manipulation data sections to bring about said video effect upon said input image.
In this way, a particular effect can be perfected at relatively high speed in a low resolution system and only once its final form has been reached will it be translated to be performed at the higher resolution afforded by the system of the invention.
The invention also provides a method of processing image data, said method comprising the steps of: generating source data representing an input image; splitting said source data into N pre-manipulation data sections each representing a different area within said input image; separately performing N manipulations upon each of said N pre-manipulation data sections to determine a contribution of each of said N pre-manipulation data sections to a different one of N post-manipulation data sections each representing a different area within a manipulated image; and concatenating said contributions of each of said N pre-manipulation data sections to different ones of said N post-manipulation data sections.
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates a non-real time system for manipulating image data with equipment originally designed for manipulating image data of a lower information content;
Figures 2 and 3 illustrate a tiling splitting technique;
Figure 4 illustrates a data splitter for use with the tiling splitting technique of Figures 2 and 3;
Figure 5 illustrates a particular pre-manipulation data section, its overlap with adjacent data sections and associated area key data;
Figure 6 illustrates the relative coordinate spaces of a super high resolution image frame and the respective pre-manipulation data sections;
Figure 7 illustrates an example manipulation that may be performed by a video effects unit;
Figure 8 illustrates the contribution of a particular pre-manipulation data section to respective post-manipulation data sections;
Figure 9 illustrates how the overall transformation matrix for mapping from one pre-manipulation data section to a given post-manipulation data section can be divided into component transformations;
Figure 10 illustrates pre-manipulation translations, post-manipulation translations and inter-section translations for respective pre-manipulation data sections and post-manipulation data sections;
Figure 11 schematically illustrates the arrangement of a video effects unit within the overall hierarchical system and the data format at various points within this system;
Figure 12 illustrates the assembly store; and
Figure 13 illustrates a real time system for manipulating image data with equipment originally designed for manipulating image data of a lower information content.
Figure 1 illustrates a system for the non real time manipulation of image data. Various apparatus for generating the source stream of image data are illustrated. These include: a still image photographic camera 2 and a slide converter 4; a movie camera 6 and a film scanner 8; a high definition moving image electronic camera 10, a high definition digital video tape recorder 12 and a video upconverter 14; and a super high resolution moving image electronic video camera 16 and a super high resolution digital video tape recorder 18 (illustrating the hierarchical manner in which the system can cope with increased resolution equipment developed in the future).
The image capture of these devices takes place in real time.
Subsequent processing in this embodiment takes place in non real time.
The non real time source stream of data from one of the slide converter 4, the film scanner 8 or the video upconverter 14 is fed to a respective data splitter 20. The split streams of data from the data splitter 20 take the form of a multiplexed stream of data comprising a sequence of contemporaneous portions of each of the split streams, e.g.
the image data from a single frame may be split into four separate streams which are then output in sequence from the data splitter 20.
The non real time multiplexed data streams from the data splitter 20 are recorded on a high definition digital video tape recorder 22. In the case of the super high resolution camera 16 and the super high resolution digital video tape recorder 18, the super high resolution digital video tape recorder captures the image data in real time but plays it back through the data splitter 20 in non real time.
The split data is then fed to a post production processing unit 24. It will be appreciated that, when a plurality of sources are being simultaneously fed to the post production unit 24, the separate sources are synchronised together so that corresponding split streams are processed together.
The post production processing unit 24 can be a standard piece of high definition equipment such as a high definition video effects unit, a high definition filtering system, a keying unit and video recorders for performing manipulations such as foreground and background matting.
The output from the post production processing unit 24 is fed to a further high definition digital video tape recorder 26 via which it is passed to a data combiner 28. The data combiner 28 combines the split streams of data into an output stream of data. This can then be passed through a super high resolution film transfer unit 30 onto a 70mm film projection system 32 or direct to a super high resolution video channel 34. Alternatively, the combined output stream of data could be split again with accompanying pan/scan/zoom operations by a video down conversion system 36 and then displayed via a standard high definition channel 38 or placed onto 35mm film by an electron beam recorder 40.
Figure 2 illustrates a super high-resolution frame A that is split by tiling into pre-manipulation data sections A1, A2, A3, A4, each representing a different area within the super high-resolution frame A. In order to counteract edge effects when the data sections A1, A2, A3, A4 are recombined, each of the data sections A1, A2, A3, A4 includes an overlap region 42 at its border with its neighbours.
Figure 3 schematically illustrates how the different data sections A1, A2, A3, A4 are sequentially fed through the post production unit 24 of Figure 1. Following the four data sections A1, A2, A3, A4 representing the super high-resolution frame A, there is shown a data section B1 that is the first data section from the following super high-resolution frame B.
Figure 4 illustrates a non real time data splitter 20 for use with tiling splitting. The super high resolution data is fed to a swing buffer arrangement 44. Respective frames of the super high resolution data are stored in respective frame stores on either side of the swing buffer 44. A timing circuit 46 extracts the timing and synchronisation information from the super high resolution signal.
This timing circuit controls the operation of a read address generator 48 and a write address generator 50. The read address generator 48 and the write address generator 50 can comprise incrementing counters addressing appropriately programmed PROMs for mapping the incrementing count into a predetermined sequence of addresses within the frame stores of the swing buffer 44. Whilst a current frame of super high resolution data is being fed into one of the frame stores under control of the write address generator 50, the data from the previous frame is being read out of the other frame store under control of the read address generator 48.
It will be appreciated that with appropriate programming of the PROMs the data can be read into the frame stores in its normal raster scan pattern but read out in any sequence and order that is desired.
The data is read out as four raster scanned quadrants to bring about tiling splitting.
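In software terms, the PROM-driven address mapping amounts to re-ordering one raster-scanned frame into four quadrant rasters. A minimal numpy sketch, assuming a 2x2 tiling with the overlap of Figure 2 (the overlap width is an illustrative value):

```python
import numpy as np

def split_into_quadrants(frame, overlap=8):
    """Return the four tiles A1..A4 of a frame, each extended by 'overlap'
    pixels beyond its true border, in the order they would be read out."""
    h, w = frame.shape[:2]
    h2, w2, o = h // 2, w // 2, overlap
    return [frame[:h2 + o, :w2 + o],    # A1: top left
            frame[:h2 + o, w2 - o:],    # A2: top right
            frame[h2 - o:, :w2 + o],    # A3: bottom left
            frame[h2 - o:, w2 - o:]]    # A4: bottom right
```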
A select and control unit 52 responsive to the respective read and write addresses is used to supply the appropriate signals to the control inputs of the frame stores and to an output selector 53.
Figure 5 shows an individual data section (tile) in more detail.
A colour image data section can be considered to be made up of four component signals. Three of these component signals are the R, G and B component colour signals representing the image for the data section A1, including the overlapping portions 42. The fourth component comprises area key data. This area key data is used to identify those parts of the data section A1 that are active in the sense that they should be written into the assembly store (discussed later) and eventually reproduced within the reconstituted super high-resolution frame. The area key data is set active within the area bounded by the lines 54.
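Continuing the splitter sketch above, the area key for the top-left tile A1 might be formed as follows (a sketch only); the key is active inside the true border, i.e. the area bounded by the lines 54, and inactive in the overlap strips:

```python
import numpy as np

def area_key_top_left(tile_h, tile_w, overlap=8):
    """Area key for tile A1: 1 (active) inside the true border, 0 in the
    overlap strips shared with the neighbouring tiles A2 and A3."""
    key = np.zeros((tile_h, tile_w), dtype=np.uint8)
    key[:tile_h - overlap, :tile_w - overlap] = 1
    return key
```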
Figure 6 shows the relationship between the coordinate spaces of the super high-resolution frame and those of the four data sections A1, A2, A3, A4. The super high-resolution coordinate space extends between -x and x horizontally and -y and y vertically with its centre at a position (0,0). In contrast, the coordinate space of the video effects unit extends between -x/2 and x/2 horizontally and -y/2 and y/2 vertically with its centre at a position (0,0).
The relationship between the coordinates of particular points in the super high-resolution domain and their corresponding coordinates in the data sections A1, A2, A3, A4 in the video effects unit domain is shown in the lower half of Figure 6.
Figure 7 illustrates a simple manipulation that one may wish to perform with the video effects unit. The overall desired effect is that the super high-resolution frame should first be rotated by 45° clockwise about its centre (rotation R) and then linearly translated rightwards and downwards by a vector T.
The various manipulations that can be performed by the video effects unit can be mathematically represented as transformation matrices. These matrices can then be used to perform the manipulation by matrix multiplication. An effect that is a composite of more than one such primitive manipulation can be built up by producing a composite matrix that is the product of the various primitive matrices making up the manipulation. In the case of the manipulation illustrated in Figure 7, the overall effect in the super high-resolution domain could be achieved by multiplying each of the points first by a matrix representing the rotation R and then by a matrix representing the translation T, or by a single matrix that is the product T*R.
Matrices representing differing primitive manipulations are as follows. Rotation (roll) by an angle $\theta$ in the xy plane of the image is given by

$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

rotation (yaw) by an angle $\theta$ in the xz plane is given by

$$Y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

rotation (pitch) by an angle $\theta$ in the yz plane is given by

$$P(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

and scaling to independently alter the dimensions in the x, y and z directions is given by

$$S(s_x, s_y, s_z) = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
When the various primitive transformations have been identified, their respective matrix forms can be multiplied to yield an overall matrix transformation such as

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} A & B & C & D \\ E & F & G & H \\ I & J & K & L \\ M & N & O & P \end{bmatrix} \begin{bmatrix} u \\ v \\ 0 \\ 1 \end{bmatrix}$$

in which u and v are the input image coordinates and x', y' and z' are the output image coordinates. In practice it is more convenient to determine the inverse of this generalised matrix and then determine the input image u and v values to be read for a particular position specified by a given x', y' and z' position in the output image. In the case where hidden surfaces occur, hidden surface and line removal techniques will be needed, as is known in the field.
Figure 8 illustrates the four manipulations performed upon the pre-manipulation data section A1 to map it to each of the four possible post-manipulation data sections so as to determine its respective contributions thereto. The "*" next to each quadrant corresponding to a post-manipulation data section in the output image on the right hand side of Figure 8 indicates which of these quadrants is currently active in that the contribution of A1 thereto is being determined.
A comparison of the manipulated image produced in Figure 7 and the four manipulations illustrated in Figure 8 shows that the pre-manipulation data section A1 makes some contribution to all of the post-manipulation data sections except the bottom left hand data section. This illustrates part of the problem addressed by the present invention, whereby each of these different contributions has to be accounted for with equipment that was originally designed for use in a smaller address space.
Figure 9 illustrates the component primitive manipulations (each corresponding to one of the matrices given earlier) from which the manipulation shown at the top of Figure 8 is formed.
The first step is to perform a pre-manipulation translation O to move the centre of the image in the super high-resolution domain (as illustrated in Figure 6) to correspond to the centre of the coordinate space of the video effects unit. Following this, the two manipulations R and T, described in relation to Figure 7, are in turn performed.
Finally, a post-manipulation translation O' is performed to reverse the pre-manipulation translation. The overall manipulation that should be performed is given by the matrix product O'*T*R*O. In practice, the single matrix formed by this matrix product would be used to specify and carry out the manipulation.
An additional factor, not illustrated in Figure 9, must be considered when the pre-manipulation data section does not correspond to the same quadrant as the post-manipulation data section to which its contribution is being determined. In that case, an extra inter-section translation is added to the end of the above-mentioned manipulations to take account of the translation needed to move the centre of the post-manipulation data section to the centre of the pre-manipulation data section.
It will be appreciated that there are many possible different combinations of pre-manipulation, post-manipulation and inter-section translations. In practice, each pair of pre-manipulation data sections A1, A2, A3, A4 and post-manipulation data sections A1*, A2*, A3*, A4* will be unique. Figure 10 schematically illustrates the translations for each of these pairs.
The top left translation is the pre-manipulation translation, the top right translation is the post-manipulation translation and the bottom centre translation is the inter-section translation. The directions and lengths of the various translations illustrated in Figure 10 correspond to those needed to perform the translations in practice.
As previously mentioned, the non-real time nature of the system of Figure 1 has the result that it is advantageous if a particular desired effect can be perfected during off-line editing and then translated into the necessary manipulations for the different data sections. In the example of Figure 7, off-line editing using known lower resolution equipment would be used to perfect the magnitude and directions of the manipulations R and T. Once these have been perfected in the lower resolution system, they can be translated to form the 16 different manipulations required for the super high resolution format by inserting the perfected R and T matrices between the pre-manipulation translation and post-manipulation translation for each data section pair as illustrated in Figure 10.
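The following Python sketch illustrates one way such a translation step might be mechanised, under assumptions about the coordinate conventions of Figures 6 and 10 (quadrant centres at one quarter of the frame width and height from the origin, y measured downwards, and the inter-section translation applied last); the helper names and frame dimensions are hypothetical:

```python
import numpy as np

def translate(v):
    m = np.eye(4)
    m[:2, 3] = v
    return m

def quadrant_centre(q, w, h):
    """Centre of quadrant q = (col, row), col and row in {0, 1}, in super
    high resolution coordinates with the origin at the frame centre."""
    col, row = q
    return np.array([(col - 0.5) * w / 2.0, (row - 0.5) * h / 2.0])

def pair_matrix(pre, post, R, T, w, h):
    """Manipulation matrix for one (pre, post) tile pair: the perfected R and
    T are inserted between the pre-manipulation translation O, the
    post-manipulation translation O' and the inter-section translation."""
    O = translate(quadrant_centre(pre, w, h))     # tile -> SHR coordinates
    O_prime = translate(-quadrant_centre(pre, w, h))
    inter = translate(quadrant_centre(pre, w, h) - quadrant_centre(post, w, h))
    return inter @ O_prime @ T @ R @ O

# The 16 matrices for a 2x2 tiling; R and T here are placeholders for the
# matrices perfected during off-line editing.
quadrants = [(c, r) for r in (0, 1) for c in (0, 1)]
R, T = np.eye(4), np.eye(4)
matrices = {(p, q): pair_matrix(p, q, R, T, 3840.0, 2160.0)
            for p in quadrants for q in quadrants}
```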
Figure 11 illustrates the functional blocks and data formats of this video effects system. A data splitter 56 that has the form illustrated in Figure 4 converts the data from super high resolution format into four successive image quadrants 58. Each of these four quadrants is fed in turn to a high definition digital multiple effects unit (video effects unit) 60. The high definition digital multiple effects unit 60 stores each input quadrant in a frame memory and then performs four manipulations upon it in the manner discussed previously.
The contributions to each post-manipulation data section (output quadrant) are fed to an assembly store 62 into which they are written under control of an area key signal to form the manipulated super high-resolution image 64. The post-manipulation data sections 66 are then sequentially read from the assembly store so that they can undergo further processing (e.g. foreground/background matte processing - not illustrated) before they eventually reach a data combiner 68. The data combiner 68 can again have the form illustrated in Figure 4 and serves to convert the tiled data back into super high-resolution format.
Figure 12 illustrates the assembly store 62 of Figure 11 in more detail. The video data and area key data from the high definition digital multiple effects unit 60 are fed to a pair of field/frame stores 70, 72 arranged as a swing buffer. The area key data enables the writing of the video data into the field/frame stores 70, 72 in a manner that prevents revealed background image portions being stored and removes the overlapping regions 42 from each quadrant. Timing signals delimiting each quadrant are fed to a controller 74 which coordinates the operation of a write address generator 76 and a read address generator 78. The write address generator 76 and the read address generator 78 operate over the full address space of the super high resolution frame and control the writing to and reading from the appropriate sections of the field/frame stores 70, 72. Field/frame store controllers 80, 82 are fed with the generated write addresses and read addresses together with read/write selection signals, output enable signals, the area key data and timing control signals C1, C2. The field/frame store controllers 80, 82 operate such that one of the field/frame stores 70, 72 is being written to concatenate all of the contributions to the post-manipulation data sections whilst the other field/frame store 70, 72 is being read from.
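A software analogue of this assembly store arrangement, as a minimal sketch assuming single-channel image data and a binary area key (the hardware swing-buffer behaviour is reduced to a pair of numpy arrays):

```python
import numpy as np

class AssemblyStore:
    """Swing-buffer pair of full-resolution stores: one accumulates keyed
    contributions while the other is read, then the roles are swapped."""
    def __init__(self, height, width):
        self.buffers = [np.zeros((height, width)), np.zeros((height, width))]
        self.write_idx = 0

    def write(self, contribution, key, top, left):
        """Concatenate one contribution under control of its manipulated area
        key; keyed-out pixels (overlap, revealed background) are dropped."""
        buf = self.buffers[self.write_idx]
        h, w = contribution.shape
        np.copyto(buf[top:top + h, left:left + w], contribution,
                  where=key.astype(bool))

    def swap(self):
        """Swing: the written buffer becomes readable, the other is cleared."""
        self.write_idx ^= 1
        self.buffers[self.write_idx][:] = 0.0

    def read_full(self):
        return self.buffers[self.write_idx ^ 1]
```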
Figure 13 shows a system for a higher speed manipulation of image data. The difference between this system and that of Figure 1 is that between the data splitters 20 and the data combiners 28 there exists a synchronised multi channel architecture comprising four post production processing units 24 with corresponding banks of high definition video tape recorders 22 and 26. It will be appreciated that the four post production processing units 24 must have a coordinated operation and so a controller 42 is provided to synchronise their operation. For example, if the post production processing unit is performing a fade or wipe operation then it is important that this should progress at the same rate for each of the split streams of data if distortion upon recombination is to be avoided. If each of the post production processing units 24 contains four video effects units, then real time video effects manipulation can be achieved. Such an arrangement would be expensive in terms of hardware requirements.
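A minimal sketch of how this synchronised multi-channel operation might look in software, with a thread pool standing in for the four hardware channels; `manipulate` and `progress` are hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

def process_split_streams(tiles, manipulate, progress):
    """Run one channel per split stream in parallel. Passing the same
    'progress' value to every channel keeps effects such as fades and wipes
    advancing at the same rate in all streams, as the controller requires."""
    with ThreadPoolExecutor(max_workers=len(tiles)) as pool:
        futures = [pool.submit(manipulate, tile, progress) for tile in tiles]
        return [f.result() for f in futures]
```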

Claims (12)

1. Apparatus for processing image data, said apparatus comprising: an image data source for generating source data representing an input image; a data splitter for splitting said source data into N pre-manipulation data sections each representing a different area within said input image; a video effects unit for separately performing N manipulations upon each of said N pre-manipulation data sections to determine a contribution of each of said N pre-manipulation data sections to a different one of N post-manipulation data sections each representing a different area within a manipulated image; and an assembly store for receiving and concatenating said contributions of each of said N pre-manipulation data sections to different ones of said N post-manipulation data sections.
2. Apparatus as claimed in claim 1, comprising: means for sequentially reading said N post-manipulation data sections from said assembly store; and a data combiner for combining said N post-manipulation data sections to form output data representing said manipulated image.
3. Apparatus as claimed in any one of claims 1 and 2, wherein each manipulation performed by said video effects unit upon a data section corresponds to a transformation specified by a transformation matrix.
4. Apparatus as claimed in claim 3, wherein each transformation includes a pre-manipulation translation to render concentric respective co-ordinate spaces of said input image and said video effects unit prior to applying transformations specified in said input image co-ordinate space and a post-manipulation translation reversing said pre-manipulation translation after application of transformations specified in said input image co-ordinate space.
5. Apparatus as claimed in any one of claims 3 and 4, wherein each transformation includes an inter-section translation dependent upon what position said pre-manipulation data section has in said input image relative to said post-manipulation data section to which said contribution is being determined.
6. Apparatus as claimed in any one of the preceding claims, wherein said areas within said input image have overlapping edges.
7. Apparatus as claimed in claim 6, wherein area key data is associated with each area within said input image and defines borders at which said areas within said input image touch without overlapping, said area key data also being subject to said N manipulations to form manipulated area key data.
8. Apparatus as claimed in claim 7, wherein said assembly store is responsive to said manipulated area key data to concatenate said N post-manipulation data sections to form areas within said manipulated image with borders that touch without overlapping.
9. Apparatus as claimed in any one of the preceding claims, comprising means for translating data defining a video effect obtained during off-line editing of a lower resolution version of said input image into data for controlling operation of said video effects unit to perform a sequence of N manipulations upon each of said N pre-manipulation data sections to bring about said video effect upon said input image.
10. A method of processing image data, said method comprising the steps of: generating source data representing an input image; splitting said source data into N pre-manipulation data sections each representing a different area within said input image; separately performing N manipulations upon each of said N pre-manipulation data sections to determine a contribution of each of said N pre-manipulation data sections to a different one of N post-manipulation data sections each representing a different area within a manipulated image; and concatenating said contributions of each of said N pre-manipulation data sections to different ones of said N post-manipulation data sections.
11. Apparatus for processing image data substantially as hereinbefore described with reference to the accompanying drawings.
12. A method of processing image data substantially as hereinbefore described with reference to the accompanying drawings.
GB9216164A 1992-07-30 1992-07-30 Apparatus and method for processing image data Expired - Fee Related GB2269293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9216164A GB2269293B (en) 1992-07-30 1992-07-30 Apparatus and method for processing image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9216164A GB2269293B (en) 1992-07-30 1992-07-30 Apparatus and method for processing image data

Publications (3)

Publication Number Publication Date
GB9216164D0 GB9216164D0 (en) 1992-09-09
GB2269293A true GB2269293A (en) 1994-02-02
GB2269293B GB2269293B (en) 1996-04-24

Family

ID=10719526

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9216164A Expired - Fee Related GB2269293B (en) 1992-07-30 1992-07-30 Apparatus and method for processing image data

Country Status (1)

Country Link
GB (1) GB2269293B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2139846A (en) * 1983-05-09 1984-11-14 Dainippon Screen Mfg Magnifying or otherwise distorting selected portions of scanned image
EP0198630A2 (en) * 1985-04-03 1986-10-22 Sony Corporation Method and system for image transformation
GB2183118A (en) * 1985-11-19 1987-05-28 Sony Corp Image signal processing
EP0260144A2 (en) * 1986-09-12 1988-03-16 Westinghouse Electric Corporation Method for generating variably scaled displays
GB2214038A (en) * 1987-10-05 1989-08-23 Int Computers Ltd Image display system
GB2231471A (en) * 1989-04-25 1990-11-14 Quantel Ltd Image processing system with individual transformations for image tiles
EP0449469A2 (en) * 1990-03-30 1991-10-02 Digital F/X, Inc. Device and method for 3D video special effects

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0527097A3 (en) * 1991-08-06 1995-03-01 Eastman Kodak Co Apparatus and method for collectively performing tile-based image rotation, scaling and digital halftone screening


Also Published As

Publication number Publication date
GB2269293B (en) 1996-04-24
GB9216164D0 (en) 1992-09-09


Legal Events

Date Code Title Description
730A Proceeding under section 30 patents act 1977
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20110730