GB2119594A - Video processing systems - Google Patents
- Publication number
- GB2119594A
- Authority
- GB
- United Kingdom
- Prior art keywords
- frame
- information
- address
- picture
- addresses
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Circuits (AREA)
Abstract
A processing system includes a processor 10 and frame store 11. Both the frame store and the processor are under the control of an addressing mechanism 12. An incoming pixel is processed with previously stored information, and the proportion of processed information re-stored is controlled to prevent errors in picture information density, since a store location may be accessed a number of times within a frame period. The addressing mechanism 12 can generate the desired addresses from address information which is provided for only some of the store locations and which is updated only over more than one frame period. The mechanism includes spatial and temporal interpolators to effect this operation.
Description
SPECIFICATION
Video processing systems
The invention relates to a video processing system inter alia for use in special effects in television.
In known systems used for special effects, video information is received by a framestore acting as a buffer and, in order to produce the desired special effect, an output processor typically receives data from selected addresses within the framestore to reconstitute a picture of different shape or size from that input to the store.
Whilst such systems work over the limited range of special effects currently available, the way in which they operate inhibits their flexibility.
An object of the invention is to provide a system capable of producing greater flexibility in picture manipulation whilst maintaining picture quality, so that the resultant picture is not noticeably degraded.
According to the invention there is provided a video processing system for picture shape manipulation comprising: frame storage means for receiving picture point information in a plurality of locations equivalent to a video frame;
addressing means for addressing selected frame store locations a plurality of times within a frame period in dependence on the manipulation required;
processing means for processing the picture information at any given location each time that location is addressed; and
control means for varying the processing provided in dependence on the density of the picture information at a given picture location within a frame period.
According to a further aspect of the invention there is provided an addressing mechanism for a framestore processing system comprising: means for determining only selected ones of the desired framestore addresses required to be accessed;
means for updating the selected ones of the desired addresses at a rate slower than normal frame rate; and
address interpolation means for calculating all the desired addresses to be accessed from the available address data in both spatial and temporal modes so that all the addresses to be accessed are available updated at normal frame rates.
The invention will now be described with reference to the accompanying drawings in which:
Figure 1 shows a basic embodiment of the system relating to one aspect of the present invention;
Figure 2 shows various aspects of picture manipulation;
Figure 3 shows the processing aspects of the system of Figure 1 in more detail;
Figure 4 shows further details of the processing;
Figure 5 shows a further aspect of picture manipulation;
Figure 6 shows a system capable of providing suitable address manipulation;
Figure 7 shows address interpolation techniques for the system relating to a second aspect of the present invention;
Figure 8 shows an address interpolation configuration for dealing with the spatial manipulation; and
Figure 9 shows an arrangement for both spatial and temporal interpolation of the addresses.
A basic embodiment relating to one aspect of the invention is shown in Figure 1. A processor 10 receives incoming picture information together with information previously stored in framestore 11. The x, y address accessed from the framestore designating a given picture point is determined by the output of address mechanism 12. In addition a control parameter Z is provided by the address mechanism for controlling the processing within processor 10.
The arrangement shown is a radical departure from known systems in that the address mechanism in effect defines the 'shape' of the image to be produced, by a process which builds up a picture by accessing certain framestore locations more than once within the frame period, so that effectively all the original picture points are arranged to go somewhere even though their locations will differ from their original pixel positions. In addition the address mechanism produces the control parameter to ensure that the correct fractions of the picture points are added to the framestore.
The processing and addressing mechanism will now be explained in more detail. The address and storage mechanism is more complicated than at first apparent, in that we have devised a system whereby the picture points can be considered as part of a grid of cells, and these picture points need not be assigned solely to one cell but can have portions assigned to a number of cells, as represented in Figure 2. Thus a standard picture (without processing) made up of a plurality of picture points would be stored in successive cells within the framestore, as represented by picture points P1, P2, P3 and P4 of Figure 2(a), and the address mechanism 12 can be considered as producing a standard addressing sequence with the value of Z fixed, typically at a value equal to 1. Although only pixels P1 to P4 are shown, it is clear that all the cells of the frame store would contain a pixel in this mode.
The representation of Figure 2(b) shows the change in the picture points where a picture (still of standard size) is scrolled horizontally and vertically by 1/2 a picture point. For simplicity only P1 and P2 are shown. The pixels P1 and P2 can now be seen each to have four portions allocated to adjacent store cells. In practice the portion of a given pixel added to a cell is known from the x and y address provided by our address mechanism 12, which is arranged to produce a main address and a remainder, the remainder giving the fraction allocated to a given cell and comprising the Z parameter.
Thus for Figure 2(b), the framestore 11 and processor 10 under the control of address mechanism 12 effectively receive pixel P1 first and, after processing, partially assign it to cells C1 and C2 and the appropriate cells Cq and Cr on the next line. Pixel P2 is then processed and placed in C2, C3, Cr and Cs. In practice the process takes into account the information already allotted from pixel P1 in cell C2, for example, so that this cell location is accessed for a read operation, followed by processing dependent on the value of Z provided, and then the processed output is written into the store.
In practice, where scrolling takes place without picture size change (zoom), the portions of the adjacent pixels allocated to a cell will always add up to 1 at the end of the processing. In the Figure 2(b) illustration, four read, processing and write operations will be required to produce the desired picture information, with Z in this example being 1/4.
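By way of an illustrative sketch (an assumption consistent with the 1/4 fractions above, not a definition of the patented hardware), the main address and remainder produced by the address mechanism can be turned into the four cell fractions as follows:

```python
def split_address(x, y):
    """Split a target address into its main (integer) cell address and the
    four fractions allotted to that cell and its three adjacent neighbours."""
    ix, iy = int(x), int(y)      # main address
    fx, fy = x - ix, y - iy      # remainder, i.e. the Z information
    a = (1 - fx) * (1 - fy)      # portion for cell (ix, iy)
    b = fx * (1 - fy)            # portion for cell (ix + 1, iy)
    c = (1 - fx) * fy            # portion for cell (ix, iy + 1), next line
    d = fx * fy                  # portion for cell (ix + 1, iy + 1)
    return (ix, iy), (a, b, c, d)

# A 1/2 picture point scroll in both directions, as in Figure 2(b):
# each of the four fractions comes out as 1/4 and they sum to 1.
print(split_address(10.5, 20.5))   # ((10, 20), (0.25, 0.25, 0.25, 0.25))
```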
An embodiment of the processor 10 suitable for providing the necessary basic picture manipulation is shown in Figure 3.
A multiplier 13 receives the incoming information and, after multiplication (by the control parameter Z), the result is passed to adder 14 where it is added to the previously stored information. The value of Z will always be chosen to be between 0 and 1.
In the example of Figure 2(a) above, as already explained, the value of Z can be considered as equal to 1, and in the Figure 2(b) arrangement it will be 1/4, since four quarters from adjacent pixels are used to generate the information within cell C2.
Thus, provided the contents of the store are cleared to zero at the start of each picture, the interpolation for generating the information for each cell is automatically achieved merely by the succession of read-process-write operations.
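A minimal sketch of this read-process-write cycle (illustrative only; the function and data layout are assumptions, not the Figure 3 hardware) is:

```python
def accumulate(framestore, cell, incoming, z):
    """One Figure 3 style cycle: read the stored value, scale the incoming
    picture point by the fraction Z, add, and write the result back."""
    stored = framestore.get(cell, 0.0)          # read (store cleared to 0 per picture)
    framestore[cell] = stored + z * incoming    # multiply by Z, add, write back

framestore = {}
# Figure 2(b): four quarter contributions from adjacent pixels build up one cell.
for pixel_value in (60, 60, 60, 60):
    accumulate(framestore, 'C2', pixel_value, z=0.25)
print(framestore['C2'])                         # 60.0
```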
However, this mechanism as it stands is only suitable for scrolling; when there is a change in picture size, additional manipulation is required to avoid errors in picture build-up, as now explained with reference to Figure 2(c). Here a 2:1 reduction in picture size is represented (without scroll) so that pixel P1 is within cell C1 and P2 is partially within cells C1 and C2. Similarly P3 is wholly within cell C2 and P4 is partly within C2 and C3. If some adjustment of the data quantity were not made, the resultant information within cell C2, for example, would be twice that desired (considering picture intensity, for example). Thus it is necessary to provide means of adjusting this situation, and from Figure 2(c) it can be seen that the adjustment value (K) is required to be 1/2 to provide the desired adjustment at each cell.
Thus, although this is a simple case, the general rule in fact follows this explanation in that K is the reciprocal of the compression ratio. Compression ratios of 32:1 have been successfully achieved. Although K can be considered as fixed for a given compression (e.g., K=1/3 for 3 times compression), the compression need not be the same across the entire picture, and indeed variable compression gives rise to a host of special effects which can be achieved with the present system.
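A worked example of the compensation (our illustration of the Figure 2(c) case; the pixel values are arbitrary) shows the effect of K:

```python
store = {}
K = 0.5                                               # reciprocal of the 2:1 compression ratio
contributions = [(80, 0.5), (100, 1.0), (120, 0.5)]   # (pixel value, fraction Z)
for value, z in contributions:                        # half of P2, all of P3, half of P4 into C2
    store['C2'] = store.get('C2', 0.0) + K * z * value
print(store['C2'])                                    # 100.0 -- one pixel's worth, not two
```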
Although the mechanism of read, add manipulated new picture point, write in a single cycle may initially seem straightforward, it is in fact a very powerful tool which performs interpolation and filtering all in one operation or sequence of operations without the need for elaborate additional devices.
Considering Figure 2(d) it can be seen that compression and scrolling can be achieved together by moving the addressing by 1/2 pixel (only P1 and P2 are shown for simplicity).
Although Figure 3 shows the basic mechanism involved, it has been somewhat simplified, and a more comprehensive system for producing the desired processing is shown in Figure 4, which shows how the fractional part of the address and the density compensation referred to above are used. A multiplier 20 is now included to provide the density compensation. As illustrated in Figure 2(b), a pixel may be manipulated so that four portions are allotted, one to each of four adjacent store cells. In order to cope with this manipulation at reasonable speeds it is typically necessary to include additional processing and storage over the mechanism of Figure 3, in which the four points would otherwise have to be computed sequentially during one input pixel period. The single multiplier, adder and framestore of Figure 3 have been replaced by four framestores 34 to 37, each with their associated multipliers 30 to 33 and adders 16 to 19 respectively. The outputs from the framestores are received by summer 38 to produce the combined output. Such a system allows the incoming pixel to be available to each of the four relevant store cells and their associated processing. The address mechanism 12 is now shown as producing the main x, y address for the four respective framestores (A, B, C and D) and, in addition, the fractional (Z) part of the address (a, b, c and d) and the density compensation value (K). The main address for pixel P1 for any of the illustrated situations will be A=C1, B=C2, C=Cq and D=Cr. The fractions will vary. Thus for Figure 2(a), a=1 and b, c and d=0, and K=1.
For Figure 2(b) when dealing with P1, the fractional addresses a, b, c and d=1/4, and K=1.
Any combination of these portions will always equal 1 due to the presence of summer 38.
For the Figure 2(c) situation, for pixel P1, a=1 and b, c and d=0 and K=1/2; for pixel P2, A, B, C and D will be as before but a and b=1/2 and c and d=0, with K=1/2.
It is to be remembered that each store 34 to 37 is a complete framestore. Thus when writing into the system the addresses always express four different but adjacent cells, but on readout from the system the address applied to each of the framestores is the same. In other words, a write operation involves four adjacent cells, whereas a readout involves only a single cell in each store, their outputs being combined by summer 38.
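A software sketch of this write/read behaviour (an assumed data layout; stores 34 to 37, multipliers 30 to 33 and summer 38 are hardware in the patent) is:

```python
stores = [dict(), dict(), dict(), dict()]          # framestores A, B, C and D

def write(pixel, main_xy, fractions, k):
    """Write one incoming pixel: each store receives one of four adjacent
    cells, scaled by its fraction and by the density compensation K."""
    x, y = main_xy
    cells = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    for store, cell, z in zip(stores, cells, fractions):
        store[cell] = store.get(cell, 0.0) + k * z * pixel   # read-add-write

def read(xy):
    """Readout: the same address is applied to all four stores and the
    outputs are added, as by summer 38."""
    return sum(store.get(xy, 0.0) for store in stores)

write(pixel=200, main_xy=(5, 5), fractions=(0.25, 0.25, 0.25, 0.25), k=1.0)
print(read((5, 5)))   # 50.0 -- only pixel P1's quarter has reached this cell so far
```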
The provision of multiplier 20 effectively reduces the incoming data, which is required where compression is taking place, as otherwise a build-up of contributions from many picture points into a cell would give too large a density of information. For special effects the compression will not be the same for each cell within the frame.
For clarity multiplier 20 has been shown separate from multipliers 30 to 33, but in practice multiplier 20 could typically be incorporated within multipliers 30, 31, 32 and 33.
Another special effect will now be described to illustrate the versatility of the system. Figure 5 shows an effect equivalent to a page P being turned. Merely generating the 'shape' you want allows the correct sequence of addressing to be achieved, as well as the correction for the build-up of the picture. Thus at the edge E of page P there will be a greater build-up (but compensated by multiplier 20) than in the overlap portion F. In practice the flap F will appear to be transparent, so that picture information underneath will also be visible.
If it is desired to make the flap opaque, so that the underlying picture is obscured, this can be achieved by the operational cycle of read-replace-write, which can be simply produced by inhibiting the connection between the framestore 11 output and the adder 14 in the Figure 3 configuration. Typically it is convenient to generate an identification 'tag' via the address mechanism to ensure that the system knows which area is above the other.
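In outline (an illustrative sketch; the opaque flag stands in for the inhibited adder connection and the identification tag), the two write modes differ as follows:

```python
def write_cell(store, cell, incoming, z, opaque):
    """Transparent areas use the read-add-write cycle; an opaque area uses
    read-replace-write, discarding the previously stored information."""
    if opaque:
        store[cell] = z * incoming                           # replace
    else:
        store[cell] = store.get(cell, 0.0) + z * incoming    # add

store = {'C2': 75.0}
write_cell(store, 'C2', incoming=200, z=1.0, opaque=True)
print(store['C2'])   # 200.0 -- the underlying picture information is obscured
```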
The way in which the address mechanism 12 can operate for any shape desired is now shown in more detail in Figure 6.
It can be seen from the above examples that the address sequence chosen by the operator effectively defines the shape and size of the output picture, as well as compensating for the accumulated information, by the provision of the x, y, Z and K parameters.
The simple shapes of Figure 2 can easily be generated by the keyboard 21 for input to computer 20, so as to provide the sequence of cell address locations desired to be accessed, as well as entering the values of Z and K required.
Thus each cell of the framestore grid can be determined either to lie within the boundary of the desired picture shape or not; when it does, its address is accessed during manipulation.
Alternatively, standard mathematical formulae can be entered to generate the desired shapes in the computer 20. In the case of a circle, for example, the standard textbook equation for the circle is entered and, simply by defining the cell address of its centre and the circle radius, it is possible to determine whether a particular cell address is within the boundary of the circle or not, and this defines the resultant picture shape.
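A sketch of such a formula-based shape definition (the frame dimensions and names are assumed for illustration) is:

```python
def inside_circle(cell_x, cell_y, centre_x, centre_y, radius):
    """Textbook circle test: is this cell address within the circle boundary?"""
    dx, dy = cell_x - centre_x, cell_y - centre_y
    return dx * dx + dy * dy <= radius * radius

# The cells found to lie inside the shape are those addressed during manipulation.
shape_cells = [(x, y) for y in range(576) for x in range(720)
               if inside_circle(x, y, 360, 288, 100)]
print(len(shape_cells))   # roughly pi * 100**2 cells
```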
The desired shape area is also an indication of the compression relationship, and so this can also be calculated to determine the K value. The value of compression (K) for a given picture area can be determined, for example, by employing standard area computational techniques (see pp. 129-131 of the Hewlett Packard HP25 handbook, 1975), where K is proportional to the area.
The calculated values for x and y, together with the appropriate value of K for that cell for a given shape, are then passed to disc store 23. In practice disc 23 contains a whole range of shapes, including shape sequences to allow picture transformation from one shape to another to be achieved. Once the shapes have been entered, the computer 20 is not necessarily required thereafter, and the system merely uses the disc store 23 as its shape data source. Although the disc access is much faster than that from the computer, it is nevertheless generally not rapid enough to cope with video rates of addressing. In order to overcome this problem we have incorporated an additional mechanism, represented by address interpolator 24, which operates as illustrated in Figure 7. The disc store in practice only holds coarse cell address data, as shown by points K, L, M, N, of which K and L typically represent the 1st and 8th successive pixel addresses horizontally and M and N the equivalent address points 8 lines below. Points Q, R, S and T are typically the equivalent address points 8 frames later. Thus updating the addressing at this rate can be handled by the disc, and the addresses between the available points are interpolated therefrom both spatially and temporally, as illustrated. We have found that this technique does not produce any noticeable degradation of the picture produced.
Although the computer 20 has been described as providing all the values for x, y, Z and K, where only coarse addresses are provided to disc 23 it may be convenient to provide only correspondingly coarse values for the other parameters, and these also are then interpolated to derive all the desired information. Alternatively the parameters can be calculated following the address interpolation process within interpolator 24, using the computation referred to above.
The address interpolation technique as described also works where the disc is producing an effects sequence: whilst the change in addressing is produced by the disc updated every 8 frames (in this example), the address interpolation produces a gradual change over the 8 frames, by giving greater weighting to the adjacent frame than to the remote frame.
An arrangement for providing the spatial address interpolation is shown in Figure 8. The coarse addresses are received by address delay latch 41, which provides a delay equivalent to 8 lines of addresses. The delayed address is passed to multiplier 42 and the current coarse address to multiplier 43 before addition in adder 44. The adder output passes to a further delay 45, which has a delay equivalent to 8 picture point addresses, and this delayed output passes to multiplier 46. The undelayed output from adder 44 passes to multiplier 47 prior to receipt by adder 48, which also receives the output from multiplier 46.
The Figure 8 arrangement in practice is duplicated to give the necessary interpolation for both the x and y address.
Thus from Figure 7, as the K, L, M, N coarse addresses are available, any other interpolated address, e.g. address W, can be determined therefrom. The values of k and l will vary between 0 and 1, typically in 1/8 steps, as the addresses are calculated. These values for the multipliers can conveniently be provided by look-up tables incremented by the address clocks.
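In software terms the weighting amounts to the following (a sketch of the arithmetic only; the hardware of Figure 8 realises it with the delays, multipliers and adders described above):

```python
def interpolate_address(K, L, M, N, k, l):
    """Weight four coarse corner addresses (each an (x, y) target address).
    K and L are the 1st and 8th pixel points on one line, M and N the
    equivalent points 8 lines below; k and l step from 0 to 1 in 1/8 steps."""
    top    = tuple((1 - k) * a + k * b for a, b in zip(K, L))
    bottom = tuple((1 - k) * a + k * b for a, b in zip(M, N))
    return tuple((1 - l) * a + l * b for a, b in zip(top, bottom))

# An intermediate address W, 3/8 of the way across and 5/8 of the way down
# the patch bounded by the coarse addresses:
print(interpolate_address((100.0, 40.0), (116.0, 40.0),
                          (100.0, 48.0), (116.0, 48.0), k=3/8, l=5/8))  # (106.0, 45.0)
```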
The spatial address interpolator is incorporated in the Figure 9 arrangement to produce the temporal interpolation. The output from the disc store 23 is, for ease of explanation, shown as being provided from a first shape store 23A (holding the KLMN addresses of Figure 7, for example) and a second shape store 23B (holding the QRST values). After spatial interpolation, the address values are then available for temporal interpolation using multipliers 52 and 53. The resultant outputs are available via adder 54. The value of t again varies between 0 and 1 in 1/8 steps, conveniently using a look-up table 55. This allows any change in address shape over the 8 frame period to be introduced gradually.
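The temporal stage can be sketched in the same way (illustrative arithmetic; multipliers 52 and 53, adder 54 and look-up table 55 perform it in hardware):

```python
def temporal_blend(addr_from_23A, addr_from_23B, frame_index, frames_between=8):
    """Blend the spatially interpolated addresses from the two shape stores,
    with t stepping from 0 to 1 in 1/8 increments over the 8 frame period."""
    t = frame_index / frames_between
    return tuple((1 - t) * a + t * b
                 for a, b in zip(addr_from_23A, addr_from_23B))

# Three frames into the 8 frame period the address has moved 3/8 of the way:
print(temporal_blend((100.0, 50.0), (140.0, 50.0), frame_index=3))   # (115.0, 50.0)
```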
Although the system has been described with the coarse addresses operating around 8 addresses and 8 frames, these values are not mandatory.
In practice, although the system has been generally described in relation to handling intensity values for the video information, when handling colour data the system would typically be triplicated so that one part handles the luminance data and the other parts the chrominance (e.g., colour difference) information.
In the N.T.S.C. system this is coded as Y, I and Q information respectively. From this it can be seen that, although a relatively large number of framestores are needed, the results obtained justify such a configuration.
Alternatively the colour can be handled on an RGB basis.
Although the system has been described for use in special effects for broadcast T.V., it can be used for other types of video system requiring totally free-form picture manipulation after generation.
Claims (21)
1. A video processing system for picture shape manipulation comprising:
frame storage means for receiving picture point information in a plurality of locations equivalent to a video frame;
addressing means for addressing selected frame store locations a plurality of times within a frame period in dependence on the manipulation required;
processing means for processing the picture information at any given location each time that location is addressed; and
control means for varying the processing provided in dependence on the density of the picture information at a given picture location within a frame period.
2. A system as claimed in Claim 1, wherein the addressing means includes a generator for defining address locations to sub-pixel accuracy.
3. A system as claimed in Claim 1 or 2, wherein said processing means includes an adder for adding previously stored information from said frame storage means to a proportion of the incoming picture point information determined by said control means.
4. A system as claimed in Claim 3, wherein said processing means includes a multiplier for varying the proportion of information passed to said adder by varying a control parameter provided by said control means.
5. A system as claimed in any one of Claims 1 to 4 wherein said addressing means includes a memory for retaining address information pertaining to at least one desired picture shape and said control means includes a memory for retaining control information associated with the at least one desired picture shape.
6. A system as claimed in Claim 5, wherein said memories are provided within a common storage medium.
7. A system as claimed in any one of Claims 1 to 6, wherein said frame storage means includes a plurality of frame stores, said addressing means having access to an address location in each of said frame stores simultaneously, and wherein said processing means includes a plurality of processors adapted to operate simultaneously whereby information on an incoming picture point is processed and stored within each frame store.
8. A system as claimed in Claim 7, wherein adding means are provided to receive each output from said frame stores so as to produce summated picture point information therefrom.
9. A system as claimed in any one of Claims 1 to 8 wherein said addressing means includes,
means for determining only some of said selected ones of the desired frame store addresses required to be accessed,
means for updating the desired addresses at a rate slower than normal frame rate, and
address interpolation means for calculating all the desired addresses to be accessed from the available address data in both spatial and temporal modes so that all the addresses to be accessed are available updated at normal frame rates.
10. A system as claimed in Claim 9, wherein said determining means includes a memory for retaining at least one addressing sequence.
11. A system as claimed in Claim 10, wherein said updating means is adapted to access different portions of said memory.
12. A system as claimed in Claim 11, wherein said memory and said updating means are provided by a moveable recording medium containing only said selected frame store addresses from selected frames.
13. A system as claimed in Claims 10, 11 or 12, wherein generator means are provided to determine desired addressing sequences prior to receipt by said memory.
14. A system as claimed in any one of Claims 9 to 13, wherein said address interpolator means includes a first spatial address interpolator for determining all the relevant addresses from the information available associated with a first frame,
a second spatial address interpolator for delivering all the relevant addresses from the information available associated with the next available frame, and
a temporal address interpolator for delivering the relevant addresses associated with intermediate frames from information provided by both said first and second spatial address interpolators.
15. A system as claimed in Claim 14, wherein each of said spatial address interpolators includes an arithmetic processor for variably providing synthesised addresses in dependence on their spatial relationship with said address information currently available.
16. A system as claimed in Claim 15, wherein said temporal address interpolator includes an arithmetic processor for variably providing synthesised address information in dependence on their temporal relationship with the address information currently available, whereby the addresses within one frame are modified over intermediate frame periods to effect a gradual change from frame to frame.
17. A system as claimed in Claim 15 or 16, wherein an arithmetic information generator is provided to variably control the desired synthesis on a temporal basis.
18. A method of processing picture information comprising the steps of:
receiving picture point information in a plurality of locations equivalent to a video frame,
addressing selected frame store locations a plurality of times within a given frame period in dependence on the manipulation required,
processing the picture information at any given location each time that location is addressed, and
varying the processing provided in dependence on the density of the picture information at a given picture location within a frame period.
19. A method as claimed in Claim 18, wherein the addressing step comprises:
providing only selected ones of the desired frame store addresses required to be accessed,
updating the selected ones of the desired addresses at a rate slower than normal frame rate,
calculating all the desired addresses to be accessed by interpolating the available address information both in spatial and temporal modes so that all the addresses to be accessed are available updated at normal frame rates.
20. A video processing system substantially as described herein and as illustrated in the accompanying drawings.
21. A method of processing picture information substantially as described herein.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB08306789A GB2119594B (en) | 1982-03-19 | 1983-03-11 | Video processing systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB8208054 | 1982-03-19 | ||
GB08306789A GB2119594B (en) | 1982-03-19 | 1983-03-11 | Video processing systems |
Publications (3)
Publication Number | Publication Date |
---|---|
GB8306789D0 (en) | 1983-04-20 |
GB2119594A (en) | 1983-11-16 |
GB2119594B (en) | 1986-07-30 |
Family
ID=26282299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB08306789A Expired GB2119594B (en) | 1982-03-19 | 1983-03-11 | Video processing systems |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2119594B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2563401A1 (en) * | 1984-04-19 | 1985-10-25 | Quantel Ltd | VIDEO PROCESSING SYSTEM FOR PRODUCTION EFFECTS IN TELEVISION |
DE3515037A1 (en) * | 1984-04-25 | 1985-10-31 | Quantel Ltd., Kenley, Surrey | IMAGE SIGNAL PROCESSING DEVICE |
EP0186206A2 (en) * | 1984-12-27 | 1986-07-02 | Sony Corporation | Method and system for effecting a transformation of a video image |
GB2172167A (en) * | 1985-03-07 | 1986-09-10 | Sony Corp | Video signal processing |
EP0198630A2 (en) * | 1985-04-03 | 1986-10-22 | Sony Corporation | Method and system for image transformation |
EP0205252A1 (en) * | 1985-05-08 | 1986-12-17 | Sony Corporation | Video signal processing |
EP0221704A2 (en) * | 1985-10-21 | 1987-05-13 | Sony Corporation | Video signal processing |
FR2593622A1 (en) * | 1985-11-13 | 1987-07-31 | Sony Corp | DATA PROCESSING FACILITY, IN PARTICULAR IMAGES, AND ADDRESS GENERATING AND ARITHMETIC PROCESSING CIRCUITS |
EP0248626A2 (en) * | 1986-06-03 | 1987-12-09 | Quantel Limited | Video signal processing |
EP0398810A1 (en) * | 1989-05-19 | 1990-11-22 | Sony Corporation | Apparatus for image transformation |
EP0442825A2 (en) * | 1990-02-16 | 1991-08-21 | Sony Corporation | Page turning effect generating apparatus |
EP0506429A2 (en) * | 1991-03-29 | 1992-09-30 | The Grass Valley Group, Inc. | Video image mapping system |
US5164716A (en) * | 1983-04-06 | 1992-11-17 | Quantel Limited | Image processing system |
US5239628A (en) * | 1985-11-13 | 1993-08-24 | Sony Corporation | System for asynchronously generating data block processing start signal upon the occurrence of processing end signal block start signal |
US5714977A (en) * | 1988-02-24 | 1998-02-03 | Quantel Limited | Video processing system for movement simulation |
EP2012301A1 (en) * | 2006-04-25 | 2009-01-07 | Mitsubishi Electric Corporation | Image combining apparatus and image combining method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2031687A (en) * | 1978-09-14 | 1980-04-23 | Micro Consultants Ltd | Television standards conversion |
GB1568378A (en) * | 1976-01-30 | 1980-05-29 | Micro Consultants Ltd | Video processing system |
GB1594341A (en) * | 1976-10-14 | 1981-07-30 | Micro Consultants Ltd | Picture information processing system for television |
-
1983
- 1983-03-11 GB GB08306789A patent/GB2119594B/en not_active Expired
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1568378A (en) * | 1976-01-30 | 1980-05-29 | Micro Consultants Ltd | Video processing system |
GB1594341A (en) * | 1976-10-14 | 1981-07-30 | Micro Consultants Ltd | Picture information processing system for television |
GB2031687A (en) * | 1978-09-14 | 1980-04-23 | Micro Consultants Ltd | Television standards conversion |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5164716A (en) * | 1983-04-06 | 1992-11-17 | Quantel Limited | Image processing system |
GB2158671A (en) * | 1984-04-19 | 1985-11-13 | Quantel Ltd | Improvements in or relating to video signal processing systems |
FR2563401A1 (en) * | 1984-04-19 | 1985-10-25 | Quantel Ltd | VIDEO PROCESSING SYSTEM FOR PRODUCTION EFFECTS IN TELEVISION |
US5150213A (en) * | 1984-04-19 | 1992-09-22 | Quantel Limited | Video signal processing systems |
FR2563677A1 (en) * | 1984-04-25 | 1985-10-31 | Quantel Ltd | IMPROVEMENTS RELATING TO VIDEO SIGNAL PROCESSING SYSTEMS |
DE3515037A1 (en) * | 1984-04-25 | 1985-10-31 | Quantel Ltd., Kenley, Surrey | IMAGE SIGNAL PROCESSING DEVICE |
EP0186206A2 (en) * | 1984-12-27 | 1986-07-02 | Sony Corporation | Method and system for effecting a transformation of a video image |
EP0186206A3 (en) * | 1984-12-27 | 1988-03-30 | Sony Corporation | Method and system for effecting a transformation of a video image |
EP0194066A3 (en) * | 1985-03-07 | 1990-03-14 | Sony Corporation | Video signal processing |
GB2172167A (en) * | 1985-03-07 | 1986-09-10 | Sony Corp | Video signal processing |
EP0194066A2 (en) * | 1985-03-07 | 1986-09-10 | Sony Corporation | Video signal processing |
EP0198630A2 (en) * | 1985-04-03 | 1986-10-22 | Sony Corporation | Method and system for image transformation |
US4965844A (en) * | 1985-04-03 | 1990-10-23 | Sony Corporation | Method and system for image transformation |
EP0198630A3 (en) * | 1985-04-03 | 1988-08-17 | Sony Corporation | Method and system for image transformation |
EP0205252A1 (en) * | 1985-05-08 | 1986-12-17 | Sony Corporation | Video signal processing |
EP0221704A3 (en) * | 1985-10-21 | 1989-07-26 | Sony Corporation | Video signal processing |
US4953107A (en) * | 1985-10-21 | 1990-08-28 | Sony Corporation | Video signal processing |
EP0221704A2 (en) * | 1985-10-21 | 1987-05-13 | Sony Corporation | Video signal processing |
FR2593622A1 (en) * | 1985-11-13 | 1987-07-31 | Sony Corp | DATA PROCESSING FACILITY, IN PARTICULAR IMAGES, AND ADDRESS GENERATING AND ARITHMETIC PROCESSING CIRCUITS |
US5239628A (en) * | 1985-11-13 | 1993-08-24 | Sony Corporation | System for asynchronously generating data block processing start signal upon the occurrence of processing end signal block start signal |
EP0222405A3 (en) * | 1985-11-13 | 1989-08-23 | Sony Corporation | Data processor |
EP0248626B1 (en) * | 1986-06-03 | 1991-10-16 | Quantel Limited | Video signal processing |
EP0248626A2 (en) * | 1986-06-03 | 1987-12-09 | Quantel Limited | Video signal processing |
US6225978B1 (en) | 1988-02-24 | 2001-05-01 | Quantel Limited | Video processing system for movement simulation |
US5714977A (en) * | 1988-02-24 | 1998-02-03 | Quantel Limited | Video processing system for movement simulation |
US5325446A (en) * | 1989-05-19 | 1994-06-28 | Sony Corporation | Apparatus for image transformation |
EP0398810A1 (en) * | 1989-05-19 | 1990-11-22 | Sony Corporation | Apparatus for image transformation |
US5327501A (en) * | 1989-05-19 | 1994-07-05 | Sony Corporation | Apparatus for image transformation |
EP0442825A2 (en) * | 1990-02-16 | 1991-08-21 | Sony Corporation | Page turning effect generating apparatus |
US5233332A (en) * | 1990-02-16 | 1993-08-03 | Sony Corporation | Page turning effect generating apparatus |
EP0442825A3 (en) * | 1990-02-16 | 1992-04-01 | Sony Corporation | Page turning effect generating apparatus |
EP0506429A3 (en) * | 1991-03-29 | 1995-02-08 | Grass Valley Group | |
EP0506429A2 (en) * | 1991-03-29 | 1992-09-30 | The Grass Valley Group, Inc. | Video image mapping system |
EP2012301A1 (en) * | 2006-04-25 | 2009-01-07 | Mitsubishi Electric Corporation | Image combining apparatus and image combining method |
EP2012301A4 (en) * | 2006-04-25 | 2010-09-15 | Mitsubishi Electric Corp | Image combining apparatus and image combining method |
Also Published As
Publication number | Publication date |
---|---|
GB2119594B (en) | 1986-07-30 |
GB8306789D0 (en) | 1983-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4709393A (en) | Video processing systems | |
GB2119594A (en) | Video processing systems | |
JP3384799B2 (en) | Electronic image manipulation device and method | |
US4611232A (en) | Video processing system for picture rotation | |
US4988984A (en) | Image interpolator for an image display system | |
US5173948A (en) | Video image mapping system | |
US5025394A (en) | Method and apparatus for generating animated images | |
US4275418A (en) | Video noise reduction systems | |
JP3190762B2 (en) | Digital video special effects device | |
US5119442A (en) | Real time digital video animation using compressed pixel mappings | |
GB2157910A (en) | Improvements in or relating to video signal processing systems | |
EP0264961A2 (en) | Television special effects system | |
GB2117209A (en) | Video processing systems | |
US5150213A (en) | Video signal processing systems | |
US5220428A (en) | Digital video effects apparatus for image transposition | |
US5157517A (en) | Parallel interpolator for high speed digital image enlargement | |
US4689682A (en) | Method and apparatus for carrying out television special effects | |
GB2162020A (en) | Video processing systems | |
US6774952B1 (en) | Bandwidth management | |
JPH0746467A (en) | Automatic conversion circulation effect device and method | |
US7697817B2 (en) | Image processing apparatus and method, and recorded medium | |
US20020102031A1 (en) | Composition of an image | |
WO1992005664A1 (en) | Video image composition | |
JP2000020014A (en) | Picture display device | |
GB2278212A (en) | An interpolator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PE20 | Patent expired after termination of 20 years |
Effective date: 20030310 |