GB2245124A - Spatial transformation of video images - Google Patents


Info

Publication number
GB2245124A
Authority
GB
United Kingdom
Prior art keywords
output
input
pixel
pixels
pixel values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9107726A
Other versions
GB9107726D0 (en)
Inventor
Matthew Raymond Starr
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rank Cintel Ltd
Original Assignee
Rank Cintel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rank Cintel Ltd filed Critical Rank Cintel Ltd
Publication of GB9107726D0
Publication of GB2245124A
Legal status: Withdrawn

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A digital video effects system, which can be used as a subsystem of a digital video system, performs real-time, anti-aliased spatial transforms of input video, i.e. it provides an output signal where the picture is a spatially distorted version of the input picture. Any defined area of the input video frame can be placed anywhere in the output frame with independent changes in size and direction along either axis, by manipulation first vertically and then horizontally. Manipulation in each direction is performed by spatial interpolation of contiguous strings of input pixels to produce contiguous expanded, compressed or translated strings of output pixels. Pixel values of an input string are clocked in real time through two series connected latches 86, 88, and multiplied in multiplier/accumulators 82, 84 by pairs of weighting fractions FA, FB generated by a fraction generator 80 according to the proportion of overlap of the required output pixels with corresponding input pixels. The weighted input pixel values contributing to each output pixel are accumulated in the multiplier/accumulators 82, 84 and/or added by an adder 90, and output on a video output.

Description

TRANSFORMATION OF VIDEO IMAGES

The present invention relates to the transformation of video images, such as may be used in special effects and compositing of digital video signals, and is particularly suitable for use in systems where real-time spatial transformations must be generated without artifacts. Simulated perspective is a typical example; the mapping of flat images onto non-linear surfaces is another.
In this specification, real-time data processing means processing at full NTSC/PAL data rates digitally encoded to CCIR recommendations 601 and 656 (27 Mbytes/second), and trapezoid means a quadrilateral with at least one pair of parallel sides.
Prior art special effects systems include Quantel and Abekas systems. In these systems, when a video image is transformed, each output image pixel is formed by the summation of a four by four grid of differently weighted input image pixels. This method of image transformation is thus a one-step, two-dimensional method.
A one-dimensional, two-step method of transformation is described in IEEE Computer Graphics and Applications, Vol. 6, No. 1, Jan. 1986, pp. 71-80, in an article by Karl M. Fant. This method is however of restricted scope.
The invention is defined in the appendant claims 1, 14 and 16. Advantageous features of the invention are defined in the subclaims dependent therefrom.
The invention provides a method and apparatus for transforming by spatial interpolation an input pixel string comprising a first number of pixels into an output pixel string comprising a second number of pixels, where the second number may be less than, equal to, or greater than the first. Application of the method and apparatus of the invention to a number of parallel input pixel strings of a video image can be used to transform a selected region of the video image in one dimension and, by carrying out two sequential transformations in non-parallel directions, in two dimensions.
The method and apparatus allow generation of a compressed, translated or expanded output pixel string from an input pixel string, in real time, with a minimum of aliasing artifacts and very little loss of information or image quality in the output string.
An embodiment of the invention will now be described, by way of example, with reference to the drawings, in which:
Figure 1 is a schematic diagram showing transformation along one line of a video image in a first direction,
Figure 2 is a schematic diagram showing transformation of a 2D region of a video image in a first direction,
Figure 3 is a schematic diagram illustrating transformation of a video image in a first direction and a second direction perpendicular to the first,
Figure 4 is a block diagram of apparatus embodying the invention for transformation of video images,
Figure 5 is a block diagram of sequence controller and memory (director) hardware,
Figure 5a is a schematic diagram of the director controller,
Figure 6 is a detailed block diagram of sequence controller and memory (director) hardware,
Figure 7 is a block diagram of the parameter controller and memory,
Figure 8 is a schematic diagram of pixel scaling at boundaries of an image transformation region,
Figure 9 is a schematic diagram of translation of a string of pixels as part of a transformation,
Figure 10 is a schematic diagram of expansion of a string of pixels as part of a transformation (expand mode),
Figure 11 is a further schematic diagram of pixel line expansion,
Figure 12 is a schematic example of pixel line expansion,
Figure 13 is another schematic example of pixel line expansion,
Figure 14 is a schematic diagram of compression of a string of pixels as part of a transformation (compress mode),
Figure 15 is a schematic example of pixel line compression,
Figure 16 is a block diagram of pixel line transformation hardware,
Figure 17 is a further block diagram of pixel line transformation hardware,
Figure 18 is a block diagram of the fraction generator,
Figure 19 is a simplified block diagram of the fraction generator operating in expand mode,
Figure 20 is a simplified block diagram of the fraction generator operating in compress mode,
Figure 21 is a schematic diagram showing lighting transformation, and
Figure 22 is a schematic diagram showing a method of obtaining simulated perspective on a video image.
An embodiment of the invention will now be described in which an original two-dimensional image is manipulated in one dimension at a time, once in the vertical direction and once in the horizontal direction, with the combination providing the required overall spatial rearrangement in two dimensions. The pixel interpolation process must be performed for every output pixel of every frame, requiring fast hardware. To mix completely in both dimensions simultaneously, over a variable number of pixels, would require complex hardware, whereas mixing over one dimension simply requires a serial mixer, which can more easily achieve the speed required. Such a hardware unit performs two operations in series or cascade: a vertical interpolation along "columns" followed by a horizontal interpolation along "rows". Rows correspond to the actual video horizontal scan lines. A minimal sketch of this two-pass structure is given below.
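The two-pass structure can be illustrated with a short sketch. This is an illustration only, not the patented hardware: resample_line here is a crude nearest-neighbour placeholder standing in for the interpolator described later, and all names are invented for the example.

    def resample_line(line, out_len):
        """Map an input pixel string onto out_len output pixels.

        Nearest-neighbour placeholder; the patent's interpolator instead
        blends pairs of input pixels with FA/FB weighting fractions.
        """
        n = len(line)
        return [line[min(n - 1, (i * n) // out_len)] for i in range(out_len)]

    def transform_frame(frame, out_h, out_w):
        """Vertical pass along columns, then horizontal pass along rows."""
        h, w = len(frame), len(frame[0])
        # Vertical pass: resample every column to the output height.
        cols = [resample_line([frame[y][x] for y in range(h)], out_h)
                for x in range(w)]
        intermediate = [[cols[x][y] for x in range(w)] for y in range(out_h)]
        # Horizontal pass: resample every row of the intermediate image.
        return [resample_line(row, out_w) for row in intermediate]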
Accordingly, the apparatus consists of two major sections that are nearly identical, one for the vertical pass and one for the horizontal pass. The vertical processing section consists of video and matte inputs, a set of framestores and the actual arithmetic circuits. Matte is sometimes referred to as shape in this specification. The output of the vertical processing section goes to a set of intermediate framestores, which act as the input to the horizontal processing section, while the output of the horizontal processing section goes direct to a video bus output. The whole system is synchronized to video timing, and there is a two-frame delay from input to output.
An arbitrary segment of linearly arranged input pixels is mapped onto an arbitrary segment of linearly arranged output pixels, by specifying start position, finish position, processing direction, size change and runlength (number of pixels to process). Since the input and output pixels are the same size, the ratio of output to input pixels determines whether an apparent expansion or compression takes place. In Figure 1, such expansion and compression is shown for segments 1A and 3A of a line of pixels respectively. As shown in that figure, segment 1A is expanded to be output as segment 1B. Segment 3A is compressed to form output segment 3B, and segment 2A is output as segment 2B with an unchanged number of pixels.
Fractions are generated in the hardware (see below) and are multiplied by the luma/chroma values of input pixels, and the results are added to generate the luma/chroma values of new output pixels. It is possible to generate apparent fractional displacements in this way, giving high quality output images that are relatively free of artifacts and which preserve most of the information contained in the input image. In this description, the concept of a fractional pixel is used although in reality there is no such thing; the illusion of fractional pixels is achieved by scaling. Each column/row can be subdivided into regions that can be transformed differently. The transition between these regions can occur with minimal overhead. Furthermore, adjacent parts of the columns/rows can be defined to lie in the same transformation region. Thus 2-dimensional transformation regions can be specified as shown in Figure 2, where the transformation of one source region 10 to a destination region 10A in one direction is shown. It is possible to automatically vary the start position, destination position, runlength and size change from one column/row to the next. Thus it is possible to define trapezoidal shaped transformation regions, which are necessary to fully subdivide polygonal input areas of the input image, degenerating to triangles or lines one row/column wide.
The transformation of three regions of an original input image 42 is illustrated in Figure 3. Selected regions 4, 5, 6 are first respectively transformed in a vertical direction, shown as regions 4A, 5A, 6A. The resulting intermediate image 44, 46 is then redivided into a new set of transformation regions 7, 8, 9, and transformed horizontally 7A, 8A, 9A to provide a processed image 48.
The spatial transformation hardware is shown in Figure 4. It consists of an original image input framestore 12 with inputs for luminance, two chrominance or colour-difference signals and a shape signal (see below). These original input image framestores are similarly connected to a vertical spatial interpolator 30.
A vertical sequence controller 14, having an associated RAM memory 16, determines the sequence and form of the individual regions for transformation in the vertical direction. The controller 14 is connected to a vertical parameter controller 18 also having an associated RAM 20, which supplies the transformation parameters for each region to the vertical spatial interpolator 30 which includes an interpolation controller 29. The spatial interpolator 30 includes hardware for pixel string expansion and compression.
The vertical spatial interpolator 30 is connected with column stores 36. The column stores are connected to vertically processed image framestores 34 which hold an intermediate image resulting from vertical processing. The vertically processed image framestores 34 are connected to a horizontal spatial interpolator 32 which includes a horizontal interpolation controller 31.
As described above for vertical processing, for horizontal processing there is a horizontal sequence controller 22 having an associated RAM 24 and which determines the sequence and form of regions for transformation in the horizontal direction. The horizontal sequence controller 22 is connected to a horizontal parameter controller 26 which also has an associated RAM 28. The RAM memories 16, 20, 24, 28 are connected to a system bus 21, which is connected to a software-controlled host processor 19. The horizontal parameter controller 26 supplies the transformation parameters for each region to the horizontal spatial interpolator 32. The horizontal spatial interpolator 32 also includes hardware for pixel string expansion and compression.
The horizontal spatial interpolator 32 is connected to row stores 38 which have a light signal input in addition to luma, chroma B, chroma R and shape signal inputs.
The row stores 38 are connected via lighting maps 40 to luma, chroma B, chroma R and shape signal outputs. The lighting maps 40 are programmable and apply an 8-bit "light" parameter to each signal. The light parameters have the effect of signal transfer functions as described below. The light maps 40 are connected to the mixing circuit 41 by luma, chroma B, chroma R and shape outputs/inputs. A background video signal is also input to the mixing circuit 41.
The original image input stores 12, vertical spatial interpolator 30, column stores 36, vertically processed image framestores 34, horizontal spatial interpolator 32, row stores 38, output maps 40 and mixing circuit 41 are interconnected by outputs and inputs for luma, chroma B, chroma R, and shape signals. Within the transformation hardware, the luma, Cb, Cr and shape signals all have the same sample period of 74 ns (4:4:4:4 format). There are four parallel internal video paths for luma (Y), two chroma-difference (Cr and Cb) and shape (Sh) signals. The apparent increase in chroma resolution comes about by using each input chroma byte twice.
The structure and functions of these various hardware elements are considered below:- i) Original Image Input Framestores 12:- A pair of identical framestores is used so that while reading takes place from one, writing takes place into the other. At the end of the frame, each framestore switches modes. A second pair of framestores operates in a similar fashion, except that when writing into it the video lines are effectively transposed by 90°, where transposition is either a 90° turn or a rotation around a diagonal axis. This is necessary for rotational transformations of greater than 45°, as described below.
Each framestore 12 comprises dynamic RAMs (DRAMs) with a 4 to 1 interleave. The input 0° and 90° framestores 12 (and the intermediate processed framestore 34) all employ this interleaving scheme. There are different requirements for reading and writing of pixels, so there are two distinct modes for reading and writing.
When writing to a framestore, the data is continuous and ordered, and so the incoming pixels can be accommodated by cycling through the 4 banks. When reading, the accesses are at random addresses within the row/column, so fast page mode read is used. Frames are written and stored with the fields interleaved, and the spatially higher field is always stored above the lower field regardless of whether it is the odd or even field that is the higher field.
The 0° and 90° input framestores 12 each consist of 4 banks of DRAM pairs, and each bank is divided into two halves (1 DRAM), with the lower half starting at integrated circuit 'chip' row address 0, and the upper half starting at chip row address 256. The chip row and column addresses are not the same as the image frame row and column numbers.
For the 0° input framestores the incoming data is in the form of horizontal scan lines, each line consisting of 720 pixels, with interlaced fields. Each field is divided into four on a column basis, so that as the input pixels of field 1 arrive, the first pixel of each line goes to bank 0, the second to bank 1, the third to bank 2, and the fourth to bank 3. At the fifth pixel, the sequence repeats. Pixels in different fields that are aligned vertically must be in different banks, so for field 2, the sequence is different, with the first pixel of the line going to bank 2, the second to bank 3, the third to bank 0, and the fourth to bank 1. When reading columns from the 0° framestores, fast page mode is used, with two banks being enabled simultaneously for page mode reads. Two banks are read together: either banks 0 and 2, or banks 1 and 3.
In the 90° framestore, the incoming lines are written in the same direction that they are read out, that is as columns. Hence there is a transposition of 90°. When writing, the banks are selected for writing cyclically. When reading, all four banks are enabled for column mode reading simultaneously.
In the vertically processed image framestore 34, see below, pixel columns from the column stores are written into the banks which are selected in a cyclic fashion. The order of bank selection varies with alternate columns. For reading of image rows, pairs of banks are simultaneously enabled for fast page mode reading.
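A sketch of the bank-selection rule described above (the two-bank offset for field 2 follows the text; the function and its arguments are invented for illustration):

    def bank_for_pixel(field, pixel_index):
        """Return the DRAM bank (0-3) for a pixel of a 720-pixel scan line.

        Field 1 cycles through banks 0,1,2,3; field 2 is offset by two banks
        so that vertically aligned pixels of the two fields land in
        different banks.
        """
        offset = 0 if field == 1 else 2
        return (pixel_index + offset) % 4

    for field in (1, 2):
        print(field, [bank_for_pixel(field, i) for i in range(5)])
    # field 1 -> [0, 1, 2, 3, 0]; field 2 -> [2, 3, 0, 1, 2]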
ii) Sequence Controller/Memory (Director) 14, 16 and 22, 24:- These system components are referred to as "directors", one for each direction. The function of each sequence controller 14, 22 is to control the order of processing of the individual transformation regions and the number of iterations of rows/columns within the individual transformation regions of the input image, by passing a sequence of transformation region numbers for each column/row to the parameter controller 18, 26. Each region number passed to the parameter controller 18, 26 points to an area of parameter memory 20, 28 where the appropriate parameters are stored for the processing of that region. The sequence controller 14, 22 derives this sequence of numbers from a table stored in dedicated SRAM 16, 24 by the system software.
There are two independent directors which are substantially identical in function, one for vertical processing 14, 16 and one for horizontal processing 22, 24.
The structure of a director is shown in Figures 5, 5a and 6. The director is controlled by a director controller, shown in Figure 5a, which is a state machine with inputs for a clock signal and handshake signals from the associated parameter controller 18, 26, fraction generator 80 and column or row stores. The director includes an access address latch 52 which stores the address of that portion of SRAM 16 to be accessed at the start of the first line of pixels. The SRAM 16 holds information on region numbers and iteration counts, i.e. the numbers of pixels in line segments of various regions. At the start of a line segment of a transformation region, an address counter 56 holds the address of the iteration count. The number of lines to be transformed is stored in a loop counter 54. The address counter 56 then holds the address of the region number of the first transformation region. For each region, the address stored in the director address counter 56 is incremented until the number of regions to be transformed is reached, and the region numbers are passed from the SRAM via a latch 60 with clock enable to the parameter controller 18, 26.
When the end of a line is reached (as indicated by a flag) a latch 58 for loopback becomes operative. This stores the SRAM address of the region number of the first transformation region of a line, such that the SRAM is again correctly addressed at the start of the next subsequent line. The address stored by the director address counter 56 is then reloaded from the latch 58 unless the number of iterations which remain is zero, in which case the director address counter is incremented.
iii) Parameter Controller/Memory 18, 20 and 26, 28:- For each transformation region there is a set of parameters to define the nature of the transformation from input region to output region. Specifically these include the source and destination addresses including both integer pixel (i.e. which column/row segment to operate upon) and fractional (i.e. address within the column/row) parts, the number of pixels to process (run-length), the size change values, and various flag bits.
As well, the amounts by which the parameters are to change laterally, i.e. in adjacent rows/columns of that particular region, are specified. Parameters are in fixed point form, i.e. they consist of integer and fraction parts. The use of "fractional" pixels is important in eliminating aliasing effects, as described later.
There are two parameter controllers 18, 26, one for each direction. The parameter controller 18, 26 accepts the transformation region number from the director (sequence controller 14, 22) and then prepares and updates the associated set of parameters for use by the interpolation controller 29, 31. These parameters are stored in dedicated SRAM 20, 28 by the software. Once every iteration the parameter controller 18, 26 updates these items by adding incremental "delta" values back into them so that they change from output column/row to output column/row.
As shown in Figure 7, the SRAM 20, 28 of each parameter controller 18, 26 stores both initial (main) parameter values and corresponding incremental ('delta') values. These are added in the parameter adder 112 to give current parameter values which are fed back to the parameter SRAM 20, 28 for further increments.
Another function of the parameter controller is calculation of a weighted average of the size change value at the common boundary of two contiguous regions in order to smooth the transition between them (i.e. to reduce aliasing effects).
When the first and/or last pixel in an output segment of a line (a run) is a 'partial' pixel, scaling of intensity must occur to minimize aliasing effects. This has the effect of smoothing the transition between contiguous transformation regions. The scaling factor is derived from the destination fraction in the case of the first output pixel, and from the run length fraction for the last output pixel, and is applied to the shape signal only. The scaling factor, which is known as weighted average size, must also take into account the size change associated with that particular pixel run, and this is done automatically in the hardware shown in Figure 7.
In Figure 8 a run of output pixels P0' to P3' is shown, as well as the runlength 100 and start destination position 102. The destination start position has a fractional component, and this is the main factor for determining the scaling factor of the first pixel 106 in the line segment. In particular, for expand mode only, which is discussed below, for the first output pixel the destination fraction 104 is multiplied by the size to give a scaled size value as a scaling factor.
For the last pixel, the sum of destination fraction 104 and runlength 100, itself composed of an integer and fraction, gives a quantity known as modified run length of which the fraction component is the scaling factor for the last pixel 108. It should be remembered that ends-of-run scaling (rescaling at region boundaries) only takes place on the shape output. The luma and chroma values are modified by the shape when they pass through the mixing circuit 41, as described later.
In terms of hardware, as shown in Figure 7, the parameter SRAM 20, 28 has an output for run length and destination fraction which is connected to an ALU 114. The ALU 114 has its output 116 fed back to a secondary input and connected to a latch 116 for truncation to give the modified run length fraction at its output 120.
The output 120 is connected to an input at a multiplier/accumulator 122 which has one input also connected to a destination fraction (or source fraction) output 124 from a latch 126. A second input to the multiplier/accumulator has a size input 128 from a latch 130.
In expand mode, the size is multiplied by the destination fraction in the multiplier/accumulator 122. A scaled size (i.e. destination fraction x size) is output to an adder 132 where it is either added to or subtracted from the start source address from an input 134. A corrected source value is output in expand mode to a latch 136 which provides an FB value with which the fraction generator is initialised, as described below.
For compress mode, the destination fraction output is input to a latch 138 to provide a REM signal which initialises the fraction generator, as also described later. A scaled size is generated by multiplying source fraction x size. This scaled size is used in compress mode initialisation which is also described below.
iv) Spatial Interpolators 30, 32:- The two spatial interpolators (horizontal 32 and vertical 30) perform substantially identical functions. For each output column in turn, the vertical spatial interpolator 30 takes segments of input columns from the original input store 12 and one-dimensionally interpolates these according to the sequence programmed for each frame. The horizontal spatial interpolator 32 produces output rows by interpolation from segments of rows stored in the vertically processed store 34.
The spatial interpolator controller 29, 31 accepts the parameters from the parameter controller 18, 26 for each transformation region and processes strings of input pixels to produce strings of output pixels. Two separate algorithms are used for compression and expansion of a string of pixels. Segments of pixel lines are translated, expanded and compressed, as described in turn below:-

Translation

Figure 9 is a representation of a line of input pixels with pixel values P0 to P4, represented by the lengths of the arrows. A sequence of translated output pixel values P0', P1', etc. is generated by adding weighted proportions of pairs of adjacent pixels. Thus,

P0' = (FA x P0) + (FB x P1)
P1' = (FA x P1) + (FB x P2)
P2' = (FA x P2) + (FB x P3)

etc., where FA + FB = 1.
The weighting coefficients, FA and FB, are fractions and if the same values of FA and FB are used for each new pixel, then when the new pixels are displayed there will be an apparent fractional displacement, relative to the original pixels. The extent of the apparent displacement depends on the values of FA and FB.
Translation is considered by the hardware as an expansion with a 1:1 expansion ratio. This hardware is described below.
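A hedged sketch of the translation case in floating point (the hardware uses 12-bit fractions with an implicit divide by 2048, described later; the function name is mine):

    def translate(pixels, fb):
        """Shift a pixel string by a fraction fb of a pixel (0 <= fb < 1).

        One FA/FB pair, with FA + FB = 1, is reused along the whole run,
        giving an apparent sub-pixel displacement.
        """
        fa = 1.0 - fb
        return [fa * a + fb * b for a, b in zip(pixels, pixels[1:])]

    print(translate([10, 20, 30, 40, 50], 0.25))
    # [12.5, 22.5, 32.5, 42.5] - each output sits a quarter pixel along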
Expansion

Expansion is shown in Figure 10. New pixel values P0', P1', P2', etc. are generated from an original sequence P0, P1, P2, etc. by combining pairs of adjacent original pixel values. However, the fractional coefficients FA and FB are different for each new generated pixel. Furthermore, the same pair of original pixels can be used more than once, so more new pixel values can be generated from fewer original pixels, although the information contained is of course essentially the same. Hence an apparent expansion has taken place.
Figure 11 is an alternate representation of the expansion algorithm, which shows it as a 1 pixel wide "window" 101 that moves along and samples the input pixels such as P0, P1, P2 for example. At each new position of the input window a new output pixel is generated from two input pixels. Thus,

P0' = (FA0 x P0) + (FB0 x P1),
P1' = (FA1 x P0) + (FB1 x P1),
P2' = (FA2 x P1) + (FB2 x P2).
The proportions of the input pixels depend on the position of the window, and the displacement Z of the window at each sample governs the rate at which input pixels are processed, so that the apparent size change depends on Z. The quantity Z is a fraction, and the expansion ratio is 1/Z.
The expansion algorithm can use a one input pixel wide averaging window to smooth transitions between input pixels, as shown in the example of Figure 12. A half input pixel averaging window may be used as shown in the example of Figure 13. In these figures, IP1 denotes the input signal for pixel 1, for example.
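The expansion algorithm can be sketched as a phase accumulator: the window advances by Z per output pixel and each output is a two-tap blend of the input pair under the window. Floating point again stands in for the hardware's fixed-point fractions, and the names are illustrative:

    def expand(pixels, ratio, phase=0.0):
        """Expand a pixel string by `ratio` >= 1; `phase` is the start offset."""
        z = 1.0 / ratio                  # window displacement per output pixel
        out = []
        while int(phase) + 1 < len(pixels):
            i = int(phase)
            fb = phase - i               # weight of the newer input pixel
            fa = 1.0 - fb                # weight of the previous input pixel
            out.append(fa * pixels[i] + fb * pixels[i + 1])
            phase += z
        return out

    print(expand([0, 100, 200], 2.0))    # [0.0, 50.0, 100.0, 150.0]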
Compression

A group of adjacent pixel values can be multiplied by weighting coefficients and summed to produce a resultant output pixel. In the example shown in Figure 14, input pixel values P0 and P3 have smaller weightings than P1 and P2, and it can be said that the output pixel P0' is made up of two "partial" pixels (P0 and P3) and two "whole" pixels (P1 and P2). This means that a non-integer number of pixels has gone into producing a single output pixel, so effectively causing compression, with a "compression ratio" that is non-integral. If an input pixel is only partially overlapped by a given output pixel, then the remainder of the pixel can be used for the next output pixel, so that all input pixels contribute equally to the output (except for the input pixels on the boundaries of each input string, which are treated as a special case). In Figure 14, P0 to P6 are input pixels, P0' and P1' are output pixels and the compression ratio is 1/Z : 1. Thus,

P0' = (FB0 x P0) + (Z x P1) + (Z x P2) + (FB3 x P3)

where FB1 = Z, FB2 = Z, FB0 + Z + Z + FB3 = 1 and FA4 + FB3 = Z; similarly

P1' = (FA4 x P3) + (Z x P4) + (Z x P5) + (FB6 x P6),

and so on.
The compression algorithm combines values from input pixels to make up complete output pixels, as shown in Figure 15 for a ratio of 2:1, where IP1 denotes the signal for input pixel 1, for example.
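A corresponding sketch of the compression algorithm: each input pixel contributes a weight Z to the current output pixel, and an input that straddles an output boundary is split, its remainder carrying into the next output pixel. This mirrors the REM mechanism described later; the code and names are illustrative only:

    def compress(pixels, ratio):
        """Compress a pixel string by `ratio` > 1 (about len(in)/ratio outputs)."""
        z = 1.0 / ratio                  # weight a whole input pixel contributes
        out, acc, rem = [], 0.0, 1.0     # rem: fraction of output left to fill
        for p in pixels:
            take = min(z, rem)           # part of this input for current output
            acc += take * p
            rem -= take
            if rem <= 1e-9:              # output pixel is complete
                out.append(acc)
                spill = z - take         # leftover of a straddling input pixel
                acc, rem = spill * p, 1.0 - spill
        return out

    print(compress([10, 20, 30, 40], 2.0))   # [15.0, 35.0]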
Although the expansion and compression algorithms operate differently, they produce the same results in the 1:1 case. The mechanism is implemented with a continuous scale of expansion/compression. Transitions between the two algorithms are smooth and do not require special programming.
The algorithms as implemented require a minimum amount of hardware for each video path. Hardware for one video path is shown in Figure 16, and for one video path and the shape path in Figure 17. Two consecutive pixel input values are input to the hardware, namely a current value B and a previous value A. These two values are held by respective series connected latches 86, 88. A common clock is connected to elements of the hardware, so that each element performs one task in each clock cycle. Each latch has a clock-enable, and stores the pixel value present at its input during the clock cycle, when the clock-enable is on.
The fraction generator 80 produces a pair of numbers, called FA and FB, for each cycle of its operation. The FA and FB values are 12 bits in length, i.e. they range from 0 to 2048, which is 2 to the power of 11. They are regarded as fractions, since there is in the hardware an implicit divide by 2048 at a later stage. Hence an FB value of 2048 is equivalent to 100%, while an FB value of 1024 is 50%, and so on. FB is the coefficient of the most recently read input pixel value, while FA is the coefficient of the previous pixel value.
These two numbers are fed to each video path in parallel, such that FB and current signal input value B are fed to a first Multiplier/Accumulator unit 84 and FA and previous input signal value A are fed to a second Multiplier/Accumulator unit 82.
The fraction generator 80 is a synchronous i.e. clocked arithmetic unit in which for each clock cycle a new FA and FB pair is generated. Each fraction generator 80 is initialised by its respective parameter controller 18, 26.
For each cycle, video signals related to BFB and AFA are respectively accumulated. These signals are added in an adder 90 until an output pixel is provided to a video output by way of a hard-wired binary division unit 92, as shown in Figure 16. Both the Expand and the Compress algorithms allow the final result to be scaled by the binary division unit 92.
Operation

The operation of the hardware shown in Figure 16 in expansion mode is as follows. For any expansion, no more than two input pixels can contribute to any output pixel. At the start of the processing of an output pixel, the first contributing pixel is latched into one 88 of the latches, while the next input pixel is latched into the other 86. The fraction generator provides the fractions appropriate to the contribution from each input pixel to the output pixel. These fractions are multiplied by the appropriate input pixel values in the multiplier/accumulators 82, 84, added in the adder 90 and output to the following stage as the output pixel value. Then the value of the second input pixel replaces the first pixel value in the latch 88 while the third input pixel is latched to replace the second in latch 86. The fractions are changed to suit the new pixels. These fractions are multiplied by the input pixel values, and the result is output. This continues, outputting one pixel per clock cycle, until the final input pixel which contributes to this output pixel has been processed.
The operation of the hardware shown in Figure 16 in compression mode is as follows: at the start of processing of an output pixel, the fraction generator 80 calculates the fraction of the first input pixel that contributes to the output pixel. In the example in Figure 15 this is IP1/2/2, which is 1/4. This factor is multiplied by the input pixel value and put in the accumulator of the multiplier/accumulator 84.
Then the next input pixel is processed, being multiplied by its fraction, and added to the accumulator. This continues until the final input pixel which contributes to this output pixel has been processed (in Figure 15, this is input pixel 3). At this point, the value in the accumulator is equal to the value for that output pixel. This output is passed out via the adder 90. Then the next sequence of input pixels is processed, starting with the last processed pixel (if it contributes to two output pixels).
The shape video path hardware of the spatial interpolators 30, 32 is basically similar to that for each video path, as shown in Figures 16 and 17. However, there is additionally a multiplier 91 connected between the adder 90 and the hardware output. Fractional anti-aliasing corrections are selectively applied to the multiplier in order to produce shape signals applicable in the mixing circuit 41 (see later) to produce apparent fractional pixels by scaling.
Fraction Generator

The basic hardware of the fraction generator 80 is shown in Figure 18. The fraction generator includes an ALU (arithmetic logic unit) 140 with an "o" size input 141. As discussed below, the value "o" is essentially unity in expand mode and is the compression factor (lying between zero and unity) in compress mode. The ALU 140 is only operative in compress mode. It has an output 142 which provides a remainder REM - "o" value. The REM - "o" signal is input to a multiplexer 144 with clock enable which has a second input 148 for a 'fractional' signal FB. FB = REM if REM - "o" is negative; otherwise FB = "o" if REM - "o" is positive. The output of the multiplexer 144 is an unlatched REM signal which is input to a latch 150. The REM signal is initialised by the parameter controller at the input 146 to the latch 150. The latch 150 acts to buffer the REM signal and provide that REM signal at its output 152. The REM signal is input to a further multiplexer 154 with clock enable and is fed back to a second input of the ALU 140.
The multiplexer 154 also has a size "o" input and an output 158 for the FB values. After initialisation, the multiplexer 154 is output-enabled only during compress mode. The FB signal output by the multiplexer 154 is latched by a latch 160 which has an initialising FB value input 156. The latch 160 has an output 162 for the latched FB signal.
The latched FB signal from output 162 is passed via an input 164 to a second ALU 166. This ALU 166 has a second input 167 for size "i" in expand mode or size "o" in compress mode. As discussed below, the value "i" is essentially the complement with respect to two of the expansion factor in expand mode, and is unity in compress mode. A combined value FB + "i" (expand mode) or FB - "o" (compress mode) is provided at the output 168 of the ALU 166, and passed to a buffer 172, which is output-enabled in expand mode only to provide the FB + "i" value to the latch 160.
The unlatched FB signal is provided to selective inverter 174 which provides an unlatched FA value at its output 176. The FA value is latched by a latch 180.
Fractions are basically produced as follows. In expand mode, as shown in Figure 19, size "i" and a fed back FB value are input to the second ALU 166. This adds the two input signals to output a combined FB + "i" value. After passing through the buffer 172 and latch 160, this is output as the FB value. The FB + "i" value from the ALU 166 is inverted by the inverter 174 and passes via the latch 180 to be output as the FA value, in the same clock cycle.
In compress mode, as shown in Figure 20, size "o" is subtracted from an FB remainder REM value in the ALU 140 and latched by the latch 150 to give the new REM value, which is fed back to the ALU 140. The REM value is input to the multiplexer 154 together with the "o" size value. The output of the multiplexer 154 is the FB value, such that FB = REM if REM - "o" is negative, else FB = "o". FA is produced one cycle later when size "o" is applied to the ALU 166 (shown in Figure 18) and FB - "o" is output from it.
Initialisation

Expand initialisation depends on knowing the source fraction and the destination fraction, i.e. the fractional components of the respective addresses of the first input and output pixels of a run. These fractions are required for calculating the proportion of the first input pixel needed to produce the first output pixel, and hence the proportion of the second input pixel required. To calculate this initialisation value, the source fraction is read from parameter RAM, and is modified by adding or subtracting a scaled version of the destination fraction. The scaling of the destination fraction is done by multiplying it by a scale factor "size" which is inversely proportional to the expansion ratio, such that for a 1:1 transformation the destination fraction is unchanged, while for n:1 expansion the scale factor is 1/n. This scaled destination fraction is either added to or subtracted from the source fraction, depending on how the source/destination direction flags are set. These flags indicate the directions in which the input/output pixel strings are to be respectively read/generated during expansion or compression. One string may be read in the opposite direction to the other in order, for example, to reflect an input image. Note that if the scaled destination fraction is sufficiently large, the integer part of the start position may change by 1.
For compress mode initialisation, the fractional components of the destination and source addresses stored in parameter RAM determine the initial values for the fraction generator. From the destination fraction is derived a quantity REM(0) = (1 - destination fraction) corresponding to the amount of the output pixel that needs to be "filled". However, in a similar way to the expand case, for compress mode initialisation the source fraction value is likely to be non-zero, so there will be some amount of destination displacement corresponding to that source fraction. In practice, the value of (1 - source fraction) is scaled by multiplication by the size value and used to "correct" the destination fraction.
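The two initialisations can be sketched as follows, in normalised arithmetic rather than the hardware's /2048 fixed point. The variable names are mine, and the sign choice simply follows the direction flags described above:

    def expand_init(src_frac, dst_frac, size, same_direction=True):
        """Corrected source fraction used to initialise an expand run.

        `size` is inversely proportional to the expansion ratio, so a 1:1
        run leaves dst_frac unscaled. If the result passes 1.0, the integer
        part of the start address changes by 1, as noted above.
        """
        scaled = dst_frac * size
        return src_frac + scaled if same_direction else src_frac - scaled

    def compress_init(src_frac, dst_frac, size):
        """Initial REM (how much of the first output pixel is unfilled),
        plus the source-side correction derived from (1 - source fraction)."""
        rem0 = 1.0 - dst_frac
        correction = (1.0 - src_frac) * size
        return rem0, correction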
Fraction Generator Operation

If "size" is the actual quantity specified in the list of parameters, then:

2048 ≤ size < 4096:  o = 2048,  i = 4096 - size  (expansion mode)
0 < size < 2048:     o = size,  i = 2048         (compression mode)

The "zoom factor", or apparent magnification, is o/i in each case. (The hardware uses the most significant bit of the "size" parameter to determine which mode to use.)
For an expansion ratio of 2048/i, a sequence of coefficients FB(0), FB(1), ..., FB(n) and FA(0), FA(1), ..., FA(n) for expand mode is generated as follows. At each cycle, or iteration, a pair of FA and FB values is generated. An initial value FB(0), derived from the corrected source address, is passed to the fraction generator. FA(0) is given by:
FA(0) = 2048 - FB(0)

On the next iteration,

FB(1) = FB(0) + i            if FB(0) + i ≤ 2048
      = FB(0) + i - 2048     if FB(0) + i > 2048

FA(1) = 2048 - (FB(0) + i)   if FB(0) + i ≤ 2048
      = 4096 - (FB(0) + i)   if FB(0) + i > 2048

In general,

FB(n) = FB(n-1)                (expand initialise)
      = FB(n-1) + i(n)         if FB(n-1) + i ≤ 2048
      = FB(n-1) + i(n) - 2048  if FB(n-1) + i > 2048
      = FB(n-1)                (hold mode)

Usually i is constant during a run, but under certain circumstances, such as a transition between contiguous runs, it will have a different value.
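In code, the expand-mode recurrence reduces to a modulo-2048 accumulator, with FA always the complement of FB. A sketch in the hardware's fixed point (assuming, per the mode table above, i = 4096 - size):

    def expand_fractions(fb0, i, n):
        """Yield n (FA, FB) pairs for an expand run with step i (0 < i <= 2048)."""
        fb = fb0
        for _ in range(n):
            yield 2048 - fb, fb          # FA(n) = 2048 - FB(n)
            fb += i
            if fb > 2048:                # the modulo-2048 wrap in FB(n)
                fb -= 2048

    # 2:1 expansion (size = 3072, so i = 1024), starting from FB(0) = 2048:
    print(list(expand_fractions(2048, 1024, 4)))
    # [(0, 2048), (1024, 1024), (0, 2048), (1024, 1024)]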
In compress mode, a quantity REM is defined to be the amount of an output pixel that remains to be "filled". From this is derived FB, and then FA is derived from FB. For a compression of o/2048, where 0 < o < 2047, a sequence REM(0), REM(1), ..., REM(n) and coefficients FB(0), FB(1), ..., FB(n) and FA(0), FA(1), ..., FA(n) is generated as follows. An initial value REM(0) is derived from the corrected destination address, and passed to the fraction generator. In the first cycle, the first REM, FB and FA values are given by:

REM(1) = REM(0) - o           if REM(0) > o
       = REM(0) - o + 2048    if REM(0) ≤ o

FB(1) = o                     if REM(0) > o
      = REM(0)                if REM(0) ≤ o

FA(1) = 0

In the next cycle, the following values are generated.
REM(2) = REM(1) - o           if REM(1) > o
       = REM(1) - o + 2048    if REM(1) ≤ o

FB(2) = o                     if REM(1) > o
      = REM(1)                if REM(1) ≤ o

FA(2) = o - FB(1)

In general,

REM(n) = REM(n-1) - o          if REM(n-1) > o
       = REM(n-1) - o + 2048   if REM(n-1) ≤ o

FB(n) = o                      if REM(n-1) > o
      = REM(n-1)               if REM(n-1) ≤ o

FA(n) = o - FB(n-1)            if REM(n-1) ≤ o
      = 0                      if REM(n-1) > o

Pixel String Compression Example

By way of example, the compression of an input pixel string by a size ratio Z illustrated in Figure 14 can now be described in more detail. The compression of the seven input pixels shown in Figure 14 takes seven clock cycles, during which the values of REM, FA and FB are generated as shown in Table 1.
Table 1
Clock Cycle   REM        FB           FA
0             REM0       FB0          -
1             REM1 > o   FB1 = Z      FA1 = 0
2             REM2 > o   FB2 = Z      FA2 = 0
3             REM3 ≤ o   FB3 = REM3   FA3 = 0
4             REM4 > o   FB4 = Z      FA4 = Z - REM3
5             REM5 > o   FB5 = Z      FA5 = 0
6             REM6 ≤ o   FB6 = REM6   FA6 = 0

At the start of the process, REM0 and FB0 are set according to the initialisation process described above using the source and destination fractions, size ratio and source/destination direction flags. Compression then proceeds as illustrated as pairs of pixel values are clocked through the series connected latches 86, 88 of the spatial interpolator shown in Figure 16, a pair of fractions FA and FB being generated by the fraction generator 80 during each clock cycle. The fractions are numbered in this example according to their clock cycle.
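The compress-mode recurrence can be sketched the same way; running it reproduces the qualitative pattern of Table 1 (the numeric values below are illustrative, since Figure 14's exact fractions are not given):

    def compress_fractions(rem0, fb0, o, n_cycles):
        """Yield (REM, FB, FA) per clock cycle, in /2048 fixed point."""
        rem, fb_prev, completed = rem0, fb0, False
        for _ in range(n_cycles):
            fa = o - fb_prev if completed else 0   # remainder of partial pixel
            if rem > o:
                fb, completed = o, False           # whole contribution of o
                rem_next = rem - o
            else:
                fb, completed = rem, True          # partial pixel ends output
                rem_next = rem - o + 2048          # begin filling next output
            yield rem, fb, fa
            rem, fb_prev = rem_next, fb

    # o = 585 (Z ~ 0.29, i.e. ~3.5 inputs per output), REM0 = 1638, FB0 = 410:
    for cycle, vals in enumerate(compress_fractions(1638, 410, 585, 6), 1):
        print(cycle, *vals)
    # cycle 3 emits a partial FB = REM; cycle 4's FA picks up its remainder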
When the entire output string, in this example only comprising two pixels, P0' and P1', has been generated, the fraction generator 80 may be reinitialised. The details of re-initialisation depend on whether the next output string to be generated is contiguous with the first or not. If it is contiguous but has a different size ratio, the parameter controller 18, 26 (Figure 4) causes a weighted size value intermediate between the previous and new size values to be used to initialise generation of the new output pixel string, so as to reduce aliasing artefacts at the boundary between the output strings.
v) Vertically Processed Image Framestores:-
While the original input image is being scanned column by column, the output columns from the vertical processing must be stored in intermediate framestores, known as the vertically processed image framestores 34, until the complete frame has been scanned and processed. As with the original input framestores 12, there are two, which alternate between read and write mode once per frame.
vi) Line Stores 36, 38:- There are two sets of linestores, one for columns 36 and one for rows 38. The column-stores 36 convert the erratically timed and ordered outputs from the vertical spatial interpolator 30 into the ordered sequence required for operation of the interleaved writes into the vertically processed store 34. Similarly the row-stores 38 convert the randomly-timed horizontally processed outputs into a form consistent with horizontal video timing. Flash-clear RAMs are used to allow unwritten areas to assume a zero value.
vii) Lighting Maps 40 and Outputs:- All of the output video paths pass through the lighting maps, which consist of separate but similar programmable RAM maps for each of luma, chroma B, chroma R and shape, as shown in Figure 21. These store possible output values of the video and shape signals. For each horizontal transformation region there is an associated 8-bit "light" parameter passed from the horizontal parameter controller 26 that determines which of the 256 possible transfer functions is applicable for that region. The 8-bit light parameter and 8-bit input video (or shape) signal values are combined to produce 16-bit addresses in order to access a RAM map. Effectively, the 'light' parameters alter the values of the video and shape signals for each region in what is essentially an image modulation technique. Since the luma, Cr, Cb and shape maps are independent, a range of effects is possible. For example, this configuration allows different objects to be highlighted by using non-linear transfer functions, or apparent colour shifts to take place, or "shadows" to be created by reducing the shape value for the shadow region.
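A sketch of a lighting-map lookup (the concatenated 16-bit addressing follows the text; the example gain-curve table is an assumption, since the real maps are loaded by software with arbitrary transfer functions):

    # One 64K-entry map: LIGHT_MAP[light][value] = transformed 8-bit value.
    LIGHT_MAP = [[min(255, (light * value) // 255) for value in range(256)]
                 for light in range(256)]

    def apply_light(light, value):
        """Combine 8-bit light and 8-bit video/shape into a 16-bit address."""
        address = (light << 8) | value
        return LIGHT_MAP[address >> 8][address & 0xFF]

    print(apply_light(128, 200))   # about half intensity: 100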
viii) Mixing Circuit 41:- At the outputs of the lighting maps 40, the shape output signal is in synchronism with the video output signal. However, the shape path is configured so that the edges of the output shape are anti-aliased, i.e. "smoothed". The output video signal is "keyed", i.e. multiplied by, the shape signal in the mixing circuit 41 so that the edges of the output video become anti-aliased.
The shape signal controls the mixture of fractional intensities of two video signals, one of which is the transformed video image signal, and the other of which is a signal received at a video input 42 (Figure 4) which may be a background signal. This may be considered as a 'fade in/fade out' operation. Considering a transformed region and an untransformed background region, for example, aliasing is avoided by smoothing the transition of pixel lines across the boundary between them. The desired smoothing is put into effect by the application of appropriate fractional shape signals at the boundary. The mixing circuit 41 has two programmable memory look up tables (one for each video signal) by which the shape signal is applied to produce fade in/fade out, before a composite output signal is produced in an adder within the mixing circuit 41.
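The keying operation itself is a shape-weighted cross-fade. A minimal sketch, assuming linear fade tables (the real look-up tables are programmable):

    def mix(foreground, background, shape):
        """Shape-keyed mix: shape 255 = all foreground, 0 = all background."""
        return (foreground * shape + background * (255 - shape)) // 255

    # A fractional shape value at a region boundary blends the edge pixel,
    # anti-aliasing the transition into the background video:
    print(mix(200, 40, 128))   # -> 120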
Possible Effects

Real-time, fully anti-aliased spatial transforms of input video in digital component 4:2:2 format (per CCIR Recommendations 601 and 656) are possible, i.e. producing an output signal where the picture is a spatially distorted version of the input picture.
Output frames are generated at the same rate as the input frames with a two-frame-time delay from input to output. Any defined area of the input video frame can be placed anywhere in the output frame with independent changes in size and direction along either axis, vertically and horizontally. Many small areas of the image can be manipulated to provide complex effects such as folding, rotation and shattering. The hardware has been designed for high flexibility, high image quality, and to operate with low software overhead.
A variety of image effects are achievable using the present system. These include perspective views, rotations, the display of previously flat regions as non-linear surfaces, and a "tiles" effect, as described below.
a) Perspective Transformations This method relates to the area of special effects in processing of digital video signals, where apparent movement of the image in 3D-space is required. The implementation can involve digital computer hardware of the type described above, or can be purely in software.
A 2-dimensional image can be made to appear to be at any distance and orientation in 3D-space with respect to the viewer by performing the appropriate subdivision and transformations. The method of simulating perspective by video image subdivision and vertical processing followed by image subdivision and horizontal processing is shown in Figure 22. An original image 62 showing an object 64 is divided into regions 66. The image is transformed vertically to provide an intermediate image 68. The intermediate image 68 is again subdivided into a new set of regions 70. Horizontal transformation then results in an output image 72 showing the object in simulated perspective 74. The image regions may be trapezoidal (as previously defined), triangular or even a segment of one line. The regions may be contiguous or otherwise.
This process can also be carried out simultaneously so that a particular group of pixels can be replicated at each of a number of locations in the output. A further enhancement is to use an interactive device, such as a spaceball or tracker ball, to control the apparent position of the image in real-time.
To summarize, a 2-dimensional image with an arbitrary polygonal outline is subdivided into trapezoid shaped regions, or triangles in the special case, and each of these input regions undergoes a different spatial transformation to map onto an output region which is also trapezoidal (or triangular), to give the variation in size over the output image that would occur in a true perspective view. The nature of the transformation depends on the required position and orientation of the output region in 3-space relative to the input region. If a sufficient number of regions are used then a good approximation to a perspective view is attained.
The original image transformation is carried out in two passes as described above, once in the vertical direction to give an intermediate output which is subsequently processed in the horizontal direction. The combination of the two passes gives rise to the required arrangement of the output in two dimensions. In the one-dimensional transform, an arbitrary number of linearly arranged input pixels is mapped onto an arbitrary number of linearly arranged output pixels, by a process of interpolation and accumulation of the input pixels to generate new output pixels. Depending on the ratio of input to output pixels, an apparent expansion or contraction occurs. More than one segment of the line may be involved, with each segment having a different transformation onto the output segment. The segments may or may not be contiguous.
The trapezoids are formed by adjacent groups of the line segments of varying length, where the variation in length follows an arithmetic progression. Hence the parallel sides of the trapezoids are parallel to the direction of processing. The non-parallel sides of the trapezoids correspond to the ends of the segments, and these may be contiguous with the segments of other trapezoids.
b) Rotations When an effect such as a rotating rectangle is performed, the size of the columns and rows must be changed as the rectangle rotates to provide a constant apparent size on the screen. This function is an inverse cosine function, and so it is apparent that it is impossible to rotate a picture by 90° in one pass. The picture is manipulated to allow a continuous rotation that has no obvious discontinuity. This is achieved by writing into the input framestores 12 at 90° to normal, and so effectively rotating the picture by 90°. A continuous rotation is then possible by changing the amount and direction of the skew at the same time as changing the input by 90°. In practice two sets of input framestores 12 are used, with the 90° rotated image stored in one, so that both images are available at all times, enabling simultaneous asynchronous rotations of various parts of the picture. The framestores 12 are capable of being both written to in one plane or read from in the other plane at full pixel rate.
c) Non-linear surfaces The transformation on each region can be done in such a way that the overall transformed image appears to be wrapped over a non-linear surface. Examples of such effects could be described as warps, ripples, cylindrical wraps or page turns according to the subjective effect. They can be further enhanced by using the lighting maps to simulate illumination and shadowing. These effects usually require more transformation regions than ordinary perspective views, the transformation regions being trapezoidal, triangular or linear.
d) Tiles effect The original image is first subdivided into rectangular "tiles" which then rotate independently about their horizontal axes, while moving away from the viewer. A possible variation of this effect is to reverse the sequence of operation of the effect, so that small rotating tiles move towards the viewer and stop to form a complete picture.

Claims (20)

1. A method for generating from a contiguous string of a first number of input pixels of a region of a video image, a contiguous string of a second number of output pixels, the properties of each pixel being encoded as a set of pixel values, the method comprising the steps of:
(a) selecting the contiguous string of input pixels from the video image; (b) selecting the second number of output pixels; and (c) evaluating the pixel values of each output pixel by summation of weighted pixel values of corresponding, sequential input pixels.
2. A method according to claim 1, in which each set of pixel values comprises a luminance signal value, two chrominance signal values and a shape signal value.
3. A method according to claim 1 or 2, in which the output pixel string is generated in real time.
4. A method according to claim 1, 2 or 3, in which the pixel values of successive pairs of input pixels are weighted by multiplication by successive pairs of fractions generated by a fraction generator, and each pair of weighted pixel values is summed to contribute at least a portion of an output pixel value.
5. A method according to any preceding claim, in which the second number of output pixels is greater than or equal to the first number of input pixels and in which at most two input pixel values are weighted and summed to evaluate each output pixel value, the weighting fractions for evaluating each output pixel value being determined according to the proportion of overlap between that output pixel and each corresponding input pixel.
6. A method according to claim 5, in which each output pixel value is the weighted mean of the corresponding input pixel value or values.
7. A method according to any of claims 1 to 4, in which the second number of output pixels is less than the first number of input pixels, the weighting fractions for evaluating each output pixel value being determined according to the proportion of overlap between that output pixel and the corresponding input pixels.
8. A method according to any of claims 1 to 4, in which the second number of output pixels is greater than or equal to the first number of input pixels, at most two weighted input pixel values being summed to evaluate each output pixel value, the weighting fractions for each output pixel being determined in pairs for multiplication by corresponding sequential pairs of input pixel values according to the proportion of overlap between the output pixel and each corresponding input pixel.
9. A method according to any of claims 1 to 4 or claim 8, in which a source fraction and a destination fraction, being the fractional parts of a source address and a destination address, which are the respective start addresses of the input and output pixel strings, are combined with a size ratio, being the ratio of the input and output pixel string lengths, to initialise the generation of the output string with a predetermined displacement of the string in the video image.
10. A method according to claim 9, in which the displacement between the first pixels of the input and output pixel strings is set equal to the destination fraction divided by the size ratio, added to or subtracted from the source fraction depending on the value of a source/destination direction flag which indicates in which directions the input and output strings are to be read and generated respectively.
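The initialisation of claim 10 reduces to a short computation. A sketch (hypothetical names; which flag value selects addition rather than subtraction is our assumption, since the claim leaves the convention open):

    def initial_displacement(src_frac, dst_frac, size_ratio, same_direction=True):
        # Claim 10: destination fraction scaled into input-pixel units by the
        # size ratio, then added to or subtracted from the source fraction
        # according to the source/destination direction flag.
        scaled = dst_frac / size_ratio
        return src_frac + scaled if same_direction else src_frac - scaled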
11. A method according to any of claims 1 to 4, in which the second number of output pixels is less than the first number of input pixels, two or more weighted input pixel values being summed to evaluate each output pixel value, the weighting fractions for each output pixel being determined by the fraction generator according to the proportion of overlap between the output pixel and each input pixel.
12. A method according to any of claims 1 to 4 or claim 11, in which a source fraction and a destination fraction, being the fractional parts of a source address and a destination address, which are the respective start addresses of the input and output pixel strings, are combined with a size ratio, being the ratio of the input and output pixel string lengths, to initialise the generation of the output string with a predetermined displacement of the string in the video image.
13. A method according to claim 12, in which the displacement between the first pixels of the input and output strings is set equal to the source fraction divided by the size ratio, added to or subtracted from the destination fraction according to the value of a source/destination direction flag, which indicates the directions in which the input and output strings are respectively to be read and generated.
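The initialisation of claim 13 is the mirror image of claim 10, under the same naming assumptions:

    def initial_displacement_compressed(src_frac, dst_frac, size_ratio, same_direction=True):
        # Claim 13: source fraction scaled by the size ratio, then added to or
        # subtracted from the destination fraction per the direction flag.
        scaled = src_frac / size_ratio
        return dst_frac + scaled if same_direction else dst_frac - scaled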
14. A method for transforming a two dimensional region of a video image comprising the steps of:
(a) selecting a two dimensional region of the video image; (b) sequentially selecting input pixel strings of the region parallel to a first direction; (c) operating on each selected input string to generate an output string wherein an input string comprises a first number of pixels and the corresponding output string comprises a second number of pixels, the properties of each pixel being encoded as a set of pixel values, and the pixel values of each output pixel being evaluated by summation of weighted pixel values of corresponding, sequential input pixels, the output pixel strings forming adjacent pixel strings of a partially transformed video image region transformed in one dimension; (d) storing the partially transformed video image region in a video frame store; (e) sequentially selecting adjacent input pixel strings of the partially transformed video image region parallel to a second direction not parallel to the first direction; and (f) operating on each selected input string to generate an output string, wherein an input string comprises a first number of pixels and the corresponding output string comprises a second number of pixels, the pixel values of each output pixel being evaluated by summation of weighted pixel values of corresponding, sequential input pixels, the output pixel strings forming adjacent pixel strings of an output video image region.
15. A method according to claim 14 in which each set of pixel values comprises the values of a luminance signal, two chrominance signals and a shape signal.
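The two-pass structure of claim 14 separates the 2D transform into two 1D resampling passes through a frame store. A compact model (hypothetical Python, reusing the resample_string sketch given under claim 1; region is assumed to be a row-major list of equal-length rows):

    def transform_region(region, first_size, second_size):
        # Pass 1: resample each string parallel to the first direction
        # (here, each column), forming the partially transformed region.
        columns = [list(col) for col in zip(*region)]
        partial_cols = [resample_string(col, first_size) for col in columns]
        # Store back in row-major order, modelling the video frame store.
        intermediate = [list(row) for row in zip(*partial_cols)]
        # Pass 2: resample each string parallel to the second direction (rows).
        return [resample_string(row, second_size) for row in intermediate]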
16. An apparatus for generating, from a contiguous string of a first number of input pixels of a video image, a contiguous string of a second number of output pixels, comprising:
a video input for inputting pixel values; store means for storing sequential pairs of input pixel values;
a fraction generator for generating pairs of weighting fractions, and outputting the fractions on respective outputs; two multiplier/accumulators each having inputs connected respectively to the store means to receive one of the stored pixel values and to a fraction generator output, for multiplying stored pixel values by corresponding weighting fractions, each multiplier/accumulator having an output for outputting weighted pixel values; and an adder having inputs connected to respective multiplier/accumulator outputs and having an output for outputting summed, weighted pixel values of the output pixel string.
17. An apparatus according to claim 16, further comprising a clock for producing timing signals for timing each component of the apparatus.
18. An apparatus according to claim 17, in which the store means comprises two serially connected, clock-enabled latches, the video input being connected to the input of the first latch for clocking pairs of input pixel values sequentially through the latches; and in which input pixel values are clocked sequentially in pairs into the clock-enabled latches, the fraction generator generates weighting fractions for multiplication by the stored input pixel values read by the multiplier/accumulators, the fractions being evaluated according to the proportions of overlap between the input pixels whose values are stored and the corresponding output pixels, the multiplied pixel values being combined in the accumulators and/or the adder to produce output pixel values.
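The data path of claim 18 can be modelled in software to clarify the timing (a behavioural sketch only; the accumulate-and-dump structure and the mapping of fractions to latches are our assumptions based on the abstract's description of latches 86 and 88, multiplier/accumulators 82 and 84 and adder 90):

    class InterpolatorModel:
        # Behavioural model of the two-latch, two-MAC pipeline of claim 18.
        def __init__(self):
            self.latch_a = 0        # first clock-enabled latch
            self.latch_b = 0        # second latch, fed by the first
            self.acc_a = 0.0        # multiplier/accumulator 82
            self.acc_b = 0.0        # multiplier/accumulator 84

        def clock(self, pixel_in, fa, fb):
            # One clock: shift the input pixels through the latches, multiply
            # by the weighting fractions FA and FB, and accumulate.
            self.latch_b = self.latch_a
            self.latch_a = pixel_in
            self.acc_a += fa * self.latch_a
            self.acc_b += fb * self.latch_b

        def read_output(self):
            # Adder 90: combine the accumulated weighted contributions, then
            # clear the accumulators for the next output pixel.
            result = self.acc_a + self.acc_b
            self.acc_a = self.acc_b = 0.0
            return result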
19. An apparatus according to any of claims 16 to 18, further comprising a divider for scaling the pixel values output by the adder to produce a video output.
20. An apparatus according to any of claims 16 to 19, further comprising a host processor coupled to each component of the apparatus at least for sending timing signals to the components.
GB9107726A 1990-04-11 1991-04-11 Spatial transformation of video images Withdrawn GB2245124A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AUPJ958290 1990-04-11
AUPK067590 1990-06-18
AUPK067490 1990-06-18
AUPK098690 1990-07-03

Publications (2)

Publication Number Publication Date
GB9107726D0 GB9107726D0 (en) 1991-05-29
GB2245124A true GB2245124A (en) 1991-12-18

Family

ID=27424284

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9107726A Withdrawn GB2245124A (en) 1990-04-11 1991-04-11 Spatial transformation of video images
GB9107732A Withdrawn GB2244887A (en) 1990-04-11 1991-04-11 Spatial transformation of video images

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB9107732A Withdrawn GB2244887A (en) 1990-04-11 1991-04-11 Spatial transformation of video images

Country Status (1)

Country Link
GB (2) GB2245124A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6091446A (en) * 1992-01-21 2000-07-18 Walker; Bradley William Consecutive frame scanning of cinematographic film

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0576696A1 (en) 1992-06-29 1994-01-05 International Business Machines Corporation Apparatus and method for high speed 2D/3D image transformation and display using a pipelined hardware
JP3499302B2 (en) * 1994-09-20 2004-02-23 株式会社東芝 Television receiver
US5587742A (en) * 1995-08-25 1996-12-24 Panasonic Technologies, Inc. Flexible parallel processing architecture for video resizing
EP0777198A1 (en) * 1995-11-30 1997-06-04 Victor Company Of Japan, Limited Image processing apparatus
GB2323991B (en) * 1997-04-04 1999-05-12 Questech Ltd Improvements in and relating to the processing of digital video images
US6097434A (en) * 1998-03-25 2000-08-01 Intel Corporation System and method for correcting pixel data in an electronic device
US20020180733A1 (en) * 2001-05-15 2002-12-05 Koninklijke Philips Electronics N.V. Method and apparatus for adjusting an image to compensate for an offset position of a user
CN111260654B (en) * 2018-11-30 2024-03-19 西安诺瓦星云科技股份有限公司 Video image processing method and video processor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3177295D1 (en) * 1980-04-11 1993-02-04 Ampex PRE-DECIMATING FILTER FOR IMAGE CHANGE SYSTEM.
GB2160051A (en) * 1984-04-26 1985-12-11 Philips Electronic Associated Video signal processing arrangement

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1455822A (en) * 1973-05-23 1976-11-17 British Broadcasting Corp Sampling rate changer
GB2047040A (en) * 1978-03-08 1980-11-19 Secr Defence Scan converter for a television display
GB2111340A (en) * 1981-02-04 1983-06-29 Ampex Digital chrominance filter for digital component television system
GB2181923A (en) * 1985-10-21 1987-04-29 Sony Corp Signal interpolators
EP0369301A2 (en) * 1988-11-17 1990-05-23 Dainippon Screen Mfg. Co., Ltd. An apparatus of and a method for image reproducing with variable reproduction scale
GB2236452A (en) * 1989-07-14 1991-04-03 Tektronix Inc Determining interpolation weighting coefficients for low ratio sampling rate converter
GB2237953A (en) * 1989-09-07 1991-05-15 Samsung Electronics Co Ltd Interleaving of interpolated video samples

Also Published As

Publication number Publication date
GB2244887A (en) 1991-12-11
GB9107726D0 (en) 1991-05-29
GB9107732D0 (en) 1991-05-29

Similar Documents

Publication Publication Date Title
US4602285A (en) System and method for transforming and filtering a video image
US5369735A (en) Method for controlling a 3D patch-driven special effects system
US4694407A (en) Fractal generation, as for video graphic displays
US4908874A (en) System for spatially transforming images
US4631750A (en) Method and system for spacially transforming images
US5384904A (en) Image scaling using real scale factors
US4468688A (en) Controller for system for spatially transforming images
US4472732A (en) System for spatially transforming images
JP3278693B2 (en) Display device and operation method thereof
US5173948A (en) Video image mapping system
US4752828A (en) Method for producing a geometrical transformation on a video image and devices for carrying out said method
US4757384A (en) Video signal processing systems
KR20160018669A (en) Device and method for calculating holographic data
US6166773A (en) Method and apparatus for de-interlacing video fields to progressive scan video frames
GB2245124A (en) Spatial transformation of video images
US11055820B2 (en) Methods, apparatus and processor for producing a higher resolution frame
US5646696A (en) Continuously changing image scaling performed by incremented pixel interpolation
JPS62139081A (en) Formation of synthetic image
JPH11508386A (en) Apparatus and method for real-time visualization of volume by parallel and perspective methods
JP2813881B2 (en) Video signal processing device
EP0449469A2 (en) Device and method for 3D video special effects
JPH0440176A (en) Television special effect device
US7697817B2 (en) Image processing apparatus and method, and recorded medium
JPH06230768A (en) Image memory device
JPH03128583A (en) Method and apparatus for spatially deforming image

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)