GB2254751A - Digital video effects; ray tracing for mapping a 2-d image onto a 3-d surface

Info

Publication number
GB2254751A
Authority
GB
United Kingdom
Prior art keywords
address
image
ray
digital video
output
Legal status
Granted
Application number
GB9107495A
Other versions
GB2254751B
GB9107495D0
Inventor
David John Hedley
Howard John Teece
Andrew Ian Trow
Current Assignee
Sony Broadcast and Communications Ltd
Original Assignee
Sony Broadcast and Communications Ltd
Application filed by Sony Broadcast and Communications Ltd
Priority to GB9107495A
Publication of GB9107495D0
Priority to JP4089145A
Publication of GB2254751A
Application granted
Publication of GB2254751B
Status: Expired - Lifetime

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

A digital video effects system maps a source video image comprising an array of source pixel values onto a 3-D non-linear object surface to produce an output image. The system comprises a memory for temporarily storing an array of image pixels, control means for establishing a function defining the 3-D non-linear surface and an address generator responsive to the surface function established by the control means to compute memory read addresses by tracing rays for respective output pixels. The address generator performs recursive subdivision of the surface function for each ray such that each subdivision generates a plurality of surface patches, and determines a memory read address for the output pixel corresponding to a ray on detecting an intersection between the ray and a patch at a desired resolution. The surface function used is a Bezier surface function and the address generator applies parallel ray tracing on an object transposed into perspective viewing space.

Description

DIGITAL VIDEO EFFECTS SYSTEM

This invention relates to a digital video effects system for the manipulation of pictures represented by video signals.
The art of manipulating pictures represented by digital video signals is well established. In essence the manipulation is accomplished by: digitising an analogue video signal by sampling it and converting each sample into a binary word of, for example, 8 or 10 bits representing that sample; storing fields or frames of the digitised signal in memory; and controlling either the reading from or the writing to the memory so as to produce from each field or frame a picture that differs from that represented by the input video signal in that at least one geometrical parameter thereof is changed. Such geometrical parameter may, for example, comprise the location of the picture along one or more of up to three axes and/or the angular position of the picture about one or more axes. Other such parameters may comprise the size of the picture (in the horizontal and/or vertical direction thereof), the extent of shearing of the picture, and the perspective of the picture.
Figure 1 of the accompanying drawings shows in simplified block diagram form the general overview of a typical digital video effects system for effecting such manipulation of a picture. The general kind of apparatus now to be described with reference to Figure 1 has been embodied in a variety of known proprietary items of digital video effects equipment, and the operation and construction thereof is well known to those skilled in the art. The digital video effects system comprises a digital video effects unit, which is designated 10 in Figure 1, and a control unit 24.
A video signal V1 representing a picture P1 that is to be manipulated is input into the digital video effects unit 10 at 11. In this prior art digital video effects unit, manipulation of the input picture P1 is performed by controlling the read addresses to the memory 13, although write side address mapping is also known. This control of the read side addresses is effected by the address generator 20.
As the mapping process may involve compression of the picture, and, in the absence of corrective measures, such compression can give rise to aliasing which will degrade the quality of the output image, a filter 12 is provided to compensate for the effects of compression. A filter controller 18 determines local scaling factors representative of the amount of compression for localised areas of the image, these local scaling factors being used to control the filter 12 to apply appropriate amounts of filtering to respective areas of the image.
A pixel interpolator 14 can be provided to enable output pixel values to be computed where the address generator 20 does not generate a one-to-one mapping between a storage location in the memory 13 and the output pixels.
A synchronisation delay 15 allows the data output from the memory 13 to be aligned with frame synchronisation information. A digital linear keyer 16 allows the data output from the memory 13 (which is representative of foreground information) to be keyed into the background (represented by a signal B input to keyer 16) for forming the output video V2 of the output picture P2. A key processor 22 controls the operation of the digital linear keyer 16. The digital video effects unit 10 is under the control of the control unit 24 which can, but need not, be implemented as a conventional personal computer or a computer workstation with appropriate control software.
A known example of a digital video effects system having the above architecture can perform 3D linear manipulations of high definition video with a very high video output quality. However, the known system is limited to linear manipulations of video, being unsuitable for providing video texture mapping, or free form modelling onto a non-linear surface.
"Video texture mapping" is a term which derives from computer graphics where it is used to describe the mapping of an image onto a 3D surface. Originally this technique was developed for simulating surface texture, an appropriate image being mapped to give the desired effect.
However, it has also been used for mapping other images to create other effects. See, for example, the colour plates in the book "Computer Graphics, Principles and Practice", Second Edition by Messrs Foley, Van Dam, Feiner and Hughes, published by Addison Wesley in 1990. In computer graphics applications, however, such video texture mapping is rarely, if ever, performed in real time.
An object of the invention is to enable video texture mapping whereby a source video image can be mapped onto a non-linear object in real time while maintaining a very high video output quality.
In accordance with the present invention there is provided a digital video effects system for mapping a source video image comprising an array of source pixel values onto a non-linear 3D object surface to produce an output image, the system comprising a memory for temporarily storing an array of image pixels, control means for establishing a function defining the 3-D non-linear surface and an address generator responsive to the surface function established by the control means to compute memory read addresses by tracing rays for respective output pixels, the address generator determining a memory read address for the output pixel corresponding to a ray on detecting intersection between the ray and the surface at a desired resolution.
Through the use of an address generator based on ray tracing for a function defining a 3-D non-linear object surface, the invention enables source image data to be mapped onto a non-linear object in real time and with very high image quality. By real time processing is meant that a sequence of source images can be played, for example from a video tape recorder at normal playing speed, and this source image data can be mapped onto a non-linear static or changing object surface.
Preferably, the address generator, for determining a memory read address, performs recursive subdivision of the surface function for each ray such that each subdivision generates a plurality of surface patches. By the use of ray tracing with recursive subdivision, efficient mapping with automatic hidden surface removal is possible.
Preferably, the address generator accumulates, on each subdivision, part of a horizontal address component and part of a vertical address component and stores a depth value for each patch, and wherein the address generator, where intersections are detected between a ray and more than one patch at the desired resolution, selects the address indicated by the horizontal and vertical address components of the patch having the shallowest depth value. These preferred features of the address generator mean that automatic elimination of hidden lines and surfaces is possible.
The digital video effects system preferably comprises pixel interpolation means and the desired resolution is preferably a sub pixel resolution whereby the address generator determines, for each ray, major horizontal and vertical address components for addressing the memory and residual horizontal and vertical address components for controlling the interpolator to interpolate between pixel values accessed from the memory to determine an output pixel corresponding to the ray. These further preferred features mean that an accurate output pixel value can be calculated where the mapping does not provide a one-to-one relationship between the samples in the memory and the output pixels.
In the preferred embodiment of the invention, the address generator comprises an asynchronous stage including a plurality of parallel address processors, each address processor performing recursive subdivision of the surface function for determining intersection between that surface function and a respective ray and a synchronous section which receives the results of the asynchronous stage and generates output addresses in the correct scanning order for the pixels of the output image. This provides a flexible structure enabling real time processing even for high definition video signals.
Preferably, the address generator traces parallel rays for determining intersections, the control means establishing a surface function representative of the desired 3-D non-linear object surface transposed into perspective viewing space. The use of parallel rays provides for more efficient ray intersection testing than with conventional diverging rays.
The digital video effects system preferably also includes an area mapper for logically sectioning the source image area into a plurality of sub-areas and for computing a mapping of the sub-area corners in accordance with the surface function; a scaling factor generator responsive to the output of the area mapper to generate horizontal and vertical local scaling factors for respective sub-areas in dependence upon the respective degrees of horizontal and vertical compression of those sub-areas on mapping onto the surface function; and a digital filter with variable horizontal and vertical bandwidths for filtering the pixels of the source image before storage in the memory in accordance with the local scaling factors for the areas to which the pixels belong. These preferred features enable localised filtering of source image samples to mitigate the aliasing effects which can be experienced as a result of the compression, rotation, skewing etc. which can occur as a result of the mapping of the source image onto the 3-D non-linear surface.
Preferably, the control means is in the form of a workstation incorporating control logic. This enables flexible generation and manipulation of the surface function for the surface of the object onto which the source image is to be mapped.
In the preferred embodiment of the invention, the surface function is a Bezier surface function, wherein the control means determines a set of control points defining the Bezier surface function for the desired surface and wherein the area mapper and the address generator are responsive to the set of control points.
An article by R. Pulleyblank and J. Kapenga entitled "The feasibility of a VLSI Chip for Ray Tracing Bicubic Patches", published in the journal IEEE Computer Graphics and Applications, March 1987 at pages 33 to 44, discusses the possibility of producing a VLSI chip for performing a ray tracing algorithm based on the subdivision of Bezier curves and surfaces. However, the article is directed to the production of a chip for computer graphics applications where speed is not a primary consideration. The article does not address the problems of digital video effects generators where it is necessary to provide video texture mapping in real time.
An example of image signal processing apparatus in accordance with the present invention is described, by way of example only, with reference to the accompanying drawings in which:

Figure 1 is a schematic block diagram of a typical digital video effects system;
Figure 2 is an illustration of a Bezier curve;
Figure 3 shows the division of a Bezier curve;
Figure 4 is an illustration of a Bezier surface;
Figure 5 is a schematic block diagram of a digital video effects system in accordance with the invention;
Figures 6A and 6B are used to explain the problems of the frequency response of a sampled image signal in the frequency domain;
Figure 7 illustrates tiling of the source image area;
Figure 8 is a block diagram of a tile mapper which forms part of the system illustrated in Figure 5;
Figure 9 is a flow diagram describing the operation of mapping logic in the tile mapper of Figure 8;
Figure 10 illustrates a tile selected from Figure 7;
Figure 11 shows the corresponding tile on the Bezier surface of Figure 4;
Figure 12 is a block diagram of a scaling factor generator which forms part of the system illustrated in Figure 5;
Figure 13 is a block diagram of a filter which forms part of the system illustrated in Figure 5;
Figure 14 is a schematic diagram for illustrating the operation of the ray tracing and subdivision address generator which forms part of the system illustrated in Figure 5;
Figure 15 is a flow diagram of logic forming part of the ray tracing and subdivision address generator;
Figure 16 is a schematic block diagram of a hardware implementation of the logic of Figure 15;
Figure 17 is an overview of the ray tracing and subdivision address generator incorporating the hardware logic illustrated in Figure 16;
Figures 18A and 18B are used to explain the processing of picture edges;
Figure 19 is used to explain the operation of an edge detector for use in the system of Figure 5;
Figure 20 is a hardware implementation of edge detection and processing logic for use in the system of Figure 5; and
Figure 21 is a schematic diagram of a keyer for the system illustrated in Figure 5.
An example of an image processing system in accordance with the invention enables a source video image to be mapped in real time onto a 3D non-linear surface through the use of a ray-tracing technique.
Before this mapping can be performed, the object surface onto which the video image is to be mapped has to be defined. In the preferred embodiment of the invention, the object surface is defined in terms of the so-called "Bezier surfaces".
Before proceeding further with the description of an example of a digital video effects system in accordance with the invention, there follows a brief introduction to the concepts of Bezier curves and surfaces. This method for constructing curves and surfaces was developed by a French engineer, Bezier, for use in the design of car bodies. A Bezier curve is a two-dimensional curve constructed by forming a polynomial function from a set of control points.
Figure 2 illustrates a Bezier curve C0 which is defined by 4 control points P0, P1, P2 and P3. From these co-ordinate points, there can be calculated a Bezier co-ordinate function P(u) representing the three parametric equations for the curve which fits the input control points Pk, where Pk = (xk, yk, zk) for k = 0, 1, 2, 3. This Bezier coordinate function can be calculated as:

P(u) = sum (k = 0 to 3) of Pk Bk,3(u)

where u is the distance along the curve from P0 (u=0) to P3 (u=1); each Bk,n(u) is a polynomial function called a "blending function" and defined as Bk,n(u) = C(n,k) u^k (1 - u)^(n-k); and C(n,k) = n!/(k!(n-k)!) are the binomial coefficients.
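By way of illustration only, the following minimal sketch in C evaluates a point on a cubic Bezier curve directly from the blending functions just defined. The names point3, blend3 and bezier_point are illustrative and do not appear in the patent.

    #include <math.h>

    typedef struct { double x, y, z; } point3;

    /* C(3,k) for k = 0..3 */
    static const double binom3[4] = { 1.0, 3.0, 3.0, 1.0 };

    /* Blending function B(k,3)(u) = C(3,k) * u^k * (1-u)^(3-k) */
    static double blend3(int k, double u)
    {
        return binom3[k] * pow(u, k) * pow(1.0 - u, 3 - k);
    }

    /* P(u) = sum over k of Pk * B(k,3)(u), with 0 <= u <= 1 */
    point3 bezier_point(const point3 p[4], double u)
    {
        point3 r = { 0.0, 0.0, 0.0 };
        for (int k = 0; k < 4; k++) {
            double b = blend3(k, u);
            r.x += p[k].x * b;
            r.y += p[k].y * b;
            r.z += p[k].z * b;
        }
        return r;
    }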
An important property of any Bezier curve is that it lies within the convex hull defined by the control points. Another important property of Bezier curves is that the tangent to the curve at an end point is along the line joining that end control point to the adjacent control point. In order, therefore, to ensure continuity between a first Bezier curve and a second Bezier curve at an end point it is merely necessary to ensure that the common end point of the two curves and the respective adjacent control points all lie on a straight line.
This is illustrated in Figure 2 for the curves C0 and C1 and control points P2, P3, P4.
Another feature of Bezier curves is the way in which it is possible to subdivide a Bezier curve to generate two Bezier curves.
Figure 3 illustrates the generation of the control points for Bezier curves Cl and Cr, being Bezier curves formed by subdividing, by two, the Bezier curve C0. The control points for the left hand curve Cl are L0, L1, L2 and L3. The control points for the right hand curve Cr are R0, R1, R2 and R3. It can be seen by examination of Figure 3 that these control points can be determined by the following equations:

L0 = P0                      R3 = P3
L1 = (P1 + L0)/2             R2 = (R3 + P2)/2
L2 = ((P2 + P1)/2 + L1)/2    R1 = ((P2 + P1)/2 + R2)/2
L3 = (L2 + R1)/2             R0 = L3

It will be noted that the two new Bezier curves can be generated from the original Bezier curve by simple additions and divisions by two. In a binary system, therefore, the new Bezier curves can be generated by a simple combination of additions and shift operations.
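As a minimal sketch of this point, the subdivision equations can be written using only additions and right shifts when the control values are held as integers. One coordinate component is processed per call; the function name and the use of 32-bit non-negative fixed-point values are assumptions introduced for illustration.

    #include <stdint.h>

    /* Split a cubic Bezier curve P0..P3 into a left curve L0..L3 and a right
       curve R0..R3 at u = 1/2.  Each division by two is a single right shift
       (non-negative fixed-point values are assumed). */
    void subdivide_curve(const int32_t p[4], int32_t l[4], int32_t r[4])
    {
        int32_t m = (p[1] + p[2]) >> 1;   /* shared midpoint (P1 + P2)/2 */

        l[0] = p[0];                      /* L0 = P0                     */
        r[3] = p[3];                      /* R3 = P3                     */
        l[1] = (p[0] + p[1]) >> 1;        /* L1 = (P0 + P1)/2            */
        r[2] = (p[2] + p[3]) >> 1;        /* R2 = (P2 + P3)/2            */
        l[2] = (m + l[1]) >> 1;           /* L2 = ((P1 + P2)/2 + L1)/2   */
        r[1] = (m + r[2]) >> 1;           /* R1 = ((P1 + P2)/2 + R2)/2   */
        l[3] = (l[2] + r[1]) >> 1;        /* L3 = (L2 + R1)/2            */
        r[0] = l[3];                      /* R0 = L3                     */
    }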
A Bezier surface is defined by two sets of Bezier curves specified by appropriate input control points. Figure 4 shows a surface generated using this method with respect to control points represented by the small plus signs. The generally horizontal, gh, and generally vertical, gv, line segments in Figure 4 describe lines of constant latitude and longitude (u and v) over the surface for the respective polynomials in u and v, the surface being given by a Bezier surface function of the form P(u,v) = sum (j = 0 to 3) sum (k = 0 to 3) of Pj,k Bj,3(u) Bk,3(v), where the Pj,k are the control points and Bj,3 and Bk,3 are the blending functions defined above.
For further information regarding Bezier curves and Bezier surfaces, the reader's attention is directed to Chapter 11 of the book entitled "Computer Graphics, Principles and Practice", Second Edition by Messrs Foley, van Dam, Feiner and Hughes, published by Addison Wesley in 1990.
Figure 5 shows, in simplified block diagram form, an image processing system in accordance with the invention. A digital video effects unit 30 includes a video path similar to that in the prior art including a source image input 11, a filter 12, a memory 13 with a pixel interpolator 14 and synchronisation delay circuitry 15. The digital video effects unit also includes a digital keyer 17. The digital video effects unit 30 is under the control of a workstation 25.
Although the workstation may have the same hardware configuration as in the prior art, the workstation 25 is provided with software for defining an object onto which a source video image is to be mapped in terms of Bezier surface functions. The output of the workstation includes sets of co-ordinates for the Bezier control points for those surfaces (eg. the points P0, P1, P2 and P3 shown in Figure 2). For controlling the operation of the filter 12, the memory 13 and the keyer 17, the digital video effects unit 30 comprises a filter controller 19, an address generator 21 and a key processor 23 which are specifically adapted to respond to Bezier control point data from the workstation 25, as will be explained in more detail hereinbelow. In general terms, the filter controller 19, the address generator 21 and the key processor 23 use the control points to control, respectively, the operation of the filter 12, the read-side addressing of the memory 13 and the operation of the keyer 17.
As in the prior art, the source image is filtered in order to avoid aliasing effects which result from compression of the image. As is known to those skilled in the art, an image can be characterized by a two-dimensional parameter known as the spatial frequency. The concept of spatial frequency can more readily be appreciated by considering an image in the form of a series of uniformly spaced straight lines. The image has a single spatial frequency which is inversely proportional to the spacing of the lines. (The spatial frequency is horizontal if the lines are vertical lines spaced horizontally, vertical if the lines are horizontal lines spaced vertically, and diagonal in other cases). Clearly, if the image is compressed, so that the lines appear to the viewer to become closer together whereby the angle they subtend to the eye of the viewer decreases, the spatial frequency increases.
The scaling theorem in Fourier analysis states that if an image signal is compressed in the spatial domain, then the spatial frequency of the image is increased.
The image signal to be processed is a sampled signal. Nyquist's law concerning the sampling of signals states that, in order not to lose information contained in a signal, the signal must be sampled at a frequency (FS) that is equal to at least twice the bandwidth (FB) of the signal. Naturally, this criterion is complied with when the digital input signal is formed initially by sampling a filtered analogue signal. The frequency spectrum (Fourier transform) of the sampled signal in the frequency domain is shown in Figure 6A of the accompanying drawings, which is a graph of amplitude versus frequency (Hz). The frequency spectrum comprises a base band component BB (up to FB). Also, the base band is reflected symmetrically around the sampling frequency FS and its harmonics 2FS, 3FS etc. to produce higher frequency components HFC. Provided that Nyquist's Law is complied with (so that FS/2 is greater than FB) and provided that the signal is band limited (low-pass filtered) so as to have a cut-off frequency of about FS/2, the higher frequency components HFC will be suppressed on output.
As explained above, when the sampled signal is subjected to compression in the spatial domain, its Fourier transform exhibits expansion in the frequency domain. Thus, the bandwidths of the components BB and HFC increase as illustrated in Figure 6B. This can result in aliasing of the signal in that the bandwidth FB of the signal can exceed the Nyquist limit (FS/2) so that part of at least the lowest one of the higher frequency components HFC extends down into and is mixed with the base band so as to degrade the signal and therefore the image that it represents.
The filtering requirements for the source image are determined by the filter controller 19.
The filtering requirements for the source image can vary significantly over the image in the case of a mapping onto a non-linear surface due to the differing amounts of compression which result from the mapping. A compromise has to be met between over-filtering, which would give a blurred image, and under-filtering, which would not remove the aliasing effects. Also, a compromise has to be reached as to the resolution to which the filtering is applied. It would be very expensive, computationally, to determine the degree of filtering required separately for each source image sample (or source image pixel).
Accordingly, the approach taken in the filter control logic 19 is similar to that discussed in UK patent application GB 9012025.4 (Sony Corporation), filed 30 May 1990, in that the filter control logic 19 considers the source image space as a rectangle made up of tiles t (a tile is an area of the source image space constructed from a group of individual pixels, P).
Figure 7 is a schematic representation of a simplified source image area, or space made up of an array of sub-areas in the form of 8 x 8 square tiles, t, each tile being made up of an array of 4 x 4 pixels, P. In practice, the tiles need not be square, and could be larger. Certainly, in a practical example, there will be many more tiles - e.g., for a high definition television image there could be 550 x 281 tiles if each tile is 4 x 4 pixels in size, of which 480 x 258 relate to the active picture. The mapping of these tiles onto the desired shape is computed from the Bezier surface of the target object, specifically from the Bezier control points output by the workstation 25 as will be described later. The generally horizontal and generally vertical lines of constant latitude and longitude of the Bezier surface illustrated in Figure 4 could represent the mapping of the edges of the tiles of the source image space onto that surface. Note that there is a one-to-one mapping between each of the tiles of the source image space represented in Figure 7 and the corresponding mapped tiles on the Bezier surface illustrated in Figure 4.
The filter control logic comprises a tile mapper 32 for computing the mappings of the corners of the tiles for the input space onto the target object surface and a local scaling factor generator 34 for computing local scaling factors from the computed mappings in order to generate signals for controlling the filter 12.
Figure 8 is a schematic block diagram of the tile mapper 32, which includes control point storage 40, mapping logic 42 and mapping storage 44. The input to the tile mapper is in the form of an array of Bezier control points defining the target object, as output by the control unit 25. These control points are stored in control point storage 40.
Note that, in general, the target object will be defined in terms of a plurality of surface patches, each patch being itself defined by a Bezier surface function in terms of the control points for a constituent set of Bezier curve functions. However, for ease of explanation, it will be assumed in the following example that the target object comprises a single patch defined in terms of a single set of sixteen control points.
Once the control points have been stored in the control point storage, the mapping logic 42 then determines, for each tile corner point in the tiled source image space, the mapping of that point into mapped space. In performing this processing, the mapping logic accesses the control points in the control point storage and stores the computed corner point mappings in the mapping storage 44. The mapping logic 42 processes each tile corner point in turn, computing for that corner point a mapped coordinate value. The processing of the mapping logic 42 is summarised in Figure 9.
The mapping logic processes the tiles row-by-row, and within each row, column-by-column. In order that a mapped coordinate value for each of the corners of the tiles is generated, the mapping logic starts at step 48 with treating the first, i.e. the top, horizontal edge of the top row of tiles, and selects, at step 50, the first vertical tile edge (i.e. the left hand tile edge). This means that by step 52, the mapping logic has selected the top left hand corner of the image. At step 52, the mapping logic initializes variables x, y and z for accumulating the mapping for the point currently in question (i.e. initially the top left hand corner of the image).
The mapping logic computes the mapping for the point in question by considering each control point in turn. The control points are stored in the memory as a two dimensional array of control points; the mapping logic 42 accessing this array of control points using variables j and k. Thus, at step 54 the mapping logic initializes the variable j. At step 56 the mapping logic initializes the variable k. At step 58, for given values of j and k, the mapping logic evaluates the horizontal blending function for u and j to generate a horizontal blending value bu, and evaluates the vertical blending function for v and k to generate a vertical blending value, bv.
Then, the mapping logic updates the currently accumulated x, y and z values for the tile corner point in question. The new value for the x coordinate is computed by adding to the previous value of the variable x, the x coordinate value of the control point identified by the variables j, k multiplied by the horizontal blending value bu and the vertical blending value bv. Similarly, the new value for the variable y is computed by adding to the previous value of the variable y, the y coordinate of the control point identified by the variables j and k multiplied by the horizontal blending value bu and the vertical blending value bv. In the same way the new value of the variable z is computed by adding to the previous value of the variable z, the z coordinate of the control point identified by the variables j and k multiplied by the horizontal blending value bu and the vertical blending value bv. By incrementing the k values until all values of k have been considered (step 60) and by incrementing the j values until all j values have been considered (step 62) the contribution of the coordinate values of each control point can be accumulated. At step 64, when the respective coordinate values for each of the control points have been considered for the current tile corner point in question, the x, y and z values for that tile corner point can be stored in the mapping storage 44.
At step 66 the mapping logic determines whether there is another tile edge to be considered and if so the mapping logic determines the v value for this next tile edge and returns control to step 52 where the values for x, y and z are re-initialised for generating the mapping for this new point. The mapping logic continues to step along a line of tiles until the right hand edge of the last tile has been considered. When the right hand edge of the current line of tile corner points has been considered the mapping logic then proceeds at step 68 to test whether there is another horizontal tile edge to be processed. If there is, the u value is updated accordingly and the next tile edge is considered. All of the vertical edges, including the top and the bottom edges will be processed so that mappings are generated for each corner point of the tiles of the source image space.
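The accumulation performed for a single tile corner can be summarised by the following sketch, which assumes a single 4 x 4 patch of control points as in the present example. The names map_tile_corner, blend3 and point3 are illustrative rather than taken from the patent.

    #include <math.h>

    typedef struct { double x, y, z; } point3;

    static double blend3(int k, double t)            /* B(k,3)(t) */
    {
        static const double c[4] = { 1.0, 3.0, 3.0, 1.0 };
        return c[k] * pow(t, k) * pow(1.0 - t, 3 - k);
    }

    /* ctrl[j][k] is the 4 x 4 array of Bezier surface control points. */
    point3 map_tile_corner(const point3 ctrl[4][4], double u, double v)
    {
        point3 m = { 0.0, 0.0, 0.0 };
        for (int j = 0; j < 4; j++) {
            double bu = blend3(j, u);                /* horizontal blending */
            for (int k = 0; k < 4; k++) {
                double bv = blend3(k, v);            /* vertical blending   */
                m.x += ctrl[j][k].x * bu * bv;       /* accumulate x, y, z  */
                m.y += ctrl[j][k].y * bu * bv;
                m.z += ctrl[j][k].z * bu * bv;
            }
        }
        return m;
    }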
By comparison of the corner points of the tiles in the original source image, and the mappings of those coordinate points computed by the mapping logic 42, it is possible to determine local scaling factors for each of those tiles.
Thus, from the control points for the Bezier surface function output by the workstation 25, the mappings of corner points of the tiles are determined. The differences between the original and mapped corner points are then used by the scaling factor generator 34 to determine local scaling factors for each of the tiles. The individually determined local scaling factors are used to control the filter 12 for applying a degree of filtering to each tile of the source image appropriate to the compression effective for that tile.
Figure 10 shows a local area (specifically the tile labelled ts in Figure 4) of the input image or picture, represented by the digital signal V1, prior to manipulation by mapping. As illustrated, prior to manipulation, the tile ts is square. The tile has corners a, b, c and d. Its edges c-d and b-a are horizontal and its edges c-b and d-a are vertical. Figure 11 shows the tile ts of Figure 10 after manipulation as Its. The positions of the corners a, b, c and d after manipulation, that is their positions in the manipulated image, are represented in Figure 11 as Ia, Ib, Ic and Id, respectively.
The resulting shape of the manipulated tiles is assumed to generally approximate to a parallelogram as represented by the dashed lines in Figure 10, so that local scaling factor computation means such as are described in patent application GB 9012025.4 (Sony Corporation), filed 30 May 1990, may be used.
Figure 12 represents a local scaling factor computation means 34 which can compute a local horizontal scaling factor lh and a local vertical scaling factor lv for each tile of the source image where the tiles are mapped to form a quadrilateral which need only be generally in the shape of a parallelogram. A first computation section 72 computes values xv, xh, yv and yh from the coordinates of points Ia, Ib and Ic for each mapped tile, using the coordinates for those points stored in the mapping storage 44. These values, 74, are output to the second computation section 76. This second computation section computes the corresponding values of lh and lv using trigonometric relationships. The corresponding values of lh and lv, 78, are then output to the filter 12 in synchronism with the receipt of the image values to be filtered. The theory behind the generation of the lh and lv values by the local scaling factor computation means will not be discussed in further detail herein, this being discussed fully in the aforementioned UK patent application number GB 9012025.4.
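Since the lh/lv derivation itself is deferred to GB 9012025.4, the following is only a loose sketch of the idea: the mapped edge vectors of a tile (taken from the stored corner mappings Ia, Ib and Ic) are compared with the unmapped tile dimensions, assuming the mapped tile approximates a parallelogram. The edge-length ratio used here is an assumption and need not match the trigonometric relationships actually used by the computation section 76; all names are illustrative.

    #include <math.h>

    typedef struct { double x, y; } point2;

    /* ia, ib, ic are the mapped positions of tile corners a, b and c; the
       unmapped tile is tile_w pixels wide (edge b-a) and tile_h pixels high
       (edge c-b).  This is an assumed approximation, not the patent's method. */
    void local_scaling_factors(point2 ia, point2 ib, point2 ic,
                               double tile_w, double tile_h,
                               double *lh, double *lv)
    {
        double xh = ia.x - ib.x, yh = ia.y - ib.y;   /* mapped horizontal edge */
        double xv = ib.x - ic.x, yv = ib.y - ic.y;   /* mapped vertical edge   */

        *lh = sqrt(xh * xh + yh * yh) / tile_w;      /* < 1 means compression  */
        *lv = sqrt(xv * xv + yv * yv) / tile_h;
    }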
The filter 12 is preferably a finite impulse response (FIR) filter. The design of 2D FIR filters is known per se and therefore the preferred form of filter will not be described in great detail herein.
As is known to those skilled in the art, a 2D-FIR filter is operative during each clock period T (=1/FS), that is for each sample pixel, to calculate a filtered output word or sample by multiplying a predetermined set of vertically and horizontally spaced samples of the input signal by respective weighting coefficients and summing the products of the multiplication operations. To this end, a 2D FIR filter generally will comprise a plurality of multipliers supplied with respective weighting coefficients, a plurality of one sample delay elements and one line delay elements (line stores) arranged upstream or downstream of the multipliers to delay the samples of the input signal by different amounts to form the predetermined set of vertically and horizontally spaced samples, and a summing means to sum the delayed, weighted products. Figure 13 is a schematic representation of a suitable filter in a rather general and simplified form, the filter 12 comprising a plurality of multipliers 12M, a plurality of delay elements 12D and a summing means 12S. Respective different weighting coefficients are supplied to the multipliers 12M on lines extending from a bandwidth control means 12B.
The bandwidth control means 12B is included in the filter 12 so that the values of the horizontal and vertical local scaling factors lh and lv can be used directly to compute appropriate values for the weighting coefficients to achieve a reduction in horizontal and vertical bandwidth of the filter 12. Preferably, a range of sets of weighting coefficient values appropriate to all possible values of lh and lv is computed at the design stage and the bandwidth control means 12B is implemented in the form of a PROM that stores a look-up table of the pre-computed sets of weighting coefficient values. In this way the bandwidth control means can output appropriate values of the weighting coefficients to the multipliers 12M at each clock period in accordance with the values of lh and lv received from the local scaling factor computation means 34.
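A minimal sketch of this look-up approach, assuming the scaling factors are quantised to index a pre-computed table of coefficient sets; the table dimensions, tap count and names are illustrative only and do not come from the patent.

    #include <stdint.h>

    #define LH_STEPS 16             /* assumed quantisation levels for lh, lv */
    #define LV_STEPS 16
    #define NUM_TAPS 9              /* assumed tap count of one coefficient set */

    /* Filled at design time with coefficient sets matched to each (lh, lv),
       as the text describes for the PROM. */
    static int16_t coeff_lut[LH_STEPS][LV_STEPS][NUM_TAPS];

    const int16_t *select_coefficients(double lh, double lv)
    {
        int i = (int)(lh * (LH_STEPS - 1) + 0.5);    /* quantise lh in 0..1 */
        int j = (int)(lv * (LV_STEPS - 1) + 0.5);    /* quantise lv in 0..1 */
        if (i < 0) i = 0;
        if (i > LH_STEPS - 1) i = LH_STEPS - 1;
        if (j < 0) j = 0;
        if (j > LV_STEPS - 1) j = LV_STEPS - 1;
        return coeff_lut[i][j];                      /* one set per clock period */
    }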
As explained above, the required reductions in horizontal and vertical bandwidths within a tile are related to the ratios of the horizontal and vertical dimensions and the amount of roll and shear of the manipulated tile compared to the unmanipulated image tile. These factors are taken into account in the design of the look-up table in the bandwidth control means 12B, whereby the values lh and lv (rather than ratios computed therefrom) can be supplied directly to the bandwidth control means 12B as the horizontal and vertical local scaling factors.
The filtering of the source image signals is arranged to be performed in real time in synchronism with the receipt of the source image samples; the filtered source image samples being stored in the memory 13.
The filtered image, which is stored in the memory 13 in the received sequence, can then be mapped onto the target object by generating appropriate memory read addresses. The address generator 21 does this using a ray tracing sub-division technique known as Bezier subdivision. The technique used relies on the fact, mentioned earlier, that Bezier curves can easily be sub-divided into two curves by simple operations on the control points which only require additions and divisions by two.
The purpose of the address generator 21 is to generate address data defining the location of filtered source image data in the memory 13 which can be used to generate each pixel of the object image for subsequently forming the output image. The process is based on the concept of generating a ray for each object image pixel and detecting whether that ray intersects the object, and if so to determine one or more storage locations in the memory 13 containing source image data to be used to form the object pixel for that ray. To determine the intersection between the ray and the object surface, the technique involves, for each ray, recursively sub-dividing the Bezier surface or patch into quadrants.
As mentioned earlier, in the case of a curve, the control points can be used to define a convex hull which completely contains the curve. The same applies to a Bezier surface, where the control points can be used to define a convex volume with polygonal sides, completely containing the surface. For ease of calculation, rather than considering the convex hull, the address generator uses a simpler "bounding box" based on the maximum and minimum of the x and y components, respectively, of the control points. The so-called "bounding box" is arranged to contain the surface patch and to be aligned with the coordinate axes. At each stage in the subdivision process, the address generator computes whether the ray intersects this bounding box. The subdivision process continues until a desired resolution is obtained. This subdivision technique will now be described in more detail with reference to Figures 14 and 15.
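Purely as a sketch, the bounding box and its intersection test might look as follows, assuming the control points have already been transposed so that each ray keeps a fixed (rx, ry) along its length, as described later for the preferred embodiment; the struct and function names are illustrative.

    typedef struct { double x, y, z; } point3;
    typedef struct { double xmin, xmax, ymin, ymax; } bbox;

    /* Bounding box of a patch: simply the extremes of the control points'
       x and y components, as described above. */
    bbox bounding_box(const point3 cp[16])          /* 16 patch control points */
    {
        bbox b = { cp[0].x, cp[0].x, cp[0].y, cp[0].y };
        for (int i = 1; i < 16; i++) {
            if (cp[i].x < b.xmin) b.xmin = cp[i].x;
            if (cp[i].x > b.xmax) b.xmax = cp[i].x;
            if (cp[i].y < b.ymin) b.ymin = cp[i].y;
            if (cp[i].y > b.ymax) b.ymax = cp[i].y;
        }
        return b;
    }

    /* For a parallel ray fixed at (rx, ry) the intersection test reduces
       to a simple containment check against the box. */
    int ray_hits_box(const bbox *b, double rx, double ry)
    {
        return rx >= b->xmin && rx <= b->xmax && ry >= b->ymin && ry <= b->ymax;
    }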
The sub-division process is illustrated schematically in Figure 14. Conceptually, each of the rays starts from an origin 'O' and passes through object space. Figure 14 illustrates a situation where an intersection has been established for the top left bounding box from the first subdivision S1 and a second subdivision S2 has been made within this bounding box as indicated by the dotted lines. This process continues until the desired resolution or accuracy has been achieved (here at subdivision Sn), and a result sub-patch RS can be determined. As illustrated in Figure 14, the rays for respective pixels diverge from the origin 'O'. However, such a diverging ray system provides severe processing penalties when testing whether an intersection between a ray and a bounding box has occurred. The preferred embodiment of the invention uses an approximation to the diverging ray approach in which parallel rays are employed, as will be described later.
Figure 15 is a flow diagram setting out the basic steps in the subdivision process.
The subdivision process starts at step 80 with the control points for the original patch. In the present example the original patch is the complete object surface. At this stage a depth value representative of a maximum possible depth is set in a depth register.
The first step, step 82, is to subdivide the original patch into four sub-patches by the appropriate calculations on the original control points, and to store the control points for the sub-patches on a stack.
The calculations to be performed to generate the control points for the sub-patches are based on the equations mentioned earlier with reference to Figure 3. As explained earlier, in a binary system, the calculations reduce to a simple combination of additions and shift operations. Associated with the control points (x, y and z values) for each sub-patch are variables for assembling u, v and z values for that sub-patch.
At each subdivision, an additional bit is formed for each of the u and v variables. A first subdivision S0 generates four subpatches with (u,v) values (0,0), (0,1), (1,0) and (1,1) respectively.
Subdivision S1 of the top left sub-patch will generate four subpatches with (u, v) values (00,00), (00,01), (01,00), (01,01). These (u,v) values are illustrated for the first two subdivisions S0 and S1 in Figure 14. It can readily be seen how this process can be extended for further subdivisions.
The control points for the first sub-patch on the stack are taken at step 84 and a bounding box is computed for this sub-patch. At step 86 a test is performed to see if the current ray intersects the current bounding box. If no intersection with the bounding box is detected at step 86, this sub-patch is discarded at step 88. If it is determined at step 90 that there is a further sub-patch on the stack, the logic branches back and the control points for the next sub-patch are taken from the stack at step 84. If no sub-patches are left on the stack the process terminates at step 92, the output of the process being the read address as derived from the u and v values stored in an output register which were accumulated for the result subpatch for which the closest intersection was detected. The horizontal address is generated by multiplying u by the maximum horizontal address (typically 1920 for high definition television). The vertical address is generated by multiplying v by the maximum vertical address (typically 1035 for high definition television).
If, at step 86 an intersection between the ray and the bounding box is detected, a test is made at step 94 as to whether the depth value for the sub-patch is less than the depth in the depth register.
If it is, then a test is made at step 96 to determine whether the desired resolution has been obtained. If the tests at step 94 and step 96 are both positive, the depth for the result sub-patch is stored in the depth register at step 98 and the u and v values for the sub-patch are stored in the aforementioned output register for subsequent output in step 92. The data stored in the depth and output registers overwrite any data previously stored there. If the depth value for the sub-patch is determined, in step 94, to be greater than or equal to the content of the depth register, then the result sub-patch is discarded at step 88 and processing continues at the test at step 90, to determine whether there is a further sub-patch to be processed. If, in step 96, it is determined that the desired resolution has not yet been obtained for a given ray intersection, the process branches back to step 82 where the current sub-patch is further subdivided into four sub-patches, the control points and the u, v and z data relating to the four sub-patches being added to the top of the stack.
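The flow of Figure 15 might be sketched in software as follows, assuming a single bicubic patch, sixteen subdivision levels, a software stack in place of the hardware stack of shift registers, and the address scaling by 1920 and 1035 mentioned above. All type and function names (patch, quadrant, trace_ray and so on) are assumptions introduced for illustration rather than taken from the patent.

    #include <string.h>

    #define MAX_LEVEL 16                         /* 16 subdivisions give 16-bit u, v */

    typedef struct { double x, y, z; } point3;
    typedef struct { point3 cp[4][4]; unsigned u, v; int level; } patch;

    static point3 mid(point3 a, point3 b)
    {
        point3 m = { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 };
        return m;
    }

    /* The curve subdivision of Figure 3, applied to one row or column. */
    static void split4(const point3 p[4], point3 l[4], point3 r[4])
    {
        point3 m = mid(p[1], p[2]);
        l[0] = p[0];            r[3] = p[3];
        l[1] = mid(p[0], p[1]); r[2] = mid(p[2], p[3]);
        l[2] = mid(l[1], m);    r[1] = mid(m, r[2]);
        l[3] = mid(l[2], r[1]); r[0] = l[3];
    }

    /* Extract the (uh, vh) quadrant of a patch: split the rows in u and keep
       one half, then split the columns of that half in v and keep one half. */
    static void quadrant(point3 in[4][4], point3 out[4][4], int uh, int vh)
    {
        point3 half[4][4], l[4], r[4], col[4], cl[4], cr[4];
        for (int i = 0; i < 4; i++) {
            split4(in[i], l, r);
            memcpy(half[i], uh ? r : l, sizeof l);
        }
        for (int j = 0; j < 4; j++) {
            for (int i = 0; i < 4; i++) col[i] = half[i][j];
            split4(col, cl, cr);
            for (int i = 0; i < 4; i++) out[i][j] = vh ? cr[i] : cl[i];
        }
    }

    /* Trace one parallel ray at (rx, ry).  Returns the object key bit Ko and,
       on a hit, the horizontal and vertical memory read addresses. */
    int trace_ray(point3 root[4][4], double rx, double ry,
                  unsigned *h_addr, unsigned *v_addr)
    {
        patch stack[3 * MAX_LEVEL + 2];          /* software stand-in for the     */
        int top = 0, hit = 0;                    /* hardware patch stack          */
        double best_depth = 1e30;                /* depth register: maximum depth */
        unsigned best_u = 0, best_v = 0;

        memcpy(stack[0].cp, root, sizeof stack[0].cp);
        stack[0].u = stack[0].v = 0;
        stack[0].level = 0;
        top = 1;

        while (top > 0) {
            patch cur = stack[--top];
            double xmin = cur.cp[0][0].x, xmax = xmin;
            double ymin = cur.cp[0][0].y, ymax = ymin, zmin = cur.cp[0][0].z;
            for (int i = 0; i < 4; i++)          /* bounding box of this sub-patch */
                for (int j = 0; j < 4; j++) {
                    point3 p = cur.cp[i][j];
                    if (p.x < xmin) xmin = p.x;
                    if (p.x > xmax) xmax = p.x;
                    if (p.y < ymin) ymin = p.y;
                    if (p.y > ymax) ymax = p.y;
                    if (p.z < zmin) zmin = p.z;
                }
            if (rx < xmin || rx > xmax || ry < ymin || ry > ymax)
                continue;                        /* ray misses the bounding box  */
            if (zmin >= best_depth)
                continue;                        /* hidden behind a nearer hit   */
            if (cur.level == MAX_LEVEL) {        /* desired resolution reached   */
                best_depth = zmin; best_u = cur.u; best_v = cur.v; hit = 1;
                continue;
            }
            for (int uh = 0; uh < 2; uh++)       /* push the four quadrants      */
                for (int vh = 0; vh < 2; vh++) {
                    patch *q = &stack[top++];
                    quadrant(cur.cp, q->cp, uh, vh);
                    q->u = (cur.u << 1) | uh;    /* one more address bit per     */
                    q->v = (cur.v << 1) | vh;    /* subdivision                  */
                    q->level = cur.level + 1;
                }
        }
        if (hit) {                               /* u, v are 16-bit fractions    */
            *h_addr = (unsigned)(best_u * 1920.0 / 65536.0);
            *v_addr = (unsigned)(best_v * 1035.0 / 65536.0);
        }
        return hit;
    }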
The address generator produces a key signal Ko for each object image pixel, the key signal Ko having a first value for object image pixels for which an intersection is found (i.e. within the object) and a second value for object image pixels for which no intersection is found (i.e. outside the object).
Figure 16 is a schematic overview of one hardware implementation of the sub-division logic described functionally with respect to Figure 15. The sub-division logic 100 comprises a patch memory interface 102 for interfacing with a memory (not shown) containing the Bezier control points for the patch to be sub-divided. Connected to the patch memory interface are fifteen sub-divide and select circuits 104. The patch memory (not shown) is organized as a stack of shift registers arranged to contain the control points of the original patch and, during processing, to contain the control points for the patches resulting from the sub-division process.
Each of the sub-divide and select circuits operates bit serially to sub-divide the rows of the control point array, to select one of the halves, to sub-divide the columns of that half and to select one of its halves using the sub-division formulae described above with reference to Figure 3. There are fifteen sub-divide and select circuits for each of the x, y and z of the patch control data. As the circuits operate in a binary manner, the sub-division process can be performed as a combination of add and shift operations. The circuit 106 performs the final (i.e. the sixteenth) sub-division and produces at its output the control points for the four quadrants (i.e. the four sub-patches) produced from the original patch; the fifteen sub-divide and select circuits 104 and the further circuit 106 provide sixteen bits of resolution for addressing the source pixel data from the memory 13. In other words, as described above, each sub-division produces one bit of the u and v values for addressing the memory 13 so that 16 subdivisions provide 16 bit addresses.
The circuit 108 computes the bounding box for each of the four quadrants. In other words, the bounding box circuit 108 selects, for each quadrant, the maximum x and y values and the minimum x and y values in any of the control points for that quadrant. Thus, conceptually, the bounding box circuit 108 defines a box which wholly contains the quadrant (i.e. the sub-patch).
The test circuit 112 checks whether an x and y value defining the location of the ray is within the minimum and maximum x value, and the minimum and maximum y value, respectively, and this for each quadrant.
As mentioned above, the preferred embodiment of the invention utilizes a parallel ray system. In other words, at the time of generating the control points defining the original object surface, the control points for that object surface are transposed from true original object space into a perspective viewing space. In other words, a perspective projection mapping of the control points is performed such that the x and y values for each control point are multiplied by a factor depending on the z value of that control point.
The effect of this transposition would be to convert a cube in true object space into a truncated pyramid in perspective space with the base of the pyramid towards the viewer and with the sides sloping towards the apex disappearing away from the viewer. This means that, rather than applying diverging rays, it is possible to apply parallel rays where each ray has a particular x and y value independent of the z value at any point along its length. The use of parallel rays with the perspective transposition of the control points is generally equivalent to the use of diverging rays with non-transposed control points. With this approach, the test for intersection between a ray and a bounding box reduces to merely testing whether the ray lies within the x and y boundaries of that box.
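The patent does not give the scaling factor used for the transposition, so the following sketch simply assumes a conventional perspective division by an assumed viewing distance d; it is intended only to illustrate that, after such a transposition, each parallel ray keeps a fixed x and y along its whole length.

    typedef struct { double x, y, z; } point3;

    /* Assumed transposition: x and y are scaled by d / (d + z), so points
       further from the viewer (larger z) are pulled towards the axis.
       The factor is an assumption; d + z must be positive. */
    point3 to_perspective_space(point3 p, double d)
    {
        double s = d / (d + p.z);
        point3 q = { p.x * s, p.y * s, p.z };
        return q;
    }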
If it is determined in the test circuitry 112, that the ray in question passes through the bounding box for one of the four quadrant sub-patches, a comparison of the minimum z values for that quadrant sub-patch is made at 114 to a z value currently stored for that ray in a depth register. If it is determined that the bounding box is nearer to the viewer than the point indicated by the z value currently stored in the depth register, this means that an intersection has been detected which is nearer to the viewer than any previous intersection detected for that ray.
Sort circuitry 116 sorts the various depth values stored and discards the control data for quadrant subpatches where no intersection occurred. Circuitry 110 checks that the output of the bounding box circuit 108 is at the desired resolution. The check resolution circuit 110 provides signals to control circuit 120 for controlling operation of the address generation circuitry 100. The output of the address generation circuitry 100 is a u value, a v value and an object key bit Ko = 1 where an intersection is determined for the ray in question and an object key bit Ko = 0 where no intersection is found. The u and the v values determine the location in the memory 13 at which the data which is to form the output pixel for that ray is to be located. The object key bit Ko is used for keying the object into a background image for generating the final output image.
The circuit outlined in Figure 16 could be implemented on a single VLSI circuit. The circuit outlined in Figure 16, however, merely produces an address in the memory 13 for a single ray (i.e. for a single object pixel at a time), whereas a typical high definition image comprises an array of 1920 x 1125 pixels. Using current technology, it is not practicable for one circuit such as shown in Figure 16 to process all the pixels of a high, or normal, definition television system at a rate suitable for real time processing. It will also be noted that, because of the recursive nature of the processing, different circuits will produce their outputs at different times, in accordance with the amount of processing which needs to be performed.
Figure 17 is a schematic diagram of an implementation of the address generator 21 which takes account of these various aspects. A plurality of the circuits 100 are arranged as an asynchronous address generation network on a plurality (in the preferred embodiment 16) of cards 123. Each of the cards 123 carries a plurality of the circuits 100. Initial control point data is supplied from the control unit 25 via a bus 122 and is stored in a control point memory 121 which the circuits 100 can access. However each circuit 100 incorporates or is associated with its own patch memory (not shown). The circuits 100 are additionally connected to a synchronous bus 124 which is in turn connected to an intelligent buffer stage 126. The buffer 126 comprises processing logic for polling the individual circuits 100 on the cards 123 to determine the addresses for accessing the memory 13. The buffer 126 enables the addresses to be assembled so that the data may be read from the memory 13 in the order required for generating the output video image sequence. The addresses are output on the address bus 128.
In the above description, it has been assumed that there is a one to one mapping between the object image pixels and the input image pixels.
It has to be noted, however, that due to compression and/or rotation, the locations of the mapped samples may not correspond to exact pixel sites (discrete raster or array positions spaced by 1/FS) of the output image signal or output array.
Accordingly, it is arranged that the desired resolution referred to above corresponds to a sub-pixel accuracy. In this way, the address generated by the address generation logic comprises a major address portion (e.g. 12 bits for high definition), which is used for addressing the memory 13, and a residual address portion (e.g. 4 bits) which is used for controlling a pixel interpolator 14 which is positioned after the memory 13. The pixel interpolator is responsive to the value of the position increment between the location in the filtered image specified by the major address portion and the exact position in the filtered image which would map onto the current object pixel, as represented by the residual address portion, to interpolate between the values of available filtered input samples or words in the memory 13. The pixel interpolation logic does this to produce, for each object pixel, an interpolated sample or word which corresponds to that point in the input image which, using the mapping function, would map exactly on to the selected object pixel site. That interpolated sample or word, rather than specific input samples or words, is output from the pixel interpolation means. Thus, a precise desired relationship (as dictated by the mapping function) is maintained between pixels of the object image and the different positions of the memory 13. An interpolation technique of this type is described in UK Patent Application Publication No. GB-A-2 172 167 (Sony Corporation).
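The split between major and residual address portions might be used as in the following sketch of a bilinear pixel interpolation, assuming a 4-bit residual in each direction, a line width of 1920 stored samples, and one 8/10-bit sample per memory word. The layout and names are assumptions, and the actual interpolator of GB-A-2 172 167 may differ.

    #include <stdint.h>

    #define LINE_WIDTH 1920                      /* assumed samples per stored line */

    /* mem points to the filtered source image.  The major address selects the
       nearest stored sample; the 4-bit residuals (0..15) weight a bilinear
       interpolation between it and its right, lower and lower-right neighbours
       (bounds clamping at the picture edge is omitted in this sketch). */
    uint16_t interpolate_pixel(const uint16_t *mem, unsigned maj_h, unsigned maj_v,
                               unsigned res_h, unsigned res_v)
    {
        const uint16_t *row0 = mem + (unsigned long)maj_v * LINE_WIDTH + maj_h;
        const uint16_t *row1 = row0 + LINE_WIDTH;
        unsigned a = row0[0], b = row0[1], c = row1[0], d = row1[1];

        /* horizontal then vertical weighting by the residual fractions */
        unsigned top = a * (16 - res_h) + b * res_h;
        unsigned bot = c * (16 - res_h) + d * res_h;
        return (uint16_t)((top * (16 - res_v) + bot * res_v + 128) >> 8);
    }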
If the subdivision process is taken to a sub-pixel resolution, this provides good mapping within the body of individual objects.
However, it can still lead to aliased edges of those objects. Figures 18A and 18B illustrate this problem and the solution to the problem respectively. Figure 18A illustrates two adjacent cells each corresponding to a respective pixel position, with a single ray centred on the centre of each pixel. The edge 130 of an object passes through the pixel cells with the object defined to the bottom and right of the object edge. It can be seen that the ray in the left hand pixel position lies outside the edge of the object. The ray for the right hand pixel position, however, lies within the object. The result of sampling as indicated in Figure 18A would produce an output image with the left-hand pixel position being set to the background and the right-hand pixel being set to the image. This would clearly result in an aliased, or jagged, edge.
Figure 18B illustrates the approach adopted in the present system. Where an edge is detected, at the desired pixel resolution, multiple rays are fired at the pixel position, in order to determine the percentage of the patch centred on that pixel which is contained within the object. In Figure 18B, for example, from the 36 rays fired at the left hand pixel position, 15 rays are seen to intersect the object. This information can be used in order to set that pixel to an appropriate value. However, it will be noted that in order to be able to set the intensity of that pixel at the correct value, the intensity of the adjacent object, or background, needs to be ascertained. As a result of this, the edge detection and processing is performed by an edge detector 36 and an edge processor 38 in the key control processor 23. The output of the key control processor is in the form of an antialiased edge map which is superimposed on the output from the memory 13 and pixel interpolator 14 to generate an output video signal as indicated schematically in Figure 5.
Figure 19 illustrates the principle of the edge detection logic.
Figure 19 shows one corner of an output image which is traversed by an edge 130. The small squares indicate pixel cells and the black dots in those squares represent the rays traced in order to determine the intersections with the object. In the example shown in Figure 19 it is assumed that the area to the top left of the edge 130 is outside the object (i.e. this would be displayed as background colour pixels) and the area to the bottom right of the edge 130 is within the object. The edge actually passes through the following numbered cells PC (vertical, horizontal): (4,0), (3,0), (3,1), (2,1), (2,2), (2,3), (1,3), (1,4), (0,4), (0,5), (0,6).
It can be seen from Figure 19 that certain of the rays fell within the object, and certain of the rays fell outside the object. If these pixel values were displayed directly, the result would be an aliased edge showing a step like form. The objective of the edge detect and sub-division logic is to avoid the stepped edge effect.
The edge detection logic operates by comparing groups of four pixel cells. The edge detection logic tests whether, within any group of four pixel cells, there are: (a) four hits; (b) four misses; or (c) a mixture of hits and misses and/or address discontinuities. In cases (a) and (b) the edge detection logic determines that no edge occurs within that group of four pixel cells. Where a mixture of hits and misses is found, it is assumed that an object-background edge passes within that group of four pixel cells. Where an address discontinuity is detected, it is assumed that there is a fold or the like in the object surface.
In both these latter cases, the area of four cells is then supersampled by firing more rays within that group of four cells to determine the exact path of the edge. The sub-division address generation process as described above with reference to Figures 14 and 15 is performed for each of the rays to determine the memory address of the appropriate source pixel (or group of source pixels where there is no exact mapping) which will determine an image value for that ray. The results of the further ray tracing operations for each pixel cell are then averaged to determine an edge value for that pixel cell. It is these averaged pixel cell values which are then output to form the edge map to be superimposed on the output image data in the keyer 17.
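The group test described in the two preceding paragraphs can be summarised in a short sketch: a 2 x 2 group is flagged for supersampling when it contains a mixture of hits and misses, or when the generated read addresses jump by more than a small tolerance (suggesting a fold in the surface). The data layout, names and discontinuity threshold below are assumptions made for illustration only.

```cpp
// Classification of a group of four pixel cells, sketch only.
#include <cmath>

struct CellResult {
    bool hit;       // ray intersected the object at pixel resolution
    float srcX;     // horizontal read address produced by the subdivision
    float srcY;     // vertical read address
};

enum class GroupClass { AllHits, AllMisses, Edge };

GroupClass classifyGroup(const CellResult c[4], float maxStep = 2.0f) {
    int hits = 0;
    for (int i = 0; i < 4; ++i) hits += c[i].hit ? 1 : 0;
    if (hits == 0) return GroupClass::AllMisses;   // case (b): no edge here
    if (hits == 4) {                               // case (a), unless addresses jump
        for (int i = 1; i < 4; ++i) {
            if (std::fabs(c[i].srcX - c[0].srcX) > maxStep ||
                std::fabs(c[i].srcY - c[0].srcY) > maxStep)
                return GroupClass::Edge;           // address discontinuity: fold
        }
        return GroupClass::AllHits;
    }
    return GroupClass::Edge;                       // mixture of hits and misses
}
```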
Figure 20 illustrates, in schematic form, the preferred embodiment of the edge detector 36 and the edge processor 38.
The edge detector 36 includes control logic for performing the comparison of the groups of four pixel cells as described above with reference to Figure 19. In order that it may compare the addresses generated by the address generator 21, it receives the addresses output by the address generator shown in Figure 17 from the address bus 133.
The edge detector produces an edge key signal Ke having a first value for pixels corresponding to an edge, and a second value for other pixels.
The edge processor 38 comprises an asynchronous computation stage 134 and an intelligent buffer 136. Each of the cards 138 contains a plurality of ray trace and sub-division circuits as described with reference to Figure 16. In other words, each of these circuits is essentially the same as the circuit 100 on the cards of the address generator described with reference to Figure 16. Each of the circuits 100 performs the ray trace and sub-division processing as described above for a plurality of rays within a pixel cell identified as being on an edge by the edge detector 36. The addresses generated by the circuits 100 are used for addressing the mapped source image data. As, using current technology, it is not possible to address the memory 13 with sufficient bandwidth to support both the address generation of the address generator shown in Figure 17 and the address generation necessary for the edge detection and processing, the edge processing logic comprises one or more storage units 140 which correspond generally to the memory 13, but without the pixel interpolation stage 14. Thus, the information content of the memory 13 is replicated in respective memories 140, the data being supplied via the bus 141. It is these memories 140 that are addressed by the addresses generated by the circuits 100 in the edge processor 38 illustrated in Figure 20. In the preferred embodiment, for bandwidth reasons and to enable the efficient generation of pixel data, each of a plurality of processor cards 138 comprises one of the memories 140. In addition, the edge processor receives background information over the bus 145 in order that background-object edge supersampling may be performed. The output of the asynchronous stage 134 comprises pixel values for each of the rays for each edge pixel cell.
The intelligent buffer 136 comprises logic which interrogates the individual processing circuits 100 via the data bus 142 to determine individual intensity values for the constituent rays. The buffer 136 collates and processes these individual intensity values to generate a composite intensity value for each pixel cell. The buffer 136 aligns these pixel cell values so that they may be output over a data bus 144 to be keyed with the output pixels from the pixel interpolator 14 and merged with the background information to generate the composite output picture.
Figure 21 is a schematic overview of the keyer 17. The keyer receives as inputs: background information B for forming the background of the output image OI, the object image pixels of the object image O, the edge pixels of the edge image EDGE, a key signal Ko for the object image, and a key signal Ke for the edge image. The key signal Ko for the object image is in the form of ones and zeros defining a mask identifying which pixels of the object image O and which pixels from the background are to be displayed on the screen. Thus, the key signal Ko is used to control a gate 150 for gating the object image O. The inverse of the signal Ko, produced by an inverter 151, is used to control a gate 152 for gating the background image information. The outputs of the gates 150 and 152 are combined in an adder 154 to generate the basic output signal on a line 156. In order to avoid edge aliasing effects, the edge image EDGE is superimposed on the output of the adder 154. The key signal Ke is used for controlling a gate 158 to gate the edge signal EDGE. An inverter 157 inverts the signal Ke, the inverted signal being used to control a gate 160 for gating the output of the adder 154 on the line 156. The outputs of the gates 158 and 160 are added in an adder 162 to produce the output image on the line 164. The effect of the keyer shown in Figure 21 is to superimpose the object image O on the background in accordance with the mask defined by the signal Ko and then, on the result of that superimposition, to further superimpose the edge pixels defined by the signal EDGE in accordance with the key signal Ke.
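Per pixel, the two-stage gating just described reduces to a simple selection, sketched below under the assumption that both key signals are binary masks (the anti-aliased blending having already been baked into the edge pixel values by the edge processor). The function name and signature are illustrative only.

```cpp
// Per-pixel view of the keyer 17 (Figure 21), sketch only.
#include <cstdint>

uint8_t keyerPixel(uint8_t background, uint8_t object, uint8_t edge,
                   bool ko, bool ke) {
    // Gates 150/152 and adder 154: select object or background under Ko.
    uint8_t base = ko ? object : background;
    // Gates 158/160 and adder 162: superimpose the anti-aliased edge pixels
    // wherever Ke marks an edge cell.
    return ke ? edge : base;
}
```

In the patent's hardware these selections are implemented by the gate, inverter and adder pairs 150-154 and 158-162 rather than by conditional logic.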
There has been described a digital video effects system which enables real-time video texture mapping onto non-linear object surfaces. It will be appreciated that many additions and/or modifications to the particular system described are possible within the scope of the present invention.

Claims (10)

1. A digital video effects system for mapping a source video image comprising an array of source pixel values onto a non-linear 3D object surface to produce an output image, the system comprising a memory for temporarily storing an array of image pixels, control means for establishing a function defining the 3-D non-linear surface and an address generator responsive to the surface function established by the control means to compute memory read addresses by tracing rays for respective output pixels, the address generator determining a memory read address for the output pixel corresponding to a ray on detecting intersection between the ray and the surface at a desired resolution.
2. A digital video effects system as claimed in Claim 1 wherein the address generator, for determining a memory read address, performs recursive subdivision of the surface function for each ray such that each subdivision generates a plurality of surface patches.
3. A digital video effects system as claimed in Claim 2 wherein the address generator, on each subdivision, accumulates part of a horizontal address component and part of a vertical address component and stores a depth value for each patch, and wherein the address generator, where intersections are detected between a ray and more than one patch at the desired resolution, selects the address indicated by the horizontal and vertical address components of the patch having the shallowest depth value.
4. A digital video effects system as claimed in Claim 3 comprising pixel interpolation means, wherein the desired resolution is a subpixel resolution whereby the address generator determines, for each ray, major horizontal and vertical address components for addressing the memory and residual horizontal and vertical address components for controlling the interpolator to interpolate between pixel values accessed from the memory to determine an output pixel corresponding to the ray.
5. A digital video effects system as claimed in any one of the preceding Claims wherein the address generator comprises an asynchronous stage including a plurality of parallel address processors, each address processor performing recursive subdivision of the surface function for determining intersection between that surface function and a respective ray, and a synchronous section which receives the results of the asynchronous stage and generates output addresses in the correct scanning order for the pixels of the output image.
6. A digital video effects system as claimed in any one of the preceding Claims wherein the address generator traces parallel rays for determining intersections, the control means establishing a surface function representative of the desired 3-D non-linear object surface transposed into perspective viewing space.
7. A digital video effects system as claimed in any one of the preceding Claims comprising: an area mapper for logically sectioning the source image area into a plurality of sub-areas and for computing a mapping of the sub-area corners in accordance with the surface function; a scaling factor generator responsive to the output of the area mapper to generate horizontal and vertical local scaling factors for respective sub-areas in dependence upon the respective degrees of horizontal and vertical compression of those sub-areas on mapping onto the surface function; and a digital filter with variable horizontal and vertical bandwidths for filtering the pixels of the source image before storage in the memory in accordance with the local scaling factors for the areas to which the pixels belong.
8. A digital video effects system as claimed in any one of the preceding Claims wherein the surface function is a Bezier surface function, wherein the control means determines a set of control points defining the Bezier surface function for the desired surface and wherein the area mapper and the address generator are responsive to the set of control points.
9. A digital video effects system as claimed in any one of the preceding Claims wherein the control means is a computer workstation incorporating control logic.
10. A digital video effects system substantially as hereinbefore described with reference to Figures 2 to 21 of the accompanying drawings.
GB9107495A 1991-04-09 1991-04-09 Digital video effects system Expired - Lifetime GB2254751B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB9107495A GB2254751B (en) 1991-04-09 1991-04-09 Digital video effects system
JP4089145A JPH05183810A (en) 1991-04-09 1992-04-09 Digital video effect device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9107495A GB2254751B (en) 1991-04-09 1991-04-09 Digital video effects system

Publications (3)

Publication Number Publication Date
GB9107495D0 GB9107495D0 (en) 1991-05-22
GB2254751A true GB2254751A (en) 1992-10-14
GB2254751B GB2254751B (en) 1994-11-09

Family

ID=10692934

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9107495A Expired - Lifetime GB2254751B (en) 1991-04-09 1991-04-09 Digital video effects system

Country Status (1)

Country Link
GB (1) GB2254751B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993012502A1 (en) * 1991-12-18 1993-06-24 Ampex Systems Corporation Video special effects system
EP0567219A1 (en) * 1992-04-24 1993-10-27 Sony United Kingdom Limited Video special effects apparatus and method
EP0574111A1 (en) * 1992-04-24 1993-12-15 Sony United Kingdom Limited Lighting effects for digital video effects system
EP0751486A2 (en) * 1995-06-30 1997-01-02 Matsushita Electric Industrial Co., Ltd. Method and apparatus for rendering and mapping images

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0362123A2 (en) * 1988-09-26 1990-04-04 Visual Information Technologies, Inc. High-speed image rendering method using look-ahead images
US4928250A (en) * 1986-07-02 1990-05-22 Hewlett-Packard Company System for deriving radiation images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4928250A (en) * 1986-07-02 1990-05-22 Hewlett-Packard Company System for deriving radiation images
EP0362123A2 (en) * 1988-09-26 1990-04-04 Visual Information Technologies, Inc. High-speed image rendering method using look-ahead images

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993012502A1 (en) * 1991-12-18 1993-06-24 Ampex Systems Corporation Video special effects system
EP0567219A1 (en) * 1992-04-24 1993-10-27 Sony United Kingdom Limited Video special effects apparatus and method
EP0574111A1 (en) * 1992-04-24 1993-12-15 Sony United Kingdom Limited Lighting effects for digital video effects system
US5361100A (en) * 1992-04-24 1994-11-01 Sony United Kingdom Limited Apparatus and method for transforming a video image into a three dimensional video image with shadows
EP0751486A2 (en) * 1995-06-30 1997-01-02 Matsushita Electric Industrial Co., Ltd. Method and apparatus for rendering and mapping images
EP0751486A3 (en) * 1995-06-30 1997-12-29 Matsushita Electric Industrial Co., Ltd. Method and apparatus for rendering and mapping images
US5852446A (en) * 1995-06-30 1998-12-22 Matsushita Electric Industrial Co., Ltd. Rendering apparatus and rendering method, and mapping apparatus and mapping method

Also Published As

Publication number Publication date
GB2254751B (en) 1994-11-09
GB9107495D0 (en) 1991-05-22

Similar Documents

Publication Publication Date Title
US5461706A (en) Lighting effects for digital video effects system
US4821212A (en) Three dimensional texture generator for computed terrain images
US4583185A (en) Incremental terrain image generation
EP0208448B1 (en) Methods of and circuits for video signal processing
US4489389A (en) Real time video perspective digital map display
US4667190A (en) Two axis fast access memory
US4715005A (en) Terrain/seascape image generator with math model data base
US6782130B2 (en) Rendering of photorealistic computer graphics images
EP0221704B1 (en) Video signal processing
EP0637813B1 (en) Image processing
US4343037A (en) Visual display systems of the computer generated image type
US6707452B1 (en) Method and apparatus for surface approximation without cracks
Cohen et al. Photo‐Realistic Imaging of Digital Terrains
KR100567204B1 (en) An improved method and apparatus for per pixel mip mapping and trilinear filtering
EP0223160A2 (en) Memory efficient cell texturing for advanced video object generator
US20010020948A1 (en) Method and apparatus for effective level of detail selection
GB2266425A (en) Lighting effects for digital video effects system
US4899295A (en) Video signal processing
GB2157910A (en) Improvements in or relating to video signal processing systems
JPH0752925B2 (en) Video signal processor
EP0656609B1 (en) Image processing
GB2254751A (en) Digital video effects; ray tracing for mapping a 2-d image onto a 3-d surface
EP1058912B1 (en) Subsampled texture edge antialiasing
GB2254750A (en) Digital video effects system; avoiding aliasing at object edges by supersampling.
US5313566A (en) Composite image generation with hidden surface removal using a single special effect generator

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
PE20 Patent expired after termination of 20 years

Expiry date: 20110408