GB2246057A - Shading a 3-dimensional computer generated image - Google Patents

Shading a 3-dimensional computer generated image

Info

Publication number
GB2246057A
GB2246057A GB9112331A
Authority
GB
United Kingdom
Prior art keywords
region
ray
sub
pixel
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9112331A
Other versions
GB9112331D0 (en)
Inventor
Simon James Fenney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rank Cintel Ltd
Original Assignee
Rank Cintel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rank Cintel Ltd filed Critical Rank Cintel Ltd
Publication of GB9112331D0
Publication of GB2246057A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading

Abstract

A region (30) of a 3-dimensional computer generated image is shaded by firing a first cone shaped ray (32) at the region. A list of ray/object (A, B, C) intersections is compiled and then a second thin ray (34) is fired from each corner pixel in the region (30). The ray/object intersections of each thin ray are compared with the intersections in the list and a shade is derived and stored for each corner pixel from the closest intersection with the thin ray (34). Shades for the other pixels in the region are then derived from the shades stored for the corner pixels.

Description

SHADED 3D IMAGES

BACKGROUND OF THE INVENTION

This invention relates to the computer generation of 3D shaded images and in particular to the interactive modelling of 3D shaded images for the creation of 3D video animations.
When creating a computer generated 3D image an artist positions the objects, lights, and cameras involved in the animation on the computer screen interactively. He then uses a rendering algorithm to produce the 3D image seen by the camera he selects. High quality rendering algorithms which would be used to produce the final image can take up to 40 minutes to compute a video frame. This is clearly far too slow for interactive use and instead well known fast but low quality rendering algorithms such as wire frame or Gouraud shaded polygons are employed.
Various algorithms are typically used in producing 3D shaded images and these include hidden surface removal algorithms to locate the objects in the scene and ensure that no hidden surfaces give contributions to the final image, and ray tracing algorithms for determining the shades or colours to be applied to individual pixels in the display. A basic ray tracing algorithm is described in "An Improved Illumination Model for Shaded Display" (T. Whitted, Comm. ACM, 23, 6, pp. 343-349, June 1980) and also in "A Hidden Surface Algorithm for Computer Generated Half Tone Pictures" (J. Warnock, Univ. Utah Computer Science Dept., TR 4-15, 1969, NTIS AD-733671). A ray tracing scheme using a variation of Warnock's algorithm is described in "Ray Tracing with Cones" (J. Amanatides, Computer Graphics, Vol. 18, No. 3, pp. 129-135, July 1984). However, this method will not necessarily sub-divide down to individual pixels.
Ray tracing is a method of generating 3D images by computer and is generally used in the production of high quality images featuring reflections, refractions and shadows.
In the basic ray tracing algorithm, each pixel of the display has a shade determined for it by "firing a ray" from this pixel through "the camera" and into the synthetic scene as seen by the artist on his computer screen.
Figure 1 shows the type of image the artist will see on the screen and this comprises a simulated pin hole camera 2 with the pixels for the display positioned on its back plate. Two objects are in the scene in figure 1 and these are a sphere 4 and a cube 6. In figure 1 the shade of pixel 8 is determined by firing an infinitely thin ray 10 through the pin hole camera. This intersects the sphere and the colour at the intersection point with the sphere is the colour which will be used for pixel 8. The colour may be modified by firing further rays from the intersection point on the sphere to approximate shadows, reflections and refractions. A reflected ray 12 intersecting the cube is shown in figure 1. The rays used in these ray tracing systems are generally modelled as infinitely thin lines.
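By way of illustration only, the basic per-pixel trace just described might be sketched as follows in Python; the Sphere type and helper names are assumptions for the example, and the scene is reduced to spheres for brevity.
```python
import math
from dataclasses import dataclass

# Minimal sketch of the basic ray trace described above; the Sphere
# type and helper names are illustrative assumptions, not the patent's.

@dataclass
class Sphere:
    centre: tuple   # (x, y, z)
    radius: float
    colour: tuple   # (r, g, b)

def intersect_sphere(origin, direction, s):
    """Distance along a unit-length ray to the sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, s.centre))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - s.radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade_pixel(origin, direction, scene, background=(0.0, 0.0, 0.0)):
    """Colour of the nearest ray/object intersection, as for pixel 8."""
    hits = [(intersect_sphere(origin, direction, s), s) for s in scene]
    hits = [(t, s) for t, s in hits if t is not None]
    if not hits:
        return background
    _, nearest = min(hits, key=lambda h: h[0])
    return nearest.colour
```
Secondary rays for shadows, reflections and refractions, like the reflected ray 12, would be spawned recursively from the hit point before the colour is returned.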
Figure 2 shows the scene of figure 1 in plan view and an artist could position the objects on his screen in this manner if desired.
Amanatides proposed a variation on the normal ray tracing algorithm using "cone" shaped rays emanating from each pixel instead of the infinitely thin rays used in the basic ray tracing methods. This helps overcome the problem of aliasing where there is a boundary between one object and another or between an object and the background, since the cone encompasses the whole pixel and thus that pixel can take contributions from two or more intersecting objects or from the background when its colour is calculated.
All the ray tracing algorithms, including the Amanatides algorithm using cone shaped rays, require rays to be fired for every pixel in the scene and intersection tests performed for every object each ray intersects. They are thus computationally expensive in terms of time and not directly suitable for interactive display when, for example, a 3D animation sequence is being produced.
The present invention provides a method and apparatus for producing a 3D shaded image suitable for use in an interactive application with a tradeoff of quality for speed. A preferred embodiment has a user-controllable resolution versus time tradeoff.
The invention is defined with more precision in the appended claims to which reference should now be made.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

An embodiment of the invention will now be described in detail by way of example with reference to the accompanying drawings, in which:
Figure 1 shows a schematic view of objects and pixels to be shaded as would be seen by an artist on a computer screen;
Figure 2 shows a plan view of the arrangement of figure 1;
Figure 3 shows a plan view of ray/object intersections in an embodiment of the invention;
Figure 4 shows one possible sub-division of a region of pixels to be shaded; and
Figure 5 is a flow diagram illustrating the method of one embodiment of the invention.
Figure 3 shows a sub-set of the pixels to be shaded and within this a region 30 has been selected by an artist to determine at least approximately what the shades of the pixels in the region will be. The process of designating the region 30 is performed in the computer in a well known manner.
A fat cone shaped ray 32 is fired via the simulated pin hole camera of figure 1 into the 3-dimensional space to be rendered and this ray is large enough to cover all the pixels in the region 30. Ray/object intersection tests are then carried out for the whole of the fat cone shaped ray 32 to determine which of the objects lie at least partially within the ray. Thus for the arrangement of figure 3 objects B and C will be detected but not object A. The ray/object intersections are then compiled into a distance ordered list.
Clearly the ray/object intersection tests only need to be performed for the pixels within the region 30 and not for all pixels within the cone 32.
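The description does not specify how the cone/object tests are performed, so the sketch below substitutes a common simplification: each object is wrapped in a bounding sphere and tested against the cone's width at the object's depth, and the survivors are sorted by distance. All names are illustrative.
```python
import math
from dataclasses import dataclass

# Illustrative stand-in only: bounding spheres and an approximate
# cone/sphere overlap test, used here to build the distance ordered
# list for the fat cone shaped ray 32.

@dataclass
class BoundedObject:
    name: str
    centre: tuple    # bounding sphere centre (x, y, z)
    radius: float    # bounding sphere radius

def cone_intersects(apex, axis, half_angle, obj):
    """Approximate test: is obj's bounding sphere within the cone's
    radius (plus obj.radius) at the object's depth? axis is unit length."""
    v = tuple(c - a for c, a in zip(obj.centre, apex))
    depth = sum(x * y for x, y in zip(v, axis))      # distance down the axis
    if depth <= 0:
        return False                                 # behind the camera
    perp_sq = max(sum(x * x for x in v) - depth ** 2, 0.0)
    cone_radius = depth * math.tan(half_angle)       # cone width at that depth
    return math.sqrt(perp_sq) <= cone_radius + obj.radius

def fat_ray_list(apex, axis, half_angle, objects):
    """Distance ordered list of objects at least partly inside the fat
    cone shaped ray (B and C, but not A, in figure 3)."""
    inside = [o for o in objects if cone_intersects(apex, axis, half_angle, o)]
    return sorted(inside, key=lambda o: sum((c - a) ** 2
                                            for c, a in zip(o.centre, apex)))
```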
Following this, a thin ray 34 is fired from each corner pixel in the region. These thin rays can be modelled as infinitely thin rays or as thin cone shaped rays.
Object intersections of the thin rays 34 are then compared with the intersections in the distance ordered list compiled from the fat cone shaped ray 32. Preferably the thin ray intersections are first compared with the closest objects intersected by the fat ray 32. As can be seen from figure 3, two of the thin rays in the corners of the region hit the background while the top left and bottom right rays intersect objects B and C respectively.
A shade for each corner pixel is then derived from the closest ray/object intersection of the thin ray. This value is stored in an array which has an entry for every pixel to be shaded.
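By way of illustration, the corner pass might proceed as in the following sketch; the ray and object interfaces (fire_thin_ray, obj.intersect, obj.shade_at) are assumed names used only to show the order of the tests, not anything defined in this description.
```python
# Sketch of the corner pass: fire a thin ray 34 from each corner pixel
# and test it against the fat ray's distance ordered list, nearest
# objects first. All interfaces here are illustrative assumptions.

def shade_corner_pixels(region_corners, fire_thin_ray, ordered_hits,
                        shade_array, background):
    for (px, py) in region_corners:               # the four corner pixels
        ray = fire_thin_ray(px, py)               # thin (or thin cone) ray
        colour = background                       # default: ray hits nothing
        for obj in ordered_hits:                  # closest candidates first
            t = obj.intersect(ray)                # thin ray/object test
            if t is not None:
                colour = obj.shade_at(ray, t)     # shade at closest hit
                break
        shade_array[py][px] = colour              # one entry per pixel
```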
A decision is then made as to whether sufficient data is available to produce an acceptable shading model of the region 30 or whether the region needs to be sub-divided into smaller regions. The decision as to whether or not sub-division is required is made in accordance with the quality/resolution required in the final image and the time available for the shading to be performed. Examples of the criteria on which sub-division of the regions is based, depending on accuracy and time constraints, are given below:
1. If the corner pixel shadings differ by some threshold then sub-divide further. This can of course result in objects falling between gaps in the pixels at which thin rays are fired but will give acceptable results. To guarantee a certain degree of accuracy in the final image a maximum and minimum sampling distance can be set. Sub-division will continue until the distance between corner pixels is less than the maximum sampling distance but will not continue when this distance is less than the minimum sampling distance.
2. If the thin rays fired at corner pixels hit more than one (or two or more) different objects including the background then sub-divide.
3. If the corner pixels do not intersect the closest object intersected by the fat ray 32 then sub-divide. If this approach is taken then it is preferable to fire a further fat cone shaped ray at each sub-divided area and to compile a distance ordered list of intersections for that sub-divided area.
4. Designate a minimum area for sub-division and continue sub-dividing until the sub-divisions reach the size of this minimum user area. This criterion can be used in combination with any of the other criteria listed above.
Other criteria for determining whether or not sub-division needs to be made can obviously be selected.
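By way of example, criterion 1 above and the maximum/minimum sampling distances might be combined into a single test as follows; the threshold value and all names are illustrative assumptions.
```python
# Sketch combining criterion 1 with the maximum/minimum sampling
# distances; the 0.1 threshold is illustrative, not from the patent.

def needs_subdivision(corner_shades, corner_spacing,
                      max_spacing, min_spacing, shade_threshold=0.1):
    """corner_shades: four (r, g, b) tuples; corner_spacing: distance in
    pixels between adjacent corner pixels of the region."""
    if corner_spacing <= min_spacing:
        return False            # never sub-divide below the minimum distance
    if corner_spacing > max_spacing:
        return True             # guarantee the maximum sampling distance
    # Criterion 1: do the corner shades differ by more than a threshold?
    spread = max(max(channel) - min(channel)
                 for channel in zip(*corner_shades))
    return spread > shade_threshold
```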
Figure 4 illustrates a sub-divided region. In this particular embodiment the sub-divided regions are arranged to overlap by one pixel as indicated in the figure. This reduces the number of corner rays which need to be fired at each sub-divided subregion because adjacent regions share two corner pixels. The original region could be divided into two rectangles, which requires two new rays to be fired, or, as shown in figure 4, it can be divided into four regions. Sub-division into four regions with an overlap of one pixel requires five new rays to be fired as opposed to 12 new rays if there were no overlap.
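A sketch of the four-way split with a one pixel overlap follows; representing regions as inclusive pixel rectangles is an assumption made for the example.
```python
# Sketch of splitting a region into four quadrants that share the
# midpoint row and column, so adjacent quadrants share corner pixels.

def subdivide(x0, y0, x1, y1):
    """Split the inclusive pixel rectangle (x0, y0)-(x1, y1) into four
    overlapping quadrants."""
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    return [(x0, y0, mx, my), (mx, y0, x1, my),    # top-left,    top-right
            (x0, my, mx, y1), (mx, my, x1, y1)]    # bottom-left, bottom-right

# The only new corner pixels are the four edge midpoints and the centre:
# (mx, y0), (x0, my), (x1, my), (mx, y1) and (mx, my), i.e. five new
# rays, versus twelve if the quadrants did not share boundary pixels.
```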
Once the region has been subdivided down to a size where the resolution can be determined to an acceptable level in accordance with the criteria selected for sub-division, each region is shaded by using a linear interpolation between the shades (colours) determined for each corner pixel to determine the appropriate shading for all the other pixels in the region, and these shades are stored in the array.
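A sketch of this interpolation step is given below, assuming the shade array layout used above (shade_array[y][x] holding an (r, g, b) tuple); the bilinear weights reduce to the horizontal and vertical linear colour blends referred to later in this description.
```python
# Sketch of the fill step: a bilinear blend of the four stored corner
# shades across every pixel of the region. The array layout is assumed.

def fill_region(shade_array, x0, y0, x1, y1):
    c00, c10 = shade_array[y0][x0], shade_array[y0][x1]   # top corners
    c01, c11 = shade_array[y1][x0], shade_array[y1][x1]   # bottom corners
    w, h = x1 - x0, y1 - y0
    for y in range(y0, y1 + 1):
        v = (y - y0) / h if h else 0.0
        for x in range(x0, x1 + 1):
            u = (x - x0) / w if w else 0.0
            shade_array[y][x] = tuple(
                (1 - u) * (1 - v) * a + u * (1 - v) * b
                + (1 - u) * v * c + u * v * d
                for a, b, c, d in zip(c00, c10, c01, c11))
```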
When a whole screen of pixels is to be shaded the user selects a region size on which shading is first to be performed, and the first step is to fire a fat cone shaped ray at each region in the scene and determine the distance ordered list of objects for each of those regions. The thin rays for each corner pixel in a region are then fired and the ray intersection tests performed for that region before going on to the next region. When subdivision occurs it is preferable that a first sub-division is initially made on every region where this is necessary, in turn, rather than continuing sub-division on a first selected region until the required quality/resolution is achieved. A further sub-division can then be made on each region in turn. Using such an approach permits a time limit to be applied to the method such that if this limit is exceeded during the sub-division process it will be terminated. This type of approach can be implemented by placing the regions on a first in first out (FIFO) queue and performing the sub-division on each region in the queue in turn. Thus the subdivision is performed in a "breadth first" manner rather than the "depth first" order which would result if the regions were stored in a stack.
Using this breadth first approach ensures that the image detail will be roughly even throughout the entire initial region irrespective of when the process terminates. The use of a time limit allows the artist to guarantee that the shading of an entire image will terminate within a certain amount of time. This is a great advantage in an interactive system. Preferably the system allows the artist to decide whether the time limit should take precedence over any other criteria used to determine when the shading process should terminate.
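A minimal sketch of this control loop is given below; the helper routines are the hypothetical ones sketched earlier in this description, passed in as callables so the loop itself stays self-contained.
```python
import time
from collections import deque

# Sketch of the breadth-first, time-limited subdivision loop. The
# helper callables (shade_corners, needs_subdivision, subdivide,
# fill_region) are the hypothetical routines sketched earlier.

def shade_image(initial_regions, time_limit,
                shade_corners, needs_subdivision, subdivide, fill_region):
    queue = deque(initial_regions)           # FIFO queue => breadth first
    deadline = time.monotonic() + time_limit
    while queue:
        region = queue.popleft()
        shade_corners(region)                # fire thin rays at the corners
        if time.monotonic() < deadline and needs_subdivision(region):
            queue.extend(subdivide(region))  # refine later, each in turn
        else:
            fill_region(region)              # interpolate and finish now
```
Because the queue is serviced first in, first out, every region receives its first sub-division before any region receives its second, which is what keeps the detail roughly even however early the deadline cuts the process short.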
A flow diagram illustrating the method of a basic embodiment of the invention is shown in figure 5 and this illustrates how the breadth first approach to sub-division can be achieved by use of the decision boxes 52 and 54.
Thus in summary the inputs required by the system are as follows.
A. The image to be shaded. This is a rectangular block of pixels specified by minimum and maximum values of pixel coordinates; this can be as large as the whole screen or can be a subset of the screen. When two or more subsets of the screen are designated then the use of parallel processing will speed up the shading process.
B. The time limit in which the region is to be shaded.
C. The maximum and minimum distances between corner pixels.
The maximum distance forces the system to keep subdividing a region until the corner pixels at which thin rays are fired are at least this close together. The minimum distance prevents the system from continuing to subdivide once this distance between pixels has been reached. Thus the maximum distance can be used to guarantee a certain amount of accuracy in the picture whilst the minimum is used to prevent the system from spending too much time in calculation.
Using the invention as described above will produce a "blurry" edged image which is an approximation to the final resolution required by the artist but enables him to have some idea of how the scene he is creating will look. The system can be set up to shade a full quality image simply by adjusting the criteria to force sub-division to continue to single pixel level. Thus by storing the data from the approximate images derived interactively the production of final full quality images can also be speeded up.
The invention can be used with any representation scheme for objects provided that the scheme can be ray traced. For example a model of the scene could be stored directly as splines or could comprise constructive solid geometry (CSG) combined objects stored in an array. There is no requirement to convert these to polygonal approximations as required by techniques such as Gouraud shading.
The shading applied to the corner pixels can be performed in a number of well known ways. Two possible methods are:
1. Shade the pixel using a simple Lambert shading. This comprises taking the dot product of the light direction with the surface normal and then multiplying by the surface colour (a sketch of this calculation follows the list). This is a relatively quick calculation to do since surface normals are preferably calculated as part of the ray/object intersection tests.
2. Do a complete recursive ray trace for the corner pixels. This gives approximations to the actual reflections, shadows and refractions that would be visible in a full ray trace. This assists in interactively positioning objects so shadows and reflections are positioned correctly, but is rather slower than the Lambert shading technique.
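As an illustration of method 1, a minimal Lambert shading function might look as follows; plain tuples stand in for whatever vector type the system actually uses.
```python
# Sketch of simple Lambert shading: the dot product of the light
# direction with the surface normal, multiplied by the surface colour.

def lambert_shade(normal, light_dir, surface_colour):
    """Both direction vectors are assumed unit length, as they would be
    after the ray/object intersection tests."""
    intensity = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return tuple(intensity * c for c in surface_colour)
```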
The output of the system will in most cases be a set of horizontal and vertical linear colour blends and a conventional piece of hardware that evaluates such colour blends can be provided in the system to speed up operation.
Alternatively the interpolation between corner pixels can be performed in software.
The whole system can be implemented on a conventional video graphics computer system or specific hardware can be provided to perform each step of the shading process.

Claims (14)

1. A method for shading 3-dimensional computer generated images comprising the steps of designating a region of the image to be shaded, firing a first cone shaped ray at the region, the cone shaped ray encompassing every pixel in the region, testing for any objects intersected by the first ray, compiling a list of all ray/object intersections in the region, firing a second ray from each corner pixel in the region, testing for any objects intersected by each second ray, comparing the intersections of each second ray with the intersections in the list, deriving a shade for each corner pixel from the closest intersection with its respective second ray, storing the result, and deriving shades for other pixels in the region from the shades stored for the corner pixels.
2. A method according to claim 1 including the step of sub-dividing each region and firing further second rays for each corner pixel in each subdivided region.
3. A method according to claim 2 in which each sub-divided region overlaps its neighbouring regions by one pixel.
4. A method according to claim 2 in which the sub-dividing step continues until the distance between corner pixels is less than a predetermined limit.
5. A method according to claim 2 in which the sub-dividing step terminates when the distance between corner pixels is less than a predetermined limit.
6. A method according to claim 2 in which the sub-dividing step terminates if the second rays from the corner pixels intersect less than a predetermined number of objects in the scene.
7. A method according to claim 2 in which the subdividing step terminates after a predetermined period of time.
8. A method according to claim 2 in which the subdividing step continues until each subdivided portion of the region comprises one pixel.
9. A method according to claim 2 including the steps of firing a second cone shaped ray at the subdivided region and compiling a list of ray/object intersections for that sub-divided region.
10. A method according to claim 1 in which the step of deriving shades for other pixels in the region comprises linearly interpolating between the shades derived for the corner pixels.
11. A shading system for 3-dimensional computer generated images comprising means for firing a first cone shaped ray at a designated region of the image to be shaded, the first ray encompassing every pixel in the region, means for testing for ray/object intersections in the region, means for compiling a list of the ray/object intersections, means for firing a second ray from each corner pixel in the region, means for testing for any objects intersected by each second ray, means for comparing the intersections of each second ray with the intersections in the list, means for deriving a shade for each corner pixel from the closest intersection with its respective second ray, means for storing the result and means for deriving shades for other pixels in the region from the corner pixel shades.
12. A shading system according to claim 11 including means for sub-dividing each region and means for firing further second rays from each corner pixel in each sub-divided region.
13. A shading system according to claim 12 including a user controlled input for selecting a criterion to terminate subdivision of the region.
14. A shading system according to claim 11 in which the shade deriving means comprises a linear interpolator.
GB9112331A 1990-07-03 1991-06-07 Shading a 3-dimensional computer generated image Withdrawn GB2246057A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AUPK098790 1990-07-03

Publications (2)

Publication Number Publication Date
GB9112331D0 (en) 1991-07-24
GB2246057A true GB2246057A (en) 1992-01-15

Family

ID=3774800

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9112331A Withdrawn GB2246057A (en) 1990-07-03 1991-06-07 Shading a 3-dimensional computer generated image

Country Status (1)

Country Link
GB (1) GB2246057A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0580020A2 (en) * 1992-07-08 1994-01-26 Matsushita Electric Industrial Co., Ltd. Image producing apparatus
EP0580020A3 (en) * 1992-07-08 1994-06-01 Matsushita Electric Ind Co Ltd Image producing apparatus
US5546515A (en) * 1992-07-08 1996-08-13 Matsushita Electric Industrial Co., Ltd. Image processing apparatus
GB2298111A (en) * 1995-01-31 1996-08-21 Videologic Ltd Improvements relating to computer 3d rendering systems
GB2336983A (en) * 1995-01-31 1999-11-03 Videologic Ltd Improvements relating to computer 3D rendering system
GB2336982A (en) * 1995-01-31 1999-11-03 Videologic Ltd Improvements relating to computer 3D rendering systems
GB2298111B (en) * 1995-01-31 2000-01-19 Videologic Ltd Improvements relating to computer 3d rendering systems
GB2336982B (en) * 1995-01-31 2000-01-19 Videologic Ltd Improvements relating to computer 3D rendering systems
GB2336983B (en) * 1995-01-31 2000-01-19 Videologic Ltd Improvements relating to computer 3D rendering systems

Also Published As

Publication number Publication date
GB9112331D0 (en) 1991-07-24

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)