GB2410663A - 3D computer graphics processing system


Info

Publication number
GB2410663A
Authority
GB
United Kingdom
Prior art keywords
data
scene
pathway
subfield
operable
Prior art date
Legal status
Withdrawn
Application number
GB0401957A
Other versions
GB0401957D0 (en)
Inventor
Melvyn Slater
Current Assignee
University College London
Original Assignee
University College London
Priority date
Filing date
Publication date
Application filed by University College London filed Critical University College London
Priority to GB0401957A
Publication of GB0401957D0
Priority to PCT/GB2005/000306
Publication of GB2410663A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/40 - Hidden part removal
    • G06T15/06 - Ray-tracing

Abstract

A 3D computer graphics processing system is described for generating a 2D image of a 3D scene. The 3D scene is pre-processed to determine intersections between objects within the 3D scene and a plurality of rays arranged in a pre-defined structure. The determined intersections are subsequently used by a rendering apparatus to generate the 2D image. The processing performed by the rendering apparatus to identify intersections between a ray projected from a viewpoint and the objects within the 3D scene is advantageously reduced by using the determined intersection data to identify a subset of candidate objects which the projected ray might intersect.

Description

3D COMPUTER GRAPHICS PROCESSING APPARATUS AND METHOD

The present invention relates to the field of three-dimensional (3D) computer graphics, and more particularly to the rendering of objects in a 3D computer model scene.
Ray tracing is a common technique used to render realistic 2D images of a 3D computer model scene. In ray tracing, a ray is projected from a defined viewpoint into the 3D scene through each pixel in an image to be rendered, in order to determine a corresponding value for each pixel in the image. An object in the 3D scene which the projected ray intersects is identified, and the pixel value is calculated from the identified object's properties.
Additionally, in order to calculate shadows, reflections and refractions of objects in the scene, the projected ray is reflected/refracted from the object (depending on the object's properties) to define one or more secondary rays, and the intersections of the secondary rays with other objects in the scene are identified. The resulting image pixel value is then determined taking into account the properties of the object intersected by the primary ray and the properties of the objects intersected by the secondary rays. Such a technique is described in "An Improved Illumination Model for Shaded Display" by T. Whitted in Communications of the ACM 1980, 23(6), pages 343 to 349.
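By way of illustration only (this sketch does not appear in the patent), the basic per-pixel ray tracing loop described above might be written as follows; the sphere-only scene representation and the helper names trace_pixel and ray_sphere_t are assumptions made for the example.

import math

# Minimal sketch of the basic ray tracing loop described above (not from the
# patent): one primary ray per pixel, tested against every object in the scene.
# The sphere-only scene and all function names here are illustrative assumptions.

def ray_sphere_t(origin, direction, centre, radius):
    """Nearest positive ray parameter t at which the ray hits the sphere,
    or None if it misses (direction is assumed to be a unit vector)."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * k for d, k in zip(direction, oc))
    c = sum(k * k for k in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace_pixel(viewpoint, direction, scene):
    """Return a pixel value from the properties (here just a stored colour)
    of the nearest object hit by the ray, or a background colour."""
    nearest_t, colour = None, (0, 0, 0)          # background colour
    for centre, radius, obj_colour in scene:     # every object is tested
        t = ray_sphere_t(viewpoint, direction, centre, radius)
        if t is not None and (nearest_t is None or t < nearest_t):
            nearest_t, colour = t, obj_colour
    return colour

if __name__ == "__main__":
    scene = [((0.0, 0.0, -3.0), 1.0, (255, 0, 0))]
    print(trace_pixel((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))

Testing every object for every primary and secondary ray, as in this sketch, is exactly the cost that the technique described below sets out to avoid.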
One problem with the ray tracing technique is that intersection calculations must be performed between every object in the 3D scene and every ray (and secondary ray) that is projected into the 3D scene, in order to determine the objects that will contribute to the image pixel value. This results in a large number of intersection calculations that must be carried out, which significantly increases the computational requirements and the amount of time required to render an image. Additionally, because such a technique is computationally intensive, it is generally unsuitable for real-time applications.
A number of techniques have been developed to try to reduce the time required to render an image using ray tracing. For example, the papers "Automatic Creation of Object Hierarchy for Ray Tracing" by Goldsmith, J. and Salmon, J. (1987) IEEE CG&A 7(5), pages 14-20 and "Ray Tracing Complex Scenes" by Kay, T.L. and Kajiya, J.T. (1986), Computer Graphics (SIGGRAPH) 20(4), pages 269-278 both describe a technique of creating hierarchies of bounding boxes around objects in the scene, such that an object can be disregarded efficiently if the projected ray does not intersect the bounding box.
As another example, the paper "Analysis of an Algorithm for Fast Ray Tracing Using Uniform Space Subdivision" by Cleary, J.G. and Wyvill, G. (1988) The Visual Computer, 4, pages 65-83 describes a technique of subdividing the 3D space into a plurality of equally sized sub-volumes or voxels, where each voxel has an associated list of the objects which lie at least partly within the voxel. During the rendering process, each voxel that the projected ray passes through is considered and only those objects on the voxel's list are checked for intersections. A similar technique is described in the paper "Space Subdivision for Fast Ray Tracing" by Glassner, A.S. (1984) IEEE Computer Graphics and Applications, 4(10), pages 15-22, where the space is subdivided into an octree data structure comprising compartments of space, each with an associated list of the objects at least partly contained within it, so that the compartments which the projected ray passes through are considered in order to identify a set of candidate objects for intersection.
However, these techniques still require significant searching along each of the projected rays in order to determine intersections between objects in the 3D scene and the rays. The conventional way of identifying such intersections is described in the book "Computer Graphics" by Foley, van Dam, Feiner & Hughes, pages 702 to 704. The present invention has been made with these problems in mind.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic block diagram illustrating the main components of an image rendering system embodying the present invention;
Figure 2 schematically illustrates the generation of a 2D image of a 3D scene from a defined viewpoint using a ray tracing technique;
Figure 3 is a schematic block diagram showing the functional components of a rendering pre-processor forming part of the system shown in Figure 1, which allows the generation of intersection data defining intersections between objects in a 3D scene and tiles of a parallel sub-field;
Figure 4, which comprises Figures 4A to 4C, illustrates a parallel sub-field as defined in an embodiment of the present invention;
Figure 5 is a flow diagram illustrating the processing steps employed by the rendering pre-processor shown in Figure 3 to generate the intersection data;
Figure 6 is a flow diagram illustrating the processing steps performed when inserting an object into a parallel sub-field during the processing shown in Figure 5;
Figure 7, which comprises Figures 7A and 7B, illustrates the way in which the intersection data is generated by the rendering pre-processor shown in Figure 3;
Figure 8 illustrates the intersection data which is generated for the example objects shown in Figure 7;
Figure 9 is a schematic block diagram showing the functional components of an image renderer which allows the generation of image data for a particular viewpoint using intersection data received from the rendering pre-processor shown in Figure 3;
Figure 10 is a flow diagram illustrating the processing steps employed by the image renderer shown in Figure 9 to generate image data for a particular viewpoint;
Figure 11 is a flow diagram illustrating the processing steps performed when generating 2D image data using 3D scene data and the intersection data obtained from the rendering pre-processor shown in Figure 3;
Figure 12 is a flow diagram illustrating the processing steps employed by the image renderer shown in Figure 9 to generate 2D image data for a particular viewpoint of the 3D scene according to another embodiment of the present invention; and
Figure 13 is a flow diagram illustrating the processing steps performed to remove an object from the intersection data, to allow dynamic movement of objects in the 3D scene.
FIRST EMBODIMENT
Overview
Figure 1 schematically illustrates the main components of an image rendering system embodying the present invention. As shown, the system includes a rendering pre-processor 1, which pre-processes received 3D scene data 3 defining the 3D scene to be rendered, and an image renderer 5, which generates a 2D image of the 3D scene from a user defined viewpoint 7. In this embodiment, the rendering pre-processor 1 identifies intersections between a plurality of predefined rays and objects within the 3D scene. In this embodiment, the rays are arranged in a plurality of sub-sets, with the rays of each sub-set being parallel to each other and having a different orientation to the rays of other sub-sets. The sub-sets of parallel rays will be referred to hereinafter as parallel sub-fields (PSFs) and these are defined by the PSF data 9. The predefined rays defined by the PSFs attempt to provide a finite approximation to all possible rays that may be projected into the 3D scene by the image renderer 5 from a defined viewpoint (and any subsequent secondary rays as well).
In operation, the rendering pre-processor 1 identifies the intersections between the objects in the 3D scene and the predefined rays defined by the PSF data 9. The rendering pre-processor 1 then stores, in a storage device 11, intersection data 13 associated with each ray, that identifies all of the objects intersected by that ray.
After the intersection data 13 has been generated for a 3D scene, the image renderer 5 operates to process the 3D scene data 3 to generate 2D images of the 3D scene from the user defined viewpoint 7. In particular, in this embodiment, the image renderer 5 uses a ray tracing technique to project rays into the scene through the pixels of an image to be rendered from the user defined viewpoint 7. This is illustrated in Figure 2, which schematically illustrates the viewpoint 7, the 2D image 15 and the ray 14 which is projected, through pixel 16 of the 2D image 15, into the scene 18. Figure 2 also illustrates the PSF 33 whose orientation is closest to that of the projected ray 14 and the PSF ray 20 which is closest to the projected ray 14. When projecting a ray 14 into the scene 18, instead of searching for objects 22 which may intersect with the projected ray 14, the image renderer 5 identifies the PSF ray 20 which is closest to the projected ray 14. The image renderer 5 then uses the intersection data associated with the identified PSF ray 20 to determine a candidate set of objects which the projected ray 14 is likely to intersect. This candidate set of objects is typically a small fraction of the full set of objects in the 3D scene. The image renderer 5 then uses any conventional ray-object traversal technique to identify the intersections 24 between the projected ray 14 and those candidate objects 22 and determines the corresponding pixel value in the image 15 based on the object properties at the intersection point 24 for the object 22 which is nearest the user defined viewpoint 7. Further, if shadows, reflections and refractions are to be considered, then the image renderer 5 projects the above described secondary ray or rays from the identified intersection point 24 of the nearest object to find the objects intersected by each secondary ray, again using the intersection data associated with the PSF ray that is closest to the secondary ray. The 2D image data 15 generated by the image renderer 5 is written into a frame buffer 17 for display on a display 19.
The inventors have found that operating the system in the above way allows the image renderer 5 to be able to generate 2D images of a 3D scene from a user defined viewpoint 7 substantially in real time. This is because when projecting rays 14 into the 3D scene, the image renderer 5 can identify the list of candidate objects 22 which the projected ray 14 may intersect, simply by identifying the PSF ray 20 which is closest (in orientation and position) to the projected ray 14. This is a constant time lookup operation which does not require the image renderer 5 to search along the projected ray 14 to identify objects 22 that it intersects.
As discussed above, the processing of the 3D scene data 3 by the rendering pre-processor 1 is performed in advance of the image rendering performed by the image renderer 5. In the following more detailed discussion, it will be assumed that the rendering pre-processor 1 and the image renderer 5 form part of separate computer systems.
Rendering Pre-Processor Figure 3 is a block diagram illustrating the main components of a computer system 21, such as a personal computer, which is programmed to function as the rendering pre-processor 1. The computer system 21 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium 23 (such as an optical CDROM, semiconductor ROM or a magnetic recording medium) and/or as a signal 25 (for example an electrical or optical signal) input to the computer system 21, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 27, such as a keyboard, mouse etc. As shown in Figure 3, the computer system 21 is operable to receive 3D scene data 3 and PSF data 9 either as data stored on a data storage medium 29 (such as an optical CD ROM, a semiconductor ROM etc.) and/or as a signal 31 (for example an electrical or optical signal) input to the computer system 21, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 27, such as a keyboard, mouse etc. As shown, the 3D scene data 3 and PSF data 9 is received by an input data interface 37 which stores the 3D scene data 3 in a 3D scene data store 47 and which stores the PSF data 9 in a PSF data store 43.
In operation, the central controller 39 operates in accordance with the programming instructions to carry out the functions of the rendering preprocessor 1, using working memory 41 and, where necessary, displaying messages to a user via a display controller 34 and a display 38.
Parallel Sub Field
As discussed above, the rendering pre-processor 1 identifies the intersections between the objects 22 within the scene 18 defined by the 3D scene data 3 and the PSF rays 20 defined by the PSF data 9. Figure 4A schematically illustrates an example of a PSF 33 which has a particular orientation relative to the x-axis, the y-axis and the z-axis. The PSF 33 comprises a plurality of parallel rays 35 which all extend from a base 36 in the same direction and which are arranged in a grid to allow for easy referencing thereof. Since the PSFs 33 are intended to provide a finite approximation to all possible rays which may be projected into the 3D scene 18 by the image renderer 5, the PSF data 9 includes a plurality of PSFs oriented in different directions. In this embodiment the centre of each PSF 33 is defined as its origin and the PSFs 33 are overlaid so that their origins coincide, although this is not essential. Figure 4B illustrates a second PSF 33 rotated through an angle φ from the vertical about the y-axis, so that the parallel rays 35 in the PSF 33 are orientated at an angle φ from the vertical. Figure 4C illustrates a third PSF 33 in which the PSF shown in Figure 4B is further rotated through an angle θ about the z-axis, so that the parallel rays 35 are orientated at an angle φ from the vertical and, when resolved into the x-y plane, are at an angle θ relative to the x-axis.
The different PSFs 33 can therefore be identified by the different values of the angles φ and θ. A particular PSF ray 35 is therefore identified by its grid reference within the PSF 33 and the values of φ and θ for the PSF 33.
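As an illustrative aside (not part of the original description), a PSF ray identified in this way might be represented as follows; the dataclass layout, and the convention that φ is measured from the vertical and θ in the x-y plane, follow the description above, but the code itself is an assumption.

import math
from dataclasses import dataclass

# Illustrative sketch (not from the patent) of how a PSF ray might be addressed
# by the angles (phi, theta) of its PSF plus a grid reference on the PSF base.

@dataclass(frozen=True)
class PSFRayId:
    phi: float      # rotation from the vertical about the y-axis
    theta: float    # rotation about the z-axis
    row: int        # grid reference of the ray on the PSF base
    col: int

def psf_direction(phi, theta):
    """Unit direction shared by all rays of the PSF oriented at (phi, theta):
    at an angle phi from the vertical and, resolved into the x-y plane,
    at an angle theta from the x-axis."""
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))

if __name__ == "__main__":
    ray = PSFRayId(phi=math.pi / 4, theta=0.0, row=12, col=7)
    print(ray, psf_direction(ray.phi, ray.theta))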
A detailed understanding of the way in which the PSFs are generated and defined is not essential to the present invention. A more detailed description of the generation of the PSFs 33 is given later in the section entitled "Generation of Parallel Sub Fields".
The reader is referred to this more detailed description for a better understanding of the PSFs 33.
In order to calculate the above described intersection data 13, the rendering pre-processor 1 effectively locates the 3D scene 18 defined by the 3D scene data 3 into the volume of space defined by the PSFs 33. The rendering pre-processor 1 then determines, for each ray 35 within each PSF 33, which objects 22 within the 3D scene 18 the ray 35 intersects.
As those skilled in the art will appreciate, a large amount of memory would be required to store data representative of the intersections between the objects 22 in the scene 18 and every ray 35 within every PSF 33. Therefore, in order to reduce the amount of memory required to store the intersection data 13, in this embodiment, the base 36 of each PSF is partitioned into a plurality of tiles (each containing a plurality of the rays 35) and the rendering pre-processor 1 stores intersection data 13 for each tile, identifying all objects in the scene 18 which are intersected by one or more of the rays 35 of the tile.
The way in which the rendering pre-processor 1 determines the intersection data 13 will now be described in more detail with reference to Figure 5. Figure 5 is a flow chart illustrating the processing steps performed by a central controller 39 of the computer system 21 to calculate the intersection data 13 for each tile within each PSF 33.
As shown, at step s1, the central controller 39 retrieves the PSF data (e.g. the φ and θ data which defines the PSF) for the first PSF 33 from the PSF data store 43. The processing then proceeds to step s3 where the central controller 39 inserts each object 22 (as defined by object definition data 45 stored in the 3D scene data store 47) into the current PSF 33 in order to determine intersections between the rays 35 of each tile and each object 22. The processing then proceeds to step s5 where the central controller 39 stores the intersection data 13 that is determined for each tile in the current PSF 33 in an intersection data store 49. The processing then proceeds to step s7 where the central controller 39 determines whether or not there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s9 where the central controller 39 retrieves the PSF data for the next PSF 33 from the PSF data store 43 and then the processing returns to step s3. Once all of the PSFs 33 have been processed in the above way, the processing ends.
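The loop of Figure 5 can be summarised by the following sketch (an interpretation, not code from the patent). The helper insert_object_into_psf, which is assumed to return the tile indices overlapped by an object in a given PSF, is passed in as a parameter; one possible form of such a routine is sketched after the discussion of steps s13 to s19 below.

# Sketch of the Figure 5 loop (an interpretation, not code from the patent).
# insert_object_into_psf(psf, obj) is assumed to return the indices of the
# tiles whose rays the object intersects.

def build_intersection_data(psf_list, objects, insert_object_into_psf):
    """Return {(phi, theta): {tile index: [object ids]}} for every PSF.
    psf_list: iterable of dicts with at least 'phi' and 'theta' keys;
    objects: {object id: object definition data}."""
    intersection_data = {}
    for psf in psf_list:                         # steps s1/s9: current PSF
        tile_lists = {}
        for obj_id, obj in objects.items():      # step s3: insert each object
            for tile in insert_object_into_psf(psf, obj):
                tile_lists.setdefault(tile, []).append(obj_id)
        intersection_data[(psf["phi"], psf["theta"])] = tile_lists   # step s5
    return intersection_data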
Insert objects into PSF
In step s3 discussed above, the central controller 39 inserts each object 22 into the current PSF 33. The way in which this is achieved will now be described with reference to Figures 6 to 8. Figure 6 is a flow chart which illustrates in more detail the processing steps performed in step s3. As shown, in step s11, the central controller 39 retrieves the object definition data 45 for the first object 22 within the 3D scene 18, from the 3D scene data store 47. The object definition data 45 defines the location and orientation of the object in the scene 18 and comprises, in this embodiment, a list of vertex positions in three-dimensional space, a list of polygons constructed from the listed vertices and surface properties such as light reflection characteristics as defined by a bi-directional reflection distribution function (BRDF).
Figure 7A schematically illustrates the orientation and location of two objects (labelled A and B) within a current PSF 33 whose orientation is defined by the angles (φi, θj). Figure 7A also schematically illustrates the tiles 51-1 to 51-9 of the PSF 33 and, as represented by the dots 35, some of the rays 35 within each tile 51. As those skilled in the art will appreciate, although only nine tiles are defined, in practice, each PSF 33 will be divided into a larger number of smaller tiles 51. The number of tiles that each PSF 33 is divided into is a trade-off between the required storage space and the time taken to generate a 2D image 15 using the image renderer 5. In particular, smaller tiles mean that there will be fewer objects per tile and therefore the final rendering will be faster, but smaller tiles require more storage space to store the intersection data.
In order to identify the tiles 51 that have rays 35 which intersect with the objects A and B, the objects A and B are projected down onto the base 36 of the PSF 33. Since it is easier to perform this projection in a vertical direction, each object is initially rotated, in step s13, in three dimensions to take into account the orientation of the current PSF 33 into which the object is being inserted. In order to perform this rotation, the central controller 39 uses a pre-stored rotation matrix that is associated with the current PSF 33 (which forms part of the PSF data retrieved in step s1 from the PSF data store 43). This rotation matrix is then applied to the retrieved object definition data 45 to effect the rotation. The processing then proceeds to step s15 where the rotated object is projected vertically onto the base of a nominal vertically orientated PSF, which is computationally easier than projecting the object onto the base 36 of the original PSF 33.
Figure 7B illustrates the objects A' and B' after they have been rotated and projected onto the base 36 of the nominal vertically orientated PSF 33. The central controller 39 then determines, in step s17, which tiles 51 the projected object overlaps. The processing then proceeds to step s19 where the central controller 39 adds an object identifier to the intersection list for the determined tiles 51 of the current PSF 33. For example, the projected object A' intersects with tiles 51-2 to 51-6, 51-8 and 51-9 of the PSF orientated at (φi, θj). The identifier for object A is therefore added to the intersection data for these tiles.
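An illustrative sketch of steps s13 to s19 is given below (this is not the patent's code). It conservatively approximates the tiles overlapped by a projected object using the bounding box of its rotated, vertically projected vertices, and assumes the rotation matrix is stored with the PSF as a 3x3 row-major tuple; a routine of this kind could play the role of the insert_object_into_psf helper assumed in the earlier sketch.

def rotate(matrix, vertex):
    """Apply a 3x3 rotation matrix (tuple of three rows) to a vertex."""
    return tuple(sum(m * x for m, x in zip(row, vertex)) for row in matrix)

def overlapped_tiles(psf_matrix, vertices, n_tiles, base_half_size=1.0):
    """Tiles of an n_tiles x n_tiles base overlapped by the rotated, vertically
    projected object (bounding-box estimate of the projected vertices)."""
    projected = [rotate(psf_matrix, v)[:2] for v in vertices]   # drop z: vertical projection
    xs = [p[0] for p in projected]
    ys = [p[1] for p in projected]
    tile_size = 2.0 * base_half_size / n_tiles

    def to_index(value):
        index = int((value + base_half_size) / tile_size)
        return min(max(index, 0), n_tiles - 1)                  # clamp to the base

    tiles = set()
    for row in range(to_index(min(ys)), to_index(max(ys)) + 1):
        for col in range(to_index(min(xs)), to_index(max(xs)) + 1):
            tiles.add((row, col))
    return tiles

if __name__ == "__main__":
    identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    triangle = [(-0.4, -0.4, 0.2), (0.4, -0.4, 0.1), (0.0, 0.4, 0.3)]
    print(sorted(overlapped_tiles(identity, triangle, n_tiles=3)))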
The processing then proceeds to step s21 where the central controller 39 determines whether or not there are any more objects 22 in the 3D scene 18 to be processed. If there are, then the processing proceeds to step s23 where the central controller 39 retrieves the object definition data 45 for the next object from the 3D scene data store 47. The processing then returns to step s13 where the next object is rotated as before. After all of the objects 22 have been processed in the above way, the processing returns to step s5 shown in Figure 5.
After all of the PSFs 33 have been processed in the above way, the central controller 39 will have calculated, for each tile 51 in each PSF 33, a list of objects 22 which are intersected by one or more of the rays 35 in that tile 51. Figure 8 schematically illustrates the way in which this intersection data 13 is stored in this embodiment. As shown, the intersection data 13 includes an entry 53-1 to 53-N for each of the N PSFs 33. These entries 53 are indexed by the direction (φi, θj) of the corresponding PSF 33. Figure 8 also illustrates that for each PSF entry 53, the intersection data 13 includes an entry 55-1 to 55-9 for each tile 51 in the PSF 33. Within each tile entry 55, the intersection data 13 includes a list of the objects 22 that are intersected by one or more rays 35 that belong to that tile 51.
Figure 8 illustrates the intersection data 13 that will be generated for the PSF 33 having direction (φi, θj) from the two objects A and B shown in Figure 7A.
As shown, in this example, none of the rays 35 in the first tile 51-1 intersects either object A or object B and, therefore, an empty list is stored in the entry 55-1. The tiles 51-2, 51-3, 51-6 and 51-9 each include at least one ray which intersects only with object A and not with object B. Therefore, the object intersection lists stored in the corresponding entries only include the object identifier for object A. Likewise, the object intersection list for tile 51-7 will only include the identifier for object B. However, tiles 51-4, 51-5 and 51-8 each include one or more rays 35 which intersect with object A and one or more rays 35 which intersect with object B. Therefore, the entries 55 for these tiles will include object identifiers for both object A and object B.

In this embodiment, the central controller 39 outputs the intersection data 13 that is generated for the received 3D scene via an output data interface 57. The intersection data 13 may be output either on a storage medium 59 or as a signal 61. As discussed above, this intersection data 13 can be used by an appropriate image renderer 5 together with the 3D scene data 3 to generate 2D images of the 3D scene. The way in which this is achieved will be described below.
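The per-tile lists of Figure 8 could, for example, be held in memory as the following nested mapping (an illustrative assumption rather than a structure required by the patent; the string key stands in for the angle pair (φi, θj) and tiles are numbered 1 to 9 as in Figure 7).

intersection_data = {
    ("phi_i", "theta_j"): {
        1: [],
        2: ["A"], 3: ["A"], 6: ["A"], 9: ["A"],
        7: ["B"],
        4: ["A", "B"], 5: ["A", "B"], 8: ["A", "B"],
    },
}

# A renderer would retrieve the candidate set for a projected ray falling into,
# say, tile 5 of this PSF with a constant-time lookup:
candidates = intersection_data[("phi_i", "theta_j")][5]   # -> ["A", "B"]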
Image renderer Figure 9 is a block diagram illustrating the main components of a computer system 63, such as a personal computer, which is programmed to function as the image renderer 5. As with computer system 21, the computer system 63 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium 65 and/or as a signal 67 input to the computer system 63, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 69, such as a keyboard, mouse etc. As shown in Figure 9, the computer system 63 includes an input data interface 71 which is operable to receive the intersection data generated by the rendering pre-processor 1, the 3D scene data 3 describing the scene 18 and the PSF data 9 describing the structure of the PSFs 33. This data may be input, for example, as data stored on a data storage medium 73 and/or as a signal 25 input to the computer system 63, for example, from a remote database, by transmission over a communication network such as the Internet or from the storage device 11 shown in Figure 1. The input data interface passes the received intersection data 13 to an intersection data store 77, the received 3D scene data 3 to a 3D scene data store 81 and the received PSF data 9 to a PSF data store 81.
In operation, a central controller 83 of the computer system 63 operates in accordance with the received programming instructions to generate rendered 2D images 15 of the 3D scene from the user defined viewpoint 7 using the 3D scene data 3, the PSF data 9 and the intersection data 13. The images 15 that are generated are stored in the frame buffer 17 for display to the user on a display 19 via a display controller 87.
As shown in Figure 9, the computer system 63 also includes an output data interface 89 for outputting the 2D images 15 generated by the central controller 83. These images 15 may be recorded and output on a storage medium 91 such as a CD ROM or the like or they may be output on a carrier signal 93 for transmission to a remote processing device over, for example, a data network such as the Internet.
The way in which the central controller 83 operates to render the 2D image 15 will now be described with reference to Figures 10 and 11. Figure 10 is a flow chart illustrating the processing steps performed by the central controller 83 of the computer system 63 to generate a 2D image 15 of the 3D scene 18 from the user defined viewpoint 7. As shown, in step s25, the central controller 83 receives the 3D scene data 3 and the intersection data 13. The processing then proceeds to step s27 where the central controller 83 receives the viewpoint 7 defined by the user via the user input device 69. The processing then proceeds to step s29 where the central controller 83 generates the 2D image of the 3D scene from the user defined viewpoint 7 using the received 3D scene data 3 and the intersection data 13. The processing then proceeds to step s31 where the central controller 83 displays the generated 2D image 15 on the display 19. The processing then proceeds to step s33 where the central controller 83 determines whether or not the user has defined a new viewpoint 7. If the viewpoint 7 has changed, then the processing returns to step s29.
Otherwise, the processing proceeds to step s35 where the central controller 83 checks to see if the rendering process is to end. If it is not to end, then the processing returns to step s33. Otherwise, the processing ends.
Generate the 2D Image
Figure 11 is a flow chart illustrating the processing steps performed by the central controller 83 in step s29 shown in Figure 10, when generating a 2D image 15 of the 3D scene 18 from the user defined viewpoint 7.
As shown, in step s37, the central controller 83 identifies a current pixel 16 in the 2D image 15 to be processed. Then, in step s39, the central controller 83 projects a ray 14 from the user defined viewpoint 7 through the current pixel 16 into the 3D scene 18 defined by the 3D scene data 3. The processing then proceeds to step s41 where the central controller 83 identifies the PSF 33 whose direction is nearest to the direction of the projected ray 14 (which is defined, in this embodiment, in terms of a vector extending from the viewpoint 7). This can be achieved in a number of different ways. However, in this preferred embodiment, the PSF data stored in the PSF data store 81 includes a look-up table (not shown) which receives the vector defining the direction of the projected ray as an input and which identifies the closest PSF 33. The inventor's earlier paper entitled "Constant Time Queries on Uniformly Distributed Points on a Hemisphere", Journal of Graphics Tools, Volume 7, Number 1, 2002 describes a technique which may be used to implement this look-up table.
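A hedged sketch of such a constant-time lookup is shown below. It simply quantises the ray direction onto a regular (φ, θ) grid; the cited paper describes a more uniform hemisphere scheme, so this should be read only as an illustration of the table-lookup idea, with the bin counts n_phi and n_theta being arbitrary assumptions.

import math

def nearest_psf(direction, n_phi, n_theta):
    """Map a unit direction vector to the indices of the closest PSF
    orientation on a regular (phi, theta) grid."""
    x, y, z = direction
    if z < 0.0:                        # a ray and its reverse lie along the same PSF rays
        x, y, z = -x, -y, -z
    phi = math.acos(max(-1.0, min(1.0, z)))          # angle from the vertical
    theta = math.atan2(y, x) % (2.0 * math.pi)       # angle in the x-y plane
    phi_index = min(int(phi / (math.pi / 2) * n_phi), n_phi - 1)
    theta_index = min(int(theta / (2.0 * math.pi) * n_theta), n_theta - 1)
    return phi_index, theta_index

if __name__ == "__main__":
    print(nearest_psf((0.0, 0.0, 1.0), n_phi=16, n_theta=32))   # vertical ray
    print(nearest_psf((1.0, 0.0, 0.0), n_phi=16, n_theta=32))   # horizontal ray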
The processing then proceeds to step s43 where the central controller 83 determines the tile 51 in the identified PSF 33 which includes the projected ray 14.
This is a relatively straightforward calculation which involves identifying the intersection between the projected ray 14 and the base 36 of the identified PSF 33 and then identifying in which tile 51 that intersection point lies. This can be achieved, for example, by projecting the defined viewpoint 7 onto the base 36 of the identified PSF 33 and identifying the particular tile 51 into which the projected viewpoint falls. The other end of the ray, for example a point at the boundary of the scene, can be projected in a similar way. In the vast majority of cases, for a sufficiently large number of PSF directions, both end points will fall into the same tile. In cases where this does not happen, the candidate set of objects is the union of the sets of objects found in the tiles traversed by the line.
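A possible implementation of this step is sketched below (an interpretation, not patent code): both end points of the projected ray are rotated into the nominal vertical PSF frame, dropped onto the base and converted to tile indices, and the candidate set is the union of the object lists of the resulting tiles. The helper names and the n x n tile indexing are assumptions.

def rotate(matrix, point):
    return tuple(sum(m * x for m, x in zip(row, point)) for row in matrix)

def tile_of(point_xy, n_tiles, base_half_size=1.0):
    """Tile (row, col) of an n_tiles x n_tiles base containing a base point."""
    tile_size = 2.0 * base_half_size / n_tiles
    col = min(max(int((point_xy[0] + base_half_size) / tile_size), 0), n_tiles - 1)
    row = min(max(int((point_xy[1] + base_half_size) / tile_size), 0), n_tiles - 1)
    return row, col

def candidate_objects(psf_matrix, tile_lists, viewpoint, far_point, n_tiles):
    """Union of the object lists of the tiles containing the projected ray's
    two end points (in the vast majority of cases a single tile)."""
    tiles = {tile_of(rotate(psf_matrix, viewpoint)[:2], n_tiles),
             tile_of(rotate(psf_matrix, far_point)[:2], n_tiles)}
    candidates = set()
    for tile in tiles:
        candidates.update(tile_lists.get(tile, []))
    return candidates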
The processing then proceeds to step s45 where the central controller 83 retrieves the list of objects from the tile entry 55 in the intersection data 13 for the identified tile 51 of the nearest PSF 33. The processing then proceeds to step s47 where the central controller 83 identifies the intersection point 24 of the projected ray 14 with each of the objects 22 in the list and determines which of the intersected objects 22 is closest to the user defined viewpoint 7.
The processing then proceeds to step s49 where the central controller 83 updates the pixel value for the current pixel 16 using the object definition data 45 for the object 22 that is closest to the user defined viewpoint 7. If, at step s47, the central controller 83 determines that the projected ray 14 does not intersect with any of the identified objects 22, then the pixel value for the current pixel 16 is set to
that of a background colour.
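Steps s45 to s49 require a concrete ray/object intersection test for the candidate objects. Since the object definition data here is polygonal, the standard Moller-Trumbore ray-triangle test is sketched below as one possibility; the patent does not prescribe any particular test, and the function names are invented for the example.

def ray_triangle_t(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: ray parameter t of the hit with triangle
    (v0, v1, v2), or None if the ray misses it."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:                     # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None

def nearest_candidate_hit(origin, direction, candidate_triangles):
    """candidate_triangles: {object id: [(v0, v1, v2), ...]}. Returns
    (object id, t) of the closest hit, or (None, None) so the caller can fall
    back to the background colour."""
    best = (None, None)
    for obj_id, triangles in candidate_triangles.items():
        for tri in triangles:
            t = ray_triangle_t(origin, direction, *tri)
            if t is not None and (best[1] is None or t < best[1]):
                best = (obj_id, t)
    return best

if __name__ == "__main__":
    tris = {"A": [((-1.0, -1.0, -2.0), (1.0, -1.0, -2.0), (0.0, 1.0, -2.0))]}
    print(nearest_candidate_hit((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), tris))   # ('A', 2.0)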
In this embodiment, the central controller 83 is arranged to consider reflections, refractions and shadows of objects 22 in the 3D scene 18. To achieve this, the central controller 83 must consider secondary rays which are projected from the point of intersection 24 between the original projected ray 14 and the nearest object 22. The number of secondary rays and their directions of propagation depend on the characteristics of the object 22 (as defined by the object definition data 45) and the type of ray tracing being used. For example, if the object 22 is defined as being a specular reflector then a single secondary ray will be projected from the intersection point 24 in a direction corresponding to the angle of incidence of the projected ray on the object (i.e. like a mirror, where the angle of incidence equals the angle of reflection). If, however, the object is a diffuse reflector, then the number of secondary rays and their directions will depend on the ray tracing technique being used. For example, in Whitted-like ray tracing no secondary rays are reflected from a diffuse reflector; instead, a simple estimation of the local colour is computed by reference to the angle of direction of the light source to the intersection point. Alternatively, if Kajiya-like path tracing is being used then a single ray (at most) is reflected from the surface in a direction determined stochastically from the BRDF of the object.
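For the specular case mentioned above, the secondary ray direction is the standard mirror reflection r = d - 2(d·n)n, where d is the incoming direction and n the unit surface normal at the intersection point. The following helper is a small illustrative sketch, not text from the patent.

def reflect(direction, normal):
    """Mirror-reflect a direction about a unit surface normal."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

if __name__ == "__main__":
    # A ray travelling diagonally down onto a horizontal surface (normal +z)
    # leaves with its z component negated.
    print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))   # (1.0, 0.0, 1.0)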
Therefore, in this embodiment, after step s49, the processing proceeds to step s51 where the central controller 83 determines whether or not there are any more secondary rays to consider. If there are, then the processing proceeds to step s53 where the central controller 83 determines the direction of the current secondary projected ray. The processing then returns to steps s41, s43, s45 and s47 where similar processing is performed for the secondary ray as was performed for the original primary ray 14 projected from the user defined viewpoint 7.
Then, in step s49, the pixel value for the current pixel 16 is updated using the object definition data 45 for the nearest object intersected by the current secondary ray. As those skilled in the art will appreciate, this processing to update the current image pixel value can be performed according to a number of conventional techniques, for example, as described in the paper "An Improved Illumination Model for Shaded Display" by Whitted, T. mentioned above.
Once all of the secondary rays have been considered in the above way, the processing proceeds to step s55 where the central controller 83 determines if there are any more pixels 16 in the 2D image to be processed. If there are, then the processing returns to step s37. Otherwise, the processing returns to step s31 shown in Figure 10.
As those skilled in the art will appreciate, the image renderer 5 formed by the computer system 63 has a number of advantages over conventional image rendering systems. In particular, since the objects 22 that a projected ray 14 may intersect with are identified quickly using the stored intersection data 13 calculated in advance by the rendering preprocessor 1, the image renderer 5 does not need to check for intersections between all the objects 22 in the 3D scene 18 and each projected ray 14. The generation of the 2D image from the user-defined viewpoint 7 can therefore be calculated significantly faster than with conventional techniques, which allows substantially real time 2D image 15 rendering of the 3D scene 18.
Further, since the user can change the currently defined viewpoint 7, the image renderer 5 can generate a real-time "walk through" of the scene 18 in response to dynamic changes of viewpoint 7 made by the user.
Although existing 3D walk through systems are available, these do not employ ray tracing techniques which provide realistic renderings that include reflections, refractions and shadows.
Generation of Parallel Sub-Fields
As described above with reference to Figure 4, a volume of rays is defined by a plurality of parallel sub-fields (PSFs) 33. Each PSF 33 comprises a cubic volume in x, y, z Cartesian space defined by:
-1 < x < 1, -1 < y < 1, -1 < z < 1
A rectangular grid is imposed on the x, y plane comprising n sub-divisions of equal width in each of the x and y directions to define the n² tiles of the PSF 33.
All possible directions of rays can be described using the following ranges:
φ = 0: θ don't care
0 < φ < π/2: 0 ≤ θ < 2π
φ = π/2: 0 ≤ θ < π
It should be noted that at φ = 0, the direction of the rays 35 is independent of the value of θ, and that at φ = π/2, θ need only pass through a half revolution in order to cover all possible directions.
A scene 18 is capable of being defined within the volume of rays defined by the PSFs. It is convenient to confine the scene 18, for example, to a unit cube space defined by the ranges:
-0.5 < x < 0.5, -0.5 < y < 0.5, -0.5 < z < 0.5
This ensures that any orientation of the cubic volume of the PSF 33 will fully enclose the scene 18, so that no part of the scene 18 is omitted from coverage by rays in any particular direction. In fact, since the cubic volume of the PSF 33 has a smallest dimension of 2 (its edge dimension), any scene 18 with a longest dimension no greater than 2 can be accommodated within the PSF 33. However, at the extreme edges of the PSFs, the number of rays and ray directions may be limited relative to their density in the central region of the PSFs, so it is preferred to apply a constraint on the size of the scene 18 smaller than the theoretical maximum size of the PSFs.
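By way of illustration (not part of the patent text), the PSF orientations implied by these ranges might be enumerated as follows; the step counts n_phi and n_theta are arbitrary assumptions, and the unit-cube check mirrors the recommended scene confinement described above.

import math

def psf_orientations(n_phi, n_theta):
    """Yield (phi, theta) pairs covering the ranges given above: a single
    direction at phi = 0, a full revolution of theta for 0 < phi < pi/2, and a
    half revolution at phi = pi/2."""
    yield (0.0, 0.0)                                   # phi = 0: theta is irrelevant
    for i in range(1, n_phi):                          # 0 < phi < pi/2
        phi = (math.pi / 2) * i / n_phi
        for j in range(n_theta):                       # 0 <= theta < 2*pi
            yield (phi, 2.0 * math.pi * j / n_theta)
    for j in range(n_theta // 2):                      # phi = pi/2: half revolution
        yield (math.pi / 2, 2.0 * math.pi * j / n_theta)

def scene_fits_unit_cube(vertices, half_size=0.5):
    """True if every vertex lies inside the recommended -0.5..0.5 cube, so the
    scene is covered by the PSF volume whatever its orientation."""
    return all(abs(coordinate) <= half_size for vertex in vertices for coordinate in vertex)

if __name__ == "__main__":
    print(sum(1 for _ in psf_orientations(n_phi=8, n_theta=16)))    # number of PSFs
    print(scene_fits_unit_cube([(0.1, -0.4, 0.3), (0.49, 0.0, -0.5)]))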
SECOND EMBODIMENT
In the first embodiment described above, the image renderer 5 was able to generate 2D images 15 of the 3D scene 18 from a current user defined viewpoint 7. The user was also able to change the user defined viewpoint 7 and the image renderer 5 recalculated the 2D image 15 from the new viewpoint 7. An embodiment will now be described in which the image renderer 5 can generate a new 2D image not only for a new viewpoint 7 but also taking into account movement or deformation of one or more of the objects 22 within the 3D scene 18. The image renderer 5 of this second embodiment can therefore be used, for example, in virtual reality and game applications. The way in which the image renderer 5 achieves this will now be described with reference to Figures 12 and 13.
In the second embodiment, the image renderer 5 is also run on a conventional computer system 63 such as that shown in Figure 9. The only difference will be in the programming instructions loaded into the central controller 83 used to control its operation. Figure 12 is a flow chart defining the operation of the central controller 83, programmed in accordance with the second embodiment.
As shown, at step s57, the central controller 83 receives the 3D scene data 3, the PSF data 9 and the intersection data 13. The central controller 83 then receives, in step s59, the current viewpoint 7 defined by the user via the user input device 69. The processing then proceeds to step s61, where the central controller 83 generates a 2D image 15 of the 3D scene 18 from the current viewpoint 7, using the received 3D scene data 3, PSF data 9 and intersection data 13. The processing steps performed in step s61 are the same as those performed in step s29 (shown in Figure 10) and will not, therefore, be described again. The processing then proceeds to step s63 where the generated 2D image 15 is displayed to the user on the display 19.
The processing then proceeds to step s65 where the central controller 83 determines whether or not any objects 22 within the 3D scene 18 have been modified (e.g. moved or deformed etc.). If they have, then the processing proceeds to step s67 where the central controller 83 updates the intersection data 13 for the modified objects 22. The processing then proceeds to step s69 where the central controller 83 checks to see if the user defined viewpoint 7 has changed. In either case, the processing then proceeds to step s71 where the central controller 83 generates a new 2D image 15 of the 3D scene 18. If, at step s69, the central controller determines that the viewpoint 7 has not changed, then in step s71 the new 2D image 15 is generated from the current, unchanged viewpoint 7, but using the updated intersection data 13. If, however, the central controller determines, in step s69, that the viewpoint 7 has changed, then in step s71 the central controller 83 generates the new 2D image 15 of the 3D scene 18 from the new viewpoint 7 using the updated intersection data 13.
Returning to step s65, if the central controller 83 determines that none of the objects 22 in the 3D scene 18 is to be modified, then the processing proceeds to step s73 where the central controller 83 determines whether or not the viewpoint 7 has changed. If it has, then the processing also proceeds to step s71, where the central controller 83 generates a new 2D image 15 of the 3D scene 18 from the new viewpoint 7 using the unchanged intersection data 13. In this way, a new 2D image is generated when the viewpoint and/or one or more of the objects are changed, otherwise a new 2D image is not generated.
If a new 2D image 15 is generated in step s71, then the processing proceeds to step s75, where the new 2D image 15 is displayed to the user on the display 19.
The processing then proceeds to step s77, where the central controller 83 determines whether or not the rendering process is to end. Additionally, as shown in Figure 12, if at step s73 the central controller 83 determines that the viewpoint 7 has not changed, then the processing also proceeds to step s77. If the rendering process is not to end, then the processing returns to step s65; otherwise, the processing ends.
Update Intersection Data
In step s67 discussed above, the central controller 83 updated the intersection data 13 when one or more objects 22 in the scene 18 were modified. The way in which the central controller 83 performs this update will now be described with reference to Figure 13. As shown, initially, the central controller 83 retrieves, in step s77, the current PSF intersection data 53 for the first PSF 33 (i.e. 53-1 shown in Figure 8) from the intersection data store 77. The processing then proceeds to step s79 where the central controller 83 removes the identifier for each object 22 to be modified from the current PSF intersection data 53.
This is achieved, in this embodiment, using a projection technique similar to that described with reference to Figure 6, except that, at step s19, instead of adding the object identifier to the intersection list, the object identifier is removed from the intersection list for all the determined tiles 51 of the current PSF.
The processing then proceeds to step s81 where the modified PSF intersection data 53 for the current PSF is returned to the intersection data store 77. The processing then proceeds to step s83 where the central controller 83 determines whether or not there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s85 where the central controller 83 retrieves the PSF intersection data 53 for the next PSF from the intersection data store 77 and then the processing returns to step s79.
Once all of the objects 22 to be modified have been removed from the intersection data 13, the processing then proceeds to step s87 where the objects are modified by modifying their object definition data 45.
The modifications that are made may include changing the object's geometrical shape and/or position within the 3D scene 18. The processing then proceeds to step s89 where the central controller 83 retrieves the PSF intersection data 53 for the first PSF 33 from the intersection data store 77. The processing then proceeds to step s91 where the central controller 83 inserts the or each modified object 22 into the current PSF 33 and determines the corresponding intersection data. In this embodiment, the steps performed in step s91 are the same as those shown in Figure 6 and will not, therefore, be described again.
Once the PSF intersection data 53 for the current PSF has been updated to take into account the modified objects 22, the processing proceeds to step s93 where the modified PSF intersection data 53 is returned to the intersection data store 77. The processing then proceeds to step s95 where the central controller 83 determines if there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s97 where the central controller 83 retrieves the PSF intersection data 53 for the next PSF 33 and then the processing returns to step s91. After all the PSFs have been processed in the above way, the processing returns to step s69 shown in Figure 12.
As those skilled in the art will appreciate, in the above way, each of the objects 22 to be modified is firstly removed from the PSF intersection data 53, modified and then reinserted back into the PSF intersection data 53. The central controller 83 can then generate the new 2D image 15 for the modified scene 18 in the manner discussed above.
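The remove/modify/re-insert cycle just described can be summarised by the following sketch (an interpretation, not patent code). It assumes a tiles_overlapped helper equivalent to the projection routine used by the pre-processor, and a caller-supplied apply_changes callable that performs the actual movement or deformation.

def update_intersection_data(intersection_data, psfs, modified, apply_changes,
                             tiles_overlapped):
    """intersection_data: {psf key: {tile: [object ids]}};
    psfs: iterable of dicts with a 'key' entry; modified: {object id: object};
    apply_changes: callable that moves/deforms the objects in place."""
    # Remove the identifiers of the objects about to change from every PSF
    # (steps s77 to s85), using their current, pre-modification geometry.
    for psf in psfs:
        tile_lists = intersection_data[psf["key"]]
        for obj_id, obj in modified.items():
            for tile in tiles_overlapped(psf, obj):
                if obj_id in tile_lists.get(tile, []):
                    tile_lists[tile].remove(obj_id)

    apply_changes(modified)            # step s87: modify the object definition data

    # Re-insert the modified objects into every PSF (steps s89 to s97).
    for psf in psfs:
        tile_lists = intersection_data[psf["key"]]
        for obj_id, obj in modified.items():
            for tile in tiles_overlapped(psf, obj):
                tile_lists.setdefault(tile, []).append(obj_id)
    return intersection_data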
Summary, modifications and alternatives
A rendering system has been described above which can generate 2D images of 3D scenes in real time using a new real time ray tracing technique which provides realistic 2D images of scenes that include reflections, refractions and shadows. Because the ray tracing can be performed in real time, the image renderer can generate moving images of the 3D scene which include reflections, refractions and shadows as the user "walks through" or alters the scene. This allows, for example, in a virtual reality context or in a computer game, a person to walk through a scene and see virtual shadows and reflections of his/her "virtual body" in nearby surfaces. This is not possible using any general methods, with existing technology, in real time.
As those skilled in the art will appreciate, the software used to configure the computer systems as either the rendering pre-processor or the image renderer may be sold separately and may form part of a general computer graphics library that other users can call and use in their own software routines. For example, the software used to create the intersection data for a 3D scene may be stored as one function which can be called by other software modules.
Similarly, the software used to configure the computer device as the image renderer may also be stored as a computer graphics function which again can be called from other users' software routines.
In the above embodiments, the rendering pre-processor used parallel sub-fields (PSFs) to provide a finite approximation to all possible ray paths that may be projected into the 3D scene by the image renderer. In the above embodiment, each PSF included a plurality of parallel rays which were divided into tiles. As those skilled in the art will appreciate, other parallel sub-fields may be defined. For example, parallel sub-fields may be defined which do not include any rays, but only include the above described tiles. This could be achieved in the same way as discussed above.
In particular, each of the objects may be projected on to the base of the PSF and intersection data generated for each tile that the projected object intersects.
Alternatively, if PSF rays are used, then it is not essential to group the rays into tiles. Instead, intersection data may be generated for each PSF ray.
Indeed, considered in general terms, each PSF effectively has a respective direction and a plurality of volumes which extend parallel to each other in that direction. Each of these volumes may be defined, for example, as the thickness of one of the PSF rays or as the volume of space defined by extending the tile in the direction of the PSF. In these general terms, the rendering pre-processor calculates intersection data for each of these volumes, which identifies the objects within the 3D scene that intersect with the volume.
In the above embodiment, the rendering pre-processor and the image renderer were described as having a central controller which carried out the various processing steps described with reference to the attached flow-charts. As those skilled in the art will appreciate, separate processors may be provided to carry out the various processing functions. For example, some of the processing steps may be performed by the processor on a connected graphics card.
In the above embodiment, the base of each PSF was divided into square tiles. As those skilled in the art will appreciate, it is not essential to define the PSF to have such regular shaped tiles. Tiles of any size and shape may be provided. Further, it is not essential to have the tiles arranged in a continuous manner. The tiles may be separated from each other or they may overlap. However, regular shaped contiguous tiles are preferred since they reduce the complexity of the calculations performed by the image renderer.
In the above embodiment, the image renderer projected a ray through a 2D image plane into the 3D scene to be rendered. As those skilled in the art will appreciate, it is not essential to only project one ray through each pixel of the image to be generated.
In particular, the image renderer may project several different rays through different parts of each pixel, with the pixel colour then being determined by combining the values obtained from the different rays projected through the same pixel (and any secondary rays that are considered).
In the second embodiment described above, the image renderer could update the intersection data in order to take into account the modified properties of one or more of the objects within the 3D scene. As those skilled in the art will appreciate, the rendering pre-processor may also be arranged to be able to update the intersection data to take into account modified objects. In this way, the rendering pre-processor can update the intersection data without having to re-compute intersection data for objects which have not changed.
In the above embodiment, the rendering pre-processor generated and output intersection data for a 3D scene.
This intersection data was provided separately from the associated PSF data which was used during its calculation. As those skilled in the art will appreciate, the intersection data and the PSF data may be combined into a single data structure which may be output and stored on a data carrier for subsequent use by the image renderer.
In the first embodiment described above, the rendering pre-processor determined intersection data for each tile of each PSF. Depending on the size of the tiles, many neighbouring tiles will include the same intersection data. In such a case, instead of storing intersection data for each tile separately, tiles having the same intersection data may be grouped to define a larger tile. This may be achieved, for example, using Quad-tree techniques which will be familiar to those skilled in the art. Alternatively, if the rendering pre-processor determines that a second tile includes the same intersection data as a first tile, then instead of including the same intersection data for the second tile, it may simply include a pointer to the intersection data of the first tile.
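The pointer-style alternative mentioned above might be realised as follows (an implementation assumption, not text from the patent): tiles whose object lists are identical end up referencing a single shared list rather than separate copies.

def share_identical_tile_lists(tile_lists):
    """tile_lists: {tile index: [object ids]}. Tiles whose lists hold the same
    objects are made to reference one shared list instead of separate copies."""
    shared = {}
    for tile, objects in tile_lists.items():
        key = tuple(sorted(objects))
        if key not in shared:
            shared[key] = objects          # first such tile keeps its own list
        tile_lists[tile] = shared[key]     # later tiles point at the shared one
    return tile_lists

if __name__ == "__main__":
    data = {1: ["A"], 2: ["A"], 3: ["A", "B"]}
    result = share_identical_tile_lists(data)
    print(result[1] is result[2], result[1] is result[3])   # True False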
In the above embodiment, the PSFs were defined and then the 3D scene was inserted into the volume of space defined by the PSFs. As those skilled in the art will appreciate, in an alternative embodiment, each of the PSFs may be defined around the 3D scene.
In the above embodiments, the PSF data, the intersection data and the 3D scene data were stored in separate data stores within the image renderer. As those skilled in the art will appreciate, when calculating real time 2D images from the 3D scene, this data will preferably be stored within the working memory of the image renderer in order to reduce the time required to access the relevant data.
In the above embodiment, the image renderer used a ray tracing technique to generate 2D images of the 3D scene using the intersection data generated by the rendering pre-processor. As those skilled in the art will appreciate, ray tracing is one specific technique which is commonly used to provide realistic 2D images of a 3D scene. However, the image renderer may instead be configured to render the 3D scene using any other rendering technique that uses a global illumination algorithm and which relies on efficient ray intersection calculations. For example, the generated intersection data could be used by the image renderer when rendering the 3D scene using the "path tracing" technique described in the paper "The Rendering Equation" by Kajiya, J.T. (1986) Computer Graphics (SIGGRAPH) 20(4), pages 143 to 150. This technique is a stochastic method that fires primary rays from the viewpoint through a pixel and then probabilistically follows the path of the ray through the scene, accumulating energy at each bounce from surface to surface. The ray path is terminated probabilistically.
Each pixel may have more than forty such random ray paths and the final "colour" of each pixel is determined as the average of each of the ray paths.
The same technique may be used with Photon Mapping (Jensen, H.W. (1996) Global Illumination Using Photon Maps, Rendering Techniques '96, Proceedings of the 7th Eurographics Workshop on Rendering, 21-30) which follows ray paths emanating from light sources as well as from the viewpoint.
In the above embodiment, when the image renderer is generating a 2D image of the 3D scene, it determines, in step s47 shown in Figure 11, the nearest object (in the list of objects associated with the tile containing the projected ray) which intersects with the projected ray. On some occasions, none of the identified objects will intersect with the projected ray. This is most likely to happen if the projected ray intersects with the base of the closest PSF near the boundary of one of the tiles and/or if the direction of the projected ray falls approximately midway between the directions of two or more PSFs.
Under these circumstances the image renderer can simply look at the object lists for neighbouring tiles of the same PSF or it can look at the intersection data for the or each other PSF having a similar direction.
In a further alternative, if the image renderer determines that the projected ray intersects with a tile close to the tile boundary, then instead of simply searching for the nearest object in the object list for that tile, the system may also look at the object lists for the nearest neighbouring tiles as well.
In the above embodiment, the lists of object identifiers that were stored as the intersection data were not stored in any particular order. The image renderer therefore has to identify the intersection point between the projected ray and each of the objects defined by the intersection data for the identified tile. If, however, the objects within the 3D scene are constrained so that they cannot intersect with each other, then the list of objects stored for each tile is preferably sorted according to the distance of the objects from the base of the corresponding PSF. In this way, when the image is being rendered, the image renderer only has to find the first object within the sorted list that has an intersection with the projected ray. As a result, the image renderer does not need to determine intersection points between the projected ray and objects which are located behind the nearest object.
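A small sketch of this sorted-list optimisation is given below (illustrative only; it assumes non-interpenetrating objects and that the pre-processor can supply each object's minimum distance from the PSF base when the tile's list is built).

def sort_tile_list_by_depth(object_ids, depth_from_base):
    """Order a tile's object list by distance from the PSF base so the renderer
    can stop at the first object the projected ray actually hits."""
    return sorted(object_ids, key=lambda obj_id: depth_from_base[obj_id])

def first_hit(sorted_ids, hits_object):
    """hits_object(object id) -> ray parameter t or None. Returns the first
    (and therefore nearest) hit in a depth-sorted list, without testing the
    objects located behind it."""
    for obj_id in sorted_ids:
        t = hits_object(obj_id)
        if t is not None:
            return obj_id, t
    return None, None

if __name__ == "__main__":
    order = sort_tile_list_by_depth(["B", "A"], {"A": 0.2, "B": 0.7})
    print(order)                                              # ['A', 'B']
    print(first_hit(order, lambda obj_id: 0.25 if obj_id == "A" else None))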
In the above embodiment, the rays of the PSF were divided into nine tiles or groups of rays. As those skilled in the art will appreciate, any number of tiles may be defined. For example, a less complex 3D scene may require fewer tiles to be defined because there may be fewer objects within the 3D scene. On the other hand, a more complex 3D scene may require a greater number of tiles to be defined for each PSF in order to avoid the situation that one or more of the tiles include rays that intersect with all objects within the scene (which would then not give any computational savings for the processing of that ray).
Therefore, the number of tiles within each PSF is preferably defined in dependence upon the complexity of the 3D scene to be rendered, so that each tile includes rays which intersect with only a sub-set of all of the objects within the 3D scene.
In the above embodiment, the rendering pre-processor and the image renderer were formed in two separate computer systems. As those skilled in the art will appreciate, both of these systems may be run on a single computer system.
In the second embodiment described above, when an object was modified within the scene, the tiles containing the original object were first identified by performing the same rotation and projection described with reference to Figure 6. In an alternative embodiment, the image renderer may simply search all of the lists of intersection data and remove the object identifier for the object that is to be modified from the lists.
In the second embodiment described above, objects within the scene could be moved or deformed and the image renderer generated a new 2D image for the modified scene. In an alternative embodiment, the image renderer may allow the insertion or the deletion of objects into or from the scene. In this case, if an object is to be inserted, then there is no need to remove the object identifier from the intersection data as it will not exist. The image renderer only has to add the object identifier for the new object into the intersection data. Similarly, where an object is to be deleted, its object identifier is simply removed from the intersection data.
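The bookkeeping for insertion and deletion might therefore be as simple as the sketch below, where intersecting_tiles stands for the (hypothetical) rotation-and-projection step that identifies the volumes an object intersects, and the intersection data is assumed to be a table keyed by (PSF index, tile index).

```python
def insert_object(obj_id, obj, scene, psfs, intersection_data):
    """An inserted object has no existing entries, so its identifier is simply
    added to the list of every tile whose volume it intersects."""
    scene.objects[obj_id] = obj
    for psf_index, tile in intersecting_tiles(obj, psfs):   # hypothetical rotation + projection
        intersection_data.setdefault((psf_index, tile), []).append(obj_id)

def delete_object(obj_id, scene, intersection_data):
    """A deleted object's identifier is simply removed wherever it appears."""
    del scene.objects[obj_id]
    for object_ids in intersection_data.values():
        if obj_id in object_ids:
            object_ids.remove(obj_id)
```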
As those skilled in the art will appreciate, in addition to allowing objects to move within the scene, the image renderer can also allow the reflective and refractive properties of the object to be modified in order to, for example, change its colour or transparency within the scene. In this case, the intersection data may not have to change, but only the object definition data for the object. The resulting change in colour or transparency of the object would then be observed in the re-rendered image of the 3D scene.
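A change of this kind touches only the object definition data, as the brief sketch below indicates; the attribute names are illustrative rather than taken from the embodiment.

```python
def recolour_object(obj_id, scene, new_colour, new_transparency=None):
    """Only the object definition data changes; the intersection data is reused
    as-is when the scene is re-rendered."""
    obj = scene.objects[obj_id]
    obj.colour = new_colour                  # illustrative attribute names
    if new_transparency is not None:
        obj.transparency = new_transparency
    # No update to the intersection data is required before re-rendering.
```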
In the above embodiments, the intersection data is stored as a table having an entry for each PSF, and in which each PSF entry includes an entry for each tile in the PSF, which tile entry includes the list of objects for the tile. As those skilled in the art will appreciate, the intersection data may be stored in a variety of different data structures, for example, as a hash table or as a linked list etc.
In the above embodiments, the object definition data comprises a list of vertex positions defining the vertices of polygons and includes surface properties defining the optical characteristics of the polygons.
As those skilled in the art will appreciate, the object definition data may instead define the three dimensional shape of the objects using appropriate geometric equations.
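For concreteness, one possible in-memory layout for the intersection data and the object definition data is sketched below; the field names and the choice of a hash table keyed by (PSF index, tile index) are illustrative assumptions, not the data layout of the embodiments.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceProperties:
    colour: tuple = (1.0, 1.0, 1.0)
    reflectivity: float = 0.0
    transparency: float = 0.0

@dataclass
class PolygonObject:
    vertices: list = field(default_factory=list)      # list of (x, y, z) vertex positions
    properties: SurfaceProperties = field(default_factory=SurfaceProperties)

# Intersection data as a hash table keyed by (PSF index, tile index), each entry
# holding the list of identifiers of the objects that intersect the corresponding volume.
intersection_data: dict[tuple[int, int], list[int]] = {}
```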
In the above embodiment, the image renderer generated a 2D image of the 3D scene from a user defined viewpoint. As those skilled in the art will appreciate, it is not essential for the user to define the viewpoint. Instead, the viewpoint may be defined automatically by the computer system to show, for example, a predetermined walk through of the 3D scene.
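Such a predetermined walk-through could be produced by generating the sequence of viewpoints programmatically, for example along a circular path around the scene, as in the purely illustrative sketch below.

```python
import math

def circular_walkthrough(centre, radius, height, frames):
    """Yield a sequence of (position, look_at) viewpoints around the scene,
    one per frame, without any user interaction."""
    cx, cy, cz = centre
    for i in range(frames):
        angle = 2.0 * math.pi * i / frames
        position = (cx + radius * math.cos(angle), cy + height, cz + radius * math.sin(angle))
        yield position, centre
```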

Claims (81)

CLAIMS:
  1. 1. A computer graphics system for processing data to generate a 2D
    image of a 3D scene from a defined viewpoint, the system comprising: a rendering pre-processor having: first receiving means for receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; second receiving means for receiving
    parallel subfield data defining a plurality of
    subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; first processing means for processing said
    3D scene data and said parallel subfield data to
    determine intersections between said volumes and each object within the 3D scene; and means for generating intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; and an image renderer having: means for defining pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; and means for determining each pixel value of the 2D image to be generated using the pathway data, the intersection data and the 3D scene data, the determining means comprising: second processing means for processing the pathway data representing a current pathway associated with a current pixel, to identify the direction in which the current pathway extends; means for identifying which of said subfields has a similar direction to the direction of the current pathway; means for selecting the intersection data for a volume of the identified subfield through which, or adjacent which, the current pathway extends; third processing means for processing the pathway data representing the current pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current pathway and is closest to said viewpoint; and means for determining the pixel value for the current pixel in dependence upon the 3D scene data for the object identified by said third processing means.
  2. 2. A system according to claim 1, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in a rectangular array extending perpendicular to the base plane of the
    subfield.
  3. 3. A system according to claim 1 or 2, wherein said second receiving means is operable to receive parallel subfield data that includes an index for each subfield which is indicative of the direction of the subfield and wherein said identifying means is operable to
     identify which of said subfields has a similar
    direction to the direction identified by the first processing means using the index of the parallel
    subfield data.
  4. 4. A system according to claim 2 or 3, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields such that
    the base plane of each subfield is divided into a
    plurality of tiles, each associated with a respective volume of the subfield, and wherein said intersection data generating means is operable to generate intersection data associated with each of said tiles.
  5. 5. A system according to any of claims 1 to 3, wherein said second receiving means is operable to
    receive parallel subfield data which defines said
    subfields such that each volume of a subfield
    represents a predetermined pathway which extends in
    the direction of the subfield.
  6. 6. A system according to any preceding claim, wherein said means for defining said pathway data is operable to define a vector associated with each pathway, which vector defines the direction of the pathway.
  7. 7. A system according to claim 6, wherein said identifying means is operable to identify which of
     said subfields has a similar direction to the
    direction of the current pathway using the vector associated with the current pathway.
  8. 8. A system according to any preceding claim, wherein said identifying means is operable to identify which
    of said subfields has a direction nearest to the
    direction of the current pathway identified by said second processing means.
  9. 9. A system according to any preceding claim, wherein the intersection data associated with each volume includes data for each of the intersected objects and wherein said intersection data generating means is operable to sort the data for each intersected object according to the distance of the objects from a base
    plane of the associated subfield.
  10. 10. A system according to claim 9, wherein said third processing means is operable to process the sorted intersection data to identify the first object which is intersected by the pathway.
  11. 11. A system according to any preceding claim, wherein said pathway data defining means is operable to define primary pathway data representing a plurality of primary pathways which extend from said viewpoint through the 3D scene; wherein said pixel value determining means further comprises means for defining secondary pathway data representing at least one secondary pathway which extends through the 3D scene from an intersection point between an associated primary pathway and the object identified by said third processing means for that associated primary pathway, which at least one secondary pathway is associated with the same pixel of the 2D image to be generated as the associated primary pathway; and wherein said pixel value determining means is operable to determine each pixel value of the 2D image to be generated using the primary and secondary pathway data, the intersection data and the 3D scene data.
  12. 12. A system according to claim 11, wherein said pixel value determining means is operable to determine each pixel value of the 2D image to be generated such that, for each of the at least one secondary pathways: said second processing means is operable to process the secondary pathway data representing the current secondary pathway associated with the current pixel, to identify the direction of the current secondary pathway; said identifying means is operable to identify which of said subfields has a similar direction to the direction of the current secondary pathway; said selecting means is operable to select the intersection data for a volume of the identified
    subfield through which, or adjacent which, the
    secondary pathway extends; said third processing means is operable to process the secondary pathway data representing the current secondary pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current secondary pathway and is closest to said intersection point; and said pixel value determining means is operable to determine the pixel value for the current pixel in dependence upon the 3D scene data for the object identified by said third processing means when processing said secondary pathway data.
  13. 13. A system according to any preceding claim, wherein said image renderer further comprises means for receiving a new viewpoint, and wherein said image renderer is operable, in response to said received new viewpoint, to define new pathway data representing a plurality of new pathways which extend from the new viewpoint through the 3D scene and to generate a new 2D image of the 3D scene from the new viewpoint using the new pathway data, the intersection data and the 3D scene data.
  14. 14. A system according to any preceding claim, wherein said image renderer further comprises: means for receiving object modification data identifying an object within the 3D scene to be modified and the modification to be made; and means for updating the intersection data in response to said received object modification data; and wherein said image renderer is operable to generate a new 2D image of the 3D scene using the pathway data, the updated intersection data and the 3D scene data.
  15. 15. A system according to claim 14, wherein said image renderer further comprises means for receiving a new viewpoint and wherein said image renderer is operable to generate a new 2D image of the 3D scene in dependence upon the received new viewpoint and the received object modification data.
  16. 16. A system according to claim 13 or 15, wherein said image renderer is operable to receive a sequence of new viewpoints and to generate a corresponding sequence of 2D images of the 3D scene.
  17. 17. A system according to claim 16, wherein said image renderer is operable to generate the sequence of 2D images of the 3D scene in substantially real time.
  18. 18. A system according to any one of claims 14 to 17, wherein said intersection data updating means comprises: means for removing data for the object to be modified from the intersection data; means for modifying the 3D scene data for the identified object in accordance with said object modification data; fourth processing means for processing the modified 3D scene data and said parallel subfield data to determine intersections between said volumes and the modified object within the 3D scene; and means for modifying the intersection data associated with each of said volumes which intersect with the modified object, to include data identifying the modified object.
  19. 19. A system according to any preceding claim, wherein said image renderer further comprises 2D image output means for outputting the generated 2D image data.
  20. 20. A rendering pre-processor comprising: first receiving means for receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; second receiving means for receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; processing means for processing said 3D scene
    data and said parallel subfield data to determine
    intersections between said volumes and each object within the 3D scene; means for generating intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; and means for outputting said intersection data.
  21. 21. A system according to claim 20, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in a rectangular array extending perpendicular to the base plane of the
    subfield.
  22. 22. A system according to claim 21, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields such that
    the base plane of each subfield is divided into a
    plurality of tiles, each associated with a respective volume of the subfield, and wherein said intersection data generating means is operable to generate intersection data associated with each of said tiles.
  23. 23. A system according to claims 20 or 21, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields
    such that each volume of a subfield represents a
    predetermined pathway which extends in the direction of the subfield and wherein said selecting means is operable to select the intersection data for the predetermined pathway of the identified subfield which is immediately adjacent the current pathway.
  24. 24. A rendering pre-processor according any one of claims 20 to 23, wherein the intersection data associated with each volume includes data for each of the intersected objects and wherein said intersection data generating means is operable to sort the data for each intersected object according to the distance of the objects from a base plane of the associated
    subfield.
  25. 25. A rendering pre-processor according to any one of claims 20 to 24, wherein said output means is operable to output the 3D scene data received by the first receiving means.
  26. 26. A rendering pre-processor according to any one of claims 20 to 25, wherein said output means is operable to output the parallel subfield data received by the second receiving means.
  27. 27. A rendering pre-processor according to any one of claims 20 to 26, wherein said output means is operable to output data on a recording medium.
  28. 28. A rendering pre-processor according to any one of claims 20 to 26, wherein said output means is operable to output data on a carrier signal.
  29. 29. An image renderer for generating a 2D image of a 3D scene from a defined viewpoint, the image renderer comprising: first receiving means for receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; second receiving means for receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; third receiving means for receiving intersection
    data associated with each volume of the subfields,
    which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; means for defining pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; and means for determining each pixel value of the 2D image to be generated using the pathway data, the intersection data and the 3D scene data, the determining means comprising: first processing means for processing the pathway data representing a current pathway associated with a current pixel, to identify the direction in which the current pathway extends; means for identifying which of said subfields has a similar direction to the direction of the current pathway; means for selecting the intersection data for a volume of the identified subfield through which or adjacent which the current pathway extends; second processing means for processing the pathway data representing the current pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current pathway and is closest to said viewpoint; and means for determining the pixel value for the current pixel in dependence upon the 3D scene data for the object identified by said second processing means.
  30. 30. An image renderer according to claim 29, wherein said second receiving means is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in a rectangular array extending perpendicular to the base plane of the
    subfield.
  31. 31. An image renderer according to claim 29 or 30, wherein said second receiving means is operable to receive parallel subfield data that includes an index for each subfield which is indicative of the direction of the subfield and wherein said identifying means is
    operable to identify which of said subfields has a
    similar direction to the direction identified by the first processing means using the index of the parallel
    subfield data.
  32. 32. An image renderer according to claim 30 or 31, wherein said second receiving means is operable to
    receive parallel subfield data which defines said
    subfields such that the base plane of each subfield is divided into a plurality of tiles, each associated with a respective volume of the subfield, and wherein said third receiving means is operable to receive intersection data which is associated with each of said tiles.
  33. 33. A system according to any of claims 29 to 31, wherein said second receiving means is operable to
    receive parallel subfield data which defines said
    subfields such that each volume of a subfield
    represents a predetermined pathway which extends in
    the direction of the subfield and wherein said
    selecting means is operable to select the intersection data for the predetermined pathway of the identified
    subfield which is immediately adjacent the current
    pathway.
  34. 34. An image renderer according to any of claims 29 to 33, wherein said means for defining said pathway data is operable to define a vector associated with each pathway, which vector defines the direction of the pathway.
  35. 35. An image renderer according to claim 34, wherein said identifying means is operable to identify which
    of said subfields has a similar direction to the
    direction of the current pathway using the vector associated with the current pathway.
  36. 36. An image renderer according to any one of claims 29 to 35, wherein said identifying means is operable to identify which of said subfields has a direction nearest to the direction of the current pathway identified by said second processing means.
  37. 37. An image renderer according to any of claims 29 to 36, wherein said third receiving means is operable to receive intersection data that includes data for each of the intersected objects which is sorted according to the distance of the objects from a base
    plane of the associated subfield.
  38. 38. An image renderer according to claim 37, wherein said second processing means is operable to process the sorted intersection data to identify the first object which is intersected by the pathway.
  39. 39. An image renderer according to any of claims 29 to 38, wherein said pathway data defining means is operable to define primary pathway data representing a plurality of primary pathways which extend from said viewpoint through the 3D scene; wherein said pixel value determining means further comprises means for defining secondary pathway data representing at least one secondary pathway which extends through the 3D scene from an intersection point between an associated primary pathway and the object identified by said second processing means for that associated primary pathway, which at least one secondary pathway is associated with the same pixel of the 2D image to be generated as the associated primary pathway; and wherein said pixel value determining means is operable to determine each pixel value of the 2D image to be generated using the primary and secondary pathway data, the intersection data and the 3D scene data.
  40. 40. An image renderer according to claim 39, wherein said pixel value determining means is operable to determine each pixel value of the 2D image to be generated such that, for each of the at least one secondary pathways: said first processing means is operable to process the secondary pathway data representing the current secondary pathway associated with the current pixel, to identify the direction of the current secondary pathway; said identifying means is operable to identify which of said subfields has a similar direction to the direction of the current secondary pathway; said selecting means is operable to select the intersection data for a volume of the identified subfield through which, or adjacent which the current pathway extends; said second processing means is operable to process the secondary pathway data representing the current secondary pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current secondary pathway and is closest to said intersection point; and said pixel value determining means is operable to determine the pixel value for the current pixel in dependence upon the 3D scene data for the object identified by said second processing means when processing said secondary pathway data.
  41. 41. An image renderer according to any of claims 29 to 40, further comprising means for receiving a new viewpoint, and wherein said pathway data defining means is operable, in response to said received new viewpoint, to define new pathway data representing a plurality of new pathways which extend from the new viewpoint through the 3D scene and wherein said determining means is operable to determine each pixel value of a new 2D image of the 3D scene from the new viewpoint using the new pathway data, the intersection data and the 3D scene data.
  42. 42. An image renderer according to any one of claims 29 to 40, further comprising: means for receiving object modification data identifying an object within the 3D scene to be modified and the modification to be made; and means for updating the intersection data in response to said received object modification data; and wherein said image renderer is operable to generate a new 2D image of the 3D scene using the pathway data, the updated intersection data and the 3D scene data.
  43. 43. An image renderer according to claim 42, further comprising means for receiving a new viewpoint and wherein said image renderer is operable to generate a new 2D image of the 3D scene in dependence upon the received new viewpoint and the received object modification data.
  44. 44. An image renderer according to claim 41 or 43, wherein said image renderer is operable to receive a sequence of new viewpoints and to generate a corresponding sequence of 2D images of the 3D scene.
  45. 45. An image renderer according to claim 44, wherein said image renderer is operable to generate the sequence of 2D images of the 3D scene in substantially real time.
  46. 46. An image renderer according to any one of claims 42 to 45, wherein said intersection data updating means comprises: means for removing data for the object to be modified from the intersection data; means for modifying the 3D scene data for the identified object in accordance with said object modification data; third processing means for processing the modified 3D scene data and said parallel subfield data to determine intersections between said volumes and the modified object within the 3D scene; and means for modifying the intersection data associated with each of said volumes which intersect with the modified object, to include data identifying the modified object.
  47. 47. An image renderer according to any of claims 29 to 46, further comprising 2D image output means for outputting the generated 2D image data.
  48. 48. A method of pre-processing a 3D computer graphics scene to generate intersection data for the scene, the method comprising: a first receiving step of receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiving step of receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; processing said 3D scene data and said parallel subfield data to determine intersections between said volumes and each object within the 3D scene; generating intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; and outputting said intersection data.
  49. 49. A method according to claim 48, wherein said second receiving step receives parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in a rectangular array extending
    perpendicular to the base plane of the subfield.
  50. 50. A method according to claim 49, wherein said second receiving step receives parallel subfield data which defines said subfields such that the base plane of each subfield is divided into a plurality of tiles, each associated with a respective volume of the
    subfield, and wherein said intersection data
    generating step generates intersection data associated with each of said tiles.
  51. 51. A method according to claims 48 or 49, wherein said second receiving step receives parallel subfield
    data which defines said subfields such that each
    volume of a subfield represents a predetermined
    pathway which extends in the direction of the subfield and wherein said selecting step selects the intersection data for the predetermined pathway of the identified subfield which is immediately adjacent the current pathway.
  52. 52. A method according to any one of claims 48 to 51, wherein the intersection data associated with each volume includes data for each of the intersected objects and wherein said method further comprises the step of sorting the data for each intersected object according to the distance of the objects from a base
    plane of the associated subfield.
  53. 53. A method according to any one of claims 48 to 52, wherein said output step is arranged to output the 3D scene data received in said first receiving step.
  54. 54. A method according to any one of claims 48 to 53, wherein said output step is arranged to output the
    parallel subfield data received in said second
    receiving step.
  55. 55. A method according to any one of claims 48 to 54, wherein said output step is arranged to output data on a recording medium.
  56. 56. A method according to any one of claims 48 to 54, wherein said output step is arranged to output data on a carrier signal.
  57. 57. A method of generating a 2D image of a 3D scene from a defined viewpoint, the method comprising: a first receiving step of receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiving step of receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; a third receiving step of receiving intersection
    data associated with each volume of the subfields,
     which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; defining pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; and determining each pixel value of the 2D image to be generated using the pathway data, the intersection data and the 3D scene data, the determining step comprising: a first processing step of processing the pathway data representing the pathway associated with the current pixel, to identify the direction in which the current pathway extends;
    identifying which of said subfields has a
    similar direction to the direction of the current pathway; selecting the intersection data for a volume of the identified subfield through which or adjacent which the current pathway extends; a second processing step of processing the pathway data representing the current pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current pathway and is closest to said viewpoint; and determining the pixel value for the current pixel in dependence upon the 3D scene data for the object identified in said second processing step.
  58. 58. A method according to claim 57, wherein said second receiving step receives parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in a rectangular array extending
    perpendicular to the base plane of the subfield.
  59. 59. A method according to claim 57 or 58, wherein said second receiving step receives parallel subfield data that includes an index for each subfield which is
    indicative of the direction of the subfield and
     wherein said identifying step identifies which of said subfields has a similar direction to the direction
    identified in the first processing step using the
    index of the parallel subfield data.
  60. 60. A method according to any one of claims 58 or 59, wherein said second receiving step receives parallel subfield data which defines said subfields such that
    the base plane of each subfield is divided into a
    plurality of tiles, each associated with a respective
    volume of the subfield, and wherein said third
    receiving step receives intersection data which is associated with each of said tiles.
  61. 61. A method according to any of claims 57 to 60, wherein said second receiving step receives parallel subfield data which defines said subfields such that each volume of a subfield represents a predetermined pathway which extends in the direction of the subfield and wherein said selecting step selects the intersection data for the predetermined pathway of the identified subfield which is immediately adjacent the current pathway.
  62. 62. A method according to any of claims 57 to 61, wherein said means for defining said pathway data is operable to define a vector associated with each pathway, which vector defines the direction of the pathway.
  63. 63. A method according to claim 62, wherein said identifying step identifies which of said subfields has a similar direction to the direction of the current pathway using the vector associated with the current pathway.
  64. 64. A method according to any of claims 57 to 63, wherein said identifying step identifies which of said subfields has a direction nearest to the direction of the current pathway.
  65. 65. A method according to any of claims 57 to 64, wherein said third receiving step receives intersection data that includes data for each of the intersected objects which is sorted according to the distance of the objects from a base plane of the
    associated subfield.
  66. 66. A method according to claim 65, wherein said second processing step processes the sorted intersection data for each intersected object to identify the first object which is intersected by the pathway.
  67. 67. A method according to any of claims 57 to 66, wherein said step of defining said pathway data defines primary pathway data representing a plurality of primary pathways which extend from said viewpoint through the 3D scene; wherein said pixel value determining step further comprises a step of defining secondary pathway data representing at least one secondary pathway which extends through the 3D scene from an intersection point between an associated primary pathway and the object identified in said second processing step for that associated primary pathway, which at least one secondary pathway is associated with the same pixel of the 2D image to be generated as the associated primary pathway; and wherein said pixel value determining step determines each pixel value of the 2D image to be generated using the primary and secondary pathway data, the intersection data and the 3D scene data.
  68. 68. A method according to claim 67, wherein said pixel value determining step determines each pixel value of the 2D image to be generated such that, for each of the at least one secondary pathways: said first processing step processes the secondary pathway data representing the current secondary pathway associated with the current pixel, to identify the direction of the current secondary pathway; said identifying step identifies which of said subfields has a similar direction to the direction of the current secondary pathway; said selecting step selects the intersection data for a volume of the identified subfield through which, or adjacent which, the current pathway extends; said second processing step processes the secondary pathway data representing the current secondary pathway and the 3D scene data for the objects identified by the selected intersection data, to identify which of those objects is intersected by the current secondary pathway and is closest to said intersection point; and said pixel value determining step determines the pixel value for the current pixel in dependence upon the 3D scene data for the object identified in said second processing step when said secondary pathway data is processed.
  69. 69. A method according to any of claims 57 to 68, further comprising a step of receiving a new viewpoint, and wherein said step of defining said pathway data defines, in response to said received new viewpoint, new pathway data representing a plurality of new pathways which extend from the new viewpoint through the 3D scene and wherein said determining step determines each pixel value of a new 2D image of the 3D scene from the new viewpoint using the new pathway data, the intersection data and the 3D scene data.
  70. 70. A method according to any one of claims 57 to 68, further comprising the steps of: receiving object modification data identifying an object within the 3D scene to be modified and the modification to be made; and updating the intersection data in response to said received object modification data; and wherein a new 2D image of the 3D scene is generated using the pathway data, the updated intersection data and the 3D scene data.
  71. 71. A method according to claim 70, further comprising a step of receiving a new viewpoint and wherein a new 2D image of the 3D scene is generated in dependence upon the received new viewpoint and the received object modification data.
  72. 72. A method according to claim 69 or 71, wherein a sequence of new viewpoints is received and a corresponding sequence of 2D images of the 3D scene is generated.
  73. 73. A method according to claim 72, wherein said sequence of 2D images of the 3D scene is generated in substantially real time.
  74. 74. A method according to any one of claims 70 to 73, wherein said step of updating said intersection data comprises the steps of: removing data for the object to be modified from the intersection data; modifying the 3D scene data for the identified object in accordance with said object modification data; a third processing step of processing the modified 3D scene data and said parallel subfield data to determine intersections between said volumes and the modified object within the 3D scene; and modifying the intersection data associated with each of said volumes which intersect with the modified object, to include data identifying the modified object.
  75. 75. A method according to any of claims 57 to 74, further comprising a step of outputting the generated 2D image data.
  76. 76. A method according to claim 75, wherein said generated 2D image data is output on a recording medium.
  77. 77. A method according to claim 75, wherein said generated 2D image data is output on a carrier signal.
  78. 78. A virtual reality system for generating a sequence of 2D images of a 3D scene from a plurality of user defined viewpoints, the system comprising: means for receiving user input defining a current viewpoint; an apparatus according to any of claims 29 to 47 for generating a 2D image of the 3D scene from the current viewpoint; and means for displaying the generated 2D image to the user.
  79. 79. A 3D computer games system comprising: means for defining a plurality of objects within a 3D scene, at least one of which has an associated current viewpoint; means for receiving player input defining a modification to an object within the 3D scene; an apparatus according to any of claims 42 to 46 for modifying the 3D scene in response to said received player input and for generating a 2D image of the modified 3D scene from the current viewpoint; and means for displaying the generated 2D image to the player.
  80. 80. A computer readable medium storing processor executable instructions for programming a computer apparatus to become operable to perform the method of at least one of claims 48 to 77.
  81. 81. A signal carrying processor executable instructions for programming a computer apparatus to become operable to perform the method of at least one of claims 48 to 77.
GB0401957A 2004-01-29 2004-01-29 3d computer graphics processing system Withdrawn GB2410663A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0401957A GB2410663A (en) 2004-01-29 2004-01-29 3d computer graphics processing system
PCT/GB2005/000306 WO2005073924A1 (en) 2004-01-29 2005-01-28 Apparatus and method for determining intersections

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0401957A GB2410663A (en) 2004-01-29 2004-01-29 3d computer graphics processing system

Publications (2)

Publication Number Publication Date
GB0401957D0 GB0401957D0 (en) 2004-03-03
GB2410663A true GB2410663A (en) 2005-08-03

Family

ID=31971670

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0401957A Withdrawn GB2410663A (en) 2004-01-29 2004-01-29 3d computer graphics processing system

Country Status (2)

Country Link
GB (1) GB2410663A (en)
WO (1) WO2005073924A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2596566A (en) * 2020-07-01 2022-01-05 Sony Interactive Entertainment Inc Image rendering using ray-tracing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0854441A2 (en) * 1997-01-09 1998-07-22 The Boeing Company Method and apparatus for rapidly rendering computer generated images of complex structures
US6326964B1 (en) * 1995-08-04 2001-12-04 Microsoft Corporation Method for sorting 3D object geometry among image chunks for rendering in a layered graphics rendering system
US6466207B1 (en) * 1998-03-18 2002-10-15 Microsoft Corporation Real-time image rendering with layered depth images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729672A (en) * 1993-07-30 1998-03-17 Videologic Limited Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6326964B1 (en) * 1995-08-04 2001-12-04 Microsoft Corporation Method for sorting 3D object geometry among image chunks for rendering in a layered graphics rendering system
EP0854441A2 (en) * 1997-01-09 1998-07-22 The Boeing Company Method and apparatus for rapidly rendering computer generated images of complex structures
US6466207B1 (en) * 1998-03-18 2002-10-15 Microsoft Corporation Real-time image rendering with layered depth images

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2596566A (en) * 2020-07-01 2022-01-05 Sony Interactive Entertainment Inc Image rendering using ray-tracing
GB2596566B (en) * 2020-07-01 2022-11-09 Sony Interactive Entertainment Inc Image rendering using ray-tracing
US11521344B2 (en) 2020-07-01 2022-12-06 Sony Interactive Entertainment Inc. Image rendering using ray-tracing

Also Published As

Publication number Publication date
WO2005073924A1 (en) 2005-08-11
GB0401957D0 (en) 2004-03-03

Similar Documents

Publication Publication Date Title
Cohen‐Or et al. Conservative visibility and strong occlusion for viewspace partitioning of densely occluded scenes
Zhang et al. Visibility culling using hierarchical occlusion maps
US7034825B2 (en) Computerized image system
US8411088B2 (en) Accelerated ray tracing
El-Sana et al. Integrating occlusion culling with view-dependent rendering
Ernst et al. Early split clipping for bounding volume hierarchies
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
JPH10208077A (en) Method for rendering graphic image on display, image rendering system and method for generating graphic image on display
WO2002045025A9 (en) Multiple processor visibility search system and method
US20030160798A1 (en) Bucket-sorting graphical rendering apparatus and method
Dietrich et al. Massive-model rendering techniques: a tutorial
Wyman et al. Frustum-traced raster shadows: Revisiting irregular z-buffers
US7439970B1 (en) Computer graphics
Jeschke et al. Layered environment-map impostors for arbitrary scenes
Greene Hierarchical rendering of complex environments
Erikson et al. Simplification culling of static and dynamic scene graphs
US5926183A (en) Efficient rendering utilizing user defined rooms and windows
Van Kooten et al. Point-based visualization of metaballs on a gpu
Aila Surrender umbra: A visibility determination framework for dynamic environments
GB2410663A (en) 3d computer graphics processing system
Gomez et al. Time and space coherent occlusion culling for tileable extended 3D worlds
Plemenos et al. Intelligent visibility-based 3D scene processing techniques for computer games
Callahan The k-buffer and its applications to volume rendering
Gummerus Conservative From-Point Visibility.
Aliaga Automatically reducing and bounding geometric complexity by using images

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)