WO1993001561A1 - A beam tracing method for curved surfaces - Google Patents

A beam tracing method for curved surfaces Download PDF

Info

Publication number
WO1993001561A1
Authority
WO
WIPO (PCT)
Prior art keywords
patch
tracing
ray
beams
category
Prior art date
Application number
PCT/AU1992/000344
Other languages
French (fr)
Inventor
Hong Lip Lim
Original Assignee
Hong Lip Lim
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hong Lip Lim filed Critical Hong Lip Lim
Publication of WO1993001561A1 publication Critical patent/WO1993001561A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/40 Hidden part removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing

Definitions

  • the present invention relates to three-dimensional computer graphics and, in particular, discloses a method by which beams are used to trace curved surfaces.
  • Ray tracing techniques were first applied in optics, where they mainly employ numerical analysis to compute the intersections between rays and curved surfaces. More recently, the ray tracing approach has been adopted by computer vision and computer graphics for the recognition of 3D objects or the display of these objects. The biggest challenge in ray tracing curved surfaces is to compute the ray and surface intersections. According to their strategies, the current ray-tracing techniques in computer graphics can be classified into two groups. The first group of techniques apply numerical algorithms to evaluate the equations representing the rays and the surfaces. The first such technique uses Newton's method to compute the ray and surface intersections. Later techniques apply more complicated numerical techniques to improve the speed of convergence. However, the iterations in these techniques have to terminate when the result reaches a certain precision.
  • the second group of ray-tracing techniques first subdivide surfaces into polygonal patches.
  • the ray and surface intersections are indirectly obtained by computing the intersections between the rays and these patches.
  • the intersection tests between rays and polygons are simpler.
  • the disclosed method traces beams, each of which being a set of rays.
  • the present method can achieve the advantages of the surface subdivision algorithm in ray tracing.
  • a method of determining a volume that encloses a surface forming part of a computer generated image comprising the steps of: (i) testing said surface to determine if a predetermined characteristic is met, and if not, defining said surface as a non-leaf surface and subsequently dividing said non-leaf surface into a plurality of sub-surfaces;
  • (ii) repeating step (i) for each said sub-surface until said characteristic is met, at which time the corresponding sub-surface(s) is/are defined as a leaf surface;
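The two claimed steps describe a simple recursion. A minimal sketch in Python, assuming hypothetical `small_enough` and `split` callbacks standing in for the predetermined characteristic and the subdivision rule:

```python
def collect_leaf_surfaces(surface, small_enough, split):
    """Recursively divide `surface` until the size characteristic is met.

    `small_enough(surface)` tests the predetermined characteristic;
    `split(surface)` divides a non-leaf surface into sub-surfaces.
    Returns the list of leaf surfaces.
    """
    if small_enough(surface):
        return [surface]            # characteristic met: this is a leaf surface
    leaves = []
    for sub in split(surface):      # step (i): non-leaf surface is subdivided
        # step (ii): repeat step (i) for each sub-surface
        leaves.extend(collect_leaf_surfaces(sub, small_enough, split))
    return leaves
```

For example, with a surface represented as a (width, height) extent and a split rule that halves the longer edge (as the description later suggests for thin surfaces), a 4 x 1 surface yields four 1 x 1 leaf surfaces.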
  • Fig. 1A illustrates the standard ray tracing technique
  • Fig. 1B illustrates the standard beam tracing technique
  • Fig. 1C illustrates the beams used in one embodiment
  • Figs. 2A-2D illustrate the expanded bounding volume of a patch
  • Figs. 3A-3D show alternative beams of other embodiments
  • Fig. 4 illustrates preliminary size testing
  • Fig. 5 illustrates an expanded bounding volume test
  • Figs. 6A and 6B show the expanded bounding volume test between a patch and a beam using an object space approach;
  • Fig. 7 illustrates linear interpolation and mapping of points and rays
  • Fig. 8 shows the arrangement of patches on the path of a single beam
  • Appendix 1 is a list of references.

BEST AND OTHER MODES FOR CARRYING OUT THE INVENTION
  • Aliasing has been mitigated by comparing corner rays at each pixel and by recursively subdividing and tracing that pixel when aliasing is likely to occur. It has also been proposed to use cones and beams instead of rays as trace elements.
  • a cone is an extended ray which has a spread angle.
  • the beam tracing technique starts with a pyramidal beam that covers the whole screen and breaks into prisms when it hits surfaces. It has also been proposed to use the tracing of pencils which are defined by an axial ray and several surrounding paraxial rays.
  • One noteworthy observation is that the sizes of cross-sections of the cones can be used to determine the correct level of detail during cone tracing. Such has been proposed with application to the strip tree displaying technique.
  • the details of objects at all levels have to be precomputed before the tracing.
  • the cone, the beam and the pencil tracing techniques suffer from the difficulty of computing the intersections of these entities and complicated curved surfaces. Therefore, the current beam tracing technique restricts the environments to those comprising only flat surfaces. Likewise, the cone tracing technique is only applicable to spheres, planes and polygons.
  • Fig. 1A shows an example of simple ray tracing in which a surface SU is observed from a viewpoint VP and its image is shown on an image plane IP.
  • the primary ray PR passes through a pixel PX of the image plane IP and causes a reflected ray RL and a refracted ray RF.
  • In Fig. 1B simple beam tracing is shown, in which the image plane IP is determined by the edges ED of the primary beam PB observing the surface SU.
  • the primary beam PB causes a reflected beam RC and a refracted beam RR.
  • the size of the primary beam PB has been reduced to equal one pixel PX at the image plane IP. This is achieved by defining each beam by four corner rays CO and an optional centre ray CE. In this manner, greater accuracy can be achieved where the surface SU is curved.
  • a hierarchy of patches is created before the tracing. This hierarchy can be used for fast elimination of patches not likely to be intersected by the beams. As seen in Fig. 2A, a ray RA might not intersect a patch PA or its associated bounding box BB.
  • the hierarchy is based on object precision, it allows thin surfaces to be subdivided more frequently along their longer edges. This improves efficiency as patches created will have more even shapes.
  • the subdivision can provide a more accurate estimation of the expanded bounding volumes of patches. Such volumes enable accurate estimation of the image size of the actual surfaces of these patches.
  • Fig. 2B the same ray RA intersects two sub-patches SP formed from the patch PA.
  • An expanded bounding volume box EB is used to enclose all the sub-patches SP.
  • the subdivision is usually carried out by triangulation.
  • multi-sided patches can be used, although the approximation of the actual surfaces and the tracing of such patches are usually more difficult.
  • the size criterion for termination of subdivision does not have to be very stringent. It is determined by the amount of memory available and the properties of the surfaces to be displayed. These properties may be the importance, the optical properties, the curvature, the bumpiness, etc.
  • a record is created to store the information of each patch.
  • the record contains pointers which point to the locations of the records of its child patches.
  • Such links form a tree data structure which organizes the patch records as a hierarchy. A winged-edge data structure is used to store the information about how each patch is interconnected with other patches at the same level of detail.
  • since vertices are often shared by patches, separate vertex records may also be created to store the information about them. Such records can be accessed from the associated patch records using pointers.
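The patch hierarchy and the shared vertex records could be organized as below. This record layout is an illustrative sketch, not the patent's exact structure; in particular, `neighbours` stands in for the winged-edge connectivity the text mentions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vertex:
    position: tuple  # (x, y, z); one record may be shared by several patches
    normal: tuple    # surface normal stored once per vertex

@dataclass
class Patch:
    vertices: List[Vertex]                                   # pointers to shared vertex records
    children: List["Patch"] = field(default_factory=list)    # child patches in the hierarchy
    neighbours: List["Patch"] = field(default_factory=list)  # same-level connectivity (winged-edge style)

    @property
    def is_leaf(self) -> bool:
        # a patch with no child pointers is a leaf of the hierarchy
        return not self.children
```

Because child patches hold pointers (references) to the same `Vertex` objects, updating a shared vertex is visible to every patch that uses it.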
  • a patch is often treated as flat. However, the actual surface of the patch may not be flat. The approximated flat region of the patch is referred to as its principal plane.
  • each patch is associated with a 3D coordinate system called the patch coordinate system CS.
  • An axis of this system is perpendicular to the principal plane of the patch PA and the other two axes are on the plane and perpendicular to each other.
  • the patch PA can be divided into a series of overlapping leaf patches LP.
  • Each patch is also associated with an extended bounding volume EV.
  • the volume is defined to be the volume which, within an acceptable error margin, contains the actual surface of the patch. To minimize its size, the edges of the volume are usually aligned with the patch coordinate system CS of the patch PA. It should contain that patch PA and its actual surfaces AP.
  • a non-leaf patch has an extended bounding volume EV-NL and a leaf patch has an extended bounding volume EV-LP.
  • Fig. 2C specifically illustrates the computation of the extended bounding volume of a non-leaf patch EV-NL, in a cross-sectional view.
  • Fig. 2D the enlargement of the extent of an edge when a patch is a leaf of the patch hierarchy (cross-sectional view).
  • the numeral 10 represents a possible protrusion of the patch surface and the numeral 12 indicates the enlarged height of the edge AB of the patch PA.
  • the expanded bounding volume can also be computed by finding the volume that contains the estimated expanded bounding volumes of its edges, using the following pseudo-code:

        for each edge of the patch do
            compute the bounding volume of the edge that is aligned with the patch coordinate system;
            for each face of the volume do
                determine the largest dimension of the face;
                multiply this dimension by a predefined factor to obtain a length (the value of this factor has to be on the safe side so that the expanded bounding volume would contain the actual surface of the patch);
                expand the volume in the outward direction of that face by that length;
            done
        done
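The pseudo-code above can be transcribed almost directly. This sketch works in the patch coordinate system and represents every volume as an axis-aligned (min, max) box; the helper names and the box representation are illustrative, not the patent's:

```python
def edge_bounding_box(p0, p1):
    """Axis-aligned box (in patch coordinates) of an edge from p0 to p1."""
    lo = [min(a, b) for a, b in zip(p0, p1)]
    hi = [max(a, b) for a, b in zip(p0, p1)]
    return lo, hi

def expand_edge_box(box, factor):
    """Expand each face outward by factor * (largest dimension of that face)."""
    lo, hi = box
    dims = [hi[i] - lo[i] for i in range(3)]
    for axis in range(3):
        # the face perpendicular to `axis` spans the other two dimensions
        face_dims = [dims[i] for i in range(3) if i != axis]
        length = factor * max(face_dims)
        lo[axis] -= length  # expand the "low" face outward
        hi[axis] += length  # expand the "high" face outward
    return lo, hi

def expanded_bounding_volume(edges, factor):
    """Union of the expanded edge boxes: the patch's expanded bounding volume."""
    boxes = [expand_edge_box(edge_bounding_box(p0, p1), factor) for p0, p1 in edges]
    lo = [min(b[0][i] for b in boxes) for i in range(3)]
    hi = [max(b[1][i] for b in boxes) for i in range(3)]
    return lo, hi
```

As the pseudo-code warns, `factor` must be chosen conservatively so that the resulting volume is guaranteed to contain the actual (possibly undulating) surface of the patch.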
  • the expanded bounding volume may also be obtained by the following technique: for each edge of the patch do finding the bounding volume of the edge that is
  • the expanded bounding volume of each of the leaf patches can be replaced by a volume containing it and aligning with the world coordinate system. This can speed up the computation of the expanded bounding volumes of non-leaf patches because each of them can be approximated by the volume containing the expanded bounding volumes of its leaf patches.
  • a geometric modelling technique can also be applied to combine the patch hierarchies of different surfaces and objects into one single patch hierarchy.
  • a high level tree node representing more than one surface would not be associated with any surface equation but it can still be described by a bounding volume and an expanded bounding volume which contain the respective volumes of its child nodes.
  • the primary beams PB1, PB2 are the first generation beams.
  • Each beam PB1, PB2 originates from the viewpoint VP, passes through its associated pixel (or sub-pixels, or image regions) 20, and enters into the environment to be displayed. Its central ray CE1, CE2 must pass through the centre of the pixel (or sub-pixel, or image region).
  • each primary beam should ideally have a regular (cross-sectional) shape.
  • each beam PB3 is defined by three corner rays CO1, CO2, CO3.
  • Two of the corner rays (CO1, CO2) pass through the bottom corners of the associated pixel and the other ray (CO3) passes through the midpoint of the opposite side of the pixel.
  • the intersection of the beam PB3 with the screen therefore forms an isosceles triangle whose width and height are respectively the width and height of a pixel.
  • each beam PB consists of four corner rays CO4, CO5, CO6, CO7 (CO7 is hidden from view).
  • the corner rays of each beam pass through the corners of the associated pixel.
  • two beams PB4, PB5 may be formed out of the four-corner beam just mentioned, by splitting along two diagonal corner rays.
  • the cross-section of each beam at the image plane is now a right isosceles triangle. It is also possible to have a beam comprised of just two rays if only the testing of one image dimension is required.
  • Fig. 3C shows another alternative, similar to Fig. 3A, but where the shape is smaller than that of a pixel.
  • a beam may not contain a central ray. The tracing of such beams is discussed later. If the beams contain central rays, a patch would be on the path of a beam if it is in front of the beam and its expanded bounding volume is intersected by the infinite line coincident with the central ray. Otherwise, a patch would be on the path of a beam if it is in front of the beam and its expanded bounding volume is intersected by the open-ended volume coincident with the whole beam.
  • the surface subdivision algorithm is used to subdivide surfaces into patches whose image projections are less than the size of a pixel.
  • the closest patch projected on each pixel is determined using a z-buffer.
  • the closest patch for each primary beam is determined.
  • the patches can reflect a secondary beam. If the object beneath the patch is transparent, a secondary beam can also be refracted. This requires intersection computations between the primary beam and the patch. If the actual surfaces of the patches are flat or quadric, the computation of the intersections between the central and corner rays of the beams and these surfaces can be directly evaluated based on the surface equations of the latter. However, if these surfaces are more complicated, the evaluation of the ray and surface intersections can have the same difficulty experienced by the earlier ray tracing techniques. On the other hand, because the directions of the secondary beams depend on the curvature of the emitting surfaces, Catmull's strategy of assuming that all patches are flat could produce errors.
  • the present embodiment overcomes the problem by observing that when the image of a patch is small, a simplification of the actual surface of that patch does not yield noticeable errors.
  • the equation of such a surface can be solved from the positions of the patch vertices and the normal directions at these vertices; such as: d. a flat surface; and e. a flat surface, in which the normals and the reflected or refracted rays on the surface are approximated by linear interpolation according to the origins and directions of the corner rays of the beam. This is seen in Fig.
  • a patch PA has a central point (x, y, z) and an associated normal (Nx, Ny, Nz). Corner normals N1, N2 and N3 are also shown.
  • a1, a2, a3, b1, b2, b3, c1, c2 and c3 can be solved from the corner normals N1, N2 and N3.
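The nine coefficients amount to a linear interpolation of the three corner normals across the triangular patch. One equivalent and numerically convenient formulation uses barycentric weights; this is a sketch of the idea, not necessarily the patent's exact parameterization:

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    px, py = p
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w1, w2, 1.0 - w1 - w2

def interpolate_normal(p, tri, n1, n2, n3):
    """Linearly interpolate corner normals N1, N2, N3 at point p, then renormalize."""
    w1, w2, w3 = barycentric_weights(p, *tri)
    n = [w1 * n1[i] + w2 * n2[i] + w3 * n3[i] for i in range(3)]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]
```

At a corner the interpolation reproduces that corner's normal exactly, and at the centroid the three corner normals contribute equally.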
  • the reflected/ refracted rays of the corner and central rays of the primary beam are computed. These new rays generally form secondary beams.
  • the central ray of a secondary beam can be computed directly from the central ray of the corresponding primary beam. However, to ensure that the central ray is always close to the centre of the beam, it can also be computed by averaging the corner rays of its beam.
  • the secondary beams are not immediately traced after they have been created. Rather, their tracing is delayed until all the secondary beams have been created.
  • the secondary beams are sorted when they have all been generated.
  • the sort keys can be either image space or object space attributes, or both. Possible image space attributes are the identifiers of the emitting surfaces, whether the beams are generated by refractions or reflections, and the coordinates of the corresponding pixel positions. Possible object space attributes that have also been used in the ray classification technique are the originating positions and the directions of their central rays. This sorting increases the coherence of the patch and vertex caches as adjacent beams usually have similar magnification factors and are more likely to intersect the same patches.
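The deferred sorting might look like the following, with illustrative field names standing in for the sort keys the text lists (emitting-surface identifier, reflection versus refraction, pixel coordinates):

```python
def sort_beams_for_coherence(beams):
    """Sort deferred secondary beams so that successive beams are likely to
    intersect the same patches, keeping the patch and vertex caches warm.
    Each beam is a dict; the key fields below are illustrative."""
    return sorted(beams, key=lambda b: (
        b["surface_id"],     # identifier of the emitting surface
        b["is_refraction"],  # reflections grouped apart from refractions
        b["pixel"],          # (x, y) of the corresponding pixel position
    ))
```

Beams from the same surface and of the same kind end up adjacent in the trace order, which is what makes the cache hits likely.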
  • the beams are traced in the sort sequence. For each beam, a top-down access of the patch hierarchy is carried out. If a patch is not on the path of the beam, no further testing between the beam and the patch or its sub-patches is needed, as the beam does not intersect them. If the node in the hierarchy contains more than one surface, the test below is bypassed; the node immediately undergoes the expanded bounding volume test mentioned in the next section.
  • the expanded bounding volume of the patch is compared with the principal plane of the patch emitting the beam. If the two intersect, the patch needs to be further subdivided. Otherwise, the intersections of the corner rays of the beam and the principal plane of the patch are computed.
  • a 2D coordinate system having a vertical axis VA and a horizontal axis HA lying on the plane of a patch PA can be defined.
  • the system is oriented such that the two axes VA, HA of the patch coordinate system of the patch PA are parallel to the lines joining intersections of the corner rays (R1, R2, R3, R4) which correspond to the vertical and horizontal edges of the pixels.
  • two axis-aligned parallelepipeds, one (22) just containing the patch PA and the other (24) just containing the ray intersections, can be defined.
  • the width (26) and height (28) of the ray-intersection volume (24) are compared with the width (30) and height (32) of the patch volume (22). If any of their ratios is larger than a predefined factor (usually one), the patch PA is deemed to be too large and to have failed the preliminary size test.
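The preliminary size test can be sketched on the principal plane with the two boxes reduced to (width, height) extents; the function names are hypothetical and `factor` is the predefined ratio, usually one:

```python
def bbox_2d(points):
    """Axis-aligned (width, height) extent of a set of 2D points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(xs) - min(xs), max(ys) - min(ys)

def preliminary_size_test(patch_pts, ray_hits, factor=1.0):
    """Pass if the patch's footprint on its principal plane is at most
    `factor` times the footprint of the beam's corner-ray intersections
    in both dimensions; otherwise the patch is too large."""
    pw, ph = bbox_2d(patch_pts)
    rw, rh = bbox_2d(ray_hits)
    return pw <= factor * rw and ph <= factor * rh
```

A patch whose projected extent fits within the corner-ray footprint passes; swapping the two boxes (a patch larger than the pixel footprint) fails the test and triggers subdivision.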
  • the preliminary size test is relatively fast. However, if the curvature of the actual surface of the patch is significant, and its plane is almost parallel to the beam direction, passing the preliminary size test does not guarantee that a patch is at the right size. Therefore, a patch is subjected to the expanded volume test after it passes the preliminary size test. This test compares the expanded bounding volume of the patch with the beam. Since it considers the possible undulation of the patch surface, it can determine more accurately whether the patch is at the right size.
  • the beam is subdivided into triangular sub-beams. For example, if the beam consists of four corner rays, it is divided into two sub-beams along two diagonal corner rays. If the beam only has three corner rays, it itself is the only sub-beam.
  • a sub-beam SB is shown having corner rays R1, R2 and R3, in which R1 and R2 correspond to the vertical edge of the pixel and R2 and R3 correspond to the horizontal edge of the pixel.
  • the wavefront of each sub-beam can be described by a quadric surface (this wavefront only needs to be computed once for each
  • a projection plane PP perpendicular to the sub-beam SB can be defined. Such a plane can be approximated by the plane perpendicular to the central ray of the current beam, or the plane perpendicular to the averaged ray of the corner rays of the sub-beam. Based on the equation of the wavefront, the vertices of the faces of the expanded
  • H1 and W1 represent the height and width of the corner ray intersections;
  • H2 and W2 the height and width of the projection PJ of the expanded bounding volume EV.
  • a 2D coordinate system is defined on the plane. Usually this system is oriented such that the vertical axis is parallel to the pair
  • a rectangle aligned with the axes and just containing the volume projection PJ can be defined.
  • a similar rectangle can be defined for the ray intersections.
  • the ratios between the width (W2) and height (H2) of the former and the corresponding dimensions of the latter (W1, H1) are computed. Similar to the preliminary size test, if any of the ratios is larger than a predefined value, the patch is deemed to be too large and to have failed the size test.
  • Figs. 6A and 6B it is also possible to apply the expanded bounding volume test using an object space approach.
  • An intersection volume IV identical to the expanded bounding volume EV and having the same distance to the beam-emitting patch PA is defined.
  • the intersection volume IV is positioned such that it is just touching (at 34 in Fig. 6A, at 36 in Fig. 6B) one corner ray CO, and it and all the corner rays are on one side of that ray. If one of these rays CO intersects the intersection volume IV, the patch PA is deemed to be not at the right size.
  • only the preliminary size test or the expanded bounding volume test need be carried out. For example, if the actual surface of the patch is flat, only the preliminary size test needs to be carried out. Conversely, only the expanded bounding volume test may be carried out if the actual surface of the patch is known to be highly curved, or if the node contains several surfaces. The sequence of the two tests may also be reversed.
  • Criteria for accepting the sizes of patches may be variable in both the preliminary and expanded bounding volume tests. For example, if a surface is bright, important, or known to be bumpy, it may undergo extra subdivisions or have the size ratio set more stringently. The same may apply to bright, higher generation beams or beams emitted by highly curved surfaces.
Adaptive Subdivision of Patches

  • If a patch is too large, it is further subdivided. The patch hierarchy and the caches are used to reduce the computation requirement of the subdivisions. First, the patch hierarchy is checked. If the children of the patch are in the hierarchy, and hence accessible from its child pointers, they can be accessed directly and need not be computed from scratch.
  • the patch and vertex caches are searched. Again, any information about these patches and their vertices need not be computed if it is already in the caches. Information that cannot be found in either the patch hierarchy or the caches has to be computed. The results are written into the patch and vertex caches. Most scenes usually have some ray coherence, hence nearby rays are likely to intersect nearby surfaces. By tracing the beams in the sorted sequence, successive beams are more likely to be close to each other and the chances of a cache miss are kept to a minimum.
  • If a beam is converging and a surface is close to its focal point, the surface can undergo many subdivisions without passing the size tests. This can exhaust the caches and requires a lot of computation.
  • a fixed number of levels can be set so that no further subdivision is carried out if the depth of subdivision reaches that level.
  • since adjacent patches can undergo different levels of subdivision, after the right-sized patches on the path of the beam have been found, the shared patches are compared. Each of these patches can be further subdivided if the patches that share edges with it have undergone further subdivision within the same beam. This ensures that the beam does not pass through the inter-patch gaps caused by subdivision mismatches. After this reconciliation, the patches are examined.
  • the first patch intersected by the central ray of the current beam is the patch intersected by the beam. Otherwise, all the patches whose approximated polygons intersect the volume of the current beam are likely to encounter it.
  • the computation of the intersected patches in such a situation involves the occlusion relationship and will be discussed later.
Using Depth Coherence to Reduce Size Tests

  • Both the preliminary and expanded volume tests and the adaptive subdivisions can be highly computation-intensive, especially when the depth complexity is high. The computations can be substantially reduced by taking advantage of the depth and ray coherence in the environment and by applying two linked lists (or arrays). These lists are referred to as the front list and the hold list. For each beam, the following tracing method is carried out:
  • each highest level patch on the path of the beam is not immediately subjected to the size tests. Rather, it is placed in the hold list.
  • the closest and furthest distances from its expanded bounding volume to the beam-emitting patch are also computed and stored in the list. These distances are referred to as the closest and furthest distances of that patch.
  • the hold list contains all the patches that may be too large but are on the path of the beam.
  • the patch having the closest distance is obtained.
  • This patch is referred to as the front patch. It is removed from the hold list. All patches in the hold list whose closest distances are less than its furthest distance are moved to the front list because their actual surfaces may in fact be closer than the front patch.
  • patch PT2 is the front patch but patch PT1 is actually the closer patch with respect to the beam PB6.
  • patch PT1 is in the front list while patches PT3 and PT4 are in the hold list.
  • the preliminary and/or expanded bounding volume tests are applied to the front patch. If the patch is not at the right size, it is subdivided. All its child patches not on the path of the beam are ignored. The new front patch is selected from the remaining child patches and the front list. Each of the remaining child patches is added to the front list if it is not the new front patch. However, if none of the child patches is on the path of the beam and the front list is empty, the new front patch is obtained from the hold list.
  • the front and hold lists are examined. Certain records in one list would need to be moved to the other list because the furthest distance of the new front patch is usually different from that of the old one.
  • the new front patch then undergoes size tests and the above process is again carried out.
  • the size tests are carried out on patches in the front list after it is found. If a patch in the list is found to be too large, it is removed from the front list and repeatedly subdivided until all its descendant patches that are on the path of the beam, at the right size and whose closest distances are less than the furthest distance of the front patch are found. These patches are added to the front list.
  • the front list contains only patches at the right size. Since the beam might pass through the inter-patch gaps caused by subdivision mismatches, the front patch and patches in the list are compared. A patch is further subdivided at the mismatch boundary if it has undergone less subdivision than its connected patches. After the reconciliation, the first patch encountered by the beam can be obtained by computing the intersections of these patches with the central ray of the current beam.
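The selection of the front patch and the split of the remaining patches into the front and hold lists can be sketched as follows, with each patch reduced to a (name, closest, furthest) triple and the subdivision steps elided:

```python
def partition_front_and_hold(patches):
    """Pick the front patch (smallest closest distance) and split the rest.

    Patches whose closest distance is less than the front patch's furthest
    distance may actually be nearer than it, so they go to the front list;
    the remainder go to the hold list and need never be subdivided.
    Each patch is a (name, closest, furthest) triple."""
    front_patch = min(patches, key=lambda p: p[1])
    front_list, hold_list = [], []
    for p in patches:
        if p is front_patch:
            continue
        (front_list if p[1] < front_patch[2] else hold_list).append(p)
    return front_patch, front_list, hold_list
```

This reproduces the Fig. 8 situation described above: a patch like PT2 with the smallest closest distance becomes the front patch, an overlapping patch like PT1 lands in the front list, and distant patches like PT3 and PT4 stay in the hold list.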
  • the technique described above uses the property that any surface behind the first surface encountered by the beam does not need to be analyzed in detail if it is found to be totally behind that surface. Firstly, the technique selects the front patch, which is the patch most likely to be encountered by the beam, based on the expanded bounding volumes.
  • the front patch at the right size may not be the first patch encountered by the beam, for example, the patches PT1 and PT2 in Fig. 8. Therefore, patches in the front list have to be subdivided to obtain patches that are at the right size and that might be in front of the front patch. Patches in the hold list need not be subdivided because they cannot hide the front patch. In normal situations surfaces are far apart. Hence most of them would be assigned to the hold list and enjoy few subdivisions. Therefore, the technique can avoid a lot of subdivision computations involving hidden surfaces when the depth complexity of the environment is high.
  • the above method is based on the beam-tracing technique, it can also be applied in normal ray-tracing if the beams are replaced by rays and the size testings are replaced by more elementary testings such as checking the levels of subdivision or sizes of the patches.
  • the method can also be used when transparent surfaces are present because a beam refracted by such a surface is treated as another beam which is separately traced.
Next Generation Beams

  • When a patch is found to be at the right size, its intensity contribution to its associated pixel is computed. From the intersections between the rays of the beam and the surface of that patch, a pair of third generation beams corresponding to the reflection and refraction at the patch are computed. Similar to the second generation beams, these newly created beams are not traced immediately. If a node containing more than one surface is found to be at the right size, it either still undergoes subdivision or its image contribution to the pixel is approximated from the surfaces in it.
  • the third generation beams generated by them are sorted as in their case. These beams are then traced in their sort sequence.
  • the tracing is the same as that of the second generation beams. Hence, for each beam, the same subdivision and size testings of associated patches are carried out. All the techniques applicable to the second generation beams can also be applied to these beams.
  • This image mapping can be more easily computed by applying the approximation strategy for computing the reflected/refracted rays of patches mentioned earlier. After a patch is found to be at the right size, its vertices are mapped to the surface emitting the current beam. Both surfaces and the wavefront of the beam are usually assumed to be quadric, although other methods used to approximate surfaces mentioned earlier can be used. If the beam is not primary, the mapping is repeated to compute the point of mapping on the surface emitting the beam. This process is repeated until the mapping on the image plane is found.
  • the actual image of the patch may only be a portion of the whole image mapping of the patch. This is because the patch may be hidden by other patches on the path of the beam. The base of the beam also may not be totally within the emitting patch.
  • since the image mapping of a patch by a beam can be superimposed on the image of the patch emitting that beam, the image clipping of the latter would also mask the image of the former.
  • the image projection of a patch must be masked by the image projections of closer patches on the path of the current beam and the image areas outside the image area of the patch emitting the current beam.
  • the former can be done by depth comparisons of the mappings of patches on the path of the current beam, based on known techniques of anti-aliasing in hidden surface removal.
  • the visible area of the patch is then clipped by the image area of the patch emitting the current beam. Based on the ratio between this area and the area of the pixel, the intensity contribution of each patch at the pixel can be computed.
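The coverage-weighted contribution can be approximated by sub-pixel sampling in the spirit of the A-buffer technique referenced in the text; this grid sampler is an illustrative stand-in, not the A-buffer algorithm itself:

```python
def pixel_coverage(inside, grid=4):
    """Estimate the fraction of a unit pixel covered by a visible patch
    mapping by testing a grid of sub-pixel sample centres.  `inside(x, y)`
    reports whether a sample point lies in the clipped visible area."""
    hits = 0
    for i in range(grid):
        for j in range(grid):
            x = (i + 0.5) / grid
            y = (j + 0.5) / grid
            if inside(x, y):
                hits += 1
    return hits / (grid * grid)

def intensity_contribution(inside, patch_intensity, grid=4):
    """Weight the patch's shade by its estimated pixel coverage."""
    return pixel_coverage(inside, grid) * patch_intensity
```

A patch whose visible mapping covers the left half of the pixel, for instance, contributes half of its shaded intensity to that pixel.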
  • the computation of the exact visible area of the patch mappings may be highly computation-intensive. This computation can be reduced using sub-pixel area sampling such as the known A-buffer technique.

Using Frame-To-Frame Coherence to Speed Up Tracing
  • the described techniques can also be used in applications other than those in computer graphics. For example, the reflections and refractions of surfaces are computed to determine shape from shading in computer vision. The current techniques can be used to accelerate such computations when curved surfaces are involved.
  • the current techniques can in turn be used in optics applications. They can be used to simulate the images generated by optical instruments containing non-linear curved lenses and mirrors, so as to assist the design of these instruments.


Abstract

Ray tracing complicated curved surfaces encounters many problems caused by the inability of the existing techniques to adaptively compute the ray and surface intersections based on the image precision of the tracing. Disclosed is a method which uses beams (PB) to trace curved surfaces (SU). Each beam (PB) contains several corner rays and optionally a central ray (PR). Working in synergy, the central ray (PR) allows fast detection of the intersecting patch while the corner rays enable estimation of the image size of the patch. Using this strategy, the method can adaptively subdivide surfaces based on the image characteristics of patches during tracing. It further reduces the computation requirement of this subdivision by using caches which store the information of patches and vertices, and carrying out the tracing in a sequence that maximizes the coherence of these caches.

Description

A BEAM TRACING METHOD FOR CURVED SURFACES
FIELD OF THE INVENTION

The present invention relates to three-dimensional computer graphics and, in particular, discloses a method by which beams are used to trace curved surfaces.
BACKGROUND TO THE INVENTION

The display of curved surfaces is a complicated task in computer graphics. Unless surfaces are simple, it is usually difficult to determine where and how they would appear on the screen. Early research in the display of curved surfaces mainly focused on the computation of surfaces directly visible to a particular viewpoint. The developed techniques either directly evaluate the surface projections or approximate surfaces by polygons which are then displayed. However, both approaches have limitations. The former often requires complicated numerical computations and is restricted to a limited class of surfaces. The latter could create artifacts caused by the flatness of the polygons.
Problems associated with the early techniques were overcome by the development of a surface subdivision technique by Catmull and referenced in Appendix 1. The technique subdivides surfaces into patches until the image of each patch is smaller than a pixel. At that size, the patches can be safely assumed to be flat. Such a technique is very robust and applicable to all surfaces that can be subdivided and be represented by patches. Unfortunately, where one is used, a Z-buffer can only store the closest surfaces under direct view projection. The surface subdivision technique therefore cannot produce the refraction and specular reflection effects. Currently these effects are generated by ray tracing. However, current ray-tracing techniques apply strategies similar to the early visible surface techniques in solving the intersections between rays and surfaces. Consequently, they encounter many problems similar to those experienced by the latter.
Ray tracing techniques were first applied in optics, where such techniques mainly employ numerical analysis to compute the intersections between the rays and curved surfaces. More recently, the ray tracing approach has been adopted by computer vision and computer graphics for the recognition of 3D objects or the display of these objects. The biggest challenge in ray tracing curved surfaces is to compute the ray and surface intersections. According to their strategies, the current ray-tracing techniques in computer graphics can be classified into two groups. The first group of techniques apply numerical algorithms to evaluate the equations representing the rays and the surfaces. The first such technique uses Newton's method to compute the ray and surface intersections. Later techniques apply more complicated numerical techniques to improve the speed of convergence. However, the iterations in these techniques have to terminate when the result reaches a certain precision. This precision usually has to be very small as its projection on the image plane is uncertain during the iterations. Consequently, most of the intersection computations can be unnecessarily accurate. Nonetheless, in situations where images of surfaces are highly magnified, such a precision may still be inadequate.
The second group of ray-tracing techniques first subdivide surfaces into polygonal patches. The ray and surface intersections are indirectly obtained by computing the intersections between the rays and these patches. Compared to the techniques using numerical methods, the intersection tests between rays and polygons are simpler. However, there are usually a lot more patches than surfaces. Hence such an approach is not necessarily faster.
In addition, noticeable image defects can appear when patches are magnified beyond a certain limit. Since the current techniques cannot predetermine the image sizes of patches, such defects are difficult to prevent.
SUMMARY OF THE INVENTION

It is an object of the present invention to substantially overcome, or ameliorate, the abovementioned problems by the use of a method which combines both beam tracing and ray tracing.
Generally, the disclosed method traces beams, each of which is a set of rays. By using the beams to gauge the image sizes of patches, and applying adaptive subdivision to these patches, the present method can achieve the advantages of the surface subdivision algorithm in ray tracing.
In accordance with one aspect of the present invention there is disclosed a method of tracing one or more surfaces forming part of a computer generated image, said method comprising the steps of:
(i) tracing said surfaces with a plurality of beams, each of said beams having at least two rays, and for each said beam; (ii) determining those said surfaces on the path of said beam; (iii) classifying those said surfaces into groups of those requiring subdivision and those not requiring subdivision;
(iv) subdividing those said surfaces requiring subdivision into a plurality of sub-surfaces, and treating each said sub-surface as a further one of said surfaces, and repeating steps (i) to (iv) on said further surfaces using a new plurality of (secondary) beams; and
(v) for those said surfaces not requiring subdivision, determining the occlusion relationship between selected ones of said surfaces and computing a corresponding image contribution as projected by said beam.
In accordance with another aspect of the present invention there is disclosed a method of determining a volume that encloses a surface forming part of a computer generated image, said method comprising the steps of: (i) testing said surface to determine if a predetermined characteristic is met, and if not, defining said surface as a non-leaf surface and subsequently dividing said non-leaf surface into a plurality of sub-surfaces;
(ii) repeating step (i) for each said sub-surface until said characteristic is met, at which time the corresponding sub-surface(s) is/are defined as a leaf surface;
(iii) for each said leaf surface determining a corresponding expanded bounding box; and
(iv) combining each of said expanded bounding boxes to define said volume.
BRIEF DESCRIPTION OF THE DRAWINGS

A number of embodiments of the present invention will now be described with reference to the drawings in which:
Fig. 1A illustrates the standard ray tracing technique; Fig. 1B illustrates the standard beam tracing technique; Fig. 1C illustrates the beams used in one embodiment; Figs. 2A-2D illustrate the expanded bounding volume of a patch; Figs. 3A-3D show alternative beams of other embodiments; Fig. 4 illustrates preliminary size testing; Fig. 5 illustrates an expanded bounding volume test; Figs. 6A and 6B show the expanded bounding volume test between a patch and a beam using an object space approach;
Fig. 7 is a linear interpolation and mapping of points and rays; Fig. 8 shows the arrangement of patches on the path of a single beam; and
Appendix 1 is a list of references.

BEST AND OTHER MODES FOR CARRYING OUT THE INVENTION
Improvement of the Existing Techniques Through Improved Beam Tracing
The problems of the prior art can be overcome only if the image precision of the ray and surface intersection computations can be determined while rays are traced. However, since rays are linear entities, a single ray can only convey the binary information of whether it intersects a patch. The ray alone cannot be used to measure the image size of the patch because that involves dimensions orthogonal to its path.
Because aliasing is caused by poor image sampling resolution, the pixel-sampling nature of rays could also compound aliasing problems.
Aliasing has been mitigated by comparing corner rays at each pixel and by recursively subdividing and tracing that pixel when aliasing is likely to occur. It has also been proposed to use cones and beams instead of rays as trace elements. A cone is an extended ray which consists of a spread angle. The beam tracing technique starts with a pyramidal beam that covers the whole screen and breaks into prisms when it hits surfaces. It has also been proposed to use the tracing of pencils which are defined by an axial ray and several surrounding paraxial rays. One noteworthy observation is that the sizes of cross-sections of the cones can be used to determine the correct level of detail during cone tracing. Such has been proposed with application to the strip tree displaying technique. However, the details of objects at all levels have to be precomputed before the tracing. In addition, the cone, the beam and the pencil tracing techniques suffer from the difficulty of computing the intersections of these entities and complicated curved surfaces. Therefore, the current beam tracing technique restricts the environments to those comprising only flat surfaces. Likewise, the cone tracing technique is only applicable to spheres, planes and polygons.
Fig. 1A shows an example of simple ray tracing in which a surface SU is observed from a viewpoint VP and its image is shown on an image plane IP. The primary ray PR passes through a pixel PX of the image plane IP and causes a reflected ray RL and a refracted ray RF.
In Fig. 1B, simple beam tracing is shown in which the image plane IP is determined by the edges ED of the primary beam PB observing the surface SU. The primary beam PB causes a reflected beam RC and a refracted beam RR.
However, in the embodiment shown in Fig. 1C, the size of the primary beam PB has been reduced to equal that of one pixel PX at the image plane IP. This is achieved by defining each beam by four corner rays CO and an optional centre ray CE. In this manner, greater accuracy can be achieved where the surface SU is curved.

Preliminary Surface Subdivision
With reference to Figs. 2A-2D, a hierarchy of patches is created before the tracing. This hierarchy can be used for fast elimination of patches not likely to be intersected by the beams. As seen in Fig. 2A, a ray RA might not intersect a patch PA or its associated bounding box BB.
In addition, since the hierarchy is based on object precision, it allows thin surfaces to be subdivided more frequently along their longer edges. This improves efficiency as patches created will have more even shapes. Finally, the subdivision can provide a more accurate estimation of the expanded bounding volumes of patches. Such volumes enable accurate estimation of the image size of the actual surfaces of these patches. In Fig. 2B, the same ray RA intersects two sub-patches SP formed from the patch PA. An expanded bounding volume box EB is used to enclose all the sub-patches SP.
For faster tracing, the subdivision is usually carried out by triangulation. Alternatively, multi-sided patches can be used, although the approximation of the actual surfaces and the tracing of such patches are usually more difficult. Since the leaf patches of the hierarchy can be further subdivided during tracing, the size criterion for termination of subdivision does not have to be very stringent. It is determined by the amount of memory available and the properties of the surfaces to be displayed. These properties may be the importance, the optical properties, the curvature, the bumpiness, etc. A record is created to store the information of each patch. The record contains pointers which point to the locations of the records of its child patches. Such links form a tree data structure which organizes the patch records as a hierarchy. A winged-edge data structure is used to store information about how each patch is inter-connected with other patches at the same level of detail.
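The record-and-pointer organization just described can be sketched as follows. Field and class names are illustrative assumptions, not the patent's own; the winged-edge adjacency is reduced to a simple edge-to-neighbour map:

```python
# A minimal sketch of the patch/vertex record hierarchy: child
# pointers form the tree, and shared vertex records are referenced
# rather than duplicated.

class VertexRecord:
    def __init__(self, position, normal):
        self.position = position   # (x, y, z) of the vertex
        self.normal = normal       # surface normal at the vertex

class PatchRecord:
    def __init__(self, vertices):
        self.vertices = vertices   # shared VertexRecord instances
        self.children = []         # pointers to child patch records
        self.neighbours = {}       # simplified winged-edge adjacency:
                                   # edge id -> neighbouring PatchRecord

    def is_leaf(self):
        return not self.children

    def subdivide(self, make_children):
        # `make_children` builds sub-patch records from this patch;
        # the links organize the records as a hierarchy (a tree)
        self.children = make_children(self)
        return self.children
```

A subdivision scheme (e.g. triangulation, as the text prefers) would be supplied as the `make_children` callback.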
Since vertices are often shared by patches, separate vertex records may also be created to store the information about them. Such records can be accessed from the associated patch records using pointers. During the tracing, a patch is often treated as flat. However, the actual surface of the patch may not be flat. The approximated flat region of the patch is referred to as its principal plane.
As seen in Fig. 2C, which shows a sectional view of Fig. 2B, each patch is associated with a 3D coordinate system called the patch coordinate system CS. An axis of this system is perpendicular to the principal plane of the patch PA and the other two axes are on the plane and perpendicular to each other. Furthermore, the patch PA can be divided into a series of overlapping leaf patches LP.
Each patch is also associated with an extended bounding volume EV. The volume is defined to be the volume which, within an acceptable error margin, contains the actual surface of the patch. To minimize its size, the edges of the volume are usually aligned with the patch coordinate system CS of the patch PA. It should contain that patch PA and its actual surfaces AP. A non-leaf patch has an extended bounding volume EV-NL and a leaf patch has an extended bounding volume EV-LP.
Fig. 2C specifically illustrates the computation of the extended bounding volume of a non-leaf patch EV-NL, in a cross-sectional view. Fig. 2D shows, in a cross-sectional view, the enlargement of the extent of an edge when a patch is a leaf of the patch hierarchy. The numeral 10 represents a possible protrusion of the patch surface and the numeral 12 indicates the enlarged height of the edge AB of the patch PA.
The following methods can be used to compute the expanded bounding volume of each patch:
(a) If the formula defining the actual surface or convex hull of the patch is simple and can be obtained from the equations of the actual surfaces of the patch, the volume can be directly evaluated.
(b) The expanded bounding volume can also be computed by finding the volume that contains the estimated expanded bounding volumes of its edges, using the following pseudo-code:

    for each edge of the patch do
        compute the bounding volume of the edge that is aligned with
        the patch coordinate system;
        for each face of the volume do
            determine the largest dimension of the face;
            multiply this dimension by a predefined factor to obtain a
            length (the value of this factor has to be on the safe side
            so that the expanded bounding volume would contain the
            actual surface of the patch);
            expand the volume in the outward direction of that face by
            that length;
        done
    done
(c) The expanded bounding volume may also be obtained by the following technique:

    for each edge of the patch do
        find the bounding volume of the edge that is aligned with the
        patch coordinate system;
        expand the bounding volume using the expansion technique
        mentioned in (b);
    done
    compute a volume that is aligned with the patch coordinate system
    and that contains all the expanded bounding volumes of the edges.
* To reduce computations, the expanded bounding volume of each of the leaf patches can be replaced by a volume containing it and aligning with the world coordinate system. This can speed up the computation of the expanded bounding volumes of non-leaf patches because each of them can be approximated by the volume containing the expanded bounding volumes of its leaf patches.
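As a runnable illustration of techniques (b) and (c) above for axis-aligned boxes: each edge's bounding box is expanded outward from each face by a fraction of that face's largest dimension, and a containing box is taken over all edges. The 0.5 safety factor and all helper names are assumptions of this sketch, not values from the disclosure:

```python
# Sketch of the edge-based expanded bounding volume computation.
# Boxes are (min_corner, max_corner) pairs of 3-tuples.

def edge_bounding_box(p0, p1):
    """Axis-aligned box just containing the edge from p0 to p1."""
    lo = tuple(min(a, b) for a, b in zip(p0, p1))
    hi = tuple(max(a, b) for a, b in zip(p0, p1))
    return lo, hi

def expand_box(box, factor=0.5):
    """Grow each pair of faces outward by `factor` times the largest
    dimension of those faces (technique (b))."""
    lo, hi = box
    dims = [h - l for l, h in zip(lo, hi)]
    new_lo, new_hi = list(lo), list(hi)
    for axis in range(3):
        # largest dimension of the two faces perpendicular to `axis`
        face_dims = [dims[a] for a in range(3) if a != axis]
        grow = factor * max(face_dims)
        new_lo[axis] -= grow
        new_hi[axis] += grow
    return tuple(new_lo), tuple(new_hi)

def expanded_bounding_volume(edges, factor=0.5):
    """Box containing the expanded boxes of all edges (technique (c))."""
    boxes = [expand_box(edge_bounding_box(p0, p1), factor)
             for p0, p1 in edges]
    lo = tuple(min(b[0][a] for b in boxes) for a in range(3))
    hi = tuple(max(b[1][a] for b in boxes) for a in range(3))
    return lo, hi
```

In the patent's scheme these boxes would be aligned with the patch coordinate system rather than the world axes; the sketch omits that change of basis.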
A geometric modelling technique can also be applied to combine the patch hierarchies of different surfaces and objects into one single patch hierarchy. A high level tree node representing more than one surface would not be associated with any surface equation but it can still be described by a bounding volume and an expanded bounding volume which contain the respective volumes of its child nodes.

The Tracing of the Primary Beams
As seen in Fig. 3D, the primary beams PB1, PB2 are the first generation beams. Each beam PB1, PB2 originates from the viewpoint VP, passes through its associated pixel (or sub-pixels, or image regions) 20, and enters the environment to be displayed. Its central ray CE1, CE2 must pass through the centre of the pixel (or sub-pixels, or image regions). To simplify surface mapping operations, each primary beam should ideally have a regular (cross-sectional) shape.
In one possible shape shown in Fig. 3A, the boundary of each beam PB3 is defined by three corner rays CO1, CO2, CO3. Two of the corner rays (CO1, CO2) pass through the bottom corners of the associated pixel and the other ray (CO3) passes through the midpoint of the opposite side of the pixel. The intersection of the beam PB3 with the screen therefore forms an isosceles triangle whose width and height are respectively the width and height of a pixel.
In the possible shape shown in Fig. 3D, each beam PB1 consists of four corner rays CO4, CO5, CO6, CO7 (CO7 is hidden from view). The corner rays of each beam pass through the corners of the associated pixel.
In yet another possible shape, shown in Fig. 3B, two beams PB4, PB5 may be formed out of the four-corner-ray beam just mentioned, by splitting at two diagonal corner rays. The cross-section of each beam at the image plane is now a right isosceles triangle. It is also possible to have a beam comprised of just two rays if only the testing of one image dimension is required.
Fig. 3C shows another alternative, similar to Fig. 3A, but where the shape is smaller than that of a pixel. To enable anti-aliasing, a beam may not contain a central ray. The tracing of such beams is discussed later. If the beams contain central rays, a patch would be on the path of a beam if it is in front of the beam and its expanded bounding volume is intersected by the infinite line coincident with the central ray. Otherwise, a patch would be on the path of a beam if it is in front of the beam and its expanded bounding volume is intersected by the open-ended volume coincident with the whole beam.
In this embodiment, the surface subdivision algorithm is used to subdivide surfaces into patches whose image projections are less than the size of a pixel. As in Catmull's technique, the closest patch projected on each pixel is determined using a z-buffer.
There are however differences between the surface subdivision algorithm and the subdivision of this embodiment. First, triangulation instead of the normal binary subdivision by the sides of four-sided patches is usually used. Secondly, since the patch hierarchy has already been created, the subdivision is first carried out by descending down the hierarchy. If a patch is in the hierarchy, its information can be retrieved from the stored patch and vertex records and need not be computed from scratch.

When a leaf patch of the hierarchy is further subdivided, it is advantageous to store the information of patches and vertices in a patch cache and a vertex cache. This can reduce the number of computations because the subdivision carried out during the tracing of subsequent beams may need to access these entities again. To exploit the coherence between rays, these caches are normally organized as queues. When a cache is full and a new record is to be added, the first record in the associated queue is removed. The freed space allows the new record to be joined at the end of the queue. Usually when a record is accessed, it is relocated to the end of the queue. This ensures that the record which has gone longest without being accessed, rather than the record which is the oldest in the queue, is replaced when the associated cache is full. Records in the caches are indexed so that they can be accessed based on the information of their associated entities. The patch hierarchy and the patch and vertex caches may be combined together by storing the records of the hierarchy in the caches. This has the advantage of simplifying the searching. However, special arrangements are required to ensure that these records are not replaced by newer records.
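The queue organization with relocate-on-access replacement described above amounts to a least-recently-used cache. A minimal sketch, with illustrative names and capacity:

```python
# Patch/vertex cache sketch: records form a queue, accessed records
# move to the end, and the front (least recently used) record is
# evicted when the cache is full.
from collections import OrderedDict

class EntityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = OrderedDict()   # key -> record, oldest first

    def get(self, key):
        record = self.records.get(key)
        if record is not None:
            # accessed records are relocated to the end of the queue
            self.records.move_to_end(key)
        return record

    def put(self, key, record):
        if key in self.records:
            self.records.move_to_end(key)
        elif len(self.records) == self.capacity:
            # cache full: the first record in the queue is removed
            self.records.popitem(last=False)
        self.records[key] = record
```

Indexing by patch or vertex identifier, as the text requires, corresponds to the dictionary key here.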
Approximation of Surfaces for the Computation of Secondary Beams
Using the z-buffer, the closest patch for each primary beam is determined. The patches can reflect a secondary beam. If the object beneath the patch is transparent, a secondary beam can also be refracted. This requires intersection computations between the primary beam and the patch. If the actual surfaces of the patches are flat or quadric, the computation of the intersections between the central and corner rays of the beams and these surfaces can be directly evaluated based on the surface equations of the latter. However, if these surfaces are more complicated, the evaluation of the ray and surface intersections can have the same difficulty experienced by the earlier ray tracing techniques. On the other hand, because the directions of the secondary beams depend on the curvature of the emitting surfaces, Catmull's strategy of assuming that all patches are flat could produce errors.
The present embodiment overcomes the problem by observing that when the image of a patch is small, a simplification of the actual surface of that patch does not yield noticeable errors.
Depending on the surface types and the requirements of image quality, different approximations of patch surfaces can be applied. Below are some of the possibilities:
a. the actual equations of the surface;
b. a bi-cubic surface;
c. a quadric surface, whose equation can be solved from the positions of the patch vertices and the normal directions at these vertices;
d. a flat surface; and
e. a flat surface, in which the normals and the reflected or refracted rays on the surface are approximated by linear interpolation according to the origins and directions of the corner rays of the beam.

This is seen in Fig. 7 where a patch PA has a central point (x, y, z) and an associated normal (Nx, Ny, Nz). Corner normals N1, N2 and N3 are also shown. The normal can be described in the following manner:

    Nx = a1x + a2y + a3
    Ny = b1x + b2y + b3
    Nz = c1x + c2y + c3

or alternatively,

    Nx/Nz = a1x + a2y + a3
    Ny/Nz = b1x + b2y + b3

The terms a1, a2, a3, b1, b2, b3, c1, c2, and c3 can be solved from the coefficients of the corner normals N1, N2 and N3.

Preliminary Size Tests of Secondary Beams
Based on the actual or approximated equation of the patch surface and applying the laws of reflection/refraction, the reflected/ refracted rays of the corner and central rays of the primary beam are computed. These new rays generally form secondary beams. The central ray of a secondary beam can be computed directly from the central ray of the corresponding primary beam. However, to ensure that the central ray is always close to the centre of the beam, it can also be computed by averaging the corner rays of its beam.
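As an illustration of approximation (e) from the previous section combined with the law of reflection used here: the sketch below solves the linear coefficients of each normal component from the three corner normals by Cramer's rule, then reflects an incoming ray about the interpolated normal. All helper names are assumptions; the refracted rays would follow the same pattern using Snell's law:

```python
# Sketch: linearly interpolated normals (case e) and reflected rays.
import math

def solve3(corners, values):
    """Solve a*x + b*y + c = v at three (x, y) corners (Cramer's rule)."""
    (x1, y1), (x2, y2), (x3, y3) = corners
    v1, v2, v3 = values
    det = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2)
    a = (v1*(y2 - y3) - y1*(v2 - v3) + (v2*y3 - v3*y2)) / det
    b = (x1*(v2 - v3) - v1*(x2 - x3) + (x2*v3 - x3*v2)) / det
    c = (x1*(y2*v3 - y3*v2) - y1*(x2*v3 - x3*v2) + v1*(x2*y3 - x3*y2)) / det
    return a, b, c

def interpolated_normal(corners, normals, x, y):
    """Unit normal at (x, y): each component varies as a1*x + a2*y + a3."""
    n = []
    for k in range(3):  # Nx, Ny, Nz in turn
        a, b, c = solve3(corners, [nk[k] for nk in normals])
        n.append(a*x + b*y + c)
    length = math.sqrt(sum(ni*ni for ni in n))
    return tuple(ni / length for ni in n)

def reflect(d, n):
    """Law of reflection: r = d - 2(d.n)n, for a unit normal n."""
    k = 2.0 * sum(di*ni for di, ni in zip(d, n))
    return tuple(di - k*ni for di, ni in zip(d, n))
```

Applying `reflect` to the corner and central rays of a primary beam, each with its own interpolated normal, yields the corner and central rays of the secondary beam.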
To improve the coherence of caches used in the present embodiment, the secondary beams are not immediately traced after they have been created. Rather, their tracing is delayed until all the secondary beams have been created.
The secondary beams are sorted when they have all been generated. The sort keys can be either image space or object space attributes, or both. Possible image space attributes are the identifiers of the emitting surfaces, whether the beams are generated by refractions or reflections, and the coordinates of the corresponding pixel positions. Possible object space attributes that have also been used in the ray classification technique are the originating positions and the directions of their central rays. This sorting increases the coherence of the patch and vertex caches as adjacent beams usually have similar magnification factors and are more likely to intersect the same patches.
The beams are traced in the sorted sequence. For each beam, a top-down traversal of the patch hierarchy is carried out. If a patch is not on the path of the beam, no further testing between the beam and the patch or its sub-patches is needed, as the beam does not intersect them. If the node in the hierarchy contains more than one surface, the test below is bypassed. The node immediately undergoes the expanded bounding volume test mentioned in the next section.
The expanded bounding volume of the patch is compared with the principal plane of the patch emitting the beam. If the two intersect, the patch needs to be further subdivided. Otherwise, the intersections of the corner rays of the beam and the principal plane of the patch are computed.
As seen in Fig. 4, a 2D coordinate system having a vertical axis VA and a horizontal axis HA lying on the plane of a patch PA can be defined. Usually the system is oriented such that the two axes VA, HA of the patch coordinate system of the patch PA are parallel to the lines joining intersections of the corner rays (R1, R2, R3, R4) which correspond to the vertical and horizontal edges of the pixels. Based on this system, two axis-aligned parallelepipeds, one (22) just containing the patch PA and the other (24) just containing the ray intersections, can be defined.
The width (26) and height (28) of the former (24) are compared with the width (30) and height (32) of the latter (22). If any of their ratios is larger than a predefined factor (usually one), the patch PA is deemed to be too large and to have failed the preliminary size test.

The Expanded Bounding Volume Test
The preliminary size test is relatively fast. However, if the curvature of the actual surface of the patch is significant, and its plane is almost parallel to the beam direction, passing the preliminary size test does not guarantee that a patch is at the right size. Therefore, a patch is subjected to the expanded volume test after it passes the preliminary size test. This test compares the expanded bounding volume of the patch with the beam. Since it considers the possible undulation of the patch surface, it can determine more accurately whether the patch is at the right size.
With reference to Fig. 5, to carry out the size test, the beam is subdivided into triangular sub-beams. For example, if the beam consists of four corner rays, it is divided into two sub-beams along two diagonal corner rays. If the beam only has three corner rays, it itself is the only sub-beam. In Fig. 5, a sub-beam SB is shown having corner rays R1, R2 and R3, in which R1 and R2 correspond to the vertical edge of the pixel and R2 and R3 correspond to the horizontal edge of the pixel. The wavefront of each sub-beam can be described by a quadric surface (this wavefront only needs to be computed once for each sub-beam). A projection plane PP perpendicular to the sub-beam SB can be defined. Such a plane can be approximated by the plane perpendicular to the central ray of the current beam, or the plane perpendicular to the averaged ray of the corner rays of the sub-beam. Based on the equation of the wavefront, the vertices of the faces of the expanded bounding volume EV of a patch PA front-facing to the sub-beam SB are projected onto the plane PP. The projection PJ of the volume EV can be approximated by the polygon with these projections as vertices. The intersections between the rays (R1, R2, R3) in the sub-beam SB and the plane PP are also computed.

In Fig. 5, H1 and W1 represent the height and width of the corner ray intersections, and H2 and W2 the height and width of the projection PJ of the expanded bounding volume EV.
A 2D coordinate system is defined on the plane. Usually this system is oriented such that the vertical axis is parallel to the pair of ray intersections corresponding to the vertical edge of the pixel and the horizontal axis is parallel to the ray intersections corresponding to the horizontal edge of the pixel.

A rectangle aligned with the axes and just containing the volume projection PJ can be defined. A similar rectangle can be defined for the ray intersections. The ratios between the width (W2) and height (H2) of the former and the corresponding dimensions of the latter (W1, H1) are computed. Similar to the preliminary size test, if any of the ratios is larger than a predefined value, the patch is deemed to be too large and to have failed the size test.

Turning now to Figs. 6A and 6B, it is also possible to apply the expanded bounding volume test using an object space approach. An intersection volume IV identical to the expanded bounding volume EV and having the same distance to the beam-emitting patch PA is defined. The intersection volume IV is positioned such that it is just touching (at 34 in Fig. 6A, at 36 in Fig. 6B) one corner ray CO, and it and all the corner rays are on one side of that ray. If one of these rays CO intersects the intersecting volume IV, the patch PA is deemed to be not at the right size. On some occasions only the preliminary size test or the expanded bounding volume test need be carried out. For example, if the actual surface of the patch is flat, only the preliminary size test needs to be carried out. Conversely, only the expanded bounding volume test may be carried out if the actual surface of the patch is known to be highly curved, or if the node contains several surfaces. The sequence of the two tests may also be reversed.
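Both the preliminary size test and the image-space form of the expanded bounding volume test reduce to the same comparison: the width and height of one axis-aligned rectangle (the patch, or the projection PJ of its expanded bounding volume) against the rectangle of the corner-ray intersections, failing when any ratio exceeds a predefined value. A sketch with illustrative names:

```python
# Shared extent-ratio comparison used by both size tests.

def extent(points):
    """Width and height of the axis-aligned rectangle containing 2D points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(xs) - min(xs), max(ys) - min(ys)

def passes_size_test(tested_pts, ray_hits, limit=1.0):
    """True when the tested extent (patch, or projected expanded
    bounding volume) stays within `limit` times the extent of the
    corner-ray intersections in both dimensions."""
    w2, h2 = extent(tested_pts)
    w1, h1 = extent(ray_hits)
    return w2 <= limit * w1 and h2 <= limit * h1
```

A patch failing this test is subdivided, as described in the next section; the `limit` corresponds to the predefined factor (usually one).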
Criteria for accepting the sizes of patches may be varied in both the preliminary and expanded bounding volume tests. For example, if a surface is bright, important, or known to be bumpy, it may undergo extra subdivisions or have the size ratio set to be more stringent. This may also apply to bright, higher generation beams or beams emitted by highly curved surfaces.

Adaptive Subdivision of Patches

If a patch is too large, it is further subdivided. The patch hierarchy and the caches are used to reduce the computation requirement of the subdivisions. First, the patch hierarchy is checked. If the children of the patch are in the hierarchy and hence accessible from its child pointers, they can be directly accessed and need not be computed from scratch.
If the child patches are not in the hierarchy, the patch and vertex caches are searched. Again, any information about these patches and their vertices need not be computed if it is already in the caches. Information that cannot be found in either the patch hierarchy or the caches has to be computed. The results are written into the patch and vertex caches. Most scenes usually have some ray coherence. Hence nearby rays are likely to intersect nearby surfaces. By tracing the beams in the sorted sequence, the successive beams are more likely to be close to each other and the chances of having a cache miss are kept to a minimum.
If a beam is converging and a surface is close to its focal point, the surface can undergo many subdivisions without passing the size tests. This can exhaust the caches and require a lot of computations. A fixed number of levels can be set so that no further subdivision is carried out if the depth of subdivision reaches that level. As adjacent patches can undergo different levels of subdivision, after the right-sized patches on the path of the beam have been found, the shared patches are compared. Each of these patches can be further subdivided if the patches that share edges with it have undergone further subdivision within the same beam. This ensures that the beam does not pass through the inter-patch gaps caused by subdivision mismatches. After this reconciliation, the patches are examined. If central rays are used, the first patch intersected by the central ray of the current beam is the patch intersected by the beam. Otherwise, all the patches whose approximated polygons intersect the volume of the current beam are likely to encounter it. The computation of the intersected patches in such a situation is called an occlusion relationship and will be discussed later.

Using Depth Coherence to Reduce Size Tests

Both the preliminary and expanded volume tests and the adaptive subdivisions can be highly computation intensive, especially when the depth complexity is high. The computations can be substantially reduced by taking advantage of the depth and ray coherence in the environment and by applying two linked lists (or arrays). These lists are referred to as the front list and the hold list. For each beam, the following tracing method is carried out:
(1) Before a beam is traced, the front and the hold lists are first emptied. During the tracing of the beam, each highest level patch on the path of the beam is not immediately subjected to the size tests. Rather, it is placed in the hold list. The closest and furthest distances from its expanded bounding volume to the beam emitting patch are also computed and stored in the list. These distances are referred to as the closest and furthest distances of that patch.
(2) After patches from the patch hierarchy have undergone the above process, the hold list contains all the patches that may be too large but are on the path of the beam. By scanning the hold list, the patch having the smallest closest distance is obtained. This patch is referred to as the front patch and is removed from the hold list. All patches in the hold list whose closest distances are less than its furthest distance are moved to the front list, because their actual surfaces may in fact be closer than the front patch. This is shown in Fig. 8, where patch PT2 is the front patch but patch PT1 is actually the closer patch with respect to the beam PB6. In the situation shown in Fig. 8, patch PT1 is in the front list while patches PT3 and PT4 are in the hold list.
(3) The preliminary and/or expanded bounding volume tests are applied to the front patch. If the patch is not at the right size, it is subdivided. All its child patches not on the path of the beam are ignored. The new front patch is selected from the remaining child patches and the front list. Each of the remaining child patches is added to the front list if it is not the new front patch. However, if none of the child patches is on the path of the beam and the front list is empty, the new front patch is obtained from the hold list.
After the new front patch is found, the front and hold lists are examined. Certain records in one list may need to be moved to the other list because the furthest distance of the new front patch is usually different from that of the old one. The new front patch then undergoes the size tests and the above process is again carried out.
By repeating this process, progressively smaller front patches are obtained until one at the right size is found. If both the front list and the hold list become empty and no front patch at the right size has been found, the beam does not intersect any patch at the right size.
(4) Since the right-sized front patch may not be the first patch intersected by the current beam, the size tests are carried out on the patches in the front list after it is found. If a patch in the list is found to be too large, it is removed from the front list and repeatedly subdivided until all its descendant patches that are on the path of the beam, at the right size, and whose closest distances are less than the furthest distance of the front patch have been found. These patches are added to the front list.
(5) After the above processing, the front list contains only patches at the right size. Since the beam might pass through the inter-patch gaps caused by subdivision mismatches, the front patch and patches in the list are compared. A patch is further subdivided at the mismatch boundary if it has undergone less subdivision than its connected patches. After the reconciliation, the first patch encountered by the beam can be obtained by computing the intersections of these patches with the central ray of the current beam.
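The subdivision depth cap mentioned earlier and the edge-mismatch test behind this reconciliation step might be expressed as follows; `MAX_LEVEL` and the per-patch subdivision levels are assumed bookkeeping, not values from the text:

```python
MAX_LEVEL = 8  # assumed cap so subdivision near a converging beam's focal point terminates

def may_subdivide(level, passes_size_tests):
    """Subdivide only while the size tests fail and the depth cap is not reached."""
    return (not passes_size_tests) and level < MAX_LEVEL

def needs_boundary_split(level, neighbour_levels):
    """A right-sized patch is split again at a shared edge if any patch
    sharing that edge within the same beam was subdivided more deeply,
    so the beam cannot slip through an inter-patch gap."""
    return any(n > level for n in neighbour_levels)
```

Running the mismatch test over every edge-sharing pair until no patch reports a needed split yields the reconciled set of right-sized patches.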
The processing described above can be summarized by the following pseudo-code:
for each beam do
    initialise the front and hold lists;
    move top level patches in the patch hierarchy and on the path of the beam to the hold list;
    detect the front patch in the hold list and remove it from the list;
    move patches in the hold list that have a chance of being closer than the front patch to the front list;
    while (the front patch is not at the right size) do
        subdivide the front patch;
        find the new front patch;
        move patches in the hold list that have a chance of being closer than the new front patch to the front list;
        move patches in the front list that do not have a chance of being closer than the new front patch to the hold list;
    done
    for (each patch in the front list not at the right size) do
        remove the patch from the front list and subdivide it;
        add all its right-sized descendant patches that are on the path of the beam to the front list;
    done
    perform reconciliation of subdivision mismatches among patches in the front list and the front patch;
    compute the patch intersected by the beam from the front patch and patches in the front list;
end
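The front-patch selection in step (2) might be sketched as below, with each entry a hypothetical `(name, closest, furthest)` tuple of distances from the patch's expanded bounding volume to the beam-emitting patch:

```python
def select_front_patch(hold_list):
    """Remove the patch with the smallest closest distance from the hold
    list, and move every remaining patch that could still be nearer
    (its closest distance is below the front patch's furthest distance)
    to the front list."""
    front = min(hold_list, key=lambda p: p[1])      # smallest closest distance
    hold_list.remove(front)
    front_list = [p for p in hold_list if p[1] < front[2]]
    hold_list[:] = [p for p in hold_list if p[1] >= front[2]]
    return front, front_list
```

Replaying the Fig. 8 situation with illustrative distances, `[("PT2", 1, 5), ("PT1", 2, 3), ("PT3", 6, 8), ("PT4", 7, 9)]` yields PT2 as the front patch, PT1 on the front list, and PT3 and PT4 left in the hold list.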
The technique described above uses the property that any surface behind the first surface encountered by the beam need not be analyzed in detail once it is found to be totally behind that surface. Firstly, the technique selects the front patch, which is the patch most likely to be encountered by the beam, based on the expanded bounding volumes of the highest level surfaces. The patch is subdivided until its sub-patch on the path of the beam is at the right size. Since the choice of the front patch is based on the expanded bounding volume, there could be patches closer than it. However, such patches can only be from the front list, because patches in the hold list are totally behind its expanded bounding volume and hence behind it. Therefore, during the subdivision process the subdivided patches are only compared with patches in the front list. When the patch is further subdivided, its sub-patches may be out of the path of the beam or be determined to be behind another surface. Hence there is a likelihood that the next front patch is a patch in the front or hold list.
The front patch at the right size may not be the first patch encountered by the beam; consider, for example, the patches PT1 and PT2 in Fig. 8. Therefore, patches in the front list have to be subdivided to obtain patches that are at the right size and that might be in front of the front patch. Patches in the hold list need not be subdivided because they cannot hide the front patch. In normal situations surfaces are far apart, so most of them would be assigned to the hold list and undergo few subdivisions. Therefore, the technique can avoid a great deal of subdivision computation involving hidden surfaces when the depth complexity of the environment is high.
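The repeated subdivision of a too-large front-list patch, keeping only descendants that could still be in front of the front patch, might look like this; the dict-based patch records (`size`, `closest`, `on_path`, `children`) are illustrative only:

```python
def right_sized_descendants(patch, max_size, front_furthest):
    """Collect the descendants of a patch that are on the path of the
    beam, at the right size, and whose closest distance is less than
    the front patch's furthest distance (so they might hide it)."""
    if not patch["on_path"] or patch["closest"] >= front_furthest:
        return []                       # off the beam's path, or cannot be in front
    if patch["size"] <= max_size:
        return [patch]                  # already at the right size
    found = []
    for child in patch["children"]:     # subdivide and recurse
        found.extend(right_sized_descendants(child, max_size, front_furthest))
    return found
```

The early returns are where the savings come from: whole subtrees that are off the path or provably behind the front patch are never subdivided.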
Although the above method is based on the beam-tracing technique, it can also be applied in normal ray-tracing if the beams are replaced by rays and the size tests are replaced by more elementary tests, such as checking the levels of subdivision or the sizes of the patches. The method can also be used when transparent surfaces are present, because a beam refracted by such a surface is treated as another beam which is separately traced.
Tracing of the Next Generation Beams

When a patch is found to be at the right size, its intensity contribution to its associated pixel is computed. From the intersections between the rays of the beam and the surface of that patch, a pair of third generation beams corresponding to the reflection and refraction at the patch is computed. Similar to the second generation beams, these newly created beams are not traced immediately. If a node containing more than one surface is found to be at the right size, it either still undergoes subdivision or its image contribution to the pixel is approximated from the surfaces in it.
After all the second generation beams emitted from the same surface have been traced, the third generation beams generated by them are sorted in the same manner. These beams are then traced in their sorted sequence. The tracing is the same as that of the second generation beams. Hence, for each beam, the same subdivision and size tests of the associated patches are carried out. All the techniques applicable to the second generation beams can also be applied to these beams.
Similar to the ray-tracing technique, this process is repeated until all beams have reached a certain generation or their intensity has been attenuated to a level that can be ignored.

Enhancement of the Technique to Include Anti-Aliasing
The technique described above enables adaptive subdivision of surfaces during tracing. However, unless a beam is traced for each sub-pixel, anti-aliasing cannot be carried out if the central ray is used. This is because the use of the central ray to detect intersecting patches is equivalent to point-sampling in normal ray-tracing. Therefore, the method cannot compute the intensity contribution of each patch for each beam. However, this limitation can be lifted if the following modifications are applied to the tracing:
(a) During the tracing, the central ray is not used. Throughout the tracing, the patches have to be tested against the beam rather than the central ray to determine whether they are intersected by it. This involves more complicated computations. However, it is necessary for anti-aliasing, which requires the consideration of all the surfaces intersected by the beam. Consequently, an entity is intersected by the beam only if it is intersected by any part of the beam rather than just the central ray.
(b) When a patch is found to be at the right size, its image mapping has to be computed.
This image mapping can be more easily computed by applying the approximation strategy for computing the reflected/refracted rays of patches mentioned earlier. After a patch is found to be at the right size, its vertices are mapped to the surface emitting the current beam. Both the surfaces and the wavefront of the beam are usually assumed to be quadric, although the other methods of approximating surfaces mentioned earlier can be used. If the beam is not primary, the mapping is repeated to compute the point of mapping on the surface emitting that beam. This process is repeated until the mapping on the image plane is found.
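The repeated mapping back toward the image plane amounts to composing one mapping per beam generation; the per-surface mapping functions below are hypothetical placeholders for the quadric approximations described in the text:

```python
def map_to_image_plane(point, surface_mappings):
    """Chain the per-surface mappings: each maps a point on a patch to
    the corresponding point on the surface that emitted the beam, until
    the image plane (the primary beam's origin) is reached."""
    for mapping in surface_mappings:   # ordered from the hit patch back to the eye
        point = mapping(point)
    return point
```

For a primary beam the list holds a single mapping; each further reflection or refraction generation prepends one more.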
However, the actual image of the patch may be only a portion of the whole image mapping of the patch. This is because the patch may be hidden by other patches on the path of the beam. The base of the beam also may not be totally within the emitting patch. In addition, since the image mapping of a patch by a beam can be superimposed on the image of the patch emitting that beam, the image clipping of the latter would also mask the image of the former.
To compute the actual image area of the patch, the image projection of a patch must be masked by the image projections of closer patches on the path of the current beam and by the image areas outside the image area of the patch emitting the current beam. The former can be done by depth comparisons of the mappings of patches on the path of the current beam, based on known techniques of anti-aliasing in hidden surface removal. The visible area of the patch is then clipped by the image area of the patch emitting the current beam. Based on the ratio between this area and the area of the pixel, the intensity contribution of each patch at the pixel can be computed.
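Per patch, the final weighting reduces to scaling the patch intensity by the fraction of the pixel its clipped visible area covers; this is a simple area-sampling sketch under that assumption, not the exact masking procedure in the text:

```python
def pixel_contribution(visible_area, pixel_area, patch_intensity):
    """Weight a patch's intensity by its pixel coverage, after its visible
    image area has been masked by closer patches and clipped against the
    image of the patch emitting the current beam."""
    coverage = max(0.0, min(visible_area / pixel_area, 1.0))  # clamp to [0, 1]
    return coverage * patch_intensity
```

Summing these contributions over all patches visible within the beam gives the anti-aliased pixel value.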
The computation of the exact visible area of the patch mappings may be highly computation intensive. This computation can be reduced using sub-pixel area sampling such as the known A-buffer technique.

Using Frame-To-Frame Coherence to Speed up Tracing
The expanded volume test described earlier shows the advantage of the patch and vertex caches in tracing an image. In applications where multiple frames are generated, there is usually frame-to-frame coherence. This coherence implies that the paths and magnification factors of rays often change little between two successive frames. Also, a patch intersected by a ray is likely to be intersected by the same ray and its neighbouring rays in the next frame. Therefore it is advantageous not to erase the information stored in the patch and vertex caches after a frame is computed and to use this information for successive frames. To apply such a strategy, each record in the caches needs to contain a field indicating the latest frame in which its information was computed or updated. When the record is accessed and the field shows that it was not computed in the current frame, the information in it is updated according to the movement of its associated patch and the change of shape or properties of its associated surface since it was last updated.
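The frame-stamped cache record might be sketched as follows; the `compute` and `update` callbacks (full computation versus cheap incremental refresh for patch movement) are assumed names:

```python
def fetch(cache, key, frame, compute, update):
    """Return cached data for a patch or vertex: compute it only when the
    entry is missing, and apply an incremental update when the entry's
    frame stamp shows it is stale from a previous frame."""
    entry = cache.get(key)
    if entry is None:
        cache[key] = [compute(key), frame]   # [data, last-updated frame]
    elif entry[1] != frame:
        entry[0] = update(entry[0])          # adjust for patch movement etc.
        entry[1] = frame
    return cache[key][0]
```

Keeping the stamp per record means the cache survives across frames, and only records actually touched in the new frame pay the (cheaper) update cost.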
INDUSTRIAL APPLICABILITY

The described techniques can also be used in applications other than computer graphics. For example, the reflections and refractions of surfaces are computed to determine shape from shading in computer vision. The current techniques can be used to accelerate such computations when curved surfaces are involved.
While the ray-tracing technique was first used in optics and later adopted in computer graphics, the current techniques can in turn be used in optics applications. They can be used to simulate the images generated by optical instruments containing non-linear curved lenses and mirrors, so as to assist the design of these instruments.
Those skilled in the art will appreciate that the foregoing presents a method for the realistic display of curved surfaces. This method can overcome the limitations of the earlier ray tracing, beam tracing and cone tracing techniques. The improvement it gains over them could be equivalent to the advantage of Catmull's subdivision method over the early hidden surface techniques. The methods described can be implemented and embodied in a computer program and run on a general purpose computer configured for graphics processing.
The foregoing describes a number of embodiments of the present invention, and modifications, obvious to those skilled in the art, can be made thereto without departing from the scope of the present invention.


APPENDIX 1 - REFERENCES:
1. Amanatides, J., "Ray Tracing with Cones," Computer Graphics, Vol. 18, No. 3, July 1984.
2. Arvo, J., Kirk, D., "Fast Ray Tracing by Ray Classification," Computer Graphics, Vol. 21, No. 4, July 1987.
3. Arvo, J., Kirk, D., "A Survey of Ray Tracing Acceleration Techniques," in "An Introduction to Ray Tracing," Glassner, A. S. (Ed.), Academic Press, 1989, pp. 201-262.
4. Blinn, J. F., "A generalization of algebraic surface drawing," ACM Trans. on Graphics, Vol. 1, No. 3, July 1982, pp. 236-256.
5. Carpenter, L., "The A-buffer, An Anti-aliased Hidden Surface Method," Computer Graphics, Vol. 18, No. 3, July 1984, pp. 103-108.
6. Catmull, E., "A Subdivision Algorithm for Computer Display of Curved Surfaces," Ph.D. Thesis, University of Utah, Salt Lake City, December 1974.
7. Clark, J. H., "Hierarchical Geometric Models for Visible Surface Algorithms," Communications of the ACM, Vol. 19, No. 10, October 1976, pp. 547-570.
8. Catmull, E., "A Hidden Surface Algorithm with Anti-aliasing," Computer Graphics, Vol. 12, No. 3, August 1978, pp. 462-467.
9. Dadoun, N., Kirkpatrick, D. G., "The Geometry of Beam Tracing," Proceedings of the Symposium on Computational Geometry, June 5-7, 1985, pp. 55-61.
10. Foley, J. D., van Dam, A., Feiner, S. K., Hughes, J. F., "Computer Graphics, Principles and Practice," Second Edition, Addison-Wesley, 1990.
11. Hanrahan, P., "A Survey of Ray-Surface Intersection Algorithms," in "An Introduction to Ray Tracing," Glassner, A. S. (Ed.), Academic Press, 1989, pp. 80-119.
12. Heckbert, P. S., Hanrahan, P., "Beam Tracing Polygonal Objects," Computer Graphics, Vol. 18, No. 3, July 1984, pp. 119-127.
13. Kajiya, J., "New Techniques for Ray Tracing Procedurally Defined Objects," Computer Graphics, Vol. 17, No. 3, July 1983, pp. 91-102.
14. Kay, T., Kajiya, J., "Ray Tracing Complex Scenes," Computer Graphics, Vol. 20, No. 4, August 1986, pp. 269-278.
15. Laikin, M., "Lens Design," Marcel Dekker, Inc., 1991.
16. Macdonald, J., Booth, K., "Heuristics for ray tracing using space subdivision," Visual Computer, Vol. 6, No. 3, June 1990, pp. 153-166.
17. Nishita, T., Sederberg, T. W., Kakimoto, M., "Ray Tracing Trimmed Rational Surface Patches," Computer Graphics, Vol. 24, No. 4, August 1990, pp. 337-345.
18. Shinya, M., Takahashi, T., Naito, S., "Principles and Applications of Pencil Tracing," Computer Graphics, Vol. 21, No. 4, July 1987, pp. 45-54.
19. Snyder, J. M., Barr, A. H., "Ray Tracing Complex Models Containing Surface Tessellations," Computer Graphics, Vol. 21, No. 4, July 1987, pp. 119-128.
20. Stavroudis, O. N., "Optics of Rays, Wavefronts, and Caustics," Academic Press, New York, 1972.
21. Subramanian, K. R., Fussell, D. S., "Automatic Termination Criteria for Ray Tracing Hierarchies," Graphics Interface '91.
22. Toth, D., "On Ray Tracing Parametric Surfaces," Computer Graphics, Vol. 19, No. 3, July 1985, pp. 171-179.
23. Watt, M., "Light-Water Interaction using Backward Beam Tracing," Computer Graphics, Vol. 24, No. 4, August 1990, pp. 377-385.
24. Weghorst, H., Hooper, G., Greenberg, D. P., "Improved Computational Methods for Ray Tracing," ACM Transactions on Graphics, Vol. 3, No. 1, January 1984, pp. 52-69.
25. Whitted, J. T., "An Improved Illumination Model for Shaded Display," Communications of the ACM, Vol. 23, No. 6, June 1980, pp. 343-349.
26. Woodward, C., "Ray Tracing Parametric Surfaces by Subdivision in Viewing Plane," in "Theory and Practice of Geometric Modeling," Straßer, W., Seidel, H. (Eds.), pp. 273-287.
CLAIMS:
1. A method of tracing one or more surfaces forming part of a computer generated image, said method comprising the steps of:
(i) tracing said surfaces with a plurality of beams, each of said beams having at least two rays, and for each said beam;
(ii) determining those said surfaces on the path of said beam;
(iii) classifying those said surfaces into groups of those requiring subdivision and those not requiring subdivision;
(iv) subdividing those said surfaces requiring subdivision into a plurality of sub-surfaces, and treating each said sub-surface as a further one of said surfaces, and repeating steps (i) to (iv) on said further surfaces for said beam; and
(v) for those said surfaces not requiring subdivision, determining the occlusion relationship between selected ones of said surfaces and computing a corresponding image contribution as projected by said beam.
2. A method as claimed in claim 1, wherein the step (v) includes the step of:
(vi) computing reflected and/or refracted beams of said beam and from said surfaces, and repeating each of steps (i) to (v) for each one of said reflected and refracted beams.
3. A method as claimed in claim 1, wherein the classification of said surfaces is based on either the estimated or actual image size of each said surface as projected by said beam when compared with a predetermined image size.
4. A method as claimed in claim 3, wherein said predetermined size is selected from the group consisting of an image pixel, a sub-pixel, and a group of pixels, the selection of which being based on the nature of said beam and each said surface.
5. A method as claimed in claim 1, wherein said beams originate from a single viewpoint.
6. A method as claimed in claim 1, wherein said beams have three corner rays defining a triangular beam cross-section.
7. A method as claimed in claim 1, wherein said beams have four corner rays defining a rectangular beam cross-section.
8. A method as claimed in claim 1, wherein some of said beams have three corner rays defining a triangular beam cross-section and some of said beams have four corner rays defining a rectangular beam cross-section.
9. A method as claimed in claim 6, 7 or 8, wherein initially beams having a rectangular beam cross-section are, upon the repetition of steps (i) to (iv) for said further surfaces, divided into two or more further beams.
10. A method as claimed in claim 9, wherein said further beams comprise a number of rays either equal to or less than said beams.
11. A method as claimed in claim 10, wherein at least one said beam having a rectangular or triangular cross-section is divided into at least two said further beams having either a rectangular and/or a triangular cross-section.
12. A method as claimed in claim 6, wherein said surfaces are adaptively subdivided into said sub-surfaces using triangulation or regular binary subdivision.
13. A method as claimed in claim 1, wherein each said beam comprises a central ray that permits (fast) detection of the patch.
14. A method as claimed in claim 13, wherein said ray is computed from those rays emanating from the surface emitting said beam.
15. A method as claimed in claim 1, wherein said surface is in the path of the beam if a surface representation is intersected by a beam representation.
16. A method as claimed in claim 15, wherein said surface representation is selected from the group consisting of a polygon approximating the surface, a flat surface with interpolated normals, the actual surface, an approximated curved surface, a bounding volume of said surface, and an expanded bounding volume of said surface.
17. A method as claimed in claim 15, wherein said beam representation is selected from the group consisting of a central ray of said beam, and a beam volume.
18. A method as claimed in claim 1, wherein the classification of said surfaces is based upon the intersection of said rays with said surface and a comparison between polygonal dimensions defined by said intersections with said surface in a plane at which the polygonal definition of said surface lies.
19. A method as claimed in claim 1, wherein the classification of said surfaces is based upon a projection of a bounding box surrounding said surface onto a plane and a comparison of dimensions of said projection with intersections of said rays with said plane.
20. A method as claimed in claim 1, wherein an image portion projected by said beam is calculated using a beam coefficient and the image contribution of said surface is computed.
21. A method as claimed in claim 1, wherein data relating to said surfaces is simplified for computational purposes by converting said data in the surface formula to a preselected format.
22. A method as claimed in claim 14 wherein said ray and said reflected and refracted rays are approximated by mapping onto the particular surface emitting the beam.
23. A method as claimed in claim 22, wherein said approximation further includes mappings on a surface intersected by the beam, and assuming that the wavefront of the beam, or either or both of the two surfaces, are of simpler representation.
24. A method as claimed in claim 23, wherein said simpler representation comprises quadric representation of image data.
25. A method as claimed in claim 1, comprising, on repetition of steps (i) to (iv), the further steps of:
(vi) generating all necessary said secondary beams;
(vii) sorting said secondary beams based upon selected image space or object space attributes, or both; and
(viii) tracing said secondary beams in the sorted sequence.
26. A method as claimed in claim 1 wherein upon sub-division of said surfaces, sub-surface and ray intersection data are stored in respective caches which thereby permit the re-use of said data.
27. A method of determining a volume that encloses a surface forming part of a computer generated image, said method comprising the steps of:
(i) testing said surface to determine if a predetermined characteristic is met, and if not, defining said surface as a non-leaf surface and subsequently dividing said non-leaf surface into a plurality of sub-surfaces;
(ii) repeating step (i) for each said sub-surface until said characteristic is met at which time the corresponding sub-surface(s) is/are defined as a leaf surface;
(iii) for each said leaf surface determining a corresponding expanded bounding box; and
(iv) combining each of said expanded bounding boxes to define said volume.
28. A method as claimed in claim 27, comprising the further step of:
(v) combining said expanded bounding boxes for leaf surfaces corresponding to each said non-leaf surface to define a bounding volume of each said corresponding non-leaf surface, wherein said bounding volumes are combined to define said volume.
29. A method as claimed in claim 28, wherein a bounding volume of an intermediate non-leaf surface is determined by combining the expanded bounding boxes of these leaf surfaces corresponding to said intermediate non-leaf surface.
30. A method as claimed in claim 27, wherein said surfaces are arranged into at least three groups based upon a predetermined priority level, and steps (i) to (v) are performed only upon those said surfaces in the groups of selected priority.
31. A method as claimed in claim 30, wherein said groups comprise a front surface, a first list of surfaces at a higher level than said priority level, and a second list of surfaces at a lower level than said priority level.
32. A method of tracing one or more surfaces forming part of a computer generated image, said method comprising the steps of:
(i) representing each surface by a hierarchy wherein each surface is represented by a record in the hierarchy, and contains at least one sub-surface represented by a corresponding record in the next lower level of the hierarchy;
(ii) tracing each of said surfaces with a trace entity, and for each trace entity;
(iii) grouping said surfaces into three categories wherein the first category comprises the surface most likely to be intersected by the trace entity, the second category includes surfaces which might be in front of the first category surface with respect to the trace entity, and the third category includes surfaces that cannot be in front of the first category surface with respect to the trace entity;
(iv) carrying out size tests on the first category surface;
(v) if said first category surface is not at the right size, subdividing it and replacing its membership in the first category by a new surface which is most likely to be intersected by the trace entity and which is selected from either the sub-surfaces of said surface or surfaces belonging to the second or third categories, and updating the membership of surfaces in the second and third categories to reflect the change of membership of the first category surface;
(vi) if said new surface is not at the right size, repeat step (v) on the new surface;
(vii) if said first category surface is at the right size, carrying out size tests on each surface belonging to the second category, if said surface in the second category is not at the right size, removing it from said category, adding its sub-surfaces which are at the right size and in the path of the trace entity, to the category;
(viii) comparing the surfaces in the first and second categories, reconciling any subdivision mismatch amongst those surfaces, and finding the surface closest to the trace entity from those surfaces.
33. A method as claimed in claim 32, wherein the trace entity is either a beam or a ray.
34. A method as claimed in claim 32, wherein the surface most likely to be intersected is that surface whose expanded bounding volume is closest to the trace entity in the path of the trace entity.
35. A method as claimed in claim 32, wherein the second category surfaces are those whose expanded bounding volumes overlap the expanded bounding volume of the first category surface and in the path of the trace entity.
36. A method as claimed in claim 32, wherein the third category surfaces are those whose expanded bounding volumes do not overlap the expanded bounding volume of the first category surface and in the path of the trace entity.
PCT/AU1992/000344 1991-07-12 1992-07-10 A beam tracing method for curved surfaces WO1993001561A1 (en)

Applications Claiming Priority:
AUPK719991 (AUPK7199), filed 1991-07-12
AUPK864491 (AUPK8644), filed 1991-10-01
AUPL211492 (AUPL2114), filed 1992-04-27

Cited By:
US8411087B2 (Microsoft Corporation) - Non-linear beam tracing for computer graphics (2013-04-02)
WO2015050304A1 (Samsung Electronics Co., Ltd.) - Rendering method and rendering device (2015-04-09)
US20160241833A1 (Samsung Electronics Co., Ltd.) - Rendering method and rendering device (2016-08-18)
CN117058299A - Method for realizing rapid mapping based on rectangular length and width in ray detection model (2023-11-14)

Patent Citations:
EP0193151A2 (Sony Corporation) - Method of displaying image (1986-09-03)


Legal Events:
AK - Designated states: AU CA JP KR US (kind code of ref document: A1)
AL - Designated countries for regional patents: AT BE CH DE DK ES FR GB GR IT LU MC NL SE (kind code of ref document: A1)
DFPE - Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
122 - EP: PCT application non-entry in European phase
NENP - Non-entry into the national phase; ref country code: CA