EP1844445A1 - Volumetric shadows for computer animation - Google Patents

Volumetric shadows for computer animation

Info

Publication number
EP1844445A1
Authority
EP
European Patent Office
Prior art keywords
volume
objects
occlusion
computer program
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05812297A
Other languages
German (de)
French (fr)
Inventor
Yangli H. Yee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pacific Data Images LLC
Original Assignee
Pacific Data Images LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pacific Data Images LLC filed Critical Pacific Data Images LLC
Publication of EP1844445A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/60 - Shadow generation


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A computer-rendered image of a three-dimensional scene includes a volumetric shadow cast by one or more objects in the scene. The data for the objects are rasterized to generate occlusion information that describes how objects within the volume occlude light. This information may be stored in an acceleration structure, which relates the occlusion information to volume elements within the occluding volume. A volumetric shadow is computed by tracing a ray through the occluding volume, the shadow based on the accumulated shading effects of the volume elements on the traced ray. An artist-driven lighting model may be used to compute the shadow rather than a model based on pure physics. This model may provide a number of adjustable tools for controlling shading effects, including falloff, blending, motion blur, and coloring.

Description

VOLUMETRIC SHADOWS FOR COMPUTER ANIMATION
BACKGROUND
Field of the Invention
[0001] This invention relates to rendering techniques in computer graphics, and in particular to rendering volumetric shadows using ray tracing.
Background of the Invention
[0002] Computing proper and realistic lighting is an important aspect of rendering computer-generated images from three-dimensional scenes. In an image, which can be a single frame in an animated work, one or more light sources illuminate the surfaces of various objects in the scene. These light sources have particular locations, lighting powers, and other properties that determine how they illuminate these surfaces. Their illumination affects the appearance of the objects in the image, as seen from a camera position, the point of view from which the image is taken. To produce realistic images, a rendering program, or renderer, determines the extent to which objects in the image occlude the light sources from illuminating other objects, thereby casting a shadow on those other objects. The degree to which a shadow is cast is determined according to a three-dimensional model of the objects and the light sources, usually based at least in part on the physics underlying light transmission. In this way, the renderer simulates shadows cast on various objects in the image.
[0003] Traditional shading techniques often produce shadows having very sharp edges when an object is lit by light coming from a point light source. With these techniques, objects are either completely in shadow or are fully lit by that light source. But in many instances it is desirable to have shadows that fall off gradually, even when lit with a point light source. This happens, for example, when shadows are cast by fur or smoke, where the occluding object gradually attenuates the light instead of completely blocking it all at once. This type of shadow is called a volumetric shadow. Volumetric shadows have traditionally been rendered either by ray tracing or using augmented shadow maps.
[0004] In the ray-tracing method, rays are projected from the surface to be shadowed through participating media towards the light source. The media's optical density is then accumulated along the ray. Although generally accurate, this technique can be time consuming and demanding of computing resources, especially where the participating media is complex. Existing ray-tracing techniques thus do a poor job shading through complex volumes, such as those that include hair or other small objects having complex shapes.
[0005] Shadow maps, on the other hand, store object distance (depth) and opacity information relative to the light source. At render time, the opacity information is retrieved from the shadow map, and these pre-calculated opacity values are used to determine the shadow on a surface that is cast by the volume. The shadow map method generally takes up less memory than the ray-tracing method, but the shadow maps have to be generated for each different light source and recomputed whenever a light source moves relative to the occluding objects.
SUMMARY OF THE INVENTION
[0006] The present invention improves upon previous ray-tracing methods for computing volumetric shadows for locations in a computer-rendered image of a three-dimensional scene. To compute a volumetric shadow in a scene, a volume that contains a number of occluding objects is divided into discrete volume elements. Occlusion information (such as opacity) is then computed for at least some of these volume elements. The occlusion information for a volume element describes how any objects within that volume element would occlude light. In this way, a discrete grid of occluding blocks can be used to approximate the occlusive effects of highly complex three-dimensional media in a scene. The shadow cast by the objects is then determined using these volume elements rather than the objects themselves. The technique thus allows for improved volumetric shading for complex occluding objects. Moreover, unlike shadow mapping techniques, this ray-tracing method is view independent; the generation cost is fixed, and it does not increase with each additional light source.
[0007] One embodiment of the invention includes the steps of rasterizing, building an acceleration structure, and shading. In the rasterization step, data for objects in the scene are converted from their physical model to corresponding occlusion information about the objects, such as their opacity and color. The objects may comprise a number of primitives, including curves, surfaces, polygon meshes, and volumes. This occlusion information is used to build an acceleration structure, which relates the occlusion information to volume elements within the occluding volume. The occlusion information in this acceleration structure is then accessed in the shading step, in which a shadow is computed for a location in the scene. The shadow is computed based on the accumulated shading effects of the volume elements on a ray passed from the location to be shaded through the occluding volume to the light source.
[0008] In another aspect of an embodiment of the invention, an artist-driven lighting model may be used for shading rather than a model based on pure physics. The model may include a number of artist-controllable tools for controlling values at which shadow starts to take effect and when it stops taking effect, as well as the rate at which the shadow falls off. In other embodiments, the model allows an artist to blend the shadow for motion blur and/or to reduce the shading when the occluding objects are far from the shaded surface. In other embodiments, adjustments are provided for controlling the color transfer of the shading volume to the volume shadow.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 is a diagram of a process flow for creating an animated work that is shaded using volumetric shadows and ray tracing, in accordance with an embodiment of the invention.
[0010] FIG. 2 is a two-dimensional representation of an occluding volume structure that includes a number of volume elements overlaid over objects within a scene, in accordance with an embodiment of the invention.
[0011] FIG. 3 is a cumulative density function for the volume of an example curve used in a rasterization step, in accordance with an embodiment of the invention.
[0012] FIG. 4 illustrates segmented polygons in a rasterization step, in accordance with an embodiment of the invention.
[0013] FIG. 5 is a two-dimensional representation of the bounding geometry of a volume through which rays are sent in a rasterization step, in accordance with an embodiment of the invention.
[0014] FIG. 6 is a two-dimensional representation of a sample volume for which an octree is used to encode occlusion values, in accordance with an embodiment of the invention.
[0015] FIG. 7 is a two-dimensional representation of an octree data structure, in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0016] FIG. 1 illustrates one embodiment of a process for creating an animated work that is shaded with volumetric shadows using ray tracing. The process begins with a set of geometry/scene data 110 that describe a three-dimensional scene to be animated. The data 110 include information about modeled objects that can cast shadows from one or more light sources placed within the scene. The objects may be simple solids and surfaces or more complex objects such as hair, skin, clothing, fog, and the like. The data 110 are rasterized 120 to create occlusion information 130 about an occluding volume within the scene that contains the objects.
[0017] The occluding volume bounds the objects in the scene that are to cast a volumetric shadow. The volume is preferably divided into a plurality of discrete volume elements. In one embodiment, each volume element represents a voxel, defined as the smallest distinguishable box-shaped element in a three-dimensional space. The user may specify the size of a voxel, with smaller voxel size generally leading to a better result but requiring more computing resources. Alternatively, volume elements as used herein may be larger or smaller than a voxel, may comprise a fractional number of voxels, and may be of non-uniform shape or size.
[0018] The occlusion information 130 created by the rasterizing step 120 describes the effect that the volume would have on any light that passes therethrough. For example, occlusion information 130 resulting from the rasterization step 120 preferably includes an opacity value for each volume element inside the occluding volume, where the opacity value relates how much light a volume element allows to pass. The occlusion information 130 may also include other information for the volume elements, such as a color value that describes how the color of the light passing through the volume element is affected.
[0019] Depending on how many volume elements are in the volume, the amount of occlusion information 130 for the volume may be significant. To manage and facilitate access to the occlusion information 130 by a renderer, therefore, an acceleration structure 150 is preferably built 140. In one embodiment, the acceleration structure 150 comprises a data structure that stores the occlusion information 130 efficiently and provides easy access thereto. In this way, the acceleration structure 150 can be accessed to compute 160 the shading at one or more locations in the scene, thereby producing shading information 170. The shading information 170 for locations in the scene is used, along with other information about the scene, to render 180 the scene to produce one or more animated images 190. The animated images 190 can then be combined to create an animated video work.
[0020] Various embodiments and variations of the invention are now described more fully with reference to the accompanying figures, in which several embodiments of the invention are shown. The present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The descriptions, terminology, and figures are provided so as to explain the invention without undue complexity, but should not be taken as limiting the scope of the invention, which is set forth in the claims below.
Rasterization
[0021] In the rasterization step, occlusion information about one or more objects in the scene is obtained from the physical model of those objects. The objects are contained within an occluding volume, which is divided into a number of volume elements, or voxels. Each voxel has a specific position in space and is associated with occlusion information (e.g., opacity and color values). The occlusion information may be zero-valued, for example if the voxel does not overlap with any objects. Although the occlusion information is preferably stored in an acceleration structure, during rasterization the voxels can be thought of as disjoint cubes located in space and associated with occlusion information for any objects contained therein.
[0022] In computer graphics, an object is typically modeled as a set of primitives, which are constituent elements of the object. Types of common primitives include curves, parametric surfaces, polygons and volumetric primitives, although other types of primitives are possible within the scope of the invention. In one embodiment, the rasterization step operates on each primitive of an object separately. During rasterization, therefore, the geometric representation of each primitive is converted into occlusion information, such as opacity and color values. This occlusion information is then added to the occlusion information of the volume elements that contain the primitive.
[0023] Accordingly, in one embodiment, a bounding volume in the scene is divided into voxels of specified size. An opacity value for each voxel is computed as the summation of the occluding contributions of any objects that pass through the voxel. In one embodiment, the occluding contribution of an object is taken as the fraction (e.g., percentage) of the voxel that any portion of the object occupies. Accordingly, the opacity value for a voxel is taken as the summation of the occluding contributions of each primitive of each object that passes through the voxel. This is conceptually expressed by the following equation:
Opacity of voxel = Σ_i (Volume of Object_i within voxel) / (Volume of voxel)
A summation of opacity contributions is preferably performed for each voxel that contains an occluding object. In this way, voxels have an opacity of zero if there are no occluding objects within them. Occupied voxels have an opacity value based on the fraction of the voxel that is occupied by an object that blocks light. If an object occupies an entire voxel, the opacity of that voxel would be 1, or completely occluding. The less of a voxel's volume occupied by occluding objects, the more transparent, or closer to an opacity value of 0, the voxel is.
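As a concrete illustration of this summation, the following Python sketch accumulates per-object overlap fractions into an opacity grid. The voxel_overlaps() iterator is a hypothetical stand-in for the geometric overlap tests described in the sections that follow.

```python
import numpy as np

def rasterize_opacity(objects, grid_dims, voxel_volume):
    """Accumulate each object's fractional volume overlap into an opacity grid."""
    opacity = np.zeros(grid_dims, dtype=np.float32)
    for obj in objects:
        # Assumed iterator: yields ((i, j, k), overlap_volume) for each voxel
        # the object passes through.
        for (i, j, k), overlap in obj.voxel_overlaps():
            opacity[i, j, k] += overlap / voxel_volume
    # Contributions from multiple objects add, but never exceed full occlusion.
    np.clip(opacity, 0.0, 1.0, out=opacity)
    return opacity
```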
[0024] FIG. 2 is a two-dimensional (e.g., side view) representation of the occluding volume and its voxel grid, illustrating the relationship between the voxels and the objects within the occluding volume. In this case, there are two curves 205 and 210 within the volume. After rasterization, each voxel should reflect the opacity and color information of any portions of objects contained within the voxel. In the relationship described above, the opacity associated with a voxel is the volume overlap of the objects and a voxel as a proportion of the voxel's volume. If multiple objects overlap with a voxel, their opacity contributions are thus added together, not to exceed unity (e.g., fully occluding). In the example of FIG. 2, voxel 215 does not contain any portion of an object, so its opacity value is zero. Voxel 220, however, does contain a portion of an object (curve 210), so its opacity value is calculated as the fraction of the voxel that curve 210 occupies. Similarly, both curves 205 and 210 occupy a portion of voxel 225, so its opacity value is calculated as the sum of the fractions of the voxel 225 that curves 205 and 210 occupy.
[0025] Depending on the type of primitives or objects that occupy the volume, different methods of rasterizing can be employed. For example, some primitives such as curves and surfaces have no volume, so assumptions for their volume are adopted so that a volume comparison can be made. But it should be understood that while certain methods are described for calculating the occlusion information for the voxels, any of a variety of calculations, assumptions, and simplifications could be used to obtain or estimate occlusion information for the volume. The invention is thus not limited to the particular calculations and methods described, but rather it encompasses generally rasterizing the object data to obtain information about the occlusion properties of a plurality of volume elements.
Curve Rasterization
[0026] One type of primitive is a curve. While curve primitives may have a variety of definitions in different three-dimensional modeling applications, a curve may be defined by an arbitrary line in space with a radius that varies along its length. To determine occlusion contributions of a curve for one or more voxels, the problem becomes to determine which voxels are occupied by the curve and how much of the curve occupies each voxel. Several techniques can be used for this curve rasterization: deterministic, fixed-length, and fixed-volume.
[0027] In the deterministic scheme, the curve is placed in the voxel grid, and an intersection test is performed for each voxel in the grid or for each voxel that is determined to intersect the curve. For each voxel that intersects the curve, the opacity value of the voxel is increased by the fraction of the voxel's volume occupied by the curve. When this is performed for all of the curves in the voxel, the opacity of each voxel will have been determined according to the equation for a voxel's opacity set forth above. Although this deterministic scheme can be used to compute opacity values, it can be too slow for certain applications, such as where the scene includes a large number of curves (e.g., millions of hairs generated for each character in the scene).
[0028] In one embodiment, the intersection for the deterministic test is determined for each curve by first approximating the curve by dividing it into a discrete number of cylinders, each cylinder having a radius and a height. For each cylinder of the curve, a three-dimensional bounding box is computed by finding the bounding box of the end caps of the cylinder, approximated by the bounding box of four points on each end cap. Then, for each voxel inside the bounding box of the cylinder, N samples are placed within the voxel. These samples are each tested for whether they are inside the cylinder. The ratio of samples inside the cylinder to the total number of samples per voxel, N, is the opacity contribution of that cylinder to the voxel.
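The per-voxel sampling test for a single cylinder might look like the sketch below. The sample count, seeded generator, and the exact point-in-cylinder math are illustrative assumptions, not the patent's prescribed procedure.

```python
import numpy as np

def cylinder_opacity_contribution(p0, p1, radius, voxel_min, voxel_size, n_samples=64):
    """Estimate the fraction of a voxel covered by one cylinder of a curve."""
    rng = np.random.default_rng(0)
    samples = voxel_min + rng.random((n_samples, 3)) * voxel_size
    axis = p1 - p0
    h2 = float(axis @ axis)              # squared cylinder height
    inside = 0
    for s in samples:
        v = s - p0
        t = (v @ axis) / h2              # normalized projection onto the axis
        if 0.0 <= t <= 1.0:              # between the two end caps
            d2 = v @ v - t * t * h2      # squared distance from the axis
            if d2 <= radius * radius:
                inside += 1
    return inside / n_samples            # opacity contribution of this cylinder
```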
[0029] In the fixed-length technique, the curve is divided into segments of equal length. In one embodiment, this constant length of each curve segment is chosen to be the cube root of the volume of a voxel. Because the radius associated with a curve can change, the volume of each segment could be larger or smaller than a voxel. If a segment's volume is smaller than a voxel, its opacity contribution to a voxel is computed directly as the fraction of the voxel's volume that the segment occupies, as in the equation above. If the segment's volume is bigger than a voxel, the segment is divided randomly into portions each having an equal volume that is smaller than a voxel. Opacity contributions are then computed for these smaller portions and added to the opacity values for each voxel they occupy. Again, although this fixed-length technique can be used to compute opacity values, it can generate noisy results when the radius of the curve varies so that the segments vary in volume from being larger than a voxel to being smaller than a voxel.
[0030] The fixed-volume technique can also be used to generate occlusion information for the voxels. In this technique, the curve is divided into a number of segments each having a volume equal to a voxel. Each equal-volume segment is then randomly inserted into the volume, and its opacity contribution is computed based on the amount of each voxel's volume it occupies. But because the radius of the curve can vary, the length of each fixed-volume segment is not necessarily constant. Therefore, a technique is used to divide the curve into segments of equal volume by finding the appropriate endpoints of each equal-volume segment along the curve's length.
[0031] In one embodiment, the length of each equal-volume segment is determined by parameterizing the curve by the parameter t, where a position on the curve is represented by P(t) = (x(t), y(t), z(t)), where x, y, and z are functions of the parameter t. Expressed in this way, the parameter t can be thought of as the distance along the curve from its start point. The curve can also have a varying radius function, r(t), also a function of the parameter t. Parameterized in this way, a cumulative density function for the volume of the curve is computed by integrating the radius of the curve across the length of the curve. The cumulative density function gives the accumulated volume of the curve from the beginning point of the curve (t = 0) to a current distance along the curve. A cumulative density function for an example curve is shown in FIG. 3. At the end point of the curve, t = T, the cumulative density function gives the volume of the entire curve, v = V.
[0032] To construct the cumulative density function, the curve may be interpolated between control points using an arbitrary basis function. For example, the radius of the curve is sampled at equal steps of length dt, where dt is half the length of the curve (T) divided by the number of control points. The volume of the curve for each length segment is then approximated using the values of dt and the radius evaluated at the parameter t. When the volumes of all the segments are computed, they are summed up incrementally to obtain the cumulative density function. In one embodiment, the cumulative density function is stored as a table of t and v samples, where for each position, t, the table has the cumulative volume, v. In the example cumulative density function graphed in FIG. 3, the associated curve would have a larger radius in the middle of its length, as reflected in the greater rate of volume accumulation (i.e., a steeper curve) in that middle region.
[0033] With the cumulative density function constructed, the curve is then divided into equal segments by sampling the curve on the t-axis at equal intervals of the v-axis (the intervals being the volume of a voxel), as shown in FIG. 3. The number of segments can be computed by dividing the curve volume V by the volume of a voxel, so that the equal-volume segments of the curve have the same volume as a voxel. Alternatively, the number of segments may be scaled by a user-controlled factor if it is desired to have smaller or larger segments. Each segment thus has a volume of dv equal to the total volume, V, divided by the number of segments. To determine the location along the curve for each equal-volume segment, the cumulative density function table is searched for the nearest two t values of each volume increment, dv. If the cumulative density function is stored as a table of coordinates, the position t corresponding to the next volume segment can be determined by interpolation of the table between the nearest two t values. With the parameter t known for each of the endpoints of the segment, its position in space can be readily determined from the parameterization function, P(t) = (x(t), y(t), z(t)).
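A compact sketch of this table-based inversion follows. It assumes the radius function can be evaluated on an array of parameter values and approximates each slice of curve volume as a disc of area pi*r^2 times the step length; the step count and the disc approximation are illustrative choices.

```python
import numpy as np

def equal_volume_breakpoints(radius_fn, curve_length, voxel_volume, n_steps=200):
    """Return parameter values t that split the curve into voxel-sized volumes."""
    t = np.linspace(0.0, curve_length, n_steps)
    dt = t[1] - t[0]
    r = radius_fn(t)                     # assumed vectorized radius function
    # Cumulative density function: accumulated volume from the curve's start.
    cdf = np.concatenate(([0.0], np.cumsum(np.pi * r[:-1] ** 2 * dt)))
    total_volume = cdf[-1]
    n_segments = max(1, int(round(total_volume / voxel_volume)))
    dv = total_volume / n_segments
    targets = dv * np.arange(1, n_segments)  # interior breakpoints on the v-axis
    # Invert the table: interpolate t as a function of cumulative volume v.
    return np.interp(targets, cdf, t)
```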
[0034] Once the curve has been divided into segments, each with a known position in world space, each segment's opacity contribution can be added to each of the voxels that overlap the segment. An opacity contribution is preferably based on the fraction of the voxel's volume occupied by the segment, as described above in the equation for the opacity of a voxel. However, it would be a computationally difficult task to determine the exact amount of volume overlap between a segment and each voxel. In one embodiment, therefore, an approximation for the opacity contributions is obtained by selecting M samples at various points within the segment. The samples may be deterministically or pseudorandomly selected within the segment, where M is preferably greater than 1. Each point sample is located in world space within a voxel and thus represents a 1/Mth portion of the segment's opacity contribution. Accordingly, for each of the samples in a segment, an opacity contribution is added to the voxel where the sample is located. This opacity contribution is based on the opacity of the sampled segment divided by the number of samples, M. When this is performed for each sample of each segment of a curve, the curve has been rasterized to generate occlusion information for the voxels in the occluding volume.
[0035] The rasterization can also produce a color value for the voxel. In one embodiment, the color value for a voxel is computed based on the color, if any, associated with any objects that occupy the voxel. If a voxel already contains a color value, the overlap of colors caused by two objects in a voxel may be resolved by simply averaging the existing color of the voxel with the new color to be inserted. Otherwise, if there is no color value already associated with the voxel, the shaded color of the curve segment that occupies the voxel is used for the voxel's color value.
[0036] In one embodiment, an algorithm can be used to eliminate falloff due to light and thus brighten the color values. This may be done by converting the color values from RGB (red, green, and blue) color space into HSV (hue, saturation, and value) color space, setting V to be equal to 1, and then converting back into RGB color space.
[0037] To save system resources, the color value need not be recomputed for every curve segment. Accordingly, in one embodiment, the color of the curve is evaluated every N voxels, where N is an integer greater than one that may be specified by a user. In this way, the first voxel for each curve is shaded to get its color, and the shading is not recomputed again until N segments have been evaluated along the curve.
Rasterization of Surfaces and Polygon Meshes
[0038] Other types of primitives include surfaces (e.g., parametric surfaces) and polygon meshes. Surfaces and meshes usually have area but no volume. In one embodiment, a surface is simplified by tessellating the surface into a polygonal mesh. Because a two-dimensional polygon has no volume, each polygon of a polygon mesh can be extruded along its normal by a user-specified amount, thereby giving each polygon in the mesh some volume. Because polygons in a mesh are often quadrilaterals or triangles, it is easy to calculate the volume of the resulting rectangular volume or prism. In this way, opacity contributions for a voxel are readily determined based on the amount of the voxel's volume occupied by each extruded polygon element.
[0039] The opacity contributions of an extruded polygon mesh are determined by dividing the volume of each extruded polygon by the volume of a voxel. The result, N, represents the number of voxel-sized portions the extruded polygon contains. Then, N points are chosen at random on the polygon. FIG. 4 illustrates the process of rasterizing a polygon into the volume of voxels. In FIG. 4, two different polygons are rasterized by picking N random points on the polygon. The opacity contribution of the polygon on the volume is then determined for each of the N points by inserting the opacity information of the polygon at the point into one or more voxels in the voxel grid that correspond to the point's position in space.
[0040] As with curve primitives, color values may also be computed for the voxels. To compute a color value, the N points on the polygon may be shaded to obtain a color therefor. These colors may be added to the voxel's occlusion information in the same way as described for curves above.
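For a triangle, the extrusion-and-splat procedure described above might be coded as in the following sketch; the world_to_voxel() index mapping and the uniform barycentric sampling are assumptions made for the example.

```python
import numpy as np

def rasterize_triangle(grid, v0, v1, v2, thickness, voxel_volume, world_to_voxel):
    """Splat an extruded triangle's opacity into the grid at N random points."""
    rng = np.random.default_rng(0)
    area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
    prism_volume = area * thickness          # volume after extrusion along the normal
    n = max(1, int(round(prism_volume / voxel_volume)))  # voxel-sized portions
    for _ in range(n):
        a, b = rng.random(2)                 # uniform barycentric sample
        if a + b > 1.0:
            a, b = 1.0 - a, 1.0 - b
        p = v0 + a * (v1 - v0) + b * (v2 - v0)
        i, j, k = world_to_voxel(p)          # assumed world->voxel index mapping
        # Each point carries one voxel-sized share of the prism's opacity.
        grid[i, j, k] = min(1.0, grid[i, j, k] + prism_volume / (n * voxel_volume))
```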
Rasterization of Volumetric Primitives
[0041] Volume primitives are another type of primitive that may exist within the occluding volume. A volume primitive may be defined by an enclosing geometry and a volume shader. The volume shader is a function that specifies the opacity of each point within the enclosing geometry. For example, the volume shader for a sphere with constant properties might specify an opacity of 0.5 within the sphere and 0.0 outside the sphere. The volume shader may also specify color values for the volume.
[0042] To rasterize such a volume primitive, in one embodiment, rays are directed from below the enclosing geometry upwards through the volume. FIG. 5 illustrates this embodiment of a rasterization process for a volume primitive. For each ray, the first time the ray passes a surface of the enclosing geometry, the ray is considered to be inside the enclosing geometry. When inside the enclosing geometry, the opacity and color information is evaluated according to the volume shader at points within the enclosing geometry along each ray. In one embodiment, the volume shader is sampled along each ray at intervals equal to the height of a voxel, and each column of voxels has a ray. When the surface of the enclosing geometry is passed again, the ray is considered to be outside the enclosing geometry. The rays are then followed until a surface of the enclosing geometry is again passed. Every odd intersection of the ray and the geometry is counted as being inside the geometry, and every even intersection outside the geometry. Once the ray has left the vicinity of the bounding geometry, the process ends for that ray.
[0043] At each point where the volume shader is sampled, the corresponding voxel is assigned the opacity and color values of the evaluated volume shader. Where the voxel already has an opacity and/or color value, the evaluated volume shader is added to the voxel as described above.
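The parity-based ray march can be sketched as follows; geometry.intersections(), ray.point_at(), and grid.add() are hypothetical interfaces standing in for the renderer's real intersection query, ray parameterization, and voxel accumulation.

```python
def rasterize_volume_primitive(grid, geometry, shader, voxel_height, column_rays):
    """March one vertical ray per voxel column; spans between odd and even
    intersections with the enclosing geometry are inside the volume."""
    for ray in column_rays:
        hits = sorted(geometry.intersections(ray))   # assumed intersection query
        # Pair up entry/exit hits: (1st, 2nd), (3rd, 4th), and so on.
        for t_in, t_out in zip(hits[0::2], hits[1::2]):
            t = t_in
            while t < t_out:
                p = ray.point_at(t)
                opacity, color = shader(p)           # evaluate the volume shader
                grid.add(p, opacity, color)          # accumulate into the voxel at p
                t += voxel_height                    # step at voxel-height intervals
```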
Acceleration Structure
[0044] After the rasterization process is completed, the occlusion information for the voxels has been obtained. Each of the voxels is preferably associated with an opacity value and a color value, each of which may be zero. This occlusion information is used in the shading process to perform the volumetric shadow computation. But because there are typically a significant number of voxels, it is desirable to organize this information in a way that allows a system to access the information for the voxels quickly.
[0045] Many different types of data structures may be used to store the voxels' occlusion information. In one embodiment, the occlusion information computed during rasterization of the geometry data for the objects in the scene is used to build an acceleration structure. The acceleration structure relates the occlusion information to volume elements or voxels within the occluding volume.
[0046] The volume of voxels may be densely packed or sparsely packed. A densely packed volume is one that has a relatively high number of voxels with nonzero-valued occlusion information (e.g., voxels that contain an occluding object). Conversely, a sparsely packed volume is one that has a relatively high number of voxels with zero-valued occlusion information (e.g., voxels that do not contain any occluding objects). Depending on the nature of the voxels in the occluding volume, different acceleration structures may be built to optimize access to the occlusion information therein.
[0047] If the volume is densely packed, as is likely when the objects include volume primitives (e.g., smoke), a high number of the voxels within the volume are likely to have nonzero opacity and/or color values. Accordingly, in one embodiment, the information for the voxels is stored in a uniform voxel grid, where the acceleration structure comprises an MxNxO three-dimensional array of voxels. In one embodiment, each position in the array identifies a voxel and includes an opacity value and a color value for that voxel. Stored in such an array, ray tracing through the voxel grid can be performed by known methods, such as the DDA (digital differential analyzer) algorithm for tracing rays through uniform voxel grids. In addition, this array need not have the same dimensions as the entire occluding volume. For example, it can be reduced in size where a smaller sub-volume within the occluding volume includes all of the voxels that have nonzero occlusion information.
[0048] If the volume is sparsely packed, for example where the volume contains many curve and surface primitives, most of the voxels are likely to have zero-valued occlusion information. In such a case, in one embodiment, the acceleration structure is built by storing the occlusion information for the voxels in an octree. An octree is a data structure formed by repeatedly subdividing a volume into eight sub-volumes (e.g., cubes) until the desired level of subdivision is achieved for each branch of the octree. Preferably, a branch of the octree is subdivided only if there is a voxel inside it that contains nonzero occlusion information.
[0049] FIG. 6 shows the subdivision process of a sample volume for which an octree is used to encode occlusion values. As shown from a side slice view, the volume is divided into two parts along each of the three axes if the volume contains a nonzero-valued voxel. Each of the resulting sub-volumes is further divided into two parts along each of the three axes if the sub-volume contains a nonzero-valued voxel. This division continues until either (1) the size of the newly divided sub-volume matches the size of a voxel, or (2) all of the voxels within the sub-volume are zero-valued. In this way, the octree acceleration structure need not store zero values for regions within the volume that do not contain occlusion information. As illustrated, the octree is subdivided down to the voxel level as needed to include nonzero-valued voxels, although this also results in some zero-valued voxels being included as well. With this type of acceleration structure, when a ray is traced through the volume, the shading software determines the points where the ray intersects the octree and then steps through each of the voxels accordingly. In FIG. 6, the black dots along the ray represent the locations where the octree is queried for the corresponding voxel's opacity and color values.
[0050] Typically, a user knows the domain and the types of objects in the scene that are casting the volumetric shadow (e.g., fur, fog, solids, etc.). Therefore, the user is often in the best position to provide an input to the rendering software indicating whether the volume should be treated as densely or sparsely packed. Alternatively, the rendering software may suggest or automatically select an appropriate acceleration structure based on heuristic methods. The software can determine whether the volume should be treated as densely or sparsely packed, for example, based on the percentage of voxels within the volume that have nonzero occlusion information. In one embodiment, the volume is considered sparsely packed if less than 10% of the voxels contain occlusion information, and the volume is considered densely packed if more than 50% of the voxels contain occlusion information. However, different numbers and/or heuristics may be found to be more suitable in different applications.
[0051] The octree acceleration structure can be designed to accommodate a large number of voxels, which may be required for feature animation applications. For example, traditional octrees are stored as a parent node with pointers to eight children nodes, which represent the eight cubes within the parent. The children nodes may further contain pointers to their eight children nodes, and so on. But with a large number of nodes involved, the overhead in storing these pointers becomes excessive. To reduce the required overhead, each non-leaf node preferably includes a flat array of eight indices rather than eight pointers. The values of these indices index the corresponding children nodes in the same array. At the leaf nodes, the indices point instead to an array that contains the occlusion information for the voxels, for example an opacity value and a color value for each. Such a structure is shown in FIG. 7.
[0052] Additionally, it has been observed that in most applications the voxels rarely have uniform opacity or color values. Therefore, instead of storing data at each level of the octree, the occlusion data are preferably stored only at the finest detail level, at the leaf nodes. Finally, the data may be stored on a storage medium in a compressed form, for example using the Lempel-Ziv compression algorithm via the public domain ZLIB library. For speed and ease of access, however, the octree is preferably not compressed in memory.
Accessing Voxels in the Accelerated Structure
[0053] With the occlusion information stored in the acceleration structure, the data therein can be accessed like data in any database. For example, an opacity value and a color value for a particular voxel can be retrieved with a key that identifies the voxel by its world space coordinates (or, alternatively, by any other desired scheme). Identifying voxels according to their world coordinates hides from the user which implementation of the acceleration structure is being used (array, octree, or another). The world space coordinates can be converted into integer voxel space coordinates, for example, so that the world space coordinate in the bottom left of the cube would be the voxel coordinate (0, 0, 0) and the top right of the cube would be the voxel coordinate (M-1, N-1, O-1) if querying a uniform voxel grid or (M-1, M-1, M-1) if querying an octree.
[0054] For a uniform voxel grid, in one embodiment, a key is generated for a voxel by the function:
key = x + yM + zMN,
where the voxel coordinate is (x, y, z) and the grid is of dimension MxNxO. This provides the key as a one-dimensional index into a flat array that stores the opacity and color information.
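In code, this key computation is a one-liner; the sketch below assumes the packing implied by the formula, with x varying fastest.

```python
def voxel_key(x, y, z, M, N):
    """Map voxel coordinate (x, y, z) in an MxNxO grid to a flat-array index."""
    return x + y * M + z * M * N
```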
[0055] Where the acceleration structure is implemented with an octree, a key can also be used to access the occlusion data. In one embodiment, the number of voxels inside the octree that have nonzero occlusion data is determined. A counter may be used to track this number as the voxels are being generated, where the counter corresponds to the key used to access the opacity and color data in the acceleration structure. The occlusion data are stored in a list of leaf nodes for the octree, while the structure of the octree is stored in a list of parent nodes. Splitting the octree and its data in this way allows the data to be added to or operated on without having to change the structure itself. Preferably, the parent nodes and the children nodes are stored as flat arrays.
[0056] FIG. 7 is a diagram of the memory layout of an octree with example data. The octree comprises two lists: a list of parent nodes and a list of leaf nodes. Each entry in the list of parent nodes contains eight indices. Each of the indices for a parent node points to another parent node, to a leaf node, or to nothing (in the example, a value of -1). If a parent node points to nothing, the voxel or voxels within its subdivision are all empty of occlusion data. Whether a parent node points to another parent node or to a leaf node depends on the number of subdivisions in the octree. In this case, the octree is three levels deep, so the third level points to leaf nodes instead of another parent node. All parent nodes at the lowest level of the octree have indices that point to the leaf node list (numbered independently from the parent node list). The entries in the list of leaf nodes each contain the occlusion information (here, opacity and color data) for their respective voxel. The array indices of the leaf nodes correspond to the keys, which are stored in the lowest level of the octree.
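The two-list layout of FIG. 7 might be represented as in the sketch below; the bitwise child-selection rule and the lookup signature are illustrative assumptions rather than the patent's exact encoding.

```python
from dataclasses import dataclass, field

EMPTY = -1  # index meaning "no occlusion data below this child"

@dataclass
class FlatOctree:
    """Octree stored as two flat lists: parent nodes hold eight child indices;
    leaf entries hold the (opacity, color) data for one voxel."""
    parents: list = field(default_factory=list)  # each entry: list of 8 indices
    leaves: list = field(default_factory=list)   # each entry: (opacity, (r, g, b))

    def lookup(self, vx, vy, vz, depth):
        """Walk from the root down to the leaf for voxel (vx, vy, vz)."""
        node = 0
        for level in range(depth - 1, -1, -1):
            # Pick the child octant from one bit of each coordinate.
            child = (((vx >> level) & 1)
                     | (((vy >> level) & 1) << 1)
                     | (((vz >> level) & 1) << 2))
            idx = self.parents[node][child]
            if idx == EMPTY:
                return 0.0, (0.0, 0.0, 0.0)  # empty region: no occlusion
            if level == 0:
                return self.leaves[idx]      # lowest level indexes the leaf list
            node = idx
        return 0.0, (0.0, 0.0, 0.0)
```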
[0057] Regardless of whether stored in an array, an octree, or any other data structure, the color values for each voxel can be compressed to save system resources. In one embodiment, the color values comprise full floating point numbers for red, green, blue, and alpha. Each color value is gamma corrected, multiplied by 255, and then stored in an 8-bit memory location. This can be performed for each red, green, and blue value, using the following formula:
color_byte = 255 · color_value^(1/2.2),
where color_value is the original color value and color_byte is the converted 8-bit number. The alpha value of a color is not gamma corrected, but rather merely multiplied by 255 and stored in an 8-bit memory location. When the user queries for the color of a voxel, these transformations are reversed and the original color values provided.
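A sketch of this packing and its inverse, assuming color components in [0, 1] and the conventional 2.2 gamma exponent implied by the formula above:

```python
def encode_color(r, g, b, a):
    """Pack floating-point RGBA into four bytes; RGB is gamma corrected,
    alpha is stored linearly, per the scheme above."""
    gamma = lambda v: int(round(255.0 * (v ** (1.0 / 2.2))))
    return bytes([gamma(r), gamma(g), gamma(b), int(round(255.0 * a))])

def decode_color(packed):
    """Reverse the transformations to recover approximate original values."""
    r, g, b, a = (c / 255.0 for c in packed)
    return r ** 2.2, g ** 2.2, b ** 2.2, a
```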
Volume Shadow Shading
[0058] The occlusion information in this acceleration structure is then accessed in the shading step, in which a volumetric shadow is computed for a location in the scene. The shadow is computed based on the accumulated shading effects of the volume elements on a ray passed from the location to be shaded through the occluding volume to the light source. The shading step uses several types of information about a scene, including a location on a surface to be shaded, the position and type of the light source from which light comes, and the acceleration structure describing objects that can cast a volumetric shadow. Using this information, a ray is cast from the location on the surface to the light source, through the occluding volume. The accumulated occluding effect of any voxels through which the ray passes is then determined.
[0059] Classically, the transmittance of a light to point x is defined as:
T(x) = exp(-O),
where O is defined as:
O = const · Σ_i (opacity_i · distance_i),
where const is a constant, opacity_i is the opacity value for the ith voxel, distance_i is the distance the ray travels through the ith voxel, and the summation is over all voxels through which the ray passes. When most of the voxels are opaque, the transmittance is very small, but if all the voxels are transparent, the transmittance is 1 and all the light shines through the participating media unblocked (e.g., no shadow is cast).
[0060] This classical transmittance function may be modified to allow the user, such as a graphics artist, to adjust the effect of the occluding volume on the volume shadow. In one embodiment, the adjustment enables the artist to control when the shadow starts to take effect and when it stops taking effect, as well as how fast the shadow falls off between that start and end. To control the falloff of the volume shadow, the variables ShadowFalloffStart and ShadowFalloffEnd are threshold variables that can be adjusted by an artist. When the computed O (as defined above) is less than ShadowFalloffStart, the transmittance is set to 1. In effect, this causes the occluding volume not to cast any shadow unless the occluding volume blocks at least a threshold amount of the light. Similarly, when the computed O is greater than ShadowFalloffEnd, the transmittance is set to 0. In effect, this causes the occluding volume to block the light fully if the occluding volume blocks more than a threshold amount of the light.
[0061] In another embodiment, between these two falloff values, the modified transmittance function is shaped like a horizontal s-curve, where the rate of falloff is adjustable by an artist. It is given by:
T(x) = S_CURVE( ((ShadowFalloffEnd - O) / (ShadowFalloffEnd - ShadowFalloffStart))^ShadowFalloffExp ),
where ShadowFalloffExp is an artist-controllable variable that controls the shape and falloff of the s-curve, and where:
S_CURVE(t) = 3t^2 - 2t^3.
In this way, an artist can control how fast the volume shadow falls off between the start and end thresholds.
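Putting the falloff controls together, a minimal sketch follows; the parameter names are illustrative, and the placement of ShadowFalloffExp as an exponent inside the s-curve follows the reconstructed equation above.

```python
def s_curve(t):
    """S_CURVE(t) = 3t^2 - 2t^3."""
    return 3.0 * t * t - 2.0 * t * t * t

def falloff_transmittance(O, falloff_start, falloff_end, falloff_exp):
    """Transmittance shaped by the artist-controlled falloff thresholds."""
    if O <= falloff_start:
        return 1.0   # too little occlusion: no shadow at all
    if O >= falloff_end:
        return 0.0   # enough occlusion: light fully blocked
    u = (falloff_end - O) / (falloff_end - falloff_start)
    return s_curve(u ** falloff_exp)
```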
[0062] In another embodiment, the calculated transmittance is further modified to simulate blur for the volume shadow. In this scheme, the shading software can slowly attenuate the effect of the shadow as a function of the shadow's distance from the occluding volume, allowing the artist to turn off the shadow in full or in part based on this distance. To turn off the shadow, the calculated transmittance of the light through the occluding volume is adjusted upward from its calculated value toward 1 (i.e., no blocking). In one implementation, the variable Far is defined as the distance from the surface to be shaded to the last point of exit from the occluding volume along the traced ray. The variable Blend is set to 0 (e.g., the feature is turned off) when Far is less than a threshold, ShadowDisappearStart, and it is set to 1 (e.g., no shading) when Far is greater than a threshold, ShadowDisappearEnd. Between these thresholds, Blend is calculated according to the equation:
Blend = S_CURVE( ((Far - ShadowDisappearStart) / (ShadowDisappearEnd - ShadowDisappearStart))^ShadowDisappearExp ),
where ShadowDisappearExp is an artist-controllable variable that controls the rate at which this effect is scaled by the distance from the occluding object, and S_CURVE is defined above. Finally, the modified transmittance is calculated by:
T'(x) = Blend + (1 - Blend) · T(x).
This blending effect increases the value of the transmittance (i.e., reduces the shadow) up to the maximum value of 1, allowing the volume shadow to be turned off or reduced at locations sufficiently far from any occluding objects.
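The distance-based blend can be sketched the same way; again the parameter names and the exponent placement are assumptions consistent with the reconstructed Blend equation.

```python
def s_curve(t):
    return 3.0 * t * t - 2.0 * t * t * t

def blended_transmittance(T, far, disappear_start, disappear_end, disappear_exp):
    """Fade the shadow out as Far grows, per the Blend equations above."""
    if far <= disappear_start:
        blend = 0.0          # feature off: the shadow fully applies
    elif far >= disappear_end:
        blend = 1.0          # far enough away: no shading at all
    else:
        u = (far - disappear_start) / (disappear_end - disappear_start)
        blend = s_curve(u ** disappear_exp)
    return blend + (1.0 - blend) * T   # T'(x) = Blend + (1 - Blend) * T(x)
```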
[0063] Color filtering caused by objects in the occluding volume can also be added. In one embodiment, an average coloring effect of the occluding volume is determined according to the equation (e.g., for each of red, blue, green, and alpha elements):
AverageColor = Σ_i (Color_i · Length_i) / Σ_i Length_i,
where Color_i is a color value for the ith voxel, Length_i is the distance the ray travels through the ith voxel, and the summations are over all voxels through which the ray passes. Then, the resulting color of the location to be shaded may be given by:
T_color(x) = T'(x) + (1 - T'(x)) · AverageColor.
In this way, the amount of color transmitted to the shaded location by the occluding volume depends on the amount of light the volume blocks. If the shaded location is fully in shadow (i.e., if T'(x) is 0), its color will be equal to the average color of the occluding volume, as computed above.
[0064] In one embodiment, an artist can adjust the amount of color of the occluding volume that is transmitted to the location to be shaded, or shading point. This adjustment can be achieved using the equation:
T_color'(x) = (1 - t) + t · T_color(x),
where t is a user-specified transmittance amount. If t = 1, the occluding volume transmits the maximum amount of color to the location, and if t = 0, it transmits none of the volume's color in the shading.
[0065] Finally, the color of the shading point due to the volume shadow is determined by:
PointColor(x) = LightColor · LightIntensity · T'(x) · T_color'(x).
Although this coloring scheme is not necessarily physically correct, it is generally faster to compute than the physically correct solution and is a good approximation thereof.
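The full per-channel coloring chain, from AverageColor through PointColor, can be sketched as follows; ray_voxels is assumed to be a list of (color, length) pairs for the voxels pierced by the shadow ray.

```python
def shade_point(ray_voxels, T_prime, t_amount, light_color, light_intensity):
    """Per-channel color transfer from the occluding volume to the shaded point."""
    total_len = sum(length for _, length in ray_voxels)
    if total_len == 0.0:
        # Ray pierced no voxels: the light arrives unfiltered.
        return [c * light_intensity for c in light_color]
    out = []
    for ch in range(3):
        avg = sum(color[ch] * length for color, length in ray_voxels) / total_len
        t_color = T_prime + (1.0 - T_prime) * avg         # volume's color transfer
        t_color = (1.0 - t_amount) + t_amount * t_color   # artist color amount t
        out.append(light_color[ch] * light_intensity * T_prime * t_color)
    return out
```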
[0066] As described above, a single ray is shot from the shading point to the light source. Although this works well for point light sources, it may be desirable to take into account the effects of an area light source. Where the illuminating light source is an extended area light source, the method described above is simply repeated for a plurality of rays shot from the location to be shaded to different (possibly random or pseudo random) locations on the light source. The results are then averaged to obtain the color of the shading point.
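For an area light, the repetition and averaging amounts to a few lines; shade_for_sample() stands in for the single-ray computation above, applied to one sample point on the light.

```python
def area_light_shade(shade_for_sample, light_sample_points):
    """Average the single-ray result over several points on an area light."""
    results = [shade_for_sample(p) for p in light_sample_points]
    n = float(len(results))
    return [sum(r[ch] for r in results) / n for ch in range(3)]
```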
[0067] In some implementations, objects such as smoke may produce volume shadows with visible bands due to the voxelization of the smoke. To minimize the banding due to voxelization, the color and opacity values of each voxel can be replaced with values interpolated from the voxel's surrounding neighbors (e.g., using the 26 voxels that surround each voxel). This interpolation can be performed with trilinear interpolation of the opacity and color information of a voxel, using the center of the ray segment passing through the voxel as the interpolant.
[0068] Any of the steps, operations, or processes described herein can be performed or implemented with one or more software modules or hardware modules, alone or in combination with other devices. It should further be understood that any portions of the system described in terms of hardware elements may be implemented with software, and that software elements may be implemented with hardware, such as hard-coded into a dedicated circuit. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described herein. Moreover, the methods, systems, and computer program products can be employed to produce a feature animation product, such as a movie, that includes images with volumetric shadows rendered according to any of the embodiments described herein.
[0069] The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above teachings. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.

Claims

What is claimed is:
1. A computer-implemented method for generating a volumetric shadow for a computer-rendered image of a three-dimensional scene, the method comprising:
dividing a volume in the scene into a plurality of volume elements, the volume containing a set of objects that can occlude light;
rasterizing geometry data for the objects to generate occlusion information for a plurality of the volume elements, the occlusion information for a volume element describing how light is occluded by portions of any objects that occupy the volume element;
for each of a set of volume elements, determining the volume element's occlusive effect on light that passes from a light source to a location in the scene, the occlusive effect of a volume element based on the volume element's occlusion information; and
accumulating the occlusive effects to determine a total amount of occlusion of the light source caused by the objects within the volume.
2. The method of claim 1, wherein each volume element's occlusion information includes an opacity value associated with the volume element.
3. The method of claim 1, wherein each volume element's occlusion information includes a color value associated with the volume element.
4. The method of claim 1, wherein an object located within the volume comprises a set of primitives.
5. The method of claim 4, wherein the set of primitives includes a curve.
6. The method of claim 4, wherein the set of primitives includes a parametric surface.
7. The method of claim 4, wherein the set of primitives includes a polygon mesh.
8. The method of claim 4, wherein the set of primitives includes a volumetric primitive.
9. The method of claim 1, wherein the rasterizing comprises, for each of a set of volume elements:
identifying objects located at least partially within the volume element; and
determining the occlusion information for the volume element based on an amount of overlap of each of the objects located at least partially within the volume element.
10. The method of claim 9, wherein the amount of overlap is determined based on a fraction of a number of sampled points within the volume element that are also located within the object.
11. The method of claim 9, wherein determining the occlusion information for each volume element is further based on the opacity values of the objects located at least partially within the volume element.
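Claims 9-11 describe estimating a voxel's occlusion from the overlap of each object with the voxel. A minimal point-sampling sketch, with the object interface (`contains`, `opacity`) assumed for illustration:

```python
import random

def overlap_fraction(inside, lo, hi, n_samples=64):
    """Estimate the fraction of a voxel covered by an object (claim 10):
    the ratio of random sample points that fall inside the object.

    inside -- predicate (x, y, z) -> bool for the object's geometry.
    lo, hi -- opposite corners of the voxel (names illustrative).
    """
    hits = 0
    for _ in range(n_samples):
        point = [random.uniform(a, b) for a, b in zip(lo, hi)]
        if inside(*point):
            hits += 1
    return hits / n_samples

def voxel_occlusion(objects, lo, hi):
    """Combine per-object overlap (claim 9), weighting each object's
    overlap by its own opacity (claim 11)."""
    total = sum(obj.opacity * overlap_fraction(obj.contains, lo, hi)
                for obj in objects)
    return min(total, 1.0)
```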
12. The method of claim 1, wherein rasterizing an object that includes a number of primitives comprises, for each primitive: dividing the primitive into a plurality of segments of equal volume, each segment having a position in space; associating each segment with a volume element based on the segment's position in space; and for each volume element to which a segment is associated, adding to the occlusion information of the volume element.
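One way to realize claim 12's segment-binning rasterization, sketched under the assumption that a primitive exposes a parametric `point_at(t)` sampler and contributes a fixed opacity per equal-volume segment:

```python
def rasterize_primitive(point_at, n_segments, grid, voxel_size, opacity_per_segment):
    """Divide a primitive into equal segments, associate each segment with
    the voxel containing its midpoint, and add to that voxel's occlusion.

    point_at -- function mapping t in [0, 1] to a point on the primitive.
    grid     -- dict mapping an integer voxel index to accumulated opacity.
    """
    for k in range(n_segments):
        t = (k + 0.5) / n_segments                     # segment midpoint
        x, y, z = point_at(t)
        idx = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[idx] = grid.get(idx, 0.0) + opacity_per_segment
```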
13. The method of claim 1, wherein at least one of the objects comprises a curve primitive, and rasterizing the curve primitive includes: a step for determining the curve primitive's contribution to the occlusion information of the volume elements.
14. The method of claim 1, wherein at least one of the objects comprises a surface primitive, and rasterizing the surface primitive comprises a step for determining the surface primitive's contribution to the occlusion information of the volume elements.
15. The method of claim 1, wherein at least one of the objects comprises a volume primitive, and rasterizing the volume primitive comprises a step for determining the volume primitive's contribution to the occlusion information of the volume elements.
16. The method of claim 1, further comprising: storing the occlusion information for the volume elements in an octree data structure.
17. The method of claim 16, wherein the octree data structure comprises: a list of leaf nodes, each leaf node comprising the occlusion information for a volume element; and a plurality of parent nodes, each parent node comprising an array of items selected from a group consisting of: a reference to a parent node, a reference to a leaf node, and an empty value.
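The node layout of claim 17 might look like the following sketch, in which each parent holds eight child slots that are a parent reference, a leaf reference, or the empty value; the insertion helper and the power-of-two grid are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class LeafNode:
    """Occlusion information for one volume element."""
    opacity: float = 0.0
    color: tuple = (0.0, 0.0, 0.0)

@dataclass
class ParentNode:
    """Eight child slots; each is a ParentNode, a LeafNode, or None
    (the empty value), so empty regions cost almost nothing to store."""
    children: list = field(default_factory=lambda: [None] * 8)

def insert_leaf(root, index, depth, leaf):
    """Walk from the root to the voxel at integer coordinates `index`
    in a 2**depth-wide grid, allocating parent nodes on demand."""
    node = root
    for level in range(depth - 1, 0, -1):
        slot = sum(((index[axis] >> level) & 1) << axis for axis in range(3))
        if node.children[slot] is None:
            node.children[slot] = ParentNode()
        node = node.children[slot]
    slot = sum((index[axis] & 1) << axis for axis in range(3))
    node.children[slot] = leaf
```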
18. The method of claim 1, further comprising: determining a color value for the location based on the determined total amount of occlusion.
19. The method of claim 18, wherein determining the color value of the location comprises assuming that the location is not occluded by the volume if the determined amount of occlusion does not exceed a minimum threshold value.
20. The method of claim 19, wherein the minimum threshold value is adjustable by a user.
21. The method of claim 1, wherein determining the color value of the location comprises assuming that the location is fully occluded by the volume if the determined amount of occlusion exceeds a maximum threshold value.
22. The method of claim 21, wherein the maximum threshold value is adjustable by a user.
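Claims 19-22 bracket the accumulated occlusion with user-adjustable thresholds. A trivial sketch, with threshold names and default values invented for illustration:

```python
def apply_occlusion_thresholds(occlusion, min_threshold=0.05, max_threshold=0.95):
    """Treat the location as unoccluded below the minimum threshold
    (claims 19-20) and as fully occluded above the maximum (claims 21-22)."""
    if occlusion <= min_threshold:
        return 0.0
    if occlusion >= max_threshold:
        return 1.0
    return occlusion
```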
23. The method of claim 1, wherein the determined color value for the location is further based on a function that controls how the volumetric shadow increases as the determined amount of occlusion increases.
24. The method of claim 23, wherein the function is adjustable by a user.
25. The method of claim 23, wherein the function is given by:
T(x) = S_CURVE( ((ShadowFalloffEnd - O) / (ShadowFalloffEnd - ShadowFalloffStart))^ShadowFalloffExp )

where T(x) is a transmittance of light from the light source to the location; O is the total amount of occlusion of the light source caused by the objects within the volume; ShadowFalloffExp, ShadowFalloffEnd, and ShadowFalloffStart are adjustable by a user; and:

S_CURVE(t) = 3t^2 - 2t^3.
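The exact placement of ShadowFalloffExp in claim 25's formula is ambiguous in the source text; the sketch below shows one plausible reading, with parameter names and defaults invented for illustration.

```python
def s_curve(t):
    """The smooth falloff of claim 25: S_CURVE(t) = 3t^2 - 2t^3."""
    return 3.0 * t * t - 2.0 * t ** 3

def transmittance(occlusion, falloff_start=0.0, falloff_end=1.0, falloff_exp=1.0):
    """Map a total occlusion O to a transmittance T(x); the exponent's
    position is an interpretation, not the verified original formula."""
    t = (falloff_end - occlusion) / (falloff_end - falloff_start)
    t = min(max(t, 0.0), 1.0)            # clamp into [0, 1]
    return s_curve(t ** falloff_exp)
```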
26. The method of claim 1, wherein determining the color value of the location comprises assuming that the location is not occluded by the volume if the distance from the location to a nearest object in the volume does not exceed a minimum threshold value.
27. The method of claim 26, wherein the minimum threshold value is adjustable by a user.
28. The method of claim 1, wherein determining the color value of the location comprises reducing the volumetric shadow based on the distance from the location to a nearest object in the volume.
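Claims 26-28 modulate the shadow by the distance from the shaded location to the nearest object in the volume. One plausible reading, sketched with invented parameter names, suppresses the shadow near the geometry (e.g., to avoid voxel-scale self-shadowing) and lets it ramp back in with distance:

```python
def distance_modulated_shadow(shadow, dist_to_nearest, min_dist=0.1, fade_dist=0.5):
    """Return the shadow reduced by proximity to the nearest occluder.

    Below min_dist the location is treated as unoccluded (claims 26-27);
    beyond that the shadow ramps back in over fade_dist (claim 28). The
    direction of the ramp is an interpretation of the claim language.
    """
    if dist_to_nearest <= min_dist:
        return 0.0
    ramp = min(1.0, (dist_to_nearest - min_dist) / fade_dist)
    return shadow * ramp
```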
29. The method of claim 1, wherein the determined color is further based on a transmitted color from the volume, where an amount of color transmitted by the volume depends on an amount of light the volume occludes.
30. The method of claim 1, wherein the determined color is further based on a transmitted color from the volume, where the amount of transmitted color from the volume is reduced by a transmittance amount.
31. The method of claim 30, wherein the transmittance amount is adjustable by a user.
32. The method of claim 1, wherein the light source is an area light source, and further comprising: repeating the tracing and accumulating steps for a plurality of rays between the location and different locations on the area light source; and determining the color value for the location based on an average of the determined amounts of occlusion for each of the plurality of rays.
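For claim 32's soft shadows from an area light, the following sketch averages the accumulated occlusion over rays cast to random points on the light; the rectangular light parametrization and the `trace_occlusion` callback are assumptions.

```python
import random

def area_light_occlusion(location, corner, edge_u, edge_v, n_rays, trace_occlusion):
    """Average the occlusion of rays cast from `location` to points
    distributed over a rectangular area light (claim 32).

    trace_occlusion(location, light_point) is assumed to perform the
    tracing and accumulating steps and return an occlusion in [0, 1].
    """
    total = 0.0
    for _ in range(n_rays):
        s, t = random.random(), random.random()
        light_point = tuple(c + s * u + t * v
                            for c, u, v in zip(corner, edge_u, edge_v))
        total += trace_occlusion(location, light_point)
    return total / n_rays
```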
33. A computer-implemented method for generating a volumetric shadow for a computer-rendered image of a three-dimensional scene, the method comprising: a step for generating occlusion information for a volume of voxels within the scene based on geometry data for a set of objects located within the volume; tracing a ray from a shading point in the scene to a light source in the scene, the ray passing through one or more of the voxels; and a step for determining a volumetric shadow cast at the shading point by the objects based on the occlusion information for the voxels through which the ray passes.
34. A computer-implemented method for generating a volumetric shadow cast by a hair in a three-dimensional scene, the hair located within a bounding volume, the method comprising: dividing the bounding volume into a plurality of voxels, each voxel having an opacity value; initializing the opacity values for the voxels; for each hair, increasing the opacity value of each voxel through which the hair passes based on an amount of the voxel occupied by the hair; casting light from a light source to a shading point in the scene; and occluding the light cast onto the shading point according to the opacity value of any voxels through which the light passes.
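Claim 34's hair voxelization could be sketched as follows, assuming each hair exposes a parametric `point_at(t)` and a fixed per-sample opacity contribution; the sample count and the clamping to 1.0 are illustrative choices.

```python
def voxelize_hairs(hairs, grid, voxel_size, samples_per_hair=100):
    """For each hair, step along its length and raise the opacity of every
    voxel it passes through in proportion to the volume it occupies.

    grid -- dict mapping an integer voxel index to an opacity value,
            implicitly initialized to zero (all names illustrative).
    """
    for hair in hairs:
        for k in range(samples_per_hair):
            t = (k + 0.5) / samples_per_hair
            x, y, z = hair.point_at(t)
            idx = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            grid[idx] = min(1.0, grid.get(idx, 0.0) + hair.opacity_per_sample)
```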
35. The method of claim 34, wherein increasing the opacity value of each voxel through which the hair passes comprises: dividing the hair into a plurality of segments of equal volume, each segment having a position in space; associating each segment with a voxel based on the segment's position in space; and for each voxel to which a segment is associated, adding to the opacity value of the voxel.
36. The method of claim 34, further comprising: storing the opacity values for the voxels in an octree data structure.
37. The method of claim 36, wherein the octree data structure comprises: a list of leaf nodes, each leaf node comprising the opacity value for a voxel; and a plurality of parent nodes, each parent node comprising an array of items selected from a group consisting of: a reference to a parent node, a reference to a leaf node, and an empty value.
38. A computer program product for generating a volumetric shadow for a computer-rendered image of a three-dimensional scene, the computer program product comprising a computer-readable medium containing computer program code for performing the method comprising: dividing a volume in the scene into a plurality of volume elements, the volume containing a set of objects that can occlude light; rasterizing geometry data for the objects to generate occlusion information for a plurality of the volume elements, the occlusion information for a volume element describing how light is occluded by portions of any objects that occupy the volume element; for each of a set of volume elements, determining the volume element's occlusive effect on light that passes from a light source to a location in the scene, the occlusive effect of a volume element based on the volume element's occlusion information; and accumulating the occlusive effects to determine a total amount of occlusion of the light source caused by the objects within the volume.
39. The computer program product of claim 38, wherein each volume element's occlusion information includes an opacity value associated with the volume element.
40. The computer program product of claim 38, wherein each volume element's occlusion information includes a color value associated with the volume element.
41. The computer program product of claim 38, wherein the rasterizing comprises, for each of a set of volume elements: identifying objects located at least partially within the volume element; and determining the occlusion information for the volume element based on an amount of overlap of each of the objects located at least partially within the volume element.
42. The computer program product of claim 41, wherein the amount of overlap is determined based on a fraction of a number of sampled points within the volume element that are also located within the object.
43. The computer program product of claim 41, wherein determining the occlusion information for each volume element is further based on the opacity values of the objects located at least partially within the volume element.
44. The computer program product of claim 38, wherein rasterizing an object that includes a number of primitives comprises, for each primitive: dividing the primitive into a plurality of segments of equal volume, each segment having a position in space; associating each segment with a volume element based on the segment's position in space; and for each volume element to which a segment is associated, adding to the occlusion information of the volume element.
45. The computer program product of claim 38, wherein at least one of the objects comprises a curve primitive, and rasterizing the curve primitive includes: a step for determining the curve primitive's contribution to the occlusion information of the volume elements.
46. The computer program product of claim 38, wherein at least one of the objects comprises a surface primitive, and rasterizing the surface primitive comprises a step for determining the surface primitive's contribution to the occlusion information of the volume elements.
47. The computer program product of claim 38, wherein at least one of the objects comprises a volume primitive, and rasterizing the volume primitive comprises a step for determining the volume primitive's contribution to the occlusion information of the volume elements.
48. The computer program product of claim 38, wherein the method further comprises: storing the occlusion information for the volume elements in an octree data structure.
49. The computer program product of claim 48, wherein the octree data structure comprises: a list of leaf nodes, each leaf node comprising the occlusion information for a volume element; and a plurality of parent nodes, each parent node comprising an array of items selected from a group consisting of: a reference to a parent node, a reference to a leaf node, and an empty value.
50. The computer program product of claim 38, wherein the method further comprises: determining a color value for the location based on the determined total amount of occlusion.
51. The computer program product of claim 50, wherein determining the color value of the location comprises assuming that the location is not occluded by the volume if the determined amount of occlusion does not exceed a minimum threshold value.
52. The computer program product of claim 51, wherein the minimum threshold value is adjustable by a user.
53. The computer program product of claim 38, wherein determining the color value of the location comprises assuming that the location is fully occluded by the volume if the determined amount of occlusion exceeds a maximum threshold value.
54. The computer program product of claim 53, wherein the maximum threshold value is adjustable by a user.
55. The computer program product of claim 38, wherein the determined color value for the location is further based on a function that controls how the volumetric shadow increases as the determined amount of occlusion increases.
56. The computer program product of claim 55, wherein the function is adjustable by a user.
57. The computer program product of claim 55, wherein the function is given by:
T(x) = S_CURVE( ((ShadowFalloffEnd - O) / (ShadowFalloffEnd - ShadowFalloffStart))^ShadowFalloffExp )

where T(x) is a transmittance of light from the light source to the location; O is the total amount of occlusion of the light source caused by the objects within the volume; ShadowFalloffExp, ShadowFalloffEnd, and ShadowFalloffStart are adjustable by a user; and:

S_CURVE(t) = 3t^2 - 2t^3.
58. The computer program product of claim 38, wherein determining the color value of the location comprises assuming that the location is not occluded by the volume if the distance from the location to a nearest object in the volume does not exceed a minimum threshold value.
59. The computer program product of claim 58, wherein the minimum threshold value is adjustable by a user.
60. The computer program product of claim 38, wherein determining the color value of the location comprises reducing the volumetric shadow based on the distance from the location to a nearest object in the volume.
61. The computer program product of claim 38, wherein the determined color is further based on a transmitted color from the volume, where an amount of color transmitted by the volume depends on an amount of light the volume occludes.
62. The computer program product of claim 38, wherein the determined color is further based on a transmitted color from the volume, where the amount of transmitted color from the volume is reduced by a transmittance amount.
63. The computer program product of claim 62, wherein the transmittance amount is adjustable by a user.
64. The computer program product of claim 38, wherein the light source is an area light source, and the method further comprises: repeating the tracing and accumulating steps for a plurality of rays between the location and different locations on the area light source; and determining the color value for the location based on an average of the determined amounts of occlusion for each of the plurality of rays.
65. A feature animation product comprising a machine-readable medium, the machine-readable medium containing media data for being processed by a video machine to produce a video image, the machine-readable medium manufactured by storing thereon media data for the video image produced by: receiving a three-dimensional model of a scene; dividing a volume in the scene into a plurality of volume elements, the volume containing a set of objects that can occlude light; rasterizing geometry data for the objects to generate occlusion information for a plurality of the volume elements, the occlusion information for a volume element describing how light is occluded by portions of any objects that occupy the volume element; for each of a set of volume elements, determining the volume element's occlusive effect on light that passes from a light source to a location in the scene, the occlusive effect of a volume element based on the volume element's occlusion information; accumulating the occlusive effects to determine a total amount of occlusion of the light source caused by the objects within the volume; and rendering the scene to produce an image thereof.
EP05812297A 2004-10-27 2005-10-18 Volumetric shadows for computer animation Withdrawn EP1844445A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US97521604A 2004-10-27 2004-10-27
PCT/US2005/037488 WO2006049870A1 (en) 2004-10-27 2005-10-18 Volumetric shadows for computer animation

Publications (1)

Publication Number Publication Date
EP1844445A1 true EP1844445A1 (en) 2007-10-17

Family

ID=35569583

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05812297A Withdrawn EP1844445A1 (en) 2004-10-27 2005-10-18 Volumetric shadows for computer animation

Country Status (4)

Country Link
EP (1) EP1844445A1 (en)
CA (1) CA2583664A1 (en)
TW (1) TW200632780A (en)
WO (1) WO2006049870A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4852555B2 (en) * 2008-01-11 2012-01-11 株式会社コナミデジタルエンタテインメント Image processing apparatus, image processing method, and program
US8379022B2 (en) 2008-09-26 2013-02-19 Nvidia Corporation Fragment shader for a hybrid raytracing system and method of operation
FR2965652A1 (en) 2010-09-30 2012-04-06 Thomson Licensing METHOD FOR ESTIMATING LIGHT QUANTITY RECEIVED IN ONE POINT OF A VIRTUAL ENVIRONMENT
TWI557685B (en) * 2012-05-23 2016-11-11 雲南恆達睿創三維數字科技有限公司 Mesh animation
US9679398B2 (en) * 2015-10-19 2017-06-13 Chaos Software Ltd. Rendering images using color contribution values of render elements
CN109215134B (en) * 2018-09-04 2023-06-20 深圳市易尚展示股份有限公司 Occlusion determination method and device for three-dimensional model, computer equipment and storage medium
CN117455977B (en) * 2023-09-27 2024-07-09 杭州市交通工程集团有限公司 Method and system for calculating stacking volume based on three-dimensional laser scanning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006049870A1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476877A (en) * 2020-04-16 2020-07-31 网易(杭州)网络有限公司 Shadow rendering method and device, electronic equipment and storage medium
CN111476877B (en) * 2020-04-16 2024-01-26 网易(杭州)网络有限公司 Shadow rendering method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2006049870A1 (en) 2006-05-11
CA2583664A1 (en) 2006-05-11
TW200632780A (en) 2006-09-16

Similar Documents

Publication Publication Date Title
Reeves et al. Rendering antialiased shadows with depth maps
Heckbert Discontinuity meshing for radiosity
US6985143B2 (en) System and method related to data structures in the context of a computer graphics system
Westermann et al. Efficiently using graphics hardware in volume rendering applications
Winkenbach et al. Rendering parametric surfaces in pen and ink
Sander et al. Signal-specialized parameterization
US6396492B1 (en) Detail-directed hierarchical distance fields
US8570322B2 (en) Method, system, and computer program product for efficient ray tracing of micropolygon geometry
US6483518B1 (en) Representing a color gamut with a hierarchical distance field
EP1074947B1 (en) Sculpturing objects using detail-directed hierarchical distance fields
EP1844445A1 (en) Volumetric shadows for computer animation
Ernst et al. Early split clipping for bounding volume hierarchies
Szirmay-Kalos et al. GPU-based techniques for global illumination effects
Yang et al. The cluster hair model
JPH10208077A (en) Method for rendering graphic image on display, image rendering system and method for generating graphic image on display
US6791544B1 (en) Shadow rendering system and method
WO2000033257A1 (en) A method for forming a perspective rendering from a voxel space
Yuksel et al. Lighting grid hierarchy for self-illuminating explosions.
Marshall et al. Multiresolution rendering of complex botanical scenes
Fernandez et al. Local Illumination Environments for Direct Lighting Acceleration.
US20140267357A1 (en) Adaptive importance sampling for point-based global illumination
US9514566B2 (en) Image-generated system using beta distribution to provide accurate shadow mapping
Baumann et al. Integrated multiresolution geometry and texture models for terrain visualization
Halli et al. Per-pixel displacement mapping using cone tracing
Vyatkin et al. Shadow Generation Method for Volume-Oriented Visualization of Functionally Defined Objects

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070321

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20080111

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20080522