CA2307352A1 - System and method for displaying a three-dimensional object using motion vectors to generate object blur - Google Patents

System and method for displaying a three-dimensional object using motion vectors to generate object blur

Info

Publication number
CA2307352A1
Authority
CA
Grant status
Application
Patent type
Prior art keywords
means
objects
selected object
set
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2307352
Other languages
French (fr)
Inventor
Iliese Claire Chelstowski
Charles R. Johns
Barry L. Minor
George L. White, Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation

Abstract

A system, method, and computer-usable medium for simulating object blur uses motion vectors. A motion vector, or array of motion vectors, may be specified on either a per-vertex or per-primitive (i.e. per-object) basis. A motion vector is opposite to the direction of the motion, and thus points in the direction of the blur. The magnitude of the motion vector represents the distance the vertex or the primitive (i.e. each vertex in the primitive) travels in one unit of time. When a scene is rendered, only those objects which are in motion, or which are subject to depth of field blurring, are rendered over a series of time slices.
All objects which are static (i.e. nonmoving) are rendered directly into a color buffer, rather than being repeatedly rendered over a series of time slices. Thus, static (i.e. non-blurred) objects are rendered only once, while objects which are to be blurred are rendered over a series of time slices.

Description

SYSTEM AND METHOD FOR DISPLAYING A THREE-DIMENSIONAL OBJECT
USING MOTION VECTORS TO GENERATE OBJECT BLUR
FIELD OF THE INVENTION
The invention relates to the field of information handling systems, and, more particularly, to a system and method for displaying a three-dimensional object. Still more particularly, the invention relates to using motion vectors to generate a blurring effect for an object.
BACKGROUND OF THE INVENTION
Three-dimensional (3D) graphics systems are used for a variety of applications, including computer-assisted drafting, architectural design, simulation trainers for aircraft and other vehicles, molecular modeling, virtual reality applications, and video games. Three-dimensional systems are often implemented on workstations and personal computers, which may or may not include 3D graphics hardware. In systems which include 3D graphics hardware, a graphics accelerator card typically facilitates the creation and display of the graphics imagery.
A software application program generates a 3D graphics scene, and provides the scene, along with lighting attributes, to an application programming interface (API). Current APIs include OpenGL, PHIGS, and Direct3D. A 3D graphics scene consists of a number of polygons which are delimited by sets of vertices. The vertices are combined to form larger primitives, such as triangles or other polygons. The triangles (or polygons) are combined to form surfaces, and the surfaces are combined to form an object.
Each vertex is associated with a set of attributes, typically including: 1) material color, which describes the color of the object to which the vertex belongs; 2) a normal vector, which describes the direction to which the surface is facing at the vertex; and 3) a position, including three Cartesian coordinates x, y, and z. Each vertex may optionally be associated with texture coordinates and/or an alpha (i.e., transparency) value. In addition, the scene typically has a set of attributes, including:
1) an ambient color, which typically describes the amount of ambient light; and 2) one or more individual light sources. Each light source has a number of properties associated with it, including a direction, an ambient color, a diffuse color, and a specular color.
Rendering is employed within the graphics system to create two-dimensional image projections of the 3D graphics scene for display on a monitor or other display device. Typically, rendering includes processing geometric primitives (e.g., points, lines, and polygons) by performing one or more of the following operations as needed: transformation, clipping, culling, lighting, fog calculation, and texture coordinate generation. Rendering further includes processing the primitives to determine component pixel values for the display device, a process often referred to specifically as rasterization.
In some 3D applications, for example, computer animation and simulation programs, objects within the 3D graphics scene may be in motion. In these cases, it is desirable to simulate motion blur for the objects that are in motion. Without motion blur, objects in motion may appear to move jerkily across the screen.
Similar techniques are also commonly used to blur objects when simulating depth of field. Objects which are within the "field of view" are left un-blurred, while objects which are closer or farther away are blurred according to their distance from the camera (i.e. viewer).
A prior art method for simulating object blur includes the use of an accumulation buffer. The accumulation buffer is a non-displayed buffer that is used to accumulate a series of images as they are rendered. An entire scene (i.e. each object, or primitive, in the scene) is repeatedly rendered into the accumulation buffer over a series of time slices. The entire scene is thus accumulated in the accumulation buffer, and then copied to a frame buffer for viewing on a display device.
A prior art method for using an accumulation buffer to simulate object blur is illustrated in Figure 1. As shown in Figure 1, a time period is divided into "n" time slices (step 100).
The time period is the amount of time during which a scene is visible on a display device, and is analogous to the exposure interval, or shutter speed, of a video camera shutter. A longer shutter speed corresponds to a greater amount of blurring, whereas a shorter shutter speed corresponds to a lesser amount of blurring.
A time-slice count is set to one (step 102). Next, an object (i.e. primitive) is selected for rendering (step 104). The location, color, and all other per-vertex values are calculated for each vertex in the object for this particular time slice (step 106). The object is then rendered into a color buffer (step 108). A check is made to determine if the object rendered is the last object in the scene (step 110). If not, the process loops back to step 104, and is repeated for each object in the scene.
If the last object in the scene has been rendered (i.e. the answer to the question in step 110 is "yes"), the scene is accumulated (step 112), meaning it is scaled (for example, by 1/n) and copied into the accumulation buffer. The time-slice count is checked to see if it is equal to n (step 114). If not, the time slice count is incremented (step 116). The process then loops back to step 104, and is repeated for each time slice. If the time-slice count is equal to n (i.e. the answer to the question in step 114 is "yes"), then the accumulation buffer is scaled and copied to the frame buffer (step 120) and is displayed on a display screen (step 122).
The use of an accumulation buffer as described in Figure 1 is a computationally expensive process, as the entire scene (i.e. each object in the scene) is rendered "n" times for each time period.
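The prior-art loop of Figure 1 can be sketched as follows. The `render` function, the `scene` list, and the brightness values are illustrative stand-ins for real rasterization, not part of the patent:

```python
def render(obj, t):
    # Toy stand-in for rasterization: return the object's color
    # contribution at time t (constant here for simplicity).
    return obj["brightness"]

def render_with_accumulation(scene, n):
    """Prior-art blur (Figure 1): every object, static or moving, is
    rendered once per time slice; each full pass is scaled by 1/n and
    summed into an accumulation buffer, which is then displayed."""
    accumulation = 0.0
    for slice_index in range(n):                              # steps 102-116
        t = slice_index / n
        color_buffer = sum(render(obj, t) for obj in scene)   # steps 104-110
        accumulation += color_buffer / n                      # step 112
    return accumulation                                       # steps 120-122

scene = [{"brightness": 0.5}, {"brightness": 0.25}]
print(render_with_accumulation(scene, 4))  # 0.75, after n renders of every object
```

Note that every object is rendered n times here, which is exactly the cost the invention avoids for static objects.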
Consequently, it would be desirable to have a system and method for more efficiently simulating object blur in a three-dimensional graphics environment.
SUMMARY OF THE INVENTION
Accordingly, the present invention is directed to a system, method, and computer-usable medium for simulating object blur using motion vectors. A motion vector, or array of motion vectors, may be specified on either a per-vertex or per-primitive (i.e. per-object) basis. A motion vector is opposite to the direction of the motion, and thus points in the direction of the blur. The magnitude of the motion vector represents the distance the vertex or the primitive (i.e. each vertex in the primitive) travels in one unit of time.
When a scene is rendered, only those objects which are in motion, or which are subject to depth of field blurring, are rendered over a series of time slices. All objects which are static (i.e. non-blurred) are rendered directly into a color buffer, rather than being repeatedly rendered over a series of time slices. Thus, static (i.e. non-blurred) objects are rendered only once, while objects which are to be blurred are rendered over a series of time slices. This increases the efficiency of the rendering process while simulating object blur of the objects which are in motion and/or subject to depth of field blurring.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other features and advantages of the present invention will become more apparent from the detailed description of the best mode for carrying out the invention as set forth below.
In the description to follow, reference will be made to the accompanying drawings, where like reference numerals are used to identify like parts in the various views and in which:
Figure 1 is a flow chart illustrating a prior art method for simulating object blur;
Figure 2 is a representative system in which the present invention may be implemented;
Figure 3 depicts a moving object, including a motion vector, within a static scene;
Figure 4 depicts a moving object, including an array of motion vectors, within a static scene; and
Figures 5A and 5B are flow charts illustrating a method for using motion vectors to simulate object blur in accordance with the present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
A representative system in which the present invention may be implemented is illustrated in Figure 2. Information handling system 200 includes one or more processors 202 coupled to a processor or host bus 204. Cache memory 206 may also be coupled to host bus 204. A bridge/memory controller 208 provides a path between host bus 204 and system memory 210, as well as a path between host bus 204 and peripheral bus 212. Note that system memory 210 may include both read only memory (ROM) and random access memory (RAM). Accumulation buffer 214 is included within system memory 210. Alternately, accumulation buffer 214 can be included in graphics adapter 218. In one embodiment, peripheral bus 212 is a PCI bus or other bus suitable for supporting high performance graphics applications and hardware. Graphics adapter 218 is coupled to peripheral bus 212, and may include local memory portion 220, and frame buffer 221. System 200 may or may not include graphics adapter 218, and if graphics adapter 218 is not present in system 200, then frame buffer 221 may be included in system memory 210 or in video controller 216. Video controller 216 is coupled to display device 222, and is configured to refresh display device 222 with a graphics image stored in frame buffer 221. Note that graphics adapter 218 may be suitably integrated in a single device with video controller 216.
The present invention is a system, method, and computer-usable medium for simulating object blur using motion vectors. A motion vector, or array of motion vectors, may be specified on either a per-vertex or per-primitive (i.e. per-object) basis. A motion vector is opposite to the direction of the motion, and thus points in the direction of the blur. The magnitude of the motion vector represents the distance the vertex or the primitive (i.e. each vertex in the primitive) travels in one unit of time.

When a scene is rendered, only those objects which are in motion are rendered over a series of time slices. All objects which are static (i.e. nonmoving) are rendered directly into a color buffer, rather than being repeatedly rendered over a series of time slices. In the prior art, every object in a scene (whether static or in motion) is rendered over a series of time slices, and then accumulated, as discussed above in the Background Of The Invention section herein. The present invention renders static objects only once, and only performs rendering over a series of time slices for those objects which are in motion. This increases the efficiency of the rendering process while simulating object blur of the objects which are in motion. The present invention may also be used to simulate object blur associated with depth of field blurring.
Referring to Figure 3, an example of a linearly moving object within a static scene will now be described. While many objects within a typical scene may be in motion, for illustrative purposes Figure 3 depicts a single object 300 in motion within a static scene 302. Note that motion vector 304 is opposite to the direction of motion 306. The magnitude of motion vector 304 represents the distance over which object 300 moves in a predefined period of time. In the example shown in Figure 3, a single motion vector 304 has been specified for the object. Thus, motion vector 304 applies to each vertex of object 300. During the predefined time period, the vertices of object 300 have moved from points a1, b1, and c1 to points a2, b2, and c2 respectively. The magnitude of motion vector 304 is thus equal to a2-a1. The magnitude is also equal to b2-b1, and equal to c2-c1.
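The relationship between the displacement from a1 to a2 and the motion vector can be written out directly. This is a minimal sketch; `motion_vector` is an illustrative helper, not part of the patent's API:

```python
import math

def motion_vector(p_start, p_end):
    """Motion vector for a vertex that moved from p_start to p_end in
    one unit of time: opposite to the direction of motion (i.e.
    p_start - p_end), with magnitude equal to the distance traveled."""
    return tuple(s - e for s, e in zip(p_start, p_end))

a1, a2 = (0.0, 0.0, 0.0), (3.0, 4.0, 0.0)
v = motion_vector(a1, a2)
print(v)                  # (-3.0, -4.0, 0.0): points into the blur trail
print(math.dist(a1, a2))  # 5.0: the magnitude |a2 - a1|
```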
Of course, each vertex of object 300 does not have to be moving at the same velocity. It is possible to assign a different motion vector to each vertex of an object. Each vertex may be moving in a different direction and/or at a different rate of speed.
Referring to Figure 4, an example of a non-linearly moving object within a static scene will now be described. As in Figure 3, for illustrative purposes only, a single moving object 400 is depicted within a static scene 402. There are several motion vectors 404, 406, 408, and 410 associated with object 400. Each motion vector has a magnitude equal to a portion of the distance traveled by object 400 during a predefined time period. In the example shown, each motion vector 404, 406, 408, and 410 applies to every vertex of object 400. During the predefined time period, the vertices of object 400 move from points a1, b1, and c1 to points a2, b2, and c2 respectively. Each vertex moves uniformly with the other vertices, however, object 400 (and its vertices) do not move linearly. Thus, each motion vector 404, 406, 408, and 410 includes a magnitude equal to the distance traveled during a portion of the predefined time period. As in Figure 3, each motion vector is opposite to the direction of motion 412. Motion vectors 404, 406, 408, and 410 are referred to as an array of motion vectors. An array of motion vectors may be assigned to an object, as shown in Figure 4, in which case the array of motion vectors applies to every vertex in the object. Alternately, an array of motion vectors may be assigned to a vertex.
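The piecewise path implied by an array of motion vectors can be reconstructed by applying the vectors in sequence. The function and data below are illustrative assumptions (each vector points opposite to the motion, i.e. into the blur trail); they are not the patent's implementation:

```python
def blur_positions(vertex, vector_array):
    """Return the successive positions of a vertex after applying each
    motion vector in the array in turn. Because each vector is opposite
    to the motion, chaining them traces the vertex's path backwards in
    time, one portion of the time period per vector."""
    x, y, z = vertex
    positions = []
    for vx, vy, vz in vector_array:
        x, y, z = x + vx, y + vy, z + vz
        positions.append((x, y, z))
    return positions

# Four hypothetical vectors tracing a curved path back from the current
# position, analogous to vectors 404-410 in Figure 4.
vectors = [(-1.0, 0.0, 0.0), (-1.0, -0.5, 0.0),
           (-0.5, -1.0, 0.0), (0.0, -1.0, 0.0)]
trail = blur_positions((5.0, 5.0, 0.0), vectors)
print(trail[-1])  # (2.5, 2.5, 0.0): where the vertex was one time unit ago
```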
An Application Programming Interface (API) is preferably provided in order to allow an application program to specify motion vectors for objects and vertices. An exemplary OpenGL API is depicted below. Note that this API is shown for illustrative purposes only, and is not meant to be limiting. Those skilled in the art will appreciate that motion vectors may be specified using a variety of programming techniques. Further, the use of OpenGL as an API is not meant to be limiting. The present invention may be implemented using various APIs, including, but not limited to, PHIGS and Direct3D.
An exemplary OpenGL API is as follows:
Overview
This extension allows object blur to occur via a point, line, or edge along a specified motion vector. The motion vector is opposite to the direction of motion, thus it points in the direction of blur. The magnitude of the vector is the distance each vertex has traveled in one unit of time.
The "glMotionVector*" routines allow the application to specify motion vectors or arrays of motion vectors on a per-vertex basis or a per-primitive basis. The "glMotionEnv*" routines allow the application to specify the duration of motion, the degree of fade over time, and whether motion blur (i.e. object blur) is enabled or disabled.
Procedures And Functions
1. void glMotionVector[bsifd]IBM(T xcomponent, T ycomponent, T zcomponent)
   Purpose: Specify a motion vector for an object
   Variables: Three [b]ytes, [s]hortwords, [i]ntegers, [f]loating point numbers, or [d]ouble precision floats specifying a 3D vector
2. void glMotionVectorv[bsifd]IBM(T components)
   Purpose: Specify a motion vector for a vertex
   Variables: An array specifying a 3D vector
3. void glMotionVectorPointerIBM(int size, enum type, sizei stride, void *pointer)
   Purpose: Specify an array of motion vectors for an object or vertex
   Variables: The size, type, and stride of a list of motion vectors pointed to by the pointer variable
4. void glMotionEnv[if]IBM(GLenum pname, GLfloat param)
   Purpose: Specifies the duration and degree of blur
   Variables: If pname is equal to GL_MOTION_ENV_FADE_IBM, then param specifies the degree to which the object is faded over time. If pname is equal to GL_MOTION_ENV_DELTA_TIME_IBM, then param specifies the number of units of time to blur.
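One plausible reading of the glMotionVector*/glMotionEnv* semantics is that, at each time slice, a blurred vertex is drawn displaced along its motion vector and faded. The sketch below is a Python model of that assumed behavior, not IBM's implementation; all names and the linear fade are hypothetical:

```python
def vertex_at_slice(vertex, motion_vector, slice_index, n,
                    delta_time=1.0, fade=0.0):
    """Position and alpha of a blurred vertex at time slice slice_index
    of n. The vertex is displaced along its motion vector (which points
    into the blur trail); delta_time plays the role of
    GL_MOTION_ENV_DELTA_TIME_IBM and fade of GL_MOTION_ENV_FADE_IBM.
    (Assumed semantics, for illustration only.)"""
    t = (slice_index / n) * delta_time
    pos = tuple(p + t * v for p, v in zip(vertex, motion_vector))
    alpha = 1.0 - fade * (slice_index / n)  # linear fade along the trail
    return pos, alpha

# Vertex at the origin, moving +x at 4 units per time unit, so its
# motion vector is (-4, 0, 0); sample the trail halfway back.
pos, alpha = vertex_at_slice((0.0, 0.0, 0.0), (-4.0, 0.0, 0.0),
                             slice_index=2, n=4, delta_time=1.0, fade=0.5)
print(pos, alpha)  # (-2.0, 0.0, 0.0) 0.75
```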
Referring to Figures 5A and 5B, a flow chart illustrating a method for using motion vectors to simulate object blur in accordance with the present invention will now be described. Note that the steps described in Figures 5A and 5B can be performed either in software or in hardware (e.g., by a graphics accelerator), or by a combination of software and hardware. An object within a scene is defined for rendering (step 500), meaning that the location, color, motion vector(s), and other attributes are defined for the object. The environment of the scene, along with any motion vectors associated with the object, are used to determine whether the object is static or in motion (step 502).
Note that the determination in step 502 could further include a determination as to whether or not the object needs to be blurred due to its depth of field. If the object is not static (i.e. the answer to the question in step 504 is "no"), then the object is identified as an "in-motion" object (step 506). If the object is static (i.e. the answer to the question in step 504 is "yes"), then the object is rendered into a color buffer. The color buffer can be any displayed or non-displayed buffer area. For example, the color buffer may be a portion of system memory 210 or local memory 220 on graphics adapter 218, as described above with reference to Figure 2.
Referring back to Figures 5A and 5B, a check is made to determine if the object is the last object in the scene (step 510).
If not (i.e. the answer to the question in step 510 is "no"), then another object is defined for rendering in step 500. If the object is the last object in the scene (i.e. the answer to the question in step 510 is "yes"), then the in-motion objects (i.e. the objects that require blur) are processed. One skilled in the art will realize that if there are no "in-motion" objects in the scene, the color buffer may be copied to the frame buffer at this point, and the scene may be displayed. For illustrative purposes, the process depicted in Figures 5A and 5B assumes a combination of static and "in-motion" objects in the scene.
A predetermined render time period is divided into "n" time slices (step 512). As discussed above with reference to Figure 1, the render time period is the amount of time during which a scene is visible on a display device, and is analogous to the exposure interval, or shutter speed, of a video camera shutter. A longer shutter speed corresponds to a greater amount of blurring, whereas a shorter shutter speed corresponds to a lesser amount of blurring.
A time-slice count is set to one (step 514). Next, an "in-motion" object (i.e. an object identified as an "in-motion" object in step 506) is selected for rendering (step 516). The motion vector or vectors associated with the object are used, along with other motion variables, to calculate and/or modify the location, color, and all other attributes for each vertex in the object (step 518). The object is then rendered into a color buffer (step 520). A check is made to determine if the object rendered is the last "in-motion" object in the scene (step 522). If not, the process loops back to step 516, and is repeated for each "in-motion" object in the scene.
If the last "in-motion" object in the scene has been rendered (i.e. the answer to the question in step 522 is "yes"), the scene is accumulated (step 524), meaning it is scaled (for example, by 1/n) and copied from the color buffer into the accumulation buffer.
The time-slice count is checked to see if it is equal to n (step 526). If not, the time-slice count is incremented (step 528), and the process then loops back to step 516, and is repeated for each time slice. If the time-slice count is equal to n (i.e. the answer to the question in step 526 is "yes"), then the accumulation buffer is scaled and copied to the frame buffer (step 532) and is displayed on a display screen (step 534).
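The method of Figures 5A and 5B can be sketched end to end. The object dictionaries, brightness values, and render-call counter below are illustrative, but the structure follows the flow charts: static objects are rendered once, and only in-motion objects are re-rendered over the n time slices:

```python
def render_scene_with_motion_blur(scene, n):
    """Sketch of Figures 5A/5B. Objects are dicts with a 'brightness'
    and, for in-motion objects, a 'motion_vector' key (names are
    illustrative). Returns the accumulated color and the number of
    per-object render operations performed."""
    static_objs = [o for o in scene if "motion_vector" not in o]
    moving_objs = [o for o in scene if "motion_vector" in o]

    # Steps 500-510: render static objects directly into the color
    # buffer, exactly once.
    static_color = sum(o["brightness"] for o in static_objs)
    render_calls = len(static_objs)

    # Steps 512-528: re-render only the in-motion objects each time
    # slice, scale each pass by 1/n, and accumulate.
    accumulation = 0.0
    for _ in range(n):
        slice_color = static_color + sum(o["brightness"] for o in moving_objs)
        render_calls += len(moving_objs)
        accumulation += slice_color / n
    return accumulation, render_calls  # steps 532-534: copy and display

scene = [{"brightness": 0.5},
         {"brightness": 0.25, "motion_vector": (-1.0, 0.0, 0.0)}]
color, calls = render_scene_with_motion_blur(scene, 4)
print(color)  # 0.75
print(calls)  # 5 render calls, vs. 8 (2 objects x 4 slices) for the prior art
```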
Note that it is possible to define two entry points into the method described in Figures 5A and 5B, or alternately, it is possible to have two separate routines to execute the method described in Figures 5A and 5B. For example, the determination as to whether an object is in motion (i.e. needs to be blurred) or not can be made by an application program. If the application program determines that an object is static, the application program can call a routine which executes only steps 500 through 510 to render the static object. If the application program determines that an object needs to be blurred, the application program can call a routine which executes only steps 512 through 534 to render the in-motion object.
Although the invention has been described with a certain degree of particularity, it should be recognized that elements thereof may be altered by persons skilled in the art without departing from the spirit and scope of the invention. One of the implementations of the invention is as sets of instructions resident in the random access memory of one or more computer systems configured generally as described in Figure 2. Until required by the computer system, the set of instructions may be stored in another computer readable memory, for example in a hard disk drive, or in a removable memory such as an optical disk for eventual use in a CD-ROM drive, or a floppy disk for eventual use in a floppy disk drive. Further, the set of instructions can be stored in the memory of another computer and transmitted over a local area network or a wide area network, such as the Internet, when desired by the user. One skilled in the art will appreciate that the physical storage of the sets of instructions physically changes the medium upon which it is stored electrically, magnetically, or chemically so that the medium carries computer usable information. The invention is limited only by the following claims and their equivalents.

Claims (29)

1. A method for displaying a three-dimensional graphics scene, including a plurality of objects, comprising the steps of:
categorizing the plurality of objects into a first set of objects and a second set of objects, wherein the first set of objects contains one or more objects to be blurred, and wherein the second set of objects contains one or more objects not to be blurred;
rendering each object in the second set of objects directly into a first buffer;
dividing a render time period into a plurality of time slices;
for each time slice, performing the following steps:
rendering each object in the first set of objects into the first buffer; and accumulating the three-dimensional graphics scene from the first buffer into an accumulation buffer; and displaying the three-dimensional graphics scene.
2. A method according to claim 1, wherein said categorizing step comprises the steps of:
determining if a motion vector is associated with a selected object;
if a motion vector is associated with the selected object, assigning the selected object to the first set of objects; and if a motion vector is not associated with the selected object, assigning the selected object to the second set of objects.
3. A method according to claim 2, wherein said determining step comprises the step of determining if a motion vector is associated with a vertex of the selected object.
4. A method according to claim 2, wherein the motion vector is opposite to a direction of motion of the selected object.
5. A method according to claim 2, wherein a length of the motion vector is proportional to a speed of motion of the selected object.
6. A method according to claim 2, wherein the motion vector is opposite to a direction of blur of the selected object.
7. A method according to claim 2, wherein a length of the motion vector is proportional to a speed of blur of the selected object.
8. A method according to claim 1, wherein said displaying further comprises the steps of:
copying the accumulation buffer to a frame buffer; and displaying the frame buffer on a display device.
9. An information handling system, comprising:
a display means;
a plurality of objects to be displayed as a three-dimensional graphics scene on said display means;
means for categorizing the plurality of objects into a first set of objects and a second set of objects, wherein the first set of objects contains one or more objects to be blurred, and wherein the second set of objects contains one or more objects not to be blurred;
means for rendering each object in the second set of objects directly into a first buffer;
means for dividing a render time period into a plurality of time slices;
means for rendering each object in the first set of objects into the first buffer during each time slice;
means for accumulating the three-dimensional graphics scene from the first buffer into an accumulation buffer during each time slice; and means for displaying the three-dimensional graphics scene on said display device.
10. An information handling system according to claim 9, wherein said means for categorizing comprises:
means for determining if a motion vector is associated with a selected object;
means for assigning the selected object to the first set of objects if a motion vector is associated with the selected object;
and means for assigning the selected object to the second set of objects if a motion vector is not associated with the selected object.
11. An information handling system according to claim 10, wherein said means for determining comprises means for determining if a motion vector is associated with a vertex of the selected object.
12. An information handling system according to claim 10, wherein the motion vector is opposite to a direction of motion of the selected object.
13. An information handling system according to claim 10, wherein a length of the motion vector is proportional to a speed of motion of the selected object.
14. An information handling system according to claim 10, wherein the motion vector is opposite to a direction of blur of the selected object.
15. An information handling system according to claim 9, wherein said means for displaying further comprises:
means for copying the accumulation buffer to a frame buffer;
and means for displaying the frame buffer on said display means.
16. A graphics system, comprising:
a display means;
a plurality of objects to be displayed as a three-dimensional graphics scene on said display means;
means for categorizing the plurality of objects into a first set of objects and a second set of objects, wherein the first set of objects contains one or more objects to be blurred, and wherein the second set of objects contains one or more objects not to be blurred;
means for rendering each object in the second set of objects directly into a first buffer;
means for dividing a render time period into a plurality of time slices;
means for rendering each object in the first set of objects into the first buffer during each time slice;
means for accumulating the three-dimensional graphics scene from the first buffer into an accumulation buffer during each time slice; and means for displaying the three-dimensional graphics scene on said display device.
17. A graphics system according to claim 16, wherein said means for categorizing comprises:
means for determining if a motion vector is associated with a selected object;
means for assigning the selected object to the first set of objects if a motion vector is associated with the selected object;
and means for assigning the selected object to the second set of objects if a motion vector is not associated with the selected object.
18. A graphics system according to claim 17, wherein said means for determining comprises means for determining if a motion vector is associated with a vertex of the selected object.
19. A graphics system according to claim 17, wherein the motion vector is opposite to a direction of motion of the selected object.
20. A graphics system according to claim 17, wherein a length of the motion vector is proportional to a speed of motion of the selected object.
21. A graphics system according to claim 17, wherein the motion vector is opposite to a direction of blur of the selected object.
22. A graphics system according to claim 16, wherein said means for displaying further comprises:
means for copying the accumulation buffer to a frame buffer;
and means for displaying the frame buffer on said display means.
23. A computer program product on a computer usable medium, the computer usable medium having computer usable program means embodied therein for displaying a three-dimensional graphics scene on a display device, the computer usable program means comprising:
means for categorizing a plurality of objects into a first set of objects and a second set of objects, wherein the first set of objects contains one or more objects to be blurred, and wherein the second set of objects contains one or more objects not to be blurred;
means for rendering each object in the second set of objects directly into a first buffer;
means for dividing a render time period into a plurality of time slices;
means for rendering each object in the first set of objects into the first buffer during each time slice;
means for accumulating the three-dimensional graphics scene from the first buffer into an accumulation buffer during each time slice; and means for displaying the three-dimensional graphics scene on the display device.
24. A computer program product according to claim 23, wherein the computer usable program means further comprises:
means for determining if a motion vector is associated with a selected object;
means for assigning the selected object to the first set of objects if a motion vector is associated with the selected object; and
means for assigning the selected object to the second set of objects if a motion vector is not associated with the selected object.
25. A computer program product according to claim 24, wherein said means for determining comprises means for determining if a motion vector is associated with a vertex of the selected object.
26. A computer program product according to claim 24, wherein the motion vector is opposite to a direction of motion of the selected object.
27. A computer program product according to claim 24, wherein a length of the motion vector is proportional to a speed of motion of the selected object.
28. A computer program product according to claim 24, wherein the motion vector is opposite to a direction of blur of the selected object.
29. A computer program product according to claim 23, wherein said means for displaying further comprises:
means for copying the accumulation buffer to a frame buffer; and
means for displaying the frame buffer on the display device.
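The pipeline recited in claims 23–29 (categorize objects by whether a motion vector is attached, render static objects once, then composite the moving objects over a series of time slices into an accumulation buffer) can be sketched with a toy single-pixel rasterizer. This is a minimal illustration of the technique, not the patent's implementation: every name here (`SceneObject`, `categorize`, `render_scene`, `render_object`) is ours, and a real system would use hardware frame and accumulation buffers rather than NumPy arrays.

```python
import numpy as np

class SceneObject:
    """Toy object: a colored point, optionally carrying a motion vector.

    Per the claims, the motion vector points opposite the direction of
    motion (i.e. along the blur), and its length is the distance the
    object travels in one unit of time.
    """
    def __init__(self, color, position, motion_vector=None):
        self.color = np.asarray(color, dtype=float)
        self.position = np.asarray(position, dtype=float)
        self.motion_vector = (None if motion_vector is None
                              else np.asarray(motion_vector, dtype=float))

def categorize(objects):
    """Split objects into (to_blur, static) by motion-vector presence."""
    to_blur = [o for o in objects if o.motion_vector is not None]
    static = [o for o in objects if o.motion_vector is None]
    return to_blur, static

def render_object(frame, obj, offset):
    """Toy rasterizer: paint one pixel at the (possibly offset) position."""
    x, y = (obj.position + offset).round().astype(int)
    if 0 <= y < frame.shape[0] and 0 <= x < frame.shape[1]:
        frame[y, x] += obj.color

def render_scene(objects, shape=(8, 8, 3), num_slices=4):
    to_blur, static = categorize(objects)

    # Static objects are rendered directly into the first buffer, once.
    base = np.zeros(shape)
    for obj in static:
        render_object(base, obj, offset=np.zeros(2))

    # Moving objects are re-rendered each time slice and accumulated.
    accum = np.zeros(shape)
    for i in range(num_slices):
        frame = base.copy()          # first buffer starts from static render
        t = i / num_slices           # fraction of the render time period
        for obj in to_blur:
            # Step along the motion vector, i.e. in the blur direction.
            render_object(frame, obj, offset=t * obj.motion_vector)
        accum += frame / num_slices  # accumulate this time slice
    return accum                     # would be copied to the frame buffer

# Demo: one static red point, one green point blurred along +x.
static_obj = SceneObject(color=(1.0, 0.0, 0.0), position=(1, 1))
moving_obj = SceneObject(color=(0.0, 1.0, 0.0), position=(0, 0),
                         motion_vector=(4.0, 0.0))
img = render_scene([static_obj, moving_obj])
```

The static point keeps its full intensity (it appears in every accumulated slice), while the moving point's intensity is spread across the four positions it occupies over the render time period, which is exactly the blur trail the claims describe.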
CA2307352A1: System and method for displaying a three-dimensional object using motion vectors to generate object blur. Application CA 2307352; priority date 1999-06-30; filing date 2000-05-01; status Abandoned (en).

Priority Applications (2)

Application Number  Priority Date  Filing Date
US34344399          1999-06-30     1999-06-30
US09/343,443        1999-06-30

Publications (1)

Publication Number  Publication Date
CA2307352A1 (en)    2000-12-30

Family ID: 23346144

Family Applications (1)

CA 2307352: System and method for displaying a three-dimensional object using motion vectors to generate object blur. Priority date 1999-06-30; filing date 2000-05-01; published as CA2307352A1 (en); status Abandoned.

Country Status (3)

Country  Publication
JP       JP3286294B2 (en)
CA       CA2307352A1 (en)
GB       GB2356114B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party

JP4596227B2 (2010-12-08, Sony Corporation): Communication device and method, communication system, recording medium, and program
US8390729B2 (2013-03-05, International Business Machines Corporation): Method and apparatus for providing a video image having multiple focal lengths
WO2012001587A1 * (2012-01-05, Koninklijke Philips Electronics N.V.): Enhancing content viewing experience

Also Published As

Publication Number  Publication Date  Type
GB2356114B          2003-09-10        grant
JP3286294B2         2002-05-27        grant
GB2356114A          2001-05-09        application
GB0015421D0         2000-08-16        application
JP2001043398A       2001-02-16        application


Legal Events

Code  Title
EEER  Examination request
FZDE  Dead