US20060262128A1 - Three dimensional rendering including motion sorting - Google Patents
- Publication number
- US20060262128A1 (application Ser. No. 11/407,884)
- Authority
- US
- United States
- Prior art keywords
- motion
- comparing
- closed figures
- sided closed
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/36—Level of detail
Abstract
The present invention determines that an object is moving within a scene. At run time, the number of primitives used to represent the moving object is reduced. The degree of reduction can be related to the amount of motion, i.e. speed, of the moving object. The moving object is then rendered based on the reduced number of primitives, saving time and memory bandwidth.
Description
- The present invention is a continuation of pending U.S. application Ser. No. 09/902,981 entitled “Three Dimensional Rendering Including Motion Sorting” filed 11 Jul. 2001 and assigned to the same assignee as the present invention.
- The present invention is directed to visualization methods and, more particularly, to three dimensional (3D) rendering techniques.
- 3D images are generated, or rendered, by 3D pipelines. A 3D pipeline may be represented as a series of steps such as those shown in
FIG. 5. The steps are often implemented by an application program running on a computer, with or without specialized graphics acceleration hardware, in conjunction with memory devices. The memory devices store information about objects, lighting, viewpoints, and other information needed to generate a 3D image. The goal of the rendering operation is to produce in a frame buffer a 2D image that is to be displayed on a monitor. - Scenes are defined by a data structure referred to as the scene database. The scene database contains models of objects in the scene as well as information relating the objects to one another. The viewpoint is important because it determines how the objects are seen in relation to one another. The viewpoint may be thought of as the position of an observer, and as the position of the observer changes, the relationships between the objects change. For example, as the viewer moves from the front to the right side of a first object, a second object that is behind the first object may come into view while a third object that is to the left of the first object may be blocked or occluded by the first object. Also, a light source that is behind the first object will interact with the first, second and third objects differently depending upon whether the light source is in front of the user or to the right of the user. Thus, it is necessary for the rendering pipeline to be able to manipulate objects based on the viewpoint. The manipulation of objects, light sources, and the like based on a viewpoint is referred to as transforming the data, because the individual components making up an image are all transformed to a common viewpoint, referred to as view space.
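The transformation into view space can be illustrated with a small sketch. This is an editor's illustration, not part of the patent: a real pipeline applies an equivalent 4x4 matrix to every vertex and light, and handles pitch and roll as well as yaw.

```python
import math

def world_to_view(point, eye, yaw):
    """Transform one world-space (x, y, z) point into view space for
    an observer at `eye` turned `yaw` radians about the vertical
    axis. Two steps: translate so the eye is at the origin, then
    rotate the world opposite to the observer's turn."""
    x = point[0] - eye[0]
    y = point[1] - eye[1]
    z = point[2] - eye[2]
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * x + s * z, y, -s * x + c * z)

# With no rotation, a point ahead of the observer stays ahead.
print(world_to_view((0.0, 0.0, 5.0), eye=(0.0, 0.0, 0.0), yaw=0.0))
```

Every object in the scene is pushed through the same transform, which is what places them all in the common view space the text describes.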
- In modern rendering pipelines, objects are represented by a series of triangles or other primitive shapes (primitives). Each triangle has three vertices in three dimensions, represented by x, y and z coordinates. Meshes of individual triangles can be built up from lists of vertices to represent objects. Once a common set of vertices is prepared, the next step is to convert the coordinates of the vertices from view space to screen space. That process is referred to as triangle setup.
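The vertex-list mesh representation described above may be sketched as follows (coordinates and the helper name are illustrative, not from the patent):

```python
# Two triangles sharing an edge, stored as a list of (x, y, z)
# vertices plus index triples; vertices shared between triangles
# appear in the list only once.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (1.0, 1.0, 0.0),
]
triangles = [(0, 1, 2), (1, 3, 2)]

def triangle_coords(tri):
    """Resolve one index triple into its three vertex coordinates,
    as triangle setup must do before filling the triangle."""
    return tuple(vertices[i] for i in tri)
```

Sharing vertices through indices is what lets large meshes be built from comparatively short vertex lists.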
- Triangle setup requires that the 3D scene be changed so that it may be stored in a 2D frame buffer to enable the image to be displayed on a screen, which is made up of pixels. Triangle setup is performed triangle by triangle. However, some of the triangles of the 3D scene might be covered by other triangles that are in front of them, and at this stage it is unknown to the rendering pipeline which triangles are covered or partly covered and which are not. As a result, the triangle setup step receives all three vertices for each triangle. Each of these vertices has an x, y and z coordinate which defines its place in the 3D scene. The triangle setup step fills each triangle with pixels. Each of the pixels in the triangle receives the x and y coordinates for the place it occupies on the screen, and a z coordinate which holds its depth information. Each of the pixels for the triangle is sent one by one to the rendering step.
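Filling a screen-space triangle with pixels that each carry an interpolated depth can be sketched as follows. This is an illustrative fragment using barycentric weights, sampling at integer coordinates and ignoring fill-rule subtleties; it is not the patent's implementation.

```python
def fill_triangle(v0, v1, v2):
    """Fill a screen-space triangle given as (x, y, z) vertices,
    yielding one (x, y, z) sample per covered integer pixel. The z
    value is interpolated across the face and is what the rendering
    step later tests against the z buffer."""
    def edge(a, b, p):
        # Signed area of the parallelogram (a->b, a->p): the usual
        # edge function for inside tests and barycentric weights.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    area = edge(v0, v1, v2)
    if area == 0:
        return  # degenerate triangle: covers no pixels
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            w0 = edge(v1, v2, (x, y))
            w1 = edge(v2, v0, (x, y))
            w2 = edge(v0, v1, (x, y))
            # Inside if all edge functions agree in sign (either winding).
            if min(w0, w1, w2) >= 0 or max(w0, w1, w2) <= 0:
                z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area
                yield (x, y, z)
```

Note that every covered pixel is produced regardless of whether it will later survive the depth test, which is exactly the source of the wasted work discussed next.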
- If the triangle setup step receives a triangle that is somewhere in the background of the scene, where it is partly or completely covered by triangles in front of it, it will still perform its normal function, which is to convert the triangle into pixels. After that, those pixels are sent to the rendering step. Here, in the rendering step, details such as texture, shading and lighting are addressed. During the rendering step, the z buffer (the memory holding depth information) is accessed and the z coordinate of the pixel at the spot where the new pixel is to be drawn is read. If the value in the z buffer is zero, which means that nothing has been drawn at this location yet, or if the information shows that the new pixel is in front of the value that was found in the z buffer, the pixel will be rendered and the z coordinate of the pixel just rendered will be stored in the z buffer. The problem, however, is that the rendering pipeline has wasted a clock cycle rendering the old pixel, which has now been replaced by a new pixel. Furthermore, even if the new pixel is rendered and stored, it is possible that a later triangle will happen to cover this pixel, again causing an overwrite. Thus, it is seen that many pixels are rendered unnecessarily. The rendering pipeline wastes valuable rendering power on the drawing, or at least the processing, of pixels that will never be seen on the screen. Each of those uselessly rendered pixels takes away fill rate.
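The depth test described above may be sketched as follows, adopting the text's convention that a stored depth of zero means the location is empty and taking smaller non-zero depths as nearer the viewer. The overwrite counter makes the wasted work visible; the class and names are the editor's, not the patent's.

```python
class FrameBuffer:
    """Minimal color/z-buffer pair for a single frame."""
    def __init__(self, width, height):
        self.width = width
        self.zbuf = [0.0] * (width * height)   # 0.0 = nothing drawn yet
        self.cbuf = [None] * (width * height)
        self.overwrites = 0   # pixels that were rendered, then hidden

    def plot(self, x, y, z, color):
        i = y * self.width + x
        old = self.zbuf[i]
        if old == 0.0 or z < old:      # empty, or new pixel is in front
            if old != 0.0:
                self.overwrites += 1   # the earlier rendering was wasted
            self.zbuf[i] = z
            self.cbuf[i] = color

fb = FrameBuffer(2, 1)
fb.plot(0, 0, 5.0, "far")    # background triangle rendered first...
fb.plot(0, 0, 2.0, "near")   # ...then occluded by a nearer one
print(fb.cbuf[0], fb.overwrites)   # -> near 1
```

Each `plot` call performs the read and the conditional write, which is why the text counts two z-buffer accesses per pixel per triangle.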
- Another problem with rendering pixels that will not be seen in the final image is with the z buffer. The z buffer is accessed twice for each pixel in each triangle of the scene, which represents several times the screen resolution. Such z buffer accesses cost an immense amount of memory bandwidth. As a result, the z buffer is the most accessed part of the local video memory associated with the 3D rendering pipeline.
- One technique for reducing the number of triangles that must be rendered is for the 3D application to determine when objects may be ignored. For example, if a viewpoint is looking through a doorway into a room, many of the objects will not be visible and may thus be ignored. Such a process is referred to as culling. Another process referred to as clipping involves the use of bounding boxes to determine if portions of objects are occluded. Culling and clipping may be used to reduce the number of triangles that must be rendered.
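A conservative bounding-box test of the kind used for clipping can be sketched as follows (2D screen-space rectangles; the function and parameter names are illustrative, not from the patent):

```python
def visible_through(opening, bbox):
    """Screen-space test for the doorway example above: if an
    object's 2D bounding box does not overlap the rectangle of the
    opening the viewpoint looks through, the object cannot be seen
    and may be culled. Rectangles are (xmin, ymin, xmax, ymax)."""
    return not (bbox[2] < opening[0] or bbox[0] > opening[2] or
                bbox[3] < opening[1] or bbox[1] > opening[3])
```

Objects that fail such a test never reach triangle setup, so none of their triangles consume fill rate or z-buffer bandwidth.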
- Even with culling and clipping, however, the number of triangles to be rendered in a highly detailed scene requires a tremendous amount of computing power and memory bandwidth. Consider a sophisticated video game or virtual tour in which the viewer is walking down the center of an exhibit hall in which dozens of individual objects are within view, and the view is constantly changing as a result of the motion of the viewer. As a result, other techniques are needed to enable real time rendering.
- One technique which has been developed is the multi-resolution mesh. A multi-resolution mesh is used to create at design time models of an object using different numbers of polygons depending upon the degree of resolution which is required.
FIG. 6A represents an automobile modeled with 200 polygons; FIG. 6B represents the same automobile modeled with only 100 polygons, while FIG. 6C represents the same automobile modeled with only 75 polygons. When a determination is made, for example, that the object is in the background, a lower resolution model of the object is retrieved and used by the rendering pipeline. By reducing the number of polygons, the rendering operation is simplified. - Despite efforts to simplify the rendering process, consumer demands for more realism in real time 3D imaging continue to push hardware and software to their limits. The multi-resolution mesh approach, because the resolution is determined at design time rather than run time, is not scalable and cannot adapt to different platforms of varying rendering capabilities. Accordingly, the need exists for a technique which simplifies the rendering process at run time, thereby enabling real time 3D imaging at a level of detail acceptable to consumers.
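Selection among design-time models can be sketched as follows. The polygon counts are those of FIGS. 6A-6C; the model names and distance cut-offs are illustrative assumptions, not taken from the patent.

```python
# Pre-built models created at design time, keyed by polygon count.
CAR_MODELS = {200: "car_200_polys", 100: "car_100_polys", 75: "car_75_polys"}

def pick_model(distance_to_viewpoint):
    """Choose one of the design-time resolutions: the deeper into
    the background the object sits, the fewer polygons are used.
    The cut-off distances here are arbitrary illustrations."""
    if distance_to_viewpoint < 10.0:
        return CAR_MODELS[200]
    if distance_to_viewpoint < 50.0:
        return CAR_MODELS[100]
    return CAR_MODELS[75]
```

Because the three resolutions are fixed when the models are authored, no intermediate level exists at run time, which is the inflexibility the next paragraph criticizes.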
- The present invention solves the problems of the prior art by providing a method of reducing at run time the number of primitives that need to be used to render an object. The method of the present invention determines that an object is moving within a scene. At run time, the number of primitives used to represent the moving object is reduced. The degree of reduction can be related to the amount of motion, i.e. speed, of the moving object. The moving object is then rendered based on the reduced number of primitives.
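The run-time method just summarized can be sketched as follows. Truncating the primitive list is only a stand-in for a genuine polygon reduction algorithm, and the inverse-proportional rule relating speed to detail is an illustrative assumption, not a claim of the patent.

```python
def primitives_to_render(primitives, motion, threshold):
    """Run-time reduction: when an object's measured motion exceeds
    the threshold, keep a fraction of its primitives that shrinks
    as the speed grows; otherwise render the full model."""
    if motion <= threshold:
        return list(primitives)        # stationary enough: full detail
    keep = max(1, int(len(primitives) * threshold / motion))
    return list(primitives)[:keep]     # stand-in for real reduction
```

Because the decision is taken per frame from the measured motion, the same model scales across platforms of different rendering power, unlike a fixed design-time mesh.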
- The present invention takes advantage of the fact that the human eye is less sensitive to the details of an object in motion and more sensitive to the motion itself. By identifying and quantifying the motion, the level of detail of the moving object can be reduced. By not rendering moving objects with the same level of detail as stationary objects, rendering time and memory bandwidth are saved, so more rendering time and hence more detail can be devoted to the stationary items, leading to a more realistic image. The motion detecting aspect of the present invention can also be used to make decisions as to whether the moving object should be re-rendered, moved, or left as is. The present invention can be implemented in existing 3D rendering pipelines. Those, and other advantages and benefits, will be apparent from the Description of the Preferred Embodiments appearing hereinbelow.
- For the present invention to be easily understood and readily practiced, the present invention will now be described, for purposes of illustration and not limitation, in conjunction with the following figures, wherein:
- FIG. 1 is a block diagram of hardware which may be used to implement the present invention;
- FIG. 2 is a flow chart illustrating the present invention;
- FIGS. 3A and 3B illustrate how motion may be detected;
- FIGS. 4A, 4B and 4C illustrate an object at three different degrees of resolution produced by a triangle reduction algorithm;
- FIG. 5 is a block diagram of a typical 3D rendering pipeline; and
- FIGS. 6A, 6B and 6C illustrate an object modeled at three different degrees of resolution with the models constructed at design time.
-
FIG. 1 is a block diagram of hardware 10 which may be used to implement the present invention. The hardware 10 may be a personal computer system comprised of a computer 12 having as input devices keyboard 14, mouse 16 and microphone 18. Output devices such as a monitor 20 and speakers 22 may also be provided. The reader will recognize that other types of input and output devices may be provided and that the present invention is not limited by the particular hardware configuration. - Residing within
computer 12 is a main processor 24 which is comprised of a host central processing unit 26 (CPU). Software applications, such as graphics software application 27, may be loaded from, for example, disk 28 (or other device), into main memory 29 from which the software applications 27 may be run on the host CPU 26. The main processor 24 operates in conjunction with a memory subsystem 30. The memory subsystem 30 is comprised of the main memory 29, which may be comprised of a number of memory components, and a memory and bus controller 32 which operates to control access to the main memory 29. The main memory 29 and controller 32 may be in communication with a graphics system 34 through a bus 36 which may be, for example, an AGP bus. Other buses may exist, such as a PCI bus 37, which interfaces to I/O devices or storage devices, such as disk 28 or a CDROM, or to provide network access. - The
graphics system 34 may include a graphics accelerator 38. The graphics accelerator 38 is specialized hardware for performing certain tasks within the 3D rendering pipeline. A graphics accelerator would typically perform the triangle setup and render triangles steps illustrated in FIG. 5. - The
graphics accelerator 38 is connected to the remainder of the graphics system 34 through a memory arbiter 40 which is responsible for queuing requests and information, block writes, block reads, etc. The memory arbiter 40 communicates with a graphics memory 42 through a memory interface 44. The amount and speed of graphics memory 42 is an important hardware consideration. A typical bottleneck in hardware design is the speed with which the graphics accelerator 38 can output its results to memory. - In 3D applications, two frame buffers are typically provided. Rendering in the frame buffers is performed in a ping-pong fashion, rendering a first scene in a first frame buffer followed by rendering a second scene in the second frame buffer. When rendering in the first frame buffer is completed, the scene in the first frame buffer is displayed on
monitor 20. Any refreshes which must be performed on the screen are performed from the first, stable frame buffer. When the scene in the second frame buffer is complete, that scene is displayed on monitor 20, while rendering of the next scene is performed in the first frame buffer. The rate at which new scenes are displayed is referred to as the frame rate. - Turning now to
FIG. 2, a flowchart illustrating the present invention is shown. The first step of the present invention, step 46, is to detect motion. Motion may be detected by comparing the position of an object in one scene to its position in a second scene and/or its position with respect to a viewpoint. Three different circumstances may be illustrated by reference to FIGS. 3A and 3B. - In
FIG. 3A, an automobile is shown along with a first tree 48 and a second tree 50. Looking at just FIG. 3A, there is no information from which to determine whether the automobile is moving. However, by looking at FIG. 3B and assuming the viewpoint has not changed, it can be determined that the position of the automobile has changed. Thus, by comparing the position of the objects in FIG. 3B with their position in FIG. 3A, it can be determined that the automobile has moved. Furthermore, because the x, y coordinates of each of the polygons making up the automobile are known in FIG. 3A as well as FIG. 3B, the degree or amount of motion can be determined. - Assume now that the observer turns their head so as to follow the motion of the automobile as it moves from the position illustrated in
FIG. 3A to the position illustrated in FIG. 3B. Under those circumstances, the position of the automobile with respect to the viewpoint has not changed. However, the position of the trees 48 and 50 with respect to the viewpoint will have changed, so the trees 48 and 50, rather than the automobile, will be determined to be in motion. - Assume now that as the automobile moves from right to left, the observer moves their head to the right to observe the direction from which the automobile came. Under those circumstances, all of the objects will be determined to be in motion with respect to the viewpoint. The trees 48 and 50 will be moving to the left at a rate equal to the rate at which the viewpoint is changing, while the automobile will be moving to the left at a rate equal to the rate at which the viewpoint is changing plus its own speed. - Returning to
FIG. 2, after determining which objects are considered to be in motion at step 46 and quantifying the amount of motion, or degree of change, at step 52, the degree of change is compared to a threshold amount at step 54. If the degree of change is less than the threshold amount, the model of the object is retrieved from memory at step 56 and rendered at step 58. The threshold amount may be a variable amount based on primitive size. If the amount of motion exceeds the size of the primitive, mesh reduction would be invoked. - Returning to
decision step 54, assuming that the amount of motion exceeds the threshold, the next step is to retrieve the model at step 60. At step 62, a polygon reduction algorithm is performed on the model to reduce the number of polygons which need to be rendered. One example of a polygon reduction algorithm is described in Melax, "A Simple, Fast, and Effective Polygon Reduction Algorithm", Game Developer, November 1998, which is hereby incorporated by reference. Using the polygon reduction algorithm disclosed in that article at run time would result in the renderings illustrated in FIGS. 4A, 4B and 4C. FIG. 4A illustrates a female human model rendered with 100% of the original polygons. FIG. 4B illustrates the female human model rendered with 20% of the original polygons, while FIG. 4C illustrates the female human model rendered with 4% of the original polygons. Additional decision steps 54 can be added to make a determination between, for example, the renderings of FIG. 4B and FIG. 4C. That is, the greater the motion, the lower the resolution of the model that is required. The reader will recognize that other types of polygon reduction algorithms are available, and the present invention is not intended to be limited to any particular reduction technique. - Motion sorting, according to the present invention, may be implemented when the
CPU 26 is handling the 3D geometry, or when the graphics system 34 is handling the 3D geometry. In the case where the CPU 26 handles the geometry, the data paths to the graphics system 34 should include an indication of the quality to be applied in the rendering process. That may be achieved by including the information as a part of the stream of vertex information. The embodiment set forth above will more likely be implemented when the geometry is handled by the graphics system 34. Other techniques for detecting or estimating motion may be used. See, for example, Agrawala, "Model-Based Motion Estimation for Synthetic Images", ACM Multimedia 1995, which is hereby incorporated by reference. Note that motion detection and estimation as described in that article, as well as in the industry, are generally applied to video, not graphics data types. The technique described in the article is used to effect a data compression, such as in MPEG4. The means of motion detection may be quite different with video. Video is always a 2D data type. With 2D data types, motion detection searches for blocks of 2D pixels that have translated from one scene position to another between successive frames. While the article deals with synthetic (graphics) data, it is basically a technique to generate a reduction of 2D data. - While the motion detection aspects of the present invention can be used to reduce the number of polygons used to render an object, the information may be used for other purposes. For example, in situations where the object is moving relatively fast with respect to the frame rate, it may be desirable to re-render the scene for the purpose of updating the position of the moving object.
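The motion-sorting comparison described above (object positions compared between scenes, with a variable threshold tied to primitive size) might be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the scene representation, the function names, and the 5x intermediate threshold are assumptions, and the 20% and 4% quality levels simply echo the renderings of FIGS. 4B and 4C.

```python
import math

def motion_per_object(prev_scene, curr_scene):
    """Compare each object's position between two scenes (cf. FIGS. 3A and 3B).

    Scenes are mappings of object name -> (x, y) position in view space.
    Returns a mapping of object name -> distance moved; objects present
    in only one scene are ignored in this sketch.
    """
    motion = {}
    for name, (x0, y0) in prev_scene.items():
        if name in curr_scene:
            x1, y1 = curr_scene[name]
            motion[name] = math.hypot(x1 - x0, y1 - y0)
    return motion

def quality_for(motion, primitive_size):
    """Variable threshold based on primitive size: when the motion exceeds
    the size of the primitive, mesh reduction is invoked (here expressed
    as a coarser quality fraction that could accompany the vertex stream).
    The 5x cutoff for the second level is a hypothetical additional
    decision step 54."""
    if motion <= primitive_size:
        return 1.0   # full model, no reduction
    elif motion <= 5 * primitive_size:
        return 0.20  # cf. FIG. 4B: 20% of the original polygons
    else:
        return 0.04  # cf. FIG. 4C: 4% of the original polygons
```

In the stationary-viewpoint circumstance of FIGS. 3A and 3B, the automobile would report a nonzero displacement and the trees a displacement of zero, so only the automobile would be tagged for reduction.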
- The preferred embodiment of the present invention is to be implemented in software. When implemented in software, the present invention will be an ordered set of instructions stored in a memory device. When the set of instructions is executed, the methods disclosed herein will be performed.
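By way of illustration, the flow of FIG. 2 might be expressed as an ordered set of instructions along the following lines. This is a minimal sketch under stated assumptions, not the claimed software: the object dictionaries, the single primitive-size threshold, and the `renderer` and `reducer` callables (the latter standing in for a polygon reduction algorithm such as Melax's) are all hypothetical.

```python
def render_scene(objects, prev_positions, primitive_size, renderer, reducer):
    """One pass of the FIG. 2 flow: detect motion (step 46), quantify it
    (step 52), compare it to the threshold (step 54), reduce the moving
    models at run time (step 62), and render (step 58)."""
    rendered = []
    for obj in objects:
        # Steps 46/52: motion is the distance moved since the last
        # rendered scene; an object with no prior position is treated
        # as stationary in this sketch.
        x0, y0 = prev_positions.get(obj["name"], obj["pos"])
        x1, y1 = obj["pos"]
        motion = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        model = obj["model"]          # steps 56/60: retrieve the model
        if motion > primitive_size:   # step 54: threshold exceeded
            model = reducer(model)    # step 62: reduce the polygon count
        rendered.append(renderer(obj["name"], model))
    return rendered
```

A caller would supply a `reducer` that discards polygons (for example, an edge-collapse routine) and a `renderer` that submits the surviving primitives to the graphics system; stationary objects pass through with their full models.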
- While the present invention has been described in conjunction with preferred embodiments thereof, those of ordinary skill in the art will recognize that many modifications and variations are possible. For example, the present invention may be implemented in connection with a variety of different hardware configurations. The point in the rendering pipeline in which the motion is detected, quantified, and compared to the threshold, as well as the point in which the polygon reduction algorithm is performed may be varied, and need not be performed immediately before the rendering step as illustrated in the flowchart of
FIG. 2. Such modifications and variations fall within the scope of the present invention, which is limited only by the following claims.
Claims (29)
1.-22. (canceled)
23. A method, comprising:
determining that an object is moving;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures.
24. The method of claim 23 wherein said step of determining that an object is moving includes the step of comparing the position of the object from a scene to be rendered to the position of the object in a rendered scene.
25. The method of claim 23 wherein said reducing step includes the step of performing a polygon reduction algorithm on a model of the moving object.
26. A method, comprising:
identifying relative motion between a first and a second object;
determining which of the first and second objects is in motion and which is stationary;
determining the degree of motion of said object;
comparing the distance the object has moved to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures and rendering the stationary object based on a model of said stationary object.
27. The method of claim 26 wherein said identifying step includes the step of comparing the positions of the first and second objects from a scene to be rendered to the positions of the first and second objects in a rendered scene.
28. The method of claim 26 wherein said reducing step includes the step of performing a polygon reduction algorithm on a model of the moving object.
29. A method, comprising:
determining that an object is moving with respect to a viewpoint;
determining the degree of motion of said object;
comparing the distance the object has moved to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the object based on said comparing; and
rendering the object using the reduced number of multi-sided closed figures.
30. The method of claim 29 wherein said step of determining that an object is moving includes the step of comparing the position of the object from a scene to be rendered to the position of the object in a rendered scene.
31. The method of claim 29 wherein said reducing step includes the step of performing a polygon reduction algorithm on a model of the moving object.
32. A method, comprising:
identifying relative motion between a first object, a second object, and a viewpoint;
determining which objects are in motion and which are stationary;
determining the degree of motion of said objects;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the objects considered to be in motion using the reduced number of multi-sided closed figures.
33. The method of claim 32 wherein said identifying step includes the step of comparing the positions of the first and second objects from a scene to be rendered to the positions of the first and second objects in a rendered scene.
34. The method of claim 32 wherein said reducing step includes the step of performing a polygon reduction algorithm on the model of the moving object.
35. A method of rendering 3D images in real time, comprising:
comparing the position of a plurality of objects in one scene to each object's position in another scene;
determining if the position of any of the objects has changed;
comparing the degree to which an object's position has changed to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
performing a reduction algorithm in run time in response to said comparing on models representing the moving objects to produce models having a reduced number of multi-sided closed figures;
rendering each object using one of the model representing the object and the model having the reduced number of multi-sided closed figures to produce a scene;
displaying the rendered scene; and
repeating the previous steps.
36. A computer readable medium carrying an ordered set of instructions which, when executed, performs a method comprising:
determining that an object is moving;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on the comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures.
37. A computer readable medium carrying an ordered set of instructions which, when executed, performs a method comprising:
identifying relative motion between a first and a second object;
determining which of the first and second objects is in motion and which is stationary;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures and rendering the stationary object based on a model representing the stationary object.
38. A computer readable medium carrying an ordered set of instructions which, when executed, performs a method comprising:
determining that an object is moving with respect to a viewpoint;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the object based on the comparing; and
rendering the object using the reduced number of multi-sided closed figures.
39. A computer readable medium carrying an ordered set of instructions which, when executed, performs a method comprising:
identifying relative motion between a first object, a second object, and a viewpoint;
determining which objects are in motion and which are stationary;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the object based on the comparing; and
rendering the objects in motion using the reduced number of multi-sided closed figures.
40. A computer readable medium carrying an ordered set of instructions which, when executed, performs a method comprising:
comparing the position of a plurality of objects in one scene to each object's position in another scene;
determining if the position of any of the objects has changed;
comparing the degree to which an object's position has changed to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
performing a reduction algorithm in run time in response to said comparing on models representing the moving objects to produce models having a reduced number of multi-sided closed figures;
rendering each object using one of the model representing the object and the model having the reduced number of multi-sided closed figures to produce a scene;
displaying the rendered scene; and
repeating the previous steps.
41. A system, comprising: a processor; a memory controller responsive to said processor; and a computer readable memory device responsive to said memory controller, said memory device carrying an ordered set of instructions which, when executed, performs a method comprising:
determining that an object is moving;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures.
42. The system of claim 41 additionally comprising a graphics system responsive to said processor and an output device responsive to said processor.
43. A system, comprising:
a processor; a memory controller responsive to said processor; and a computer readable memory device responsive to said memory controller, said memory device carrying an ordered set of instructions which, when executed, performs a method comprising:
identifying relative motion between a first and a second object;
determining which of the first and second objects is in motion and which is stationary;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the moving object based on the reduced number of multi-sided closed figures and rendering the stationary object based on a model representing said stationary object.
44. The system of claim 43 additionally comprising a graphics system responsive to said processor and an output device responsive to said processor.
45. A system, comprising: a processor; a memory controller responsive to said processor; and a computer readable memory device responsive to said memory controller, said memory device carrying an ordered set of instructions which, when executed, performs a method comprising:
determining that an object is moving with respect to a viewpoint;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the object using the reduced number of multi-sided closed figures.
46. The system of claim 45 additionally comprising a graphics system responsive to said processor and an output device responsive to said processor.
47. A system, comprising: a processor; a memory controller responsive to said processor; and a computer readable memory device responsive to said memory controller, said memory device carrying an ordered set of instructions which, when executed, performs a method comprising:
identifying relative motion between a first object, a second object, and a viewpoint;
determining which objects are in motion and which are stationary;
determining the degree of motion of said object;
comparing the degree of motion to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
reducing at run time the number of multi-sided closed figures used to represent the moving object based on said comparing; and
rendering the objects considered to be in motion using the reduced number of multi-sided closed figures.
48. The system of claim 47 additionally comprising a graphics system responsive to said processor and an output device responsive to said processor.
49. A system, comprising: a processor; a memory controller responsive to said processor; and a computer readable memory device responsive to said memory controller, said memory device carrying an ordered set of instructions which, when executed, performs a method comprising:
comparing the position of a plurality of objects in one scene to each object's position in another scene;
determining if the position of any of the objects has changed;
comparing the degree to which an object's position has changed to a variable threshold related to the size of a plurality of multi-sided closed figures representing the object;
performing a reduction algorithm in run time in response to said comparing on models representing the moving objects to produce models having a reduced number of multi-sided closed figures;
rendering each object using one of the model representing the object and the model having the reduced number of multi-sided closed figures to produce a scene;
displaying the rendered scene; and
repeating the previous steps.
50. The system of claim 49 additionally comprising a graphics system responsive to said processor and an output device responsive to said processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/407,884 US20060262128A1 (en) | 2001-07-11 | 2006-04-20 | Three dimensional rendering including motion sorting |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/902,981 US6806876B2 (en) | 2001-07-11 | 2001-07-11 | Three dimensional rendering including motion sorting |
US10/934,215 US7038679B2 (en) | 2001-07-11 | 2004-09-03 | Three dimensional rendering including motion sorting |
US11/407,884 US20060262128A1 (en) | 2001-07-11 | 2006-04-20 | Three dimensional rendering including motion sorting |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/934,215 Continuation US7038679B2 (en) | 2001-07-11 | 2004-09-03 | Three dimensional rendering including motion sorting |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060262128A1 true US20060262128A1 (en) | 2006-11-23 |
Family
ID=25416718
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/902,981 Expired - Fee Related US6806876B2 (en) | 2001-07-11 | 2001-07-11 | Three dimensional rendering including motion sorting |
US10/934,215 Expired - Fee Related US7038679B2 (en) | 2001-07-11 | 2004-09-03 | Three dimensional rendering including motion sorting |
US11/407,884 Abandoned US20060262128A1 (en) | 2001-07-11 | 2006-04-20 | Three dimensional rendering including motion sorting |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/902,981 Expired - Fee Related US6806876B2 (en) | 2001-07-11 | 2001-07-11 | Three dimensional rendering including motion sorting |
US10/934,215 Expired - Fee Related US7038679B2 (en) | 2001-07-11 | 2004-09-03 | Three dimensional rendering including motion sorting |
Country Status (1)
Country | Link |
---|---|
US (3) | US6806876B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100053310A1 (en) * | 2008-08-31 | 2010-03-04 | Maxson Brian D | Transforming 3d video content to match viewer position |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6806876B2 (en) * | 2001-07-11 | 2004-10-19 | Micron Technology, Inc. | Three dimensional rendering including motion sorting |
EP1473678A4 (en) * | 2002-02-06 | 2008-02-13 | Digital Process Ltd | Three-dimensional shape displaying program, three-dimensional shape displaying method, and three-dimensional shape displaying device |
JP2004213641A (en) * | 2002-12-20 | 2004-07-29 | Sony Computer Entertainment Inc | Image processor, image processing method, information processor, information processing system, semiconductor device and computer program |
US7164423B1 (en) * | 2003-04-30 | 2007-01-16 | Apple Computer, Inc. | Method and apparatus for providing an animated representation of a reorder operation |
JP2005018538A (en) * | 2003-06-27 | 2005-01-20 | Cyberstep Inc | Image processing method and device |
US20050156930A1 (en) * | 2004-01-20 | 2005-07-21 | Matsushita Electric Industrial Co., Ltd. | Rendering device and rendering method |
US7737977B2 (en) * | 2004-05-14 | 2010-06-15 | Pixar | Techniques for automatically maintaining continuity across discrete animation changes |
WO2005116933A1 (en) * | 2004-05-14 | 2005-12-08 | Pixar | Techniques for automatically maintaining continuity across discrete animation changes |
AU2006253724A1 (en) * | 2005-05-31 | 2006-12-07 | Mentorwave Technologies Ltd. | Method and system for displaying via a network of an interactive movie |
US8031957B1 (en) * | 2006-07-11 | 2011-10-04 | Adobe Systems Incorporated | Rewritable lossy compression of graphical data |
US8127297B2 (en) | 2007-10-31 | 2012-02-28 | International Business Machines Corporation | Smart virtual objects of a virtual universe independently select display quality adjustment settings to conserve energy consumption of resources supporting the virtual universe |
US8013861B2 (en) * | 2007-10-31 | 2011-09-06 | International Business Machines Corporation | Reducing a display quality of an area in a virtual universe to conserve computing resources |
US8214750B2 (en) | 2007-10-31 | 2012-07-03 | International Business Machines Corporation | Collapsing areas of a region in a virtual universe to conserve computing resources |
US8732767B2 (en) * | 2007-11-27 | 2014-05-20 | Google Inc. | Method and system for displaying via a network of an interactive movie |
US8127235B2 (en) | 2007-11-30 | 2012-02-28 | International Business Machines Corporation | Automatic increasing of capacity of a virtual space in a virtual world |
US8199145B2 (en) * | 2008-05-06 | 2012-06-12 | International Business Machines Corporation | Managing use limitations in a virtual universe resource conservation region |
US7996164B2 (en) * | 2008-05-06 | 2011-08-09 | International Business Machines Corporation | Managing energy usage by devices associated with a virtual universe resource conservation region |
US20090281885A1 (en) * | 2008-05-08 | 2009-11-12 | International Business Machines Corporation | Using virtual environment incentives to reduce real world energy usage |
US9268385B2 (en) | 2008-08-20 | 2016-02-23 | International Business Machines Corporation | Introducing selective energy efficiency in a virtual environment |
US10121221B2 (en) | 2016-01-18 | 2018-11-06 | Advanced Micro Devices, Inc. | Method and apparatus to accelerate rendering of graphics images |
US10902265B2 (en) * | 2019-03-27 | 2021-01-26 | Lenovo (Singapore) Pte. Ltd. | Imaging effect based on object depth information |
US11836930B2 (en) * | 2020-11-30 | 2023-12-05 | Accenture Global Solutions Limited | Slip-to-slip connection time on oil rigs with computer vision |
- 2001-07-11 — US application 09/902,981, granted as US6806876B2 (not active: Expired - Fee Related)
- 2004-09-03 — US application 10/934,215, granted as US7038679B2 (not active: Expired - Fee Related)
- 2006-04-20 — US application 11/407,884, published as US20060262128A1 (not active: Abandoned)
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5483627A (en) * | 1992-04-29 | 1996-01-09 | Canon Kabushiki Kaisha | Preprocessing pipeline for real-time object based graphics systems |
US5748761A (en) * | 1995-04-08 | 1998-05-05 | Daewoo Electronics Co., Ltd. | Method for segmenting and estimating a moving object motion |
US5856829A (en) * | 1995-05-10 | 1999-01-05 | Cagent Technologies, Inc. | Inverse Z-buffer and video display system having list-based control mechanism for time-deferred instructing of 3D rendering engine that also responds to supervisory immediate commands |
US6016150A (en) * | 1995-08-04 | 2000-01-18 | Microsoft Corporation | Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers |
US6155926A (en) * | 1995-11-22 | 2000-12-05 | Nintendo Co., Ltd. | Video game system and method with enhanced three-dimensional character and background control |
US6454652B2 (en) * | 1995-11-22 | 2002-09-24 | Nintendo Co., Ltd. | Video game system and method with enhanced three-dimensional character and background control due to environmental conditions |
US6139433A (en) * | 1995-11-22 | 2000-10-31 | Nintendo Co., Ltd. | Video game system and method with enhanced three-dimensional character and background control due to environmental conditions |
US6331146B1 (en) * | 1995-11-22 | 2001-12-18 | Nintendo Co., Ltd. | Video game system and method with enhanced three-dimensional character and background control |
US5872575A (en) * | 1996-02-14 | 1999-02-16 | Digital Media Interactive | Method and system for the creation of and navigation through a multidimensional space using encoded digital video |
US6147695A (en) * | 1996-03-22 | 2000-11-14 | Silicon Graphics, Inc. | System and method for combining multiple video streams |
US6241610B1 (en) * | 1996-09-20 | 2001-06-05 | Nintendo Co., Ltd. | Three-dimensional image processing system having dynamically changing character polygon number |
US6267673B1 (en) * | 1996-09-20 | 2001-07-31 | Nintendo Co., Ltd. | Video game system with state of next world dependent upon manner of entry from previous world via a portal |
US6346046B2 (en) * | 1996-09-20 | 2002-02-12 | Nintendo Co., Ltd. | Three-dimensional image processing system having dynamically changing character polygon number |
US6023279A (en) * | 1997-01-09 | 2000-02-08 | The Boeing Company | Method and apparatus for rapidly rendering computer generated images of complex structures |
US6307554B1 (en) * | 1997-12-19 | 2001-10-23 | Fujitsu Limited | Apparatus and method for generating progressive polygon data, and apparatus and method for generating three-dimensional real-time graphics using the same |
US6664957B1 (en) * | 1999-03-17 | 2003-12-16 | Fujitsu Limited | Apparatus and method for three-dimensional graphics drawing through occlusion culling |
US6449019B1 (en) * | 2000-04-07 | 2002-09-10 | Avid Technology, Inc. | Real-time key frame effects using tracking information |
US6806876B2 (en) * | 2001-07-11 | 2004-10-19 | Micron Technology, Inc. | Three dimensional rendering including motion sorting |
Also Published As
Publication number | Publication date |
---|---|
US6806876B2 (en) | 2004-10-19 |
US20030011598A1 (en) | 2003-01-16 |
US7038679B2 (en) | 2006-05-02 |
US20050024362A1 (en) | 2005-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060262128A1 (en) | Three dimensional rendering including motion sorting | |
EP0531157B1 (en) | Three dimensional graphics processing | |
US7289119B2 (en) | Statistical rendering acceleration | |
US6972769B1 (en) | Vertex texture cache returning hits out of order | |
US8154544B1 (en) | User specified contact deformations for computer graphics | |
US6307554B1 (en) | Apparatus and method for generating progressive polygon data, and apparatus and method for generating three-dimensional real-time graphics using the same | |
US8725466B2 (en) | System and method for hybrid solid and surface modeling for computer-aided design environments | |
US20040075654A1 (en) | 3-D digital image processor and method for visibility processing for use in the same | |
US8134556B2 (en) | Method and apparatus for real-time 3D viewer with ray trace on demand | |
KR20040044442A (en) | Automatic 3D modeling system and method | |
US9761037B2 (en) | Graphics processing subsystem and method for updating voxel representation of a scene | |
US7400325B1 (en) | Culling before setup in viewport and culling unit | |
CN112840378A (en) | Global lighting interacting with shared illumination contributions in path tracing | |
US10249077B2 (en) | Rendering the global illumination of a 3D scene | |
US20220230327A1 (en) | Graphics processing systems | |
US7292239B1 (en) | Cull before attribute read | |
US20050212811A1 (en) | Three-dimensional drawing model generation method, three-dimensional model drawing method, and program thereof | |
US20030043148A1 (en) | Method for accelerated triangle occlusion culling | |
JP3350473B2 (en) | Three-dimensional graphics drawing apparatus and method for performing occlusion culling | |
US6850244B2 (en) | Apparatus and method for gradient mapping in a graphics processing system | |
JP4047421B2 (en) | Efficient rendering method and apparatus using user-defined rooms and windows | |
US20190295214A1 (en) | Method and system of temporally asynchronous shading decoupled from rasterization | |
US10504279B2 (en) | Visibility function of a three-dimensional scene | |
JP2001273523A (en) | Device and method for reducing three-dimensional data | |
JP3711273B2 (en) | 3D graphics drawing device for occlusion culling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION |