EP0920678A1 - Rendu photographique accelere par machine - Google Patents

Rendu photographique accelere par machine (Machine-accelerated photographic rendering)

Info

Publication number
EP0920678A1
Authority
EP
European Patent Office
Prior art keywords
rendering
data
image data
tag
graphics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97936355A
Other languages
German (de)
English (en)
Inventor
George Randolph Smith, Jr.
Karin P. Smith
David John Stradley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intergraph Corp
Original Assignee
Intergraph Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intergraph Corp
Publication of EP0920678A1 (fr)
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects

Definitions

  • the present invention pertains to hardware-accelerated computer processing of multidimensional data and, particularly, to systems for the creation and display of photoreal interactive graphics.
  • the rendering of graphical images is an application of digital signal processing which requires intensive computation at multiple levels of the process.
  • the typical three dimensional graphics workstation architecture consists of multiple subsystems that are each allocated unique functions.
  • General purpose computers are typically used for non-display rendering of three dimensional models into human viewable images.
  • the non-display rendering process entails representation of the scene as a set of polygons, to which attributes such as texture and shadowing are applied through computation.
  • each sub-system is used for a different part of the processing.
  • the CPU/Main memory is used for general algorithmic processing, and temporary storage of data.
  • Peripheral devices are used to allow human interaction with the workstation, to transmit and to permanently store digital information.
  • One such sub-system is the graphics display accelerator, typically used to take geometric shapes, mathematically place them in three dimensional mathematical space, associate simulated lighting and optical effects, and produce an electronic picture in a frame buffer for visible display using a two dimensional display component.
  • the graphics display accelerator is a one-way state machine pipeline processor with low volume, highly abstracted data flowing in, and low level information displayed on the workstation monitor.
  • the operation of the graphics display accelerator in the conventional architecture can be understood in the context of rendering processes.
  • the ray trace rendering process consists of defining, in a mathematical three dimensional space, a series of polygons (typically triangles), at least one viewport or window through which the human views the simulated scene, and various light sources and other optical materials.
  • a viewer eye location and a window on an arbitrary view plane are selected.
  • the window is considered to be comprised of small image elements (pixels) arranged into a grid at a desired output resolution.
  • the window is chosen to correspond to the display monitor at a given resolution, and the eye location is chosen to be at some location outside the screen to approximate where a human observer's eye may actually be located.
  • a ray is fired from the eye location, through the pixel, and into the scene; the pixel is then colored (and assigned other attributes) based upon the intersection of the ray with objects and light sources within the scene.
  • the ray, in effect, bounces around the objects within the scene, and surface and optical effectors modify the simulated light rays and alter the eventual characteristics of the display pixel. More information regarding ray tracing may be found in Computer Graphics: Principles and Practice.
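The fire-a-ray-per-pixel step described above can be sketched in a few lines. This is a minimal, illustrative ray caster, not the patent's implementation: the single-sphere scene, the function name, and the flat red coloring rule are assumptions for demonstration.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest intersection
    with the sphere, or None if the ray misses it entirely."""
    oc = [o - c for o, c in zip(origin, center)]
    # Direction is assumed normalized, so the quadratic is t^2 + b*t + c.
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearest of the two roots
    return t if t > 0 else None

# Fire one ray from the eye location through a window pixel into the scene.
eye = (0.0, 0.0, 0.0)
ray = (0.0, 0.0, 1.0)                    # normalized, pointing into the scene
t = ray_sphere_hit(eye, ray, (0.0, 0.0, 5.0), 1.0)
# The pixel is colored only where the ray intersects an object.
pixel = (255, 0, 0) if t is not None else (0, 0, 0)
```

A full ray tracer would recursively bounce the ray off the hit point toward lights and reflective surfaces; this sketch stops at the first intersection.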
  • Z-Buffers are used to spatially sort the polygons and to reduce the processing of unnecessary (non-visible) polygons. If a polygon is obscured by another polygon, the ray is not processed.
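The Z-Buffer sort just described reduces, per pixel, all overlapping polygon fragments to the single one nearest the eye. A minimal sketch (the `(pixel, depth, tag)` fragment format is an assumption of this sketch):

```python
def z_buffer(fragments, num_pixels, far=float("inf")):
    """Keep, for each pixel, only the fragment nearest the eye.
    fragments: iterable of (pixel_index, depth, tag) tuples."""
    depth = [far] * num_pixels
    tag = [None] * num_pixels
    for px, z, t in fragments:
        if z < depth[px]:          # nearer fragment wins
            depth[px], tag[px] = z, t
    return tag

# Polygon B is obscured by polygon A at pixel 0, so B is never processed there.
visible = z_buffer([(0, 2.0, "A"), (0, 5.0, "B"), (1, 3.0, "B")], 2)
```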
  • the graphics display accelerator frequently uses a simpler type of rendering.
  • Such accelerators commonly provide special purpose hardware components that provide Z-Buffer sorting, a stencil buffer, texture processing and geometry (vertex coordinate) transformation calculation.
  • Using a typical graphics display accelerator generally consists of loading each polygon into local memory on the graphics display adapter, projecting the three dimensional coordinates onto the viewport's two dimensional coordinate system, and tri-linearly interpolating (with perspective correction) any surface color and optional texture pattern.
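The projection step named above, mapping three dimensional coordinates onto the viewport's two dimensional coordinate system, can be sketched with a simple pinhole camera model. The focal length and viewport dimensions here are assumed values, not anything specified by the patent:

```python
def project(vertex, focal=1.0, width=640, height=480):
    """Project a 3D camera-space point (+z into the screen) onto
    2D viewport pixel coordinates using a pinhole model."""
    x, y, z = vertex
    # Perspective divide: points farther away land nearer the center.
    sx = (x * focal / z) * (width / 2) + width / 2
    sy = (y * focal / z) * (height / 2) + height / 2
    return sx, sy

corner = project((1.0, 1.0, 1.0))   # a point on the view plane's unit square
center = project((0.0, 0.0, 2.0))   # a point on the optical axis
```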
  • Current state-of-the-art rendering employs a high-performance three dimensional graphics library, such as OpenGL, that is supported by numerous hardware and software vendors. OpenGL significantly speeds the process of preview rendering, but it has some limitations. These include its inability to directly support Phong shading and bump maps, graphics effects which provide more realistic images than the simple Gouraud shading that OpenGL does support.
  • a rendering apparatus for providing, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three dimensional object space.
  • the apparatus in this embodiment has a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport, and a rendering processor for converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.
  • the apparatus has an intermediate memory in which is stored the image data from the graphics accelerator, wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result.
  • the image data may be defined by values associated with a plurality of pixel locations in an image.
  • the rendering processor before feeding the object data to the graphics accelerator, utilizes a tag assigned to each of the objects, so as to associate by tag pixel locations in the image with objects.
  • Each of the objects has a surface that may be represented by a plurality of primitive polygons.
  • the rendering processor, before feeding the object data to the graphics accelerator, may utilize a tag assigned to the primitive polygons, so as to associate by tag pixel locations with primitive polygons.
  • the tag may be a color.
  • the rendering processor, as part of converting resulting image data, identifies by tag the portions of object surfaces present in the image, and restricts further processing associated with the desired rendering to such portions so as to reduce processing overhead associated with the desired rendering.
  • a graphics rendering program stored on a computer readable medium for providing a desired rendering of objects defined by object data having an object data format, in a three dimensional object space.
  • the program is configured so as to be executable by a computer having a two dimensional graphics accelerator for transforming object data into image data determined with respect to a defined viewer location and a defined viewport.
  • When loaded into the computer, the program causes the establishment of a rendering apparatus having a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport, and a rendering processor for converting at least one parameter characterizing the desired rendering into object data format, feeding the parameter data to the graphics accelerator, and converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.
  • the computer further includes an intermediate memory in which the rendering program causes to be stored the image data from the graphics accelerator, and wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result
  • FIG. 1 is a block diagram of the rendering graphics architecture with an open application programming interface, in accordance with the present invention.
  • FIGS. 2a and 2b illustrate the process of computing normals to polygon vertices of a surface to enable Gouraud shading of the surface.
  • FIGS. 3a and 3b illustrate the process of perturbing surface normals to replicate the effect of surface texture, in accordance with the present invention.
  • FIG. 4 illustrates the process of computing shadows.
  • FIGS. 5a and 5b illustrate the process of computing procedural three dimensional texture.
  • FIG. 6 is a flow diagram depicting the steps of image rendering in accordance with the present invention.
  • FIG. 7 shows an eye looking towards a three-dimensional cube.
  • FIG. 8 shows a triangle for which the normal of an interior pixel is sought.
  • image data refers to the processed product of the scene data, after application, singly or recursively, of dedicated graphics display accelerator processing.
  • Graphics rendering software 10 is the first step toward the goal of Real Time Reality on the desktop (i.e., the ability to render photorealistic images in real-time or near real-time).
  • the rendering of "Toy Story" required a few hundred proprietary RISC/UNIX computers working 24 hours a day for a couple of years.
  • the present invention will allow creative and technical professionals to render the same number of frames in a fraction of that time.
  • graphics display accelerator 14 is now used to produce intermediate results, where these results may be utilized in a general purpose ray trace rendering algorithm, or other graphics algorithm.
  • Such intermediate results are used to determine polygonal normals, texture coordinates for three dimensional texture algorithms, local coordinates for two dimensional surface texturing, bump-map perturbations of the visible rays, and interpolated world coordinates of polygonal surfaces.
  • the complete rendering process is now split between two of the major subsystems, the graphics display accelerator 14 (or hardware accelerator) and the CPU/Main Memory subsystem, thus improving performance of the rendering process on the three dimensional graphics workstation over that of a general purpose computer.
  • the rendering software includes a graphics library with an open application programming interface (API) which extends an existing high-performance three dimensional graphics library such as OpenGL.
  • the graphics workstation is now able to produce higher quality graphics, incorporating features ol photoreal, or production, rendering.
  • normals may be interpolated pixel-by-pixel (Phong shading), to produce higher quality three dimensional graphics than are available using Gouraud shading (provided by OpenGL), in which normals are computed for faces of polygons, and then averaged to derive values for polygon vertices.
  • the graphics rendering subsystem (which includes the rendering library) dramatically accelerates graphics attributes, such as Gouraud shading, which are standard to the existing (such as OpenGL) high-performance three dimensional graphics library.
  • the invention accelerates features that OpenGL does not support, such as Phong shading, Bump maps, Shadows, and Procedural three dimensional textures.
  • FIGS. 2a and 2b illustrate the process of computing normals to polygon vertices of a surface to enable Gouraud shading of the surface.
  • Three dimensional models are usually represented as a large collection of smaller polygons, typically triangles, and the quality of the final rendered image depends on the sophistication of the algorithms used to shade these polygons. Shown in FIG. 2a are the faces of several polygons 20, and an associated normal vector 22 which is used to indicate how light will be reflected by that polygon.
  • Shown in FIG. 2b is the first step in the Gouraud shading process, which is to calculate the normals 24 for each of the polygon's vertices (corners) by averaging the normals 22 from the surrounding faces. The computed normals for the vertices are then used to determine the RGB (red, green, and blue) color components for the vertices as seen by an observing eye or camera. These RGB components are based on the color of the material combined with the effect of any light sources. Gouraud shading is based on calculating the color components of each of the pixels forming the polygon by linear interpolation of the RGB values at the vertices.
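The two Gouraud steps just described, averaging the surrounding face normals into a vertex normal and then linearly interpolating vertex colors across the polygon, can be sketched as follows; the function names are illustrative:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def vertex_normal(face_normals):
    """First Gouraud step: the vertex normal is the normalized
    average of the normals of the faces sharing that vertex."""
    return normalize([sum(c) for c in zip(*face_normals)])

def gouraud_color(c0, c1, t):
    """Interior pixels get their color by linear interpolation
    of the RGB values already computed at the vertices."""
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))

n = vertex_normal([(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)])   # two adjacent faces
mid = gouraud_color((255.0, 0.0, 0.0), (0.0, 0.0, 255.0), 0.5)
```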
  • The RGB color scheme is for exemplary purposes only; another color scheme, such as CMYK (cyan, magenta, yellow, and black), may also be used.
  • Gouraud shading provides acceptable results, but the highest quality three dimensional graphics demand something more. The problem is that Gouraud shading simply calculates the normals at the vertices and then interpolates the colors of the polygon's interior pixels. Especially for polygons approximating a curved surface, the shading effect is not very realistic, as the interpolation does not account for different normal values across the polygon surface.
  • The idea of Phong shading is to derive a normal for every interior pixel, and then to apply a given shading model to that individual pixel based on its own normal.
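A sketch of the contrast with Gouraud shading: here the normal, not the color, is interpolated to the pixel, and the shading model is then applied per pixel. A single Lambertian diffuse term stands in for the full Phong shading model, which is an assumption of this sketch:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_intensity(n0, n1, t, light_dir):
    """Interpolate the NORMAL between two endpoint normals, renormalize,
    then shade this pixel from its own normal (diffuse term only)."""
    n = normalize(tuple((1 - t) * a + t * b for a, b in zip(n0, n1)))
    return max(0.0, sum(a * b for a, b in zip(n, light_dir)))

# Halfway between a face pointing at the light and one at right angles.
i = phong_intensity((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0))
```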
  • FIGS. 3a and 3b illustrate the process of perturbing surface normals to replicate the effect of surface texture, in accordance with the present invention.
  • Real-world objects are rarely geometrically smooth; instead, they often have distortions in their surfaces giving them a physical texture.
  • a strawberry which is all one color, but whose dimples give it a rich physical texture.
  • FIG. 3a shows one way to replicate this effect, where one explicitly creates geometrical distortions 30 in the model's surface.
  • This solution requires significant modeling effort and results in excessive amounts of computation.
  • FIG. 3b shows an alternate solution, in which the surface normals 32 are perturbed, causing them to reflect the light so as to provide a similar effect.
  • perturbing the surface normals requires the use of a bump map, which simulates the effect of displacing the points on the surface above or below their actual positions. Bump maps fully complement the process of Phong shading: with individual normals already computed for each pixel on the surface, the bump map is used to modify these normals prior to the shading model being applied to each pixel.
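The bump-map perturbation of per-pixel normals can be sketched as below; the sign and scale conventions for the height gradients are assumptions of this sketch, not anything the patent specifies:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def bump_perturb(normal, du, dv, tangent, bitangent):
    """Tilt the per-pixel normal by the bump map's height gradients
    (du, dv) along the surface tangent directions; the perturbed
    normal is then fed to the shading model."""
    return normalize(tuple(nc - du * tc - dv * bc
                           for nc, tc, bc in zip(normal, tangent, bitangent)))

flat = bump_perturb((0.0, 0.0, 1.0), 0.0, 0.0, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
dimple = bump_perturb((0.0, 0.0, 1.0), 1.0, 0.0, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

A zero gradient leaves the normal untouched; a nonzero gradient tilts it, changing how the pixel reflects light without changing the geometry.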
  • Bump maps are distinct from the concept of texture (pattern) maps, in which an image is projected onto a surface. Texture maps range from flat images (such as geometric patterns) to actual photographs of rough surfaces.
  • texture maps of rough surfaces don't look quite right, because they simply affect a surface's shading, and not its surface shape. They also tend to look incorrect because the direction of the light source used to create the texture map is typically different from the direction of the light illuminating the mapped three dimensional object. That is, unless the light sources for the rough pattern are the same as the ones within the three dimensional object space, when viewing the texture mapped onto an object within the object space, one sees that something is wrong with the resultant image.
  • While bump maps provide superbly textured three dimensional images, overcoming the visual lighting problems requires use of Phong shading, and such shading is not directly supported by OpenGL; accordingly, bump maps are not used for producing realistic output with OpenGL.
  • FIG. 4 shows a light source 40 causing a shadow 42 to be cast by a first object 44 onto a second object 46.
  • An important component of realistic imaging is to ensure that objects within a given three dimensional object space cast proper shadows. However, doing so greatly increases the amount of computation that needs to be performed. Creating shadows is computationally intensive because an area in shadow is rarely pure black; instead, the area usually contains some amount of color content in a diminished form.
  • a preferred embodiment provides special extensions for the acceleration of shadow creation.
  • FIGS. 5a and 5b illustrate the process of computing procedural three dimensional textures, textures not directly supported by OpenGL.
  • FIG. 5a shows that the application of flat, two-dimensional textures 50 to three dimensional objects 52 usually results in unrealistic results, particularly when attempting to display a cross-sectional cut-away view 54 through an object.
  • procedural three dimensional textures 56 provide the ability to define a more realistic texture that occupies three dimensions and understands the geometry of the object in question.
  • if a procedural three dimensional wood-grain texture 56 is applied to an object, then taking a cross-sectional view of the object reveals the grain 58 of the wood inside the object (or whatever else, such as a knot hole, was defined to be within the texture). This provides for a more realistic image.
  • the invention provides for accelerated generation and utilization of procedural three dimensional textures.
  • FIG. 6 shows a flow diagram depicting the steps of image rendering in accordance with the present invention.
  • a preferred embodiment utilizes a graphics display adapter (preferably optimized for this task) to off-load such processing from the host computer so as to free the CPU and memory subsystem for other processing tasks.
  • a preferred embodiment allows intermediate graphics processed images to be displayed and for successive attributes to be applied without reinitiating the entire rendering process.
  • three dimensional-coded color data is written to the graphics accelerator (step 60) and polygon identification information is read back (step 62). By applying flat color shading information to each polygon as it is processed in the graphics display accelerator, each pixel in the output "image" from the graphics display accelerator uniquely identifies the front visible polygon at that position in the viewport.
  • the process consists of the steps of: establishing view parameters; artificially coding each polygon with a unique "color"; placing the polygon in the display space; "displaying" the view; and reading the resulting "image", which consists of coded polygon identifiers.
  • linearly interpolated Barycentric coordinates (u, v, w) may then be used to calculate additional parameters during the rendering process, including the direction of normal vectors, the three dimensional texture coordinates, two dimensional texture coordinates, and the world coordinates of each pixel in the output image.
  • This process consists of: establishing view parameters; coding the "color" value of each vertex of the polygon; placing the polygon in the display space; "displaying" the view; and reading the resulting "image", which consists of the polygon Barycentric coordinates.
  • This "image" is then used to directly identify the linearly interpolated position of the pixel on the polygon.
  • This linear interpolation value is then, for example, applied to the normal vector direction, used in the process of calculation of the three dimensional texture value, in looking up the two dimensional surface image texture, and in calculating the world coordinate of the polygon visible at that viewport pixel location.
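Applying the interpolated Barycentric value to a per-vertex quantity (a normal, a texture coordinate, or a world coordinate) is a single weighted sum, sketched here with illustrative names:

```python
def bary_interp(u, v, w, a0, a1, a2):
    """Weight the three per-vertex attributes by the pixel's
    Barycentric coordinates (u, v, w). The same routine serves
    normals, texture coordinates, and world coordinates alike."""
    return tuple(u * x + v * y + w * z for x, y, z in zip(a0, a1, a2))

# World coordinate of the pixel at the centroid of a triangle (u = v = w = 1/3).
p = bary_interp(1/3, 1/3, 1/3,
                (0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0))
```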
  • Multi-colored encoding provides increased precision of intermediate data.
  • When texture maps are used to encode positional information, limitations of the hardware processing architecture can be encountered.
  • This process consists of a multi-pass algorithm to obtain (for example) first the u coordinates and then the v coordinates.
  • the w coordinates are obtained by subtraction (w = 1 - u - v).
  • the process consists of the steps of: establishing view parameters; placing a coded "texture" in the texture memory (step 66); placing the polygon in the display space; "displaying" the view; and reading (step 68) the resulting "image", which consists of coarse Barycentric coordinates and quadrature phase encoded values for additional precision.
  • the coarse Barycentric coordinates are then combined with a simple logic structure and applied to the quadrature phase encoded values for additional precision.
  • Intermediate processed image data are now available for the application of photorealistic rendering algorithms (step 70), such as shading, texturing, and shadowing, as discussed above.
  • FIG. 7 shows an eye looking towards a three-dimensional cube, and illustrates a situation in which a preferred embodiment may take a three dimensional problem, such as ray tracing (discussed hereinabove) the cube with respect to that eye location, and partially convert the ray tracing problem into a two dimensional problem, such as Z buffer sorting (discussed hereinabove), so as to allow a two dimensional accelerator to speed up portions of the more complex three dimensional ray tracing problem.
  • the cube is one object of perhaps many in an object space, where the object space has defined within it all of the objects, light sources, etc., that are to be rendered to an output (display, film, etc.).
  • Each object has associated characteristics such as color, texture, reflectivity, and opacity.
  • ray tracing algorithms will either waste computing resources to trace the B surface, only to then replace the computed data with different data corresponding to the nearest (to the eye) visible surface, or the algorithm will employ some three dimensional hidden-surface removal algorithm to first reduce the complexity of the ray-tracing problem.
  • where the hidden-surface removal algorithm employs three dimensional techniques, such techniques require substantial computational resources to calculate the removal problem.
  • a preferred embodiment implements all or parts of a given three dimensional rendering problem as one or more two dimensional operations, where the processing of the two dimensional operations is performed with fast two dimensional graphics accelerators.
  • the ray trace problem is broken into two parts, the first being a Z buffer sorting problem, a common procedure provided by two dimensional accelerators.
  • the polygons are rendered to memory (i.e., rendered but not made visible on the display output) with the two dimensional accelerator.
  • the resultant two dimensional image contains only those pixels visible from the given eye location 72. By virtue of having rendered this scene, it is now possible to easily identify the visible portions of all objects with respect to the defined viewing eye location.
  • the identification is performed by temporarily assigning a pseudo-color to each object (or polygon used to form the object) before passing the scene data to the graphics accelerator.
  • the colors in the resultant image indicate to which object each image pixel belongs.
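The pseudo-color read-back step can be sketched as follows; the integer tags and the example read-back image are illustrative assumptions, standing in for the colors read out of the accelerator's frame buffer:

```python
# Pseudo-color tagging sketch: each object (or polygon) is temporarily
# given a unique flat "color" (tag) before the scene goes to the
# accelerator; the tags surviving the Z-buffered render mark the
# objects actually visible from the eye location.
def visible_tags(readback):
    """readback: tag read at each pixel of the rendered image
    (None = background pixel)."""
    return {t for t in readback if t is not None}

# Tags 1 and 3 survive the render; object 2 is fully obscured,
# so further ray-trace processing of object 2 can be skipped.
seen = visible_tags([1, 1, None, 3, 3, 1])
```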
  • a preferred embodiment may utilize the stencil buffer to indicate whether a given pixel is in shadow from a given light source. If the pixel is in shadow, then the ray tracer does not need to go back to that light source (although it may have to return a ray to a different light source).
  • FIG. 8 shows a triangle 80 for which the normal of an interior pixel 82 is sought, so as to allow the application of Phong shading to the triangle.
  • calculations for a given three dimensional object may be greatly minimized through use of the two dimensional accelerator rendering technique to identify visible polygon portions for a particular eye location.
  • OpenGL does not provide an operation for performing Phong shading, because Phong shading requires that a normal be calculated for every pixel to which the shading is to be applied.
  • the surface is composed of a series of triangles that approximate the shape of the surface. (For a curved surface, it is assumed that the surface is sufficiently tessellated that the human eye is unable to distinguish the triangle mesh from a surface having the true curvature.)
  • OpenGL does not provide for obtaining a normal for each of the pixels within each triangle, so a preferred embodiment compensates for this as follows. First, Barycentric coordinates are used to represent the position of each pixel within a given triangle relative to the triangle's vertices.
  • a pre-determined function is utilized to encode the Barycentric coordinates as color values, and these color values are assigned to the vertices and used to set the color along the edges of the triangle.
  • the particular conversion function is not important, so long as the Barycentric values are reversibly and uniquely encoded into color values.
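One such reversible encoding, sketched below, quantizes (u, v) into two 8-bit color channels and recovers w by subtraction; the 8-bit quantization and channel layout are assumptions of this sketch, not the patent's specified function:

```python
def bary_to_color(u, v):
    """Reversibly encode Barycentric (u, v) into 8-bit color channels.
    w need not be stored: it is recovered as 1 - u - v."""
    return (round(u * 255), round(v * 255), 0)

def color_to_bary(color):
    """Decode the color read back from the rendered image."""
    r, g, _ = color
    u, v = r / 255, g / 255
    return u, v, 1.0 - u - v

u, v, w = color_to_bary(bary_to_color(0.2, 0.6))
```

Because the mapping is invertible (up to quantization), reading the rendered "image" back yields, per pixel, the Barycentric coordinates needed for per-pixel normal calculation.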
  • the set-color function results in a color spectrum, along one edge, that ranges in value between the two color values assigned to the vertices forming a given edge segment.
  • different colors are used to encode the different edge segments of the triangle.
  • different colors may be assigned to each of the X, Y, and Z axes, and the segment vertices are then assigned a color corresponding to their location with respect to the origin of the three axes.
  • These color assignments allow the determination of the Barycentric coordinates for any interior pixel. For example, assume that a first edge segment lies on the X axis. The pixel color for any pixel along the X axis segment may now be used to determine the pixel's distance along the X axis. A similar computation may also be performed for a second edge segment of the triangle.
  • any interior pixel location may be determined from the combination of the first and second segments. That is, if a perpendicular line is drawn from a first pixel along the first edge segment towards the interior of the triangle, and a second perpendicular line is drawn from a second pixel along the second edge segment, the Barycentric coordinates for the point identified by where the two lines intersect may be calculated from the Barycentric coordinates for the first and second pixels. Once the Barycentric coordinates are known, it is relatively simple to calculate the normal for that point. With the normal for that pixel point, it is now possible to apply Phong shading. (This technique also applies to bump map processing.) Such processing is in stark contrast to OpenGL.
  • The processing of Barycentric coordinates may be performed in hardware or software, as neither processing format affects the intrinsic nature of the invention.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

Apparatus for providing, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three dimensional object space. The apparatus may have a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and viewport. It may also have a rendering processor for first converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and then converting the resulting image data as to the at least one parameter into a further processed result pertinent to the desired rendering.
EP97936355A 1996-08-01 1997-08-01 Rendu photographique accelere par machine Withdrawn EP0920678A1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US2379596P 1996-08-01 1996-08-01
US23795P 1996-08-01
US2351396P 1996-08-07 1996-08-07
US23513P 1996-08-07
PCT/US1997/013563 WO1998006067A1 (fr) 1996-08-01 1997-08-01 Rendu photographique accelere par machine

Publications (1)

Publication Number Publication Date
EP0920678A1 true EP0920678A1 (fr) 1999-06-09

Family

ID=26697254

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97936355A Withdrawn EP0920678A1 (fr) 1996-08-01 1997-08-01 Rendu photographique accelere par machine

Country Status (2)

Country Link
EP (1) EP0920678A1 (fr)
WO (1) WO1998006067A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE518545C2 (sv) * 2000-02-25 2002-10-22 Maple & Star Ab Method and device for an image presentation system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19581872B4 (de) * 1994-12-22 2006-11-16 Apple Computer, Inc., Cupertino Three-dimensional graphics rendering system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9806067A1 *

Also Published As

Publication number Publication date
WO1998006067A1 (fr) 1998-02-12

Similar Documents

Publication Publication Date Title
CN111508052B (zh) Rendering method and apparatus for a three-dimensional mesh
US6903741B2 (en) Method, computer program product and system for rendering soft shadows in a frame representing a 3D-scene
US5307450A (en) Z-subdivision for improved texture mapping
US6532013B1 (en) System, method and article of manufacture for pixel shaders for programmable shading
US6567083B1 (en) Method, system, and computer program product for providing illumination in computer graphics shading and animation
US5613048A (en) Three-dimensional image synthesis using view interpolation
US8610729B2 (en) Floating point computer system with fog
US7170527B2 (en) Interactive horizon mapping
EP1128330B1 (fr) Projection de visibilité et reconstruction d'image pour éléments de surface
US7106325B2 (en) System and method for rendering digital images having surface reflectance properties
US20070139408A1 (en) Reflective image objects
US6922193B2 (en) Method for efficiently calculating texture coordinate gradient vectors
US6806886B1 (en) System, method and article of manufacture for converting color data into floating point numbers in a computer graphics pipeline
US7158133B2 (en) System and method for shadow rendering
US7071937B1 (en) Dirt map method and apparatus for graphic display system
US6690369B1 (en) Hardware-accelerated photoreal rendering
US6975319B1 (en) System, method and article of manufacture for calculating a level of detail (LOD) during computer graphics processing
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
Batagelo et al. Real-time shadow generation using bsp trees and stencil buffers
EP0856815A2 (fr) Méthode et système pour déterminer et/ou utiliser des cartes d'illumination dans un système pour rendre des images
Pajarola et al. DMesh: Fast depth-image meshing and warping
US6906729B1 (en) System and method for antialiasing objects
US20180005432A1 (en) Shading Using Multiple Texture Maps
Groß et al. Advanced rendering of line data with ambient occlusion and transparency
US20030080966A1 (en) System for previewing a photorealistic rendering of a synthetic scene in real-time

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19990217

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): BE DE FR GB IT LU NL

17Q First examination report despatched

Effective date: 20010822

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20020103