WO2008147999A1 - Shear displacement depth of field - Google Patents

Shear displacement depth of field

Info

Publication number
WO2008147999A1
WO2008147999A1 PCT/US2008/064709 US2008064709W
Authority
WO
WIPO (PCT)
Prior art keywords
image sample
sample point
geometric
depth
sheared
Prior art date
Application number
PCT/US2008/064709
Other languages
English (en)
Inventor
Robert Cook
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar
Priority to GB0918780.8A (GB2460994B)
Publication of WO2008147999A1

Links

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects

Definitions

  • the present invention relates to the field of computer graphics, and in particular to methods and apparatus for creating, modifying, and using components to create computer graphics productions.
  • Many computer graphic images are created by mathematically modeling the interaction of light with a three dimensional scene from a given viewpoint. This process, called rendering, generates a two-dimensional image of the scene from the given viewpoint, and is analogous to taking a photograph of a real-world scene.
  • Animated sequences can be created by rendering a sequence of images of a scene as the scene is gradually changed over time. A great deal of effort has been devoted to making realistic looking and artistically compelling rendered images and animations.
  • the scene geometry can be projected into the screen space defined by the image plane. Any geometric primitive in the scene that overlaps the X and Y coordinates in screen space of one of the rays 221 intersects that ray. In this example, determining the intersection of a three-dimensional ray with objects in a three-dimensional scene is reduced to a two-dimensional test. Occlusion can be determined by comparing Z coordinate values of geometric primitives at each ray intersection. Additionally, the scene geometry can be easily bounded and subdivided along any axis, which allows memory optimizations, such as bucketing, and parallel processing to be easily implemented.
  • scene 240, which is the perspective transformation of scene 225, includes rays 247 that are not parallel with each other.
  • rendering techniques such as a Z-buffer cannot be used with scene 240.
  • the renderer must determine the intersection of different three-dimensional rays with objects in three-dimensional space for each image sample. This is a complex and time-consuming operation. Additionally, it is difficult to bound scene geometry, making it difficult to optimize memory and processor usage.
  • determining the intersection of rays 247 with objects in scene 240 is not trivial, because the rays 247 are not parallel.
  • To determine the value of image sample point 243b, a renderer must determine how ray 247b intersects objects in scene 240. Because ray 247b is not perpendicular to the image plane 242, the X and Y coordinates in screen space of ray 247b may vary along the length of ray 247b as a function of the Z coordinate. Thus, a renderer must determine a three-dimensional line equation with three variables corresponding with ray 247b and then determine if this line equation intersects any geometric primitives in the scene 240. This determination requires extensive computing resources.
  • Figure 2E illustrates the application of an embodiment of the invention to an example scene 260.
  • Example scene 260 represents a scene in screen space, similar to scene
  • Scene 260 includes an image plane 262 with at least one image sample point 263.
  • Image sample point 263 is assigned a lens position 266 in lens aperture plane 264 and views the scene 260 along ray 265. If image sample point 263 had been assigned a lens position in the center of the lens aperture plane 264, then the image sample point would instead view the scene 260 along ray 267.
  • An embodiment of the invention allows all of the image samples to view the scene along parallel rays in screen space, while shearing the scene differently for the image samples to account for each image sample's assigned lens position. Because all of the image samples view the scene along parallel rays in screen space, a wide variety of different rendering techniques, such as rasterization, scanline rendering, z-buffer rendering, the painter's algorithm, and micropolygon or REYES rendering, can be used to render depth of field effects without relying on computationally expensive three-dimensional ray / three-dimensional object intersection tests.
  • ray 267 is perpendicular to the image plane 262
  • determining which portions of the sheared version of the scene 260 intersect ray 267 is trivial.
  • the sheared version of the scene 260 can be projected into the screen space defined by the image plane 262. Assuming a screen space coordinate system with the X and Y axes parallel to the image plane and the Z axis perpendicular to the image plane, then any geometric primitive in the scene that overlaps the X and Y screen space coordinates of ray 267 intersects ray 267, and thus intersects the corresponding image sample point 263.
  • determining the intersection of the three-dimensional ray 265 with objects in a three-dimensional scene is reduced to a two-dimensional intersection test between image sample point 263 and the sheared version of the objects in a scene.
  • Occlusion can be determined by comparing Z coordinate values of geometric primitives at each intersection with ray 267, using a hit test, or by drawing primitives in order of decreasing depth.
  • Figure 3 illustrates a method 300 of generating depth of field effects according to an embodiment of the invention.
  • Step 305 receives three-dimensional scene data.
  • the scene data can be defined using any three-dimensional modeling and/or animation technique known in the art.
  • Scene data can include programs and data describing the geometry, lighting, shading, motion, and animation of a scene to be rendered.
  • Step 310 selects an image sample point.
  • a renderer produces one or more images from scene data by sampling optical attributes of the scene data at a number of different image sample points located on a virtual image plane.
  • Image sample points can correspond with pixels in the rendered image or with sub-pixel samples that are combined by the renderer to form pixels in the rendered image.
  • Image samples such as pixels and sub-pixel samples may be distributed in regular locations on the image plane or at random or pseudo-random locations on the image plane to reduce aliasing and other sampling artifacts.
  • Step 315 assigns a lens position to the selected image sample point.
  • a single virtual aperture is centered over the image plane in world space.
  • step 315 selects an arbitrary point within the aperture on the virtual aperture plane as a lens location for the selected image sample point.
  • step 315 selects lens locations randomly or pseudo-randomly within the virtual aperture, so that the distribution of lens positions assigned to the set of image sample points covers the entire aperture without any undesirable visually discernable patterns.
  • other low discrepancy sampling methods such as Halton or Hammersley sampling can be used to select the lens locations.
  • step 315 can select lens positions using a regular pattern or sequence, if aliasing is not a concern.
  • Renderers form images by determining the portion of the scene, such as portions of objects, intersecting or projecting onto each image sample and determining the optical attributes of these intersecting portions.
  • Renderers typically identify these intersections by casting or tracing rays from image samples into the scene or by projecting the geometry of the scene onto the image plane.
  • Step 320 identifies and selects a portion of the scene that, when projected into the image plane, potentially intersects or overlaps the selected image sample point. This is done to reduce the amount of data to be evaluated for each image pixel.
  • alternate embodiments of method 300 may omit step 320 and process the entire scene for each image sample point.
  • geometric primitives within the scene data, such as polygons, micropolygons, polygon fragments, splines and/or other curved surfaces, subdivision surfaces, and particles, are associated with bounding boxes or other bounding volumes.
  • Step 320 projects these bounding volumes onto the image plane to identify bounding volumes intersecting the area associated with the selected image sample (or with an area associated with multiple image samples), and hence the geometric primitives potentially intersecting the selected image sample.
  • the depth of geometric primitives and the lens position modify the bounding volumes used to identify a portion of the scene potentially intersecting the selected image sample.
  • Step 325 shears the position of the selected portion of the scene according to its depth values and the lens position of the selected image sample.
  • the selected portion of the scene typically includes numerous geometric primitives located at various depths and positions relative to the image sample point.
  • the portions of the unsheared version of the scene that intersected a first ray, originating at the selected image sample point and passing through the selected lens position, will now be intersected by a second ray originating at the selected image sample point and passing through the center of the virtual aperture (a code sketch of this shear appears after this section).
  • the second ray will be perpendicular to the image plane.
  • Step 330 samples the sheared scene to determine the value of one or more attributes of the selected sample point.
  • step 330 identifies sheared geometric primitives, which when projected onto the image plane, intersect the selected sample point.
  • Step 330 determines the attributes of the geometric primitives intersecting the image sample point, which can include color and transparency and attributes used for rendering special effects, including depth values, normal vectors, and other arbitrary shading attributes; and combines these values according to the depth values of the geometric primitives.
  • Step 330 may also store depth information for each image sample to use with Z-buffering or any other depth compositing technique known in the art.
  • step 320 selects a portion of the scene potentially intersecting an image sample to be sheared and evaluated.
  • step 320 uses the shear value of a geometric primitive, which is based on the geometric primitive's depth and the image sample's lens position, to determine the size and/or orientation of a bounding volume of the geometric primitive.
  • a bounding box of a geometric primitive may be generated using any bounding technique known in the art.
  • An embodiment of step 320 can then scale the bounding box of the geometric primitive by the magnitude of the shear displacement to be applied to geometric primitives at that depth for the image sample. This expanded bounding box can then be projected against the image plane to determine if the geometric primitive potentially intersects the image sample.
  • Figure 4 illustrates the specification of depth of field parameters according to an embodiment of the invention.
  • users can specify depth of field parameters using traditional camera parameters, such as the focal length and aperture size, or as the ratio between the focal length and aperture size, commonly referred to as an f-number or f-stop.
  • In function 405, the maximum possible shear of a geometric primitive towards the optical axis or center of the virtual aperture increases linearly as the depth increases, up to a maximum blur limit.
  • the maximum shear specified by the function at a given depth corresponds with the amount of depth of field blurring applied to objects at that depth.
  • Function 405 corresponds to a typical optical system. It should be noted that the maximum shear specified by function 405 is approximately zero at the location of the depth of the focal plane 415. In contrast, function 410 specifies that the maximum shear of geometric primitives may increase, decrease or stay constant as function of depth in a continuous or discontinuous manner.
  • Functions such as function 410 allow users to specify any arbitrary depth of field. For example, users can specify that very distant objects and near objects are in focus, while objects in between are out of focus. Function 410 corresponds with using a different aperture size for objects at different depths, which is not possible with conventional cameras and optical systems, but is sometimes aesthetically or cinematically desirable. In the example function 410, there is a wide depth range 420 in which the maximum shear is at or close to zero; thus, objects or portions thereof within range 420 will appear in focus. Objects or portions thereof outside of range 420 will have depth of field blurring (sketches of such depth-of-field functions appear after this section).
  • Figure 4 illustrates the specification of depth of field as maximum shear as a function of depth.
  • similar functions can be used to specify depth of field in terms of other parameters.
  • a function can specify depth of field in terms of the aperture size or f-number as a function of depth.
  • Figures 5A-5C illustrate the distribution of image samples for depth of field effects according to an embodiment of the invention.
  • Figure 5A illustrates a stratified assignment of lens positions to image samples.
  • a region of the image plane 505, which may correspond with the region of a pixel, is subdivided into a number of sub-regions 507A-507I.
  • Each sub-region 507 includes one image sample.
  • the image sample is assigned a random, pseudo-random, or other arbitrary or irregular location.
  • image samples may be assigned different locations within corresponding sub-regions.
  • Each sub-region 507 of the image plane is associated with one of the sub-regions of the aperture plane.
  • sub-region 507A is associated with aperture plane sub-region 512H, as indicated by ray 515B.
  • sub-region 507B is associated with the aperture plane sub-region 512E, as indicated by ray 515A.
  • the number of sub-regions 512 of the aperture plane may be the same or different from the number of sub-regions 507 of the image plane. Thus, there may be zero, one, or more image samples associated with each aperture plane sub-region 512.
  • image samples are assigned different lens positions within the sub-region.
  • the image sample associated with sub-region 535A is assigned to lens position 545A, located in the bottom right corner of the aperture plane sub-region L2.
  • the image sample associated with sub-region 535B is assigned lens position 545B, which is located in the upper left corner of the aperture plane sub-region L2.
  • A second distance, L2 589, from the center of the aperture to the edge of the square aperture, passing through lens position 585, is also measured.
  • L1, the radial distance between the center of the aperture 575 and lens position 585, is scaled by the ratio between the radius R of the circular aperture 580 and L2 589.
  • Lens position 585 is thus moved to a new location 591 at radial distance L1 * R/L2 (a sketch of this remapping appears after this section).
  • FIG. 6 illustrates a computer system 2000 suitable for implementing an embodiment of the invention.
  • Computer system 2000 typically includes a monitor 2100, computer 2200, a keyboard 2300, a user input device 2400, and a network interface 2500.
  • User input device 2400 includes a computer mouse, a trackball, a track pad, graphics tablet, touch screen, and/or other wired or wireless input devices that allow a user to create or select graphics, objects, icons, and/or text appearing on the monitor 2100.
  • Embodiments of network interface 2500 typically provide wired or wireless communication with an electronic communications network, such as a local area network, a wide area network, for example the Internet, and/or virtual networks, for example a virtual private network (VPN).
  • Computer 2200 typically includes components such as one or more processors 2600, and memory storage devices, such as a random access memory (RAM) 2700, disk drives 2800, and system bus 2900 interconnecting the above components.
  • processors 2600 can include one or more general purpose processors and optional special purpose processors for processing video data, audio data, or other types of data.
  • RAM 2700 and disk drive 2800 are examples of tangible media for storage of data, audio/video files, computer programs, applet interpreters or compilers, virtual machines, and embodiments of the herein described invention.
  • tangible media include floppy disks; removable hard disks; optical storage media such as DVD-ROM, CD-ROM, and bar codes; non-volatile memory devices such as flash memories; read-only memories (ROMs); battery-backed volatile memories; and networked storage devices.
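
The sketches below illustrate, in Python, the main operations described above: lens-position assignment (step 315, Figures 5A-5B), bound expansion (step 320), shearing (step 325), sampling the sheared scene (step 330), the depth-of-field functions of Figure 4, and the square-to-circular aperture remapping of Figure 5C. They are illustrative assumptions about one possible implementation, not the patent's own code; every function name, parameter, and coordinate convention is invented for clarity. This first sketch assigns each image sample a jittered lens position drawn from one sub-region of a square aperture spanning [-0.5, 0.5] in each axis.

```python
import random

def assign_lens_positions(num_samples, strata_x, strata_y, rng=None):
    """Assign each image sample a lens position in a square aperture spanning
    [-0.5, 0.5] x [-0.5, 0.5], stratified over strata_x * strata_y sub-regions
    and jittered within each sub-region (compare Figures 5A and 5B)."""
    rng = rng or random.Random(0)
    strata = [(i, j) for i in range(strata_x) for j in range(strata_y)]
    rng.shuffle(strata)  # pair image samples with aperture sub-regions in scrambled order

    positions = []
    for k in range(num_samples):
        i, j = strata[k % len(strata)]            # cycle if samples outnumber sub-regions
        lx = (i + rng.random()) / strata_x - 0.5  # jittered position inside the sub-region
        ly = (j + rng.random()) / strata_y - 0.5
        positions.append((lx, ly))
    return positions
```

Low-discrepancy sequences such as Halton or Hammersley points, mentioned in step 315, could replace the jittered values.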
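
For step 320, a bounding volume can be padded by the largest shear displacement that the depth-of-field function permits over the volume's depth range, so that no primitive is culled for any lens position; the axis-aligned box layout and the max_shear_at callback are assumptions.

```python
def expand_bounds_for_shear(bounds, max_shear_at):
    """Expand an axis-aligned bounding box (xmin, ymin, zmin, xmax, ymax, zmax)
    by the largest shear displacement the depth-of-field function allows within
    the box's depth range. max_shear_at(depth) returns the maximum shear at a
    given depth (the functions of Figure 4); only the near and far depths are
    checked here, so a non-monotonic function such as 410 would need the true
    maximum over the whole interval."""
    xmin, ymin, zmin, xmax, ymax, zmax = bounds
    pad = max(abs(max_shear_at(zmin)), abs(max_shear_at(zmax)))
    return (xmin - pad, ymin - pad, zmin, xmax + pad, ymax + pad, zmax)
```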
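
For step 325, under the assumed conventions (aperture plane at z = 0, optical axis along +z, lens positions expressed as offsets from the aperture center, focal plane at depth focal_depth), a point can be sheared so that whatever the sample's off-center ray would have intersected now lies on the central, image-plane-perpendicular ray.

```python
def shear_point(point, lens_pos, focal_depth):
    """Shear a camera-space point for one image sample so that the sample's
    viewing ray becomes perpendicular to the image plane (step 325).

    point       -- (x, y, z) position of a vertex or primitive sample
    lens_pos    -- (lx, ly) lens position assigned to the image sample
    focal_depth -- depth of the plane in perfect focus
    """
    x, y, z = point
    lx, ly = lens_pos
    # A ray from lens position (lx, ly) through the in-focus point reaches
    # depth z offset by lens_pos * (1 - z / focal_depth) from the central ray,
    # so subtracting that offset maps it onto the central ray. Points on the
    # focal plane (z == focal_depth) are not displaced.
    s = 1.0 - z / focal_depth
    return (x - lx * s, y - ly * s, z)
```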
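
Step 330 then reduces to a two-dimensional test in screen space. The toy version below represents each sheared primitive by nothing more than a screen-space bounding rectangle, a depth, and a color, and returns the nearest primitive covering the sample point; a real renderer would use its normal coverage and compositing machinery here.

```python
def sample_sheared_scene(sample_xy, sheared_prims):
    """Return the nearest sheared primitive whose projected footprint covers
    the image sample point, or None (a toy stand-in for step 330).

    sample_xy     -- (x, y) screen-space location of the image sample point
    sheared_prims -- iterable of dicts with 'bounds' (xmin, ymin, xmax, ymax),
                     'depth', and 'color' keys
    """
    sx, sy = sample_xy
    hit = None
    for prim in sheared_prims:
        xmin, ymin, xmax, ymax = prim['bounds']
        if xmin <= sx <= xmax and ymin <= sy <= ymax:
            if hit is None or prim['depth'] < hit['depth']:
                hit = prim  # keep the closest (smallest depth) primitive
    return hit
```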
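
The maximum-shear-versus-depth functions of Figure 4 can be expressed as ordinary functions of depth. Both examples below are invented for illustration: the first behaves like a conventional camera (compare function 405), and the second permits arbitrary user-specified in-focus depth ranges in the spirit of function 410; the slope and blur_limit parameters are assumptions.

```python
def max_shear_camera(depth, focal_depth, slope, blur_limit):
    """Conventional behaviour (compare function 405): no blur at the focal
    plane, blur growing roughly linearly with distance from it, clamped."""
    return min(blur_limit, slope * abs(depth - focal_depth))

def max_shear_custom(depth, in_focus_ranges, slope, blur_limit):
    """Arbitrary behaviour (compare function 410): zero shear inside any of
    the user-specified (near, far) in-focus depth ranges, blurred elsewhere."""
    distance = min(
        0.0 if near <= depth <= far else min(abs(depth - near), abs(depth - far))
        for near, far in in_focus_ranges
    )
    return min(blur_limit, slope * distance)
```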
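
Finally, the remapping of Figure 5C, which moves a lens position chosen in a square aperture to radial distance L1 * R / L2 inside a circular aperture, might be written as follows; the [-0.5, 0.5] square and the parameter names are assumptions.

```python
import math

def square_to_circular_aperture(lens_pos, radius=0.5, half_side=0.5):
    """Remap a lens position from a square aperture to a circular one
    (Figure 5C). L1 is the point's distance from the aperture center, L2 is
    the distance from the center to the square's edge along the same
    direction, and the point is moved to radial distance L1 * R / L2."""
    lx, ly = lens_pos
    l1 = math.hypot(lx, ly)
    if l1 == 0.0:
        return (0.0, 0.0)            # the center maps to itself
    l2 = half_side * l1 / max(abs(lx), abs(ly))
    scale = radius / l2              # new radius is l1 * radius / l2
    return (lx * scale, ly * scale)
```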

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Image Generation (AREA)
  • Microscopes, Condensers (AREA)

Abstract

According to the invention, depth of field effects for computer-generated images are produced by assigning lens positions to image sample points. For each image sample point, the scene geometry is sheared toward the center of the aperture for that image sample point to account for its assigned lens position. The sheared scene geometry is sampled from a single point of the aperture, such as the center of the aperture. Image sample points with different assigned lens positions sample different sheared versions of the scene, which produces a depth of field effect. The scene geometry is sheared according to its depth and the lens position assigned to the image sample point. The depth of field effect can be characterized by an arbitrary function of depth, including a static or varying focal distance or aperture size, allowing depth of field effects that would not be possible with conventional real-world optical systems. Image sample points and lens positions are determined pseudo-randomly and/or in a stratified manner.
PCT/US2008/064709 2007-05-25 2008-05-23 Shear displacement depth of field WO2008147999A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0918780.8A GB2460994B (en) 2007-05-25 2008-05-23 Shear displacement depth of field

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US94037907P 2007-05-25 2007-05-25
US60/940,379 2007-05-25

Publications (1)

Publication Number Publication Date
WO2008147999A1 (fr) 2008-12-04

Family

ID=40075518

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/064709 WO2008147999A1 (fr) 2007-05-25 2008-05-23 Shear displacement depth of field

Country Status (2)

Country Link
GB (2) GB2483386B (fr)
WO (1) WO2008147999A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194995B2 (en) 2008-09-30 2012-06-05 Sony Corporation Fast camera auto-focus
EP3557532A1 (fr) * 2018-04-20 2019-10-23 Thomson Licensing Device and method for rendering a scene with a depth effect
CN112037363A (zh) * 2019-10-22 2020-12-04 刘建 Driving record big data auxiliary analysis system
WO2022108609A1 (fr) * 2020-11-18 2022-05-27 Leia Inc. Multiview display system and method employing multiview image convergence plane tilt

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010050713A1 (en) * 2000-01-28 2001-12-13 Naoki Kubo Device and method for generating timing signals of different kinds
US20040257375A1 (en) * 2000-09-06 2004-12-23 David Cowperthwaite Occlusion reducing transformations for three-dimensional detail-in-context viewing
US20050248569A1 (en) * 2004-05-06 2005-11-10 Pixar Method and apparatus for visibility determination and processing

Also Published As

Publication number Publication date
GB2460994A (en) 2009-12-23
GB2483386B (en) 2012-08-15
GB0918780D0 (en) 2009-12-09
GB2460994B (en) 2012-05-16
GB2483386A (en) 2012-03-07
GB201119388D0 (en) 2011-12-21

Similar Documents

Publication Publication Date Title
Klein et al. Non-photorealistic virtual environments
US8493383B1 (en) Adaptive depth of field sampling
US8514238B2 (en) System and method for adding vector textures to vector graphics images
US7714866B2 (en) Rendering a simulated vector marker stroke
US8217949B1 (en) Hybrid analytic and sample-based rendering of motion blur in computer graphics
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
WO1995004331A1 (fr) Synthese d'images tridimensionnelles utilisant une interpolation de vues
US8106906B1 (en) Optical system effects for computer graphics
Woo et al. Shadow algorithms data miner
JP2012190428A (ja) 立体映像視覚効果処理方法
US8026915B1 (en) Programmable visible surface compositing
WO2008147999A1 (fr) Shear displacement depth of field
Haller et al. Real-time painterly rendering for mr applications
Hou et al. Real-time Multi-perspective Rendering on Graphics Hardware.
McReynolds et al. Programming with opengl: Advanced techniques
US8373715B1 (en) Projection painting with arbitrary paint surfaces
Trapp et al. Occlusion management techniques for the visualization of transportation networks in virtual 3D city models
US9007388B1 (en) Caching attributes of surfaces without global parameterizations
US7880743B2 (en) Systems and methods for elliptical filtering
Brosz et al. Shape defined panoramas
Trapp et al. 2.5 d clip-surfaces for technical visualization
Meyer et al. Real-time reflection on moving vehicles in urban environments
Jahrmann et al. Interactive grass rendering using real-time tessellation
Romanov ON THE DEVELOPMENT OF SOFTWARE WITH A GRAPHICAL INTERFACE THAT SIMULATES THE ASSEMBLY OF THE CONSTRUCTOR
Matsuyama et al. A Framework for Manipulating Multi-Perspective Image Using A Parametric Surface

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08769703

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase

Ref document number: 0918780

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20080523

WWE Wipo information: entry into national phase

Ref document number: 0918780.8

Country of ref document: GB

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 582312

Country of ref document: NZ

122 Ep: pct application non-entry in european phase

Ref document number: 08769703

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)