WO2007130018A1 - Image-based occlusion culling - Google Patents

Image-based occlusion culling

Info

Publication number
WO2007130018A1
WO2007130018A1 (PCT/US2006/016465)
Authority
WO
WIPO (PCT)
Prior art keywords
computer program
objects
scene
program code
pixels
Prior art date
Application number
PCT/US2006/016465
Other languages
English (en)
Inventor
Angelique Ford
Don Schreiter
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar filed Critical Pixar
Priority to PCT/US2006/016465 priority Critical patent/WO2007130018A1/fr
Publication of WO2007130018A1 publication Critical patent/WO2007130018A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/40 Hidden part removal

Definitions

  • the present invention relates to the field of computer graphics, and in particular to methods and apparatus for decreasing the time and computing resources needed to create computer graphics images.
  • An application such as a rendering or animation application creates an image by projecting surfaces and objects defined in three-dimensional space on to an image plane of a virtual "camera."
  • the image plane includes a set of pixels.
  • the color, transparency, and other optical attributes of surfaces and objects projected on to the image plane are evaluated to determine values of pixels.
  • Surfaces can include, for example: triangles and polygons; higher-order surfaces such as B-splines; subdivision surfaces; and implicit surfaces, among others.
  • a plurality of related surfaces can make up an object model.
  • An object model can be expressed using one-, two-, three-, four- or more dimensional geometry.
  • Objects represent various entities to be rendered, such as a character or a set piece.
  • a group of related object models such as the walls, floor, and furniture of a living room, can define an animation "set.”
  • One or more virtual cameras can be positioned relative to the set in order to "film" a scene.
  • a shot comprises one or more frames of animation, each frame corresponding to a rendered image of the set based on the view and/or point of view of a virtual camera.
  • Each image frame of animation will include an array of pixels as known in the art. In many instances, some of the object models included in a frame or shot will be smaller than the size of a pixel in the rendered image. Further, a pixel might be representative of portions of multiple object models. Even if a pixel represents only a single object, a texture mapped to that object can be complex, whereby the pixel can represent multiple colors or textures. By averaging, weighting, filtering, or otherwise combining these objects, colors, textures, and surfaces for each pixel, a renderer can create a highly detailed image.
  • the rendering process is computationally expensive, however, given the large amount of data and complex computations required to evaluate the attributes of numerous objects. Further, the rendering process is memory intensive, such that attempting to render large or complex scenes can cause the rendering system to run out of memory, even in today's advanced rendering farms. Rendering can be particularly expensive when a frame or shot includes hundreds or thousands of object models. For example, a shot in a sporting stadium set might include object models for tens of thousands of seats and/or spectators. In another example, a shot in the desert might include thousands of object models, such as models for brush, bushes, cactus, rocks, and various other elements viewable in the shot.
  • Each of these object models requires various processes in the rendering, pre-rendering, and production stages, including processes such as lighting and texture mapping. There are many cases, however, where the object models in a shot cannot be seen in the final image, such as where an object is occluded by another object or where the object is so small that it only occupies a small portion of a pixel and therefore does not appreciably affect the attribute(s) for that pixel.
  • a preliminary render can be done for an image or frame of animation.
  • This preliminary render can be done at any appropriate resolution, and can have any shading, lighting, or other processing of the object models turned off or otherwise removed from the rendering process.
  • the render then can be done using only the object models contained within the bounds of the frame, such as any object model that is "in-camera" for the selected frame.
  • the preliminary render can produce an image comprised of an array of pixels. Each pixel can be analyzed to determine one or more object models contributing to that pixel, that is, responsible for determining an attribute for that pixel, such as an associated color.
  • each pixel can be subdivided into any appropriate number of sub-pixels, whereby each sub-pixel can be analyzed to determine which object model contributes to that sub-pixel.
  • Each object model determined to contribute to a pixel or sub-pixel can be assigned to that pixel or sub-pixel.
  • An identifier for each such object model also can be added to a list of visible object models.
  • the group of visible object models can be used for any subsequent processing or rendering for the frame.
  • the group of visible object models is used to determine which object models are not visible in the preliminarily rendered frame.
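The visibility bookkeeping described above can be sketched as follows. This is a minimal illustration, assuming the preliminary render produces a buffer of per-pixel sets of object identifiers; the buffer format and function name are hypothetical, not taken from the patent:

```python
def prune_from_id_buffer(id_buffer, all_model_ids):
    """Collect object IDs visible in a preliminary render and derive
    the set of models that can be deactivated (assumed data layout)."""
    visible = set()
    for row in id_buffer:
        for ids_at_pixel in row:
            # each pixel may record one or more contributing object IDs
            visible.update(ids_at_pixel)
    hidden = set(all_model_ids) - visible
    return visible, hidden
```

For a 2x2 buffer in which only models "A" and "B" ever appear, a third in-camera model "C" would land in the hidden set and could be deactivated for the frame.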
  • Non-visible or otherwise non-dominating object models then can be deactivated for the shot, such as by setting an attribute for each such object model.
  • the attribute can be set in a shot file or any other appropriate file or storage location.
  • Another attribute can be set for each deactivated object model, designating that the model was removed by this process, allowing for a subsequent reversal of at least a portion of the process if desired.
  • an animation tool can use this information to present a user with a display of active and/or deactivated object models in a set.
  • the active models are presented in a first color and the deactivated models in another color, allowing the user to easily discern which models remain active for a given frame or shot.
  • any geometry, surface, texture, or other object can be removed or deactivated for a frame, shot, or image using such an approach, as would be obvious to one of ordinary skill in the art in light of the teachings and suggestions contained herein.
  • FIG. 1 illustrates a set including a number of object models that can be processed in accordance with one embodiment of the present invention
  • FIG. 2 illustrates a relationship between an array of pixels and a set of object models in accordance with one embodiment of the present invention
  • FIG. 3 illustrates a sub-division of pixels in order to process a group of object models in accordance with one embodiment of the present invention
  • FIG. 4 shows steps of a process that can be used in accordance with one embodiment of the present invention
  • FIG. 5 illustrates a processing system that can be used with various embodiments of the present invention.
  • Systems and methods in accordance with various embodiments of the present invention can quickly and easily deactivate in-camera object models that do not appreciably affect the final image. These systems and methods can provide for automatic, in-camera removal of obstructed or otherwise non- viewable object models.
  • a script, procedure, or function evaluates one or more preliminary rendered images of a shot to identify one or more object models that determine the colors and other attributes of the pixels of all of the images of the shot. Any object model in the camera view for that frame or set of frames that does not affect, or only minimally affects, the images of the shot is "pruned" or removed from subsequent processing or rendering, such as by deactivating that object model for the relevant frame(s).
  • the preliminary rendered images of a shot can be created at the same resolution as the desired final rendering, at a lower resolution, or any other appropriate resolution.
  • because the primary purpose of the preliminary rendered images is to determine the visibility of objects from the point of view of the camera, the renderer's usual lighting, texturing, shading, and other similar computations are superfluous and may be omitted from the creation of the preliminary rendered images.
  • Various other approaches and techniques would be understood to one of ordinary skill in the art in light of the teachings and suggestions contained herein.
  • FIG. 1 shows an example animation set 100 including a plurality of object models 102 representing bushes across a terrain.
  • other object models can be contained and/or animated in the set, but are not shown, discussed, or described herein.
  • a frame of animation, representing the view of a virtual camera filming the object model set, may include any appropriate subset of the object models 102 comprising the set 100.
  • the set 100 contains an object model 104 for a bush in the "foreground,” which due to forced perspective appears larger in the projected image than bushes in the "background.” Due to the relative size and position of this first bush 104 in the projected image, a second bush 106 located behind the first bush is partially obscured by the first bush 104. However, a substantial portion of the second bush 106 still can be seen in the image. A third bush 108 further off in the distance, but still in-camera, is totally obscured by the first bush 104 in the projected image.
  • the objects can be removed at render time. This is not desirable, however, as this requires data for each object to be loaded into memory and processed in order to determine whether that object can be seen in the final image. This can result in the renderer running out of memory as discussed above. Further, each object still has to go through various production and preparation stages and processes as known in the art, including texture mapping and lighting, which can result in a significant amount of time being spent that is not otherwise necessary. In one example, the inclusion of a large number of obstructed objects had a rendering preparation time on the order of 6.9 hours for a single frame of animation, before even beginning the rendering process.
  • a method in accordance with one embodiment of the present invention provides a quick and easy way to remove obstructed or otherwise non-discernable object models from a frame or shot of animation well in advance of the final rendering process.
  • Such a method can rely on a shot file or similar file for an image, frame, set of frames, shot, scene, or other appropriate segment of a piece of animation utilizing a set of object models.
  • a shot file can include a list of all object models contained in the model set being used for a shot, for example.
  • a script, program, or procedure can be run against the shot file to "flatten" the shot file, thereby expanding each group in the shot file to obtain a list of object models in the shot.
  • the script, program, or procedure can be in any appropriate computing language, such as Perl, and can be generated, stored, and executed using any appropriate technology known in the art for executing such scripts against data files.
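As a rough illustration of the "flattening" step, the sketch below assumes a hypothetical shot-file representation in which groups are nested dicts carrying a `members` list; the actual shot-file format used at Pixar is not described in the patent:

```python
def flatten_shot(entry):
    """Recursively expand groups in a shot description into a flat
    list of object-model names (hypothetical nested-dict format)."""
    if isinstance(entry, str):  # leaf: an object-model name
        return [entry]
    models = []
    for child in entry.get("members", []):
        models.extend(flatten_shot(child))
    return models
```

For example, a group containing `bush_104` and a nested group of `bush_106` and `bush_108` would flatten to a three-element list of model names.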
  • a procedure can be run against the file to parse the file into object models that can be seen in the finally rendered image and object models that cannot prominently be seen.
  • the object models that cannot prominently be seen can be removed or deactivated for the shot or frame.
  • a renderer or renderer preprocessor can ignore or discard object models and associated data designated as inactive.
  • a preliminary rendered image is generated using only the object models, without any shading, lighting, or other processing.
  • the value of each pixel includes an object identifier for each object visible at that pixel.
  • the preliminary rendered image then can be analyzed on a pixel-by-pixel basis.
  • a script, program, procedure, process, or other executable code portion evaluates the pixels of the preliminary rendered image to create a list of object models visible in the shot.
  • FIG. 2 illustrates an example of a set of pixels 200 that can be used to explain the evaluation of pixels of a preliminary rendered image.
  • there are three bushes that when projected onto the image plane are in-camera, or that are contained within the viewable area of that virtual camera for this frame.
  • the process can step through the pixels to determine which bush models contribute to each pixel.
  • for each of pixels 208, the only object model viewable is the model for bush 202. Therefore, a model attribute for bush 202 can be assigned to those pixels 208.
  • the model for bush 202 can simply be activated in the flattened file or added to an active model file, for example, when it is determined that model 202 contributes to a first one of those pixels 208.
  • when the process reaches pixels 210, it can be determined that those pixels 210 contain a portion of both bush model 202 and bush model 204. Because a larger portion of bush model 204 corresponds to those pixels 210, an embodiment assigns the object model for that bush 204 to those pixels 210, and/or sets the model 204 to active in the appropriate file. In another embodiment, both bush models 202 and 204 can be set to active in the appropriate file.
  • the object model for bush 206 was not assigned to any pixel, or did not contribute substantially to any pixel, such that the bush will not be active in a pruned set file. In an embodiment, even though bush model 206 can be seen in the projected area for pixel 212, its contribution to the pixels is insubstantial and thus bush model 206 would not be activated.
  • all objects visible to at least one pixel of the preliminary rendered image are set to be active.
  • a list of visible objects is generated that includes the names of the object models visible in the preliminary rendered image. A script or program can then compare the list with the shot files to find and deactivate or remove objects not visible in the preliminary rendering.
  • the list of active object models then can be used in subsequent rendering and processing to exclude objects not visible in the preliminary rendered images.
  • a list can be used that contains a listing of all object models, but includes an attribute indicating whether each model is active or deactivated.
  • a list of activated object models and/or deactivated object models is obtained, that list can be used to ensure that the non-viewable object models are deactivated for any or all subsequent processing.
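The compare-and-deactivate step might look like the following sketch, where `active` and `pruned` are illustrative attribute names standing in for whatever attributes the shot file actually uses:

```python
def apply_pruning(shot_models, visible_names):
    """Mark each model record as active or pruned based on the list of
    visible model names ('active'/'pruned' are assumed attribute names)."""
    for model in shot_models:
        is_visible = model["name"] in visible_names
        model["active"] = is_visible
        # record that deactivation came from pruning, so a user or tool
        # can later reverse it or display what was removed
        model["pruned"] = not is_visible
    return shot_models
```

Keeping a separate `pruned` flag, rather than only clearing `active`, is what lets a tool later distinguish models removed by pruning from models a user deactivated by hand.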
  • the advantage here is two-fold.
  • a first advantage can be obtained in production.
  • a person lighting or texture mapping for a shot would otherwise have to load up and process a shot containing thousands of object models, many of which will not even be seen in the rendered shot.
  • By automatically pruning out the unnecessary objects in advance, the user can have a much smaller number of objects to process that actually will affect the appearance of the shot. In previous systems, a user might have to first go into a shot and figure out which objects cannot be seen, then manually deactivate those objects.
  • a pruning process can be used differently for various production tasks than for rendering.
  • a user might just want access to all object models visible for a given shot, knowing that not every object model will affect every frame in that shot.
  • a separate list of active object models might then be used at render time, where only a small number of frames will be rendered and it is not necessary to render every object model in the entire shot.
  • This provides a second advantage at render submission time, wherein only active object models for a five frame portion of the shot, for example, are loaded for rendering.
  • a second pre-pass procedure can be included in scripts, which in one embodiment are run between rendering submission and rendering, in order to ensure that any object model not needed for the shot is pruned before rendering.
  • the rendering process might accomplish some of this pruning if sufficient information is available. This can provide a significant increase in render speed, particularly where a shot involves thousands of objects, for example, but each individual frame might only require a subset on the order of a hundred objects that can actually be seen.
  • Such an approach may not, however, provide sufficient detail for all applications. For example, if the implied distance in a scene is such that objects off in the distance occupy less than a pixel of real estate in the final image, all of those objects might end up being excluded via pruning. For a vast forest containing thousands of trees, for example, the visible tops of the trees off in the distance might occupy less than a pixel each. This could result in all trees past a certain point being excluded from the rendered image, such that at some point the forest might appear to just stop.
  • systems and methods in accordance with some embodiments allow the user to process the preliminary rendered image at a sub-pixel resolution.
  • the pixel array 300 of FIG. 3 shows a 2x2 subdivision of pixels of the preliminary rendered image of FIG. 2.
  • at full-pixel resolution, only object 202 dominates pixel 212 (designated by the bolded bounding box), such that object model 206 would be removed as not dominating any pixel.
  • if bush object model 206 has a drastically different color from that of bush object model 202, for example, it might benefit the accuracy of the final image to factor some of the color of object model 206 into the final color determination for pixel 212.
  • the system can utilize sub-pixel resolution, using appropriate technology known or used in the art to sub-divide the pixels into an appropriate number of regions. As shown in FIG. 3, simply dividing each pixel into a 2x2 array is sufficient to allow object model 206 to contribute to one of the sub-regions 302 (designated by the shaded area), such that object model 206 will be activated for the final render (along with any or all other subsequent processes). This allows object model 206 to affect the color determination for pixel 212 in the final render. In one embodiment, this functionality is embodied in a tool that allows a user to adjust the subdivision level to suit the specific frame(s) or shot.
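The sub-pixel sampling idea can be illustrated as below; `object_at(x, y)` is an assumed callback standing in for the renderer's visibility query at continuous image-plane coordinates, and the regular n x n sample grid is one simple choice among many:

```python
def visible_ids_subpixel(object_at, width, height, n=2):
    """Sample each pixel at an n x n grid of sub-pixel locations and
    collect every object ID seen at any sample point."""
    visible = set()
    for py in range(height):
        for px in range(width):
            for sy in range(n):
                for sx in range(n):
                    # center of each sub-pixel region
                    x = px + (sx + 0.5) / n
                    y = py + (sy + 0.5) / n
                    obj = object_at(x, y)
                    if obj is not None:
                        visible.add(obj)
    return visible
```

With n=1 this degenerates to one sample per pixel, so a small object covering only a corner of a pixel can be missed; raising n to 2 is often enough, as in FIG. 3, to keep such an object active.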
  • a pixel-by-pixel determination and file parsing still can take an appreciable amount of time, however. As discussed above, this time can be drastically decreased by simplifying the object model set used for the determination. For example, a script or process can remove all lighting, textures, simulations, moveable characters, and other such objects or features, such that only the basic object models of static portions of the set are included in the determination. Removing all shaders can leave just the basic set renders. This can drastically decrease the processing time for each pixel, and can increase the accuracy of the determination.
  • Such an approach also can be beneficial for providing various levels of object activation.
  • Setting such an attribute can activate thousands of objects in a set.
  • a camera-frame based culling then can be run, as known in the art, to deactivate all objects that are not visible in the frame, or not "in-camera," at a slightly lower level of the hierarchy. For all objects that are contained in the frame, or "in-camera," a pruning procedure can automatically deactivate in-camera objects at yet a lower level of the object model hierarchy.
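The camera-frame culling level mentioned above can be approximated with a screen-space overlap test; this sketch assumes each model record carries a hypothetical `screen_bbox` attribute giving its projected bounds, which a real pipeline would compute from the camera projection:

```python
def frustum_cull(models, frame_bounds):
    """Deactivate models whose projected bounding box falls entirely
    outside the frame; a 2D stand-in for in-camera culling."""
    xmin, ymin, xmax, ymax = frame_bounds
    for m in models:
        bx0, by0, bx1, by1 = m["screen_bbox"]
        # active if the box overlaps the frame rectangle at all
        m["active"] = (bx1 >= xmin and bx0 <= xmax and
                       by1 >= ymin and by0 <= ymax)
    return models
```

Note this is a coarse, conservative pass: a model that overlaps the frame but is fully occluded survives it, which is exactly why the finer pixel-based pruning pass runs afterward.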
  • any transparent object can be set to invisible, or have another attribute set, in order to remove the object from consideration.
  • a transparent object is handled differently than an invisible object, as a transparent object allows for reflections, etc. Reflective, transparent, translucent, or other such objects can be handled in similar fashion.
  • where the camera and objects in a shot are static, a single pruning process can be executed for that shot. If non-static or animated objects are considered, where object models can be occluded even though the camera is fixed, then a single pruning may not be sufficient.
  • non-static objects can be considered in a process not only to prune objects hidden by the animation, but also to prune any non-static objects that are obscured in the image (such as faces in a crowd that are obscured by another character).
  • a tool useful for pruning also can allow a user to remove anything from consideration for pruning.
  • a user can select an object to exclude which then can be set to invisible for purposes of pruning.
  • a user can set a "prunable" or similar attribute so that the object cannot be pruned.
  • Such a tool can use the pruned flags discussed above to display activated objects to a user in a first color, such as green, and deactivated objects in a second color, such as red, so that a user can see exactly what has been removed by pruning. The user then can easily go back in and re-activate objects that were removed by the pruning procedure.
  • Another animation tool can use pruning to allow a user to examine an animation set at different depths.
  • a user can run a pruning process against a flattened shot file, setting pruning parameters that deactivate non- viewable object models, for example, but that also prune certain object model types, or all but certain object model types.
  • a user might view a set containing mountains, bushes, and other objects, such as is shown in FIG. 1.
  • the process can generate a pruned object model set that allows the user to only view bushes that will be viewable in the final image.
  • FIG. 4 illustrates steps of an example pruning process 400 in accordance with one embodiment. In such a process, an image, frame, set of frames, or shot of animation is selected for processing 402.
  • a preliminary rendering process is executed for the selected frame(s) 404. As discussed above, this can involve first turning off any shaders, lighting, textures, or anything else sitting on top of, or in addition to, the basic object models of the relevant set.
  • the preliminary rendered image may include object identifiers for each pixel to indicate the objects affecting that pixel.
  • the preliminary rendered image can be analyzed to determine which object models contribute to, or determine the attribute(s) for, each pixel 406. This render can be done at any appropriate resolution, such as low resolution or the desired master resolution.
  • the pixels can be sub-divided and a determination can be made for each sub-pixel, allowing object models to remain active that can affect the attribute(s) for a pixel.
  • An active model file is produced that includes an identifier for each object model determined to contribute to at least one pixel in the rendered image(s) 408. After each pixel is analyzed, the active model file can be compared against a shot file for the selected frame(s) 410, the shot file containing an identifier for each object model in the set for the selected frame(s). As discussed above, the shot file can first be flattened to expand any groups and acquire a listing of each object model.
  • Each object model in the shot file that does not appear in the active model file then can be deactivated for the selected frame(s) 414, such as by setting an active attribute in the shot file.
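Tying the steps of FIG. 4 together, a compact end-to-end sketch might read as follows; `preliminary_render` is an assumed callback returning a per-pixel ID buffer with shading and lighting disabled, and the attribute names are illustrative:

```python
def prune_shot(frames, preliminary_render, shot_models):
    """End-to-end sketch of the pruning pass of FIG. 4 over a set of
    frames, using assumed callbacks and attribute names."""
    active = set()
    for frame in frames:                      # steps 402-404: select and render
        id_buffer = preliminary_render(frame)
        for row in id_buffer:                 # step 406: analyze each pixel
            for ids_at_pixel in row:
                active.update(ids_at_pixel)   # step 408: build active model set
    for model in shot_models:                 # steps 410-414: compare, deactivate
        model["active"] = model["name"] in active
        model["pruned"] = not model["active"]
    return shot_models
```

Because the active set is accumulated across all selected frames, a model visible in any one frame of the range stays active for the whole range, matching the shot-level behavior described above.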
  • Such a pruning process can be used to automatically remove any geometry or object models that do not appreciably affect the final image, which in most cases would have been removed anyway during rendering.
  • Such a process can be run before rendering, before render submission, before various production stages, or at any other appropriate time during the animation process.
  • a new shot file can be generated including only the active object models.
  • a pruned attribute can be set for any object model that is indicated as being deactivated in a given file. As discussed above, setting a pruned attribute for each deactivated object model can allow a user to quickly and easily reverse at least a portion of the pruning process, or at least determine which object models were removed by pruning.
  • pruning can be done on any appropriate geometry in a to-be rendered image, such as an object model, surface, texture, lighting effect, particle simulation, micropolygon, or other appropriate geometry or geometric approximation. Such a process also can be done at any point in the animation process, using any appropriate resolution and any appropriate tools useful for such purposes.
  • FIG. 5 illustrates an example computer system 500 suitable for implementing an embodiment of the invention.
  • Computer system 500 typically includes a monitor 502, computer 504, a keyboard 506, a user input device 508, and a network interface 510.
  • User input device 508 includes a computer mouse, a trackball, a track pad, graphics tablet, touch screen, and/or other wired or wireless input devices that allow a user to create or select graphics, objects, icons, and/or text appearing on the monitor 502.
  • Embodiments of network interface 510 typically provide wired or wireless communication with an electronic communications network, such as a local area network, a wide area network, for example the Internet, and/or virtual networks, for example a virtual private network (VPN).
  • Computer 504 typically includes components such as one or more processors 512, and memory storage devices, such as a random access memory (RAM) 514, disk drives 516, and system bus 518 interconnecting the above components.
  • processors 512 can include one or more general purpose processors and optional special purpose processors for processing video data, audio data, or other types of data.
  • RAM 514 and disk drive 516 are examples of tangible computer-readable media for storage of data, audio / video files, computer programs, applet interpreters or compilers, virtual machines, and embodiments of the herein described invention.
  • tangible computer-readable media include floppy disks; removable hard disks; optical storage media such as DVD-ROM, CD-ROM, MINIDISC, optical discs, and bar codes; non-volatile memory devices such as flash memories; read-only memories (ROMs); battery-backed volatile memories; and networked storage devices.
  • Computer-readable media also can include a data signal embodied in a carrier wave.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention concerns the removal of geometry, such as that of object models, that is not visible in an image to be rendered, by means of a pre-render pruning process. A preliminary render can be performed using the object models, without any shading or lighting. A determination can be made as to which object models are visible for each pixel of the image. Object models that are not visible for any pixel, or portion thereof, can be deactivated for the image to be rendered. This deactivation can be performed before various production processes, so as to simplify production as well as reduce memory requirements and processing time during the final render.
PCT/US2006/016465 2006-04-27 2006-04-27 Image-based occlusion culling WO2007130018A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2006/016465 WO2007130018A1 (fr) 2006-04-27 2006-04-27 Image-based occlusion culling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2006/016465 WO2007130018A1 (fr) 2006-04-27 2006-04-27 Image-based occlusion culling

Publications (1)

Publication Number Publication Date
WO2007130018A1 (fr) 2007-11-15

Family

ID=37005810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/016465 WO2007130018A1 (fr) 2006-04-27 2006-04-27 Image-based occlusion culling

Country Status (1)

Country Link
WO (1) WO2007130018A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640174A (zh) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Fixed-viewpoint-based furniture growth animation cloud rendering method and system
CN112044062A (zh) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Game picture rendering method, apparatus, terminal, and storage medium
WO2023280241A1 (fr) * 2021-07-09 2023-01-12 花瓣云科技有限公司 Image frame rendering method and electronic device

Non-Patent Citations (5)

Title
BATAGELO H C ET AL: "Dynamic scene occlusion culling using a regular grid", COMPUTER GRAPHICS AND IMAGE PROCESSING, 2002. PROCEEDINGS. XV BRAZILIAN SYMPOSIUM ON FORTALEZA-CE, BRAZIL 7-10 OCT. 2002, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 7 October 2002 (2002-10-07), pages 43 - 50, XP010624490, ISBN: 0-7695-1846-X *
COHEN-OR D ET AL: "A survey of visibility for walkthrough applications", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS IEEE USA, vol. 9, no. 3, July 2003 (2003-07-01), pages 412 - 431, XP002400736, ISSN: 1077-2626, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/iel5/2945/27179/01207447.pdf?tp=&arnumber=1207447&isnumber=27179> [retrieved on 20060926] *
FRAHLING G ET AL: "Online occlusion culling", ALGORITHMS - ESA 2005. 13TH ANNUAL EUROPEAN SYMPOSIUM. PROCEEDINGS (LECTURE NOTES IN COMPUTER SCIENCE VOL. 3669) SPRINGER-VERLAG BERLIN, GERMANY, 2005, pages 758 - 769, XP019020548, ISBN: 3-540-29118-0 *
KURKA G: "Image-based occluder selection: an introductory overview", 6TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS. PROCEEDINGS INT. INST. INF. & SYST ORLANDO, FL, USA, vol. 6, 2002, pages 6 - 11 vol.6, XP002400734, ISBN: 980-07-8150-1, Retrieved from the Internet <URL:http://www.gup.uni-linz.ac.at/~gk/docs/IBOSSCI2002.pdf> [retrieved on 20060926] *
ZHANG, H.: "Effective Occlusion Culling for the Interactive Display of Arbitrary Models", TECHNICAL REPORT TR99-027, January 1998 (1998-01-01), University of North Carolina, Chapel Hill, USA, pages 1 - 98, XP002400735, Retrieved from the Internet <URL:http://citeseer.ist.psu.edu/cache/papers/cs/229/http:zSzzSzwww.cs.unc.eduzSz~zhanghzSzdissertation.pdf/zhang98effective.pdf> [retrieved on 20060926] *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN111640174A (zh) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Fixed-viewpoint-based furniture growth animation cloud rendering method and system
CN111640174B (zh) * 2020-05-09 2023-04-21 杭州群核信息技术有限公司 Fixed-viewpoint-based furniture growth animation cloud rendering method and system
CN112044062A (zh) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Game picture rendering method, apparatus, terminal, and storage medium
CN112044062B (zh) * 2020-08-27 2022-11-08 腾讯科技(深圳)有限公司 Game picture rendering method, apparatus, terminal, and storage medium
WO2023280241A1 (fr) * 2021-07-09 2023-01-12 花瓣云科技有限公司 Image frame rendering method and electronic device

Similar Documents

Publication Publication Date Title
Klein et al. Non-photorealistic virtual environments
US8633939B2 (en) System and method for painting 3D models with 2D painting tools
US9171390B2 (en) Automatic and semi-automatic generation of image features suggestive of motion for computer-generated images and video
CA2795739C (fr) Format de fichier pour representer une scene
US7750906B2 (en) Systems and methods for light pruning
JP3599268B2 (ja) 画像処理方法、画像処理装置及び記録媒体
US20050219249A1 (en) Integrating particle rendering and three-dimensional geometry rendering
US8217949B1 (en) Hybrid analytic and sample-based rendering of motion blur in computer graphics
EP2674919A2 (fr) Propagation de lumière en continu
US9189883B1 (en) Rendering of multiple volumes
US8416260B1 (en) Sigma buffer for rendering small objects
US8698799B2 (en) Method and apparatus for rendering graphics using soft occlusion
CN111968214B (zh) 一种体积云渲染方法、装置、电子设备及存储介质
US20090033662A1 (en) Multiple artistic look rendering methods and apparatus
US9311737B1 (en) Temporal voxel data structure
US9292954B1 (en) Temporal voxel buffer rendering
US9292953B1 (en) Temporal voxel buffer generation
WO2007130018A1 (fr) Image-based occlusion culling
US9519997B1 (en) Perfect bounding for optimized evaluation of procedurally-generated scene data
Papaioannou et al. Enhancing Virtual Reality Walkthroughs of Archaeological Sites.
Stich et al. Efficient and robust shadow volumes using hierarchical occlusion culling and geometry shaders
Macedo et al. Revectorization‐Based Soft Shadow Mapping
Döllner et al. Expressive virtual 3D city models
Döllner et al. Non-photorealism in 3D geovirtual environments
US20240161406A1 (en) Modifying two-dimensional images utilizing iterative three-dimensional meshes of the two-dimensional images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06758790

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06758790

Country of ref document: EP

Kind code of ref document: A1