EP1907964A4 - System and methods for shading for computer graphics - Google Patents
System and methods for shading for computer graphics
- Publication number
- EP1907964A4 (application EP06774417A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- shader
- phenomenon
- shaders
- metanode
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Definitions
- the invention relates generally to the field of computer graphics, computer-aided design and the like, and more particularly to systems and methods for generating shader systems and using the shader systems so generated in rendering an image of a scene.
- the invention in particular provides a new type of component useful in a computer graphics system, identified herein as a "phenomenon," which comprises a system including a packaged and encapsulated shader DAG ("directed acyclic graph") or set of cooperating shader DAGs, each of which can include one or more shaders, which is generated and encapsulated to assist in defining at least a portion of a scene, in a manner which will ensure that the shaders can correctly cooperate during rendering.
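- As a structural sketch only (all type and member names below are hypothetical, not taken from the patent), a phenomenon can be pictured as an object that owns a set of shader nodes wired into one or more DAGs, designates a primary root used to attach it to a scene element, and exposes a set of externally controllable parameters:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of a shader node: its inputs may be wired to
// other nodes, so the nodes form a directed acyclic graph (DAG).
struct ShaderNode {
    std::string name;                // e.g. "material", "texture"
    std::vector<ShaderNode*> inputs; // upstream nodes in the DAG
};

// A "phenomenon" packages one or more cooperating shader DAGs behind a
// single interface of externally controllable parameters.
struct Phenomenon {
    std::string name;                               // e.g. "wood_material"
    std::vector<std::unique_ptr<ShaderNode>> nodes; // all nodes, owned here
    ShaderNode* primaryRoot = nullptr;              // attaches the phenomenon to a scene element
    std::vector<ShaderNode*> optionalRoots;         // lens, volume, geometry, output, ... roots
    std::vector<std::string> parameters;            // names of externally visible parameters
};
```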
- an artist, draftsman or other user attempts to generate a three-dimensional representation of objects in a scene, as maintained by a computer, and thereafter render respective two-dimensional images of the objects in the scene from one or more orientations.
- in the first, representation generation, phase, computer graphics systems conventionally generate a three-dimensional representation from, for example, various two-dimensional line drawings comprising contours and/or cross-sections of the objects in the scene, by applying a number of operations to such lines which will result in two-dimensional surfaces in three-dimensional space, and by subsequent modification of parameters and control points of such surfaces to correct or otherwise modify the shape of the resulting representation of the object.
- the operator also defines various properties of the surfaces of the objects, the structure and characteristics of light sources which illuminate the scene, and the structure and characteristics of one or more simulated cameras which generate the images.
- after the structure and characteristics of the scene, light source(s) and camera(s) have been defined, in the second phase, an operator enables the computer to render an image of the scene from a particular viewing direction.
- the objects in the scene, light source(s) and camera(s) are defined, in the first, scene definition, phase, by respective multiple-dimensional mathematical representations, including at least the three spatial dimensions, and possibly one time dimension.
- the mathematical representations are typically stored in a tree-structured data structure.
- the properties of the surfaces of the objects are defined by "shade trees," each of which includes one or more shaders which, during the second, scene rendering, phase, enables the computer to render the respective surfaces, essentially providing color values representative of colors of the respective surfaces.
- shaders of a shade tree are generated by an operator, or are provided a priori by a computer graphics system, in a high-level language such as C or C++, which together enable the computer to render an image of a respective surface in the second, scene rendering, phase.
- a number of problems arise from the generation and use of shaders and shade trees as typically provided in computer graphics arrangements.
- shaders generally cannot cooperate with each other unless they are programmed to do so.
- input values provided to shaders are constant values, which limits the shaders' flexibility and ability to render features in an interesting and life-like manner.
- a phenomenon is an encapsulated shader DAG ("directed acyclic graph") comprising one or more nodes, each comprising a shader, or an encapsulated set of such DAGs which are interconnected so as to cooperate, which are instantiated and attached to entities in the scene which are created during the scene definition process to define diverse types of features of a scene, including color and textural features of surfaces of objects in the scene, characteristics of volumes and geometries in the scene, features of light sources illuminating the scene, features of simulated cameras which will be simulated during rendering, and numerous other features which are useful in rendering.
- Phenomena selected for use by an operator in connection with a scene may be predefined, or they may be constructed from base shader nodes by an operator using a phenomenon creator.
- the phenomenon creator ensures that phenomena are constructed so that the shaders in the DAG or cooperating DAGs can correctly cooperate during rendering of an image of the scene.
- prior to being attached to a scene, a phenomenon is instantiated by providing values, or functions which are used to define the values, for each of the phenomenon's parameters, using a phenomenon editor.
- a scene image generator can generate an image of the scene.
- the scene image generator operates in a series of phases, including a pre-processing phase, a rendering phase and a post-processing phase.
- the scene image generator can perform pre-processing operations, such as shadow and photon mapping, multiple inheritance resolution, and the like.
- the scene image generator may perform pre-processing operations if, for example, a phenomenon attached to the scene includes a geometry shader to generate geometry defined thereby for the scene.
- the scene image generator renders the image.
- the scene image generator may perform post-processing operations if, for example, a phenomenon attached to the scene includes a shader that defines post-processing operations, such as depth of field or motion blur calculations which are dependent on velocity and depth information stored in connection with each pixel value in the rendered image.
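- The three phases can be summarized in control-flow form; this is an illustrative sketch only, and Scene, Image and every function name below are invented placeholders rather than the patent's API:

```cpp
#include <vector>

// Scene, Image, and all function names here are hypothetical placeholders.
struct Scene {};
struct Image { std::vector<float> pixels; };

void preprocess(Scene&) { /* shadow/photon mapping, geometry shaders, multiple inheritance resolution, ... */ }
Image renderImage(Scene&) { return Image{}; /* evaluate material/light/volume shader DAGs per sample */ }
void postprocess(Scene&, Image&) { /* output shaders: depth of field, motion blur, compositing, ... */ }

// The scene image generator's three phases, in order.
Image renderScene(Scene& scene) {
    preprocess(scene);                 // pre-processing phase
    Image image = renderImage(scene);  // rendering phase
    postprocess(scene, image);         // post-processing phase
    return image;
}
```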
- the present invention addresses the above-mentioned limitations of the prior art, and provides platform-independent methods and systems that can unite various shading applications under a single language (herein termed the "MetaSL shading language"), enable the simple re-use and re-purposing of shaders, facilitate the design and construction of shaders without need for computer programming, enable the graphical debugging of shaders, and accomplish many other useful functions.
- One aspect of the invention involves methods and systems that facilitate the creation of simple and compact componentized shaders, referred to herein as Metanodes, that can be combined in shader networks to build more complicated and visually interesting shaders.
- MetaSL: the Mental Mill shading language
- MetaSL is designed as a simple yet expressive language specifically for implementing shaders. It is also designed to unify existing shading applications, which previously were focused on specific platforms and contexts (e.g., hardware shading for games, software shading for feature film visual effects), under a single language and management structure.
- the Mental Mill thus enables the creation of Metanodes (i.e., shader blocks) written in MetaSL, which can be attached and combined to form sophisticated shader graphs and phenomena.
- shader graphs provide intuitive graphical user interfaces for creating shaders, which are accessible even to users lacking technical expertise to write shader software code.
- Another aspect of the invention relates to a library of APIs to manage shader creation.
- the Mental Mill GUI libraries harness the shader graph paradigm to provide a complete GUI for building shader graphs and phenomena.
- because the MetaSL shading language is effectively configurable as a superset of all currently existing and future shading languages for specific hardware platforms, and hence independent of such instantiations of special-purpose graphics hardware, it enables the use of dedicated compilers in the Mental Mill system for generating optimized software code for a specific target platform in a specific target shader language (such as Cg, HLSL, or the like), from a single, reusable MetaSL description of a Phenomenon (which in turn is comprised of Metanodes in MetaSL).
- the platform/language optimized code for the specific target platform/language can then be converted to machine code for specific hardware (integrated circuit chip) instantiations by the native compiler for the shading language at issue. (That native compiler need not be, and in the general case will not be, part of Mental Mill.) This can be especially useful, for example, where a particular chip is available at a given point in time, but may soon be superseded by the next generation of that chip.
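- A hedged sketch of that two-stage route (emit target-language source from a single MetaSL description, then hand the result to the target's own native compiler); the emitter names and one-line stub bodies below are invented for illustration:

```cpp
#include <string>

enum class Target { Cg, HLSL, Cpp };

// Hypothetical per-target emitters: real versions would translate the
// MetaSL description into optimized target-language source. The native
// compiler for each target language (not part of Mental Mill) turns
// that source into machine code afterwards.
std::string emitCg(const std::string& src)   { return "// Cg source for: "   + src; }
std::string emitHlsl(const std::string& src) { return "// HLSL source for: " + src; }
std::string emitCpp(const std::string& src)  { return "// C++ source for: "  + src; }

// One reusable MetaSL description in, target-specific source out.
std::string compileMetaSL(const std::string& metaslSource, Target target) {
    switch (target) {
        case Target::Cg:   return emitCg(metaslSource);
        case Target::HLSL: return emitHlsl(metaslSource);
        case Target::Cpp:  return emitCpp(metaslSource);
    }
    return {};  // unreachable for valid Target values
}
```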
- a further aspect of the invention relates to a novel interactive, visual, real-time debugger for the shader programmer/writer (i.e., the programmer in the MetaSL shading language) in the Phenomenon creation environment.
- This debugger, described in greater detail below, allows the effect of a change in even a single line of code to be immediately apparent from visual feedback in a "viewport" where a test scene with the shader, Metanode, or Phenomenon under development is constantly rendered.
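- One plausible shape for such a loop, sketched under the assumption that the debugger watches the shader source and re-renders the test scene on each successful recompile (recompile and renderTestScene are hypothetical stand-ins, not the patent's API):

```cpp
#include <chrono>
#include <filesystem>
#include <iostream>
#include <thread>

// Hypothetical stand-ins for the real compile and viewport-update steps.
bool recompile(const std::filesystem::path& p) { std::cout << "compiling " << p << '\n'; return true; }
void renderTestScene() { std::cout << "viewport updated\n"; }

// Re-render the constantly displayed test scene whenever the shader
// source changes, so a one-line edit is immediately visible.
void debugLoop(const std::filesystem::path& shaderFile) {
    std::filesystem::file_time_type lastWrite{};
    for (;;) {
        auto t = std::filesystem::last_write_time(shaderFile);
        if (t != lastWrite) {
            lastWrite = t;
            if (recompile(shaderFile))  // skip rendering on compile errors
                renderTestScene();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}
```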
- FIG. 1 depicts a computer graphics system that provides for enhanced cooperation among shaders by facilitating generation of packaged and encapsulated shader DAGs, each of which can include one or more shaders, which shader DAGs are generated in a manner so as to ensure that the shaders in the shader DAG can correctly cooperate during rendering, constructed in accordance with the invention.
- FIG. 2 is a functional block diagram of the computer graphics system depicted in FIG. 1.
- FIG. 3 depicts a graphical user interface for one embodiment of the phenomenon creator used in the computer graphics system whose functional block diagram is depicted in FIG. 2.
- FIG. 4 graphically depicts an illustrative phenomenon generated using the phenomenon creator depicted in FIGS. 2 and 3.
- FIG. 5 depicts a graphical user interface for one embodiment of the phenomenon editor used in the computer graphics system whose functional block diagram is depicted in FIG. 2.
- FIGS. 6A and 6B depict details of the graphical user interface depicted in FIG. 5.
- FIGS. 7 and 7A show a flowchart depicting operations performed by a scene image generation portion of the computer graphics system depicted in FIG. 2 in generating an image of a scene.
- FIG. 8 depicts a flowchart of an overall method according to an aspect of the invention.
- FIG. 9 depicts a software layer diagram illustrating the platform independence of the mental mill.
- FIG. 10 depicts an illustration of the levels of MetaSL as subsets.
- FIG. 11 depicts a bar diagram illustrating the levels of MetaSL and their applicability to hardware and software rendering.
- FIG. 12 depicts a screenshot of a graphical performance analysis window.
- FIG. 13 depicts a bar graph of performance results with respect to a range of values of particular input parameter.
- FIG. 14 depicts a screen view in which performance results are displayed in tabular form.
- FIG. 15 depicts a diagram of a library module, illustrating the major categories of the mental mill libraries.
- FIG. 16 depicts a diagram of the mental mill compiler library.
- FIG. 17 depicts a diagram of a mental mill based renderer.
- FIG. 18 depicts a screen view of a Phenomenon graph editor.
- FIGS. 19-23 depict a series of screen shots illustrating the operation of a
- FIG. 24 depicts a view of a shader parameter editor.
- FIGS. 25 A-B depict, respectively, a thumbnail view and a list view of a Phenomenon/Metanode library explorer.
- FIG. 26 depicts a view of a code editor and IDE.
- FIGS. 27 A-C depict a series of views of a debugger screen, in which numeric values for variables at pixel locations are displayed based upon mouse location.
- FIG. 28 depicts a diagram of a GUI library architecture.
- FIG. 29 shows a table listing the methods required to implement a BRDF shader.
- FIG. 30 shows a diagram of an example configuration in which two BRDFs are mixed.
- FIG. 31 is a diagram illustrating a pipeline for shading with acquired BRDFs.
- FIG. 32 depicts a screenshot of a basic layout of a GUI.
- FIG. 33 depicts a view of a graph node.
- FIG. 34 depicts a view of a graph node including structures of sub-parameters.
- FIG. 35 depicts a sample graph view when inside a Phenomenon.
- FIG. 36 depicts a sample graph view when the Phenomenon is opened in-place.
- FIG. 37 depicts a sample graph view when a Phenomenon is opened inside another Phenomenon.
- FIG. 38 depicts a view illustrating the result of attaching a color output to a scalar input.
- FIG. 39 depicts a view illustrating shaders boxed up into a new Phenomenon node.
- FIG. 40 depicts a bird's eye view control for viewing a sample shader graph.
- FIGS. 41A-D depict a series of views illustrating the progression of node levels of detail.
- FIGS. 42 A-B depict a toolbox in thumbnail view and list view.
- FIG. 43 depicts a parameter view for displaying controls that allows parameters of a selected node to be edited.
- FIG. 44 depicts a view of a code editor that allows a user of MetaSL to create shaders by writing code.
- FIG. 45 depicts a view illustrating the combination of two metanodes into a third metanode.
- FIG. 46 depicts a view of a portion of a variable list.
- FIGS. 47A-C depict a series of views illustrating a visualization technique in which a vector is drawn as an arrow positioned on an image surface as a user drags a mouse over the image surface.
- FIG. 48 depicts a view illustrating a visualization technique for a matrix.
- FIG. 49 depicts a view illustrating a visualization technique for three direction vectors.
- FIG. 50 depicts a view illustrating a visualization technique for viewing vector type values using a gauge style display.
- FIG. 51 depicts a table listing Event_type parameters and their descriptions.
- FIG. 52 depicts a table illustrating the results of a vector construction method.
- FIG. 53 depicts a table setting forth Boolean operators.
- FIG. 54 depicts a table listing comparison operators.
- FIG. 55 depicts a schematic of a bump map Phenomenon.
- FIG. 56 depicts a diagram illustrating the bump map Phenomenon in use.
- FIG. 57 depicts a schematic of a bump map Phenomenon according to a further aspect of the invention.
- FIG. 58 depicts a diagram of a bump map Phenomenon in use.
- FIGS. 59A-B depict a table listing a set of state variables.
- FIG. 60 depicts a table listing transformation matrices.
- FIG. 61 depicts a table listing light shader state variables.
- FIG. 62 depicts a table listing volume shader state variables.
- FIG. 63 depicts a table listing the methods of the Trace_options class.
- FIGS. 64-65 set forth tables listing the functions that are provided as part of the intersection state and depend on values accessible through the state variable.
- FIG. 66 depicts a table listing members of the Light_iterator class.
- FIG. 67 depicts a diagram of the MetaSL compiler.
- FIG. 68 depicts a diagram of the MetaSL compiler according to an alternative aspect of the invention.
- FIG. 69 depicts a screenshot of a debugger screen according to a further aspect of the invention.
- FIG. 70 depicts a screenshot of a debugger screen if there are compile errors when loading a shader.
- FIG. 71 depicts a screenshot of a debugger screen once a shader has been successfully loaded and compiled without errors, at which point debugging can begin by selecting a statement.
- FIG. 72 depicts a screenshot of a debugger screen when the selected statement is conditional.
- FIG. 73 depicts a screenshot of a debugger screen when the selected statement is in a loop.
- FIG. 74 depicts a screenshot of a debugger screen, in which texture coordinates are viewed.
- FIG. 75 depicts a screenshot of a debugger screen, in which parallax mapping produces the illusion of depth by deforming texture coordinates.
- FIG. 76 depicts a screenshot of a debugger screen, in which the offset of texture coordinates can be seen when looking at texture coordinates in the debugger.
- FIGS. 77 and 78 show screenshots of a debugger screen illustrating other shader examples.
- the present invention provides improvements to the computer graphics entity referred to as a "phenomenon", which was described in commonly owned U.S. Patent No. 6,496,190, incorporated herein by reference. Accordingly, we first discuss, in Section I below, the various aspects of the computer graphics "phenomenon" described in U.S. Patent No. 6,496,190, and then, in Section II, which is subdivided into four subsections, we discuss the present improvements to the phenomenon entity.
- Section I. Computer Graphics "Phenomena". U.S. Patent No. 6,496,190 described a new computer graphics system and method that provided enhanced cooperation among shaders by facilitating generation of packaged and encapsulated shader DAGs, each of which can include one or more shaders, generated in a manner so as to ensure that the shaders in the shader DAGs can correctly cooperate during rendering.
- a computer graphics system in which a new type of entity, referred to as a "phenomenon," can be created, instantiated and used in rendering an image of a scene.
- a phenomenon is an encapsulated shader DAG comprising one or more nodes each comprising a shader, or an encapsulated set of such DAGs which are interconnected so as to cooperate, which are instantiated and attached to entities in the scene which are created during the scene definition process to define diverse types of features of a scene, including color and textural features of surfaces of objects in the scene, characteristics of volumes and geometries in the scene, features of light sources illuminating the scene, features of simulated cameras which will be simulated during rendering, and numerous other features which are useful in rendering.
- Phenomena selected for use by an operator in connection with a scene may be predefined, or they may be constructed from base shader nodes by an operator using a phenomenon creator.
- the phenomenon creator ensures that phenomena are constructed so that the shaders in the DAG or cooperating DAGs can correctly cooperate during rendering of an image of the scene.
- prior to being attached to a scene, a phenomenon is instantiated by providing values, or functions which are used to define the values, for each of the phenomenon's parameters, using a phenomenon editor.
- a scene image generator can generate an image of the scene.
- the scene image generator operates in a series of phases, including a pre-processing phase, a rendering phase and a post-processing phase.
- the scene image generator can perform pre-processing operations, such as shadow and photon mapping, multiple inheritance resolution, and the like.
- the scene image generator may perform pre-processing operations if, for example, a phenomenon attached to the scene includes a geometry shader to generate geometry defined thereby for the scene.
- the scene image generator renders the image.
- the scene image generator may perform post-processing operations if, for example, a phenomenon attached to the scene includes a shader that defines post-processing operations, such as depth of field or motion blur calculations which are dependent on velocity and depth information stored in connection with each pixel value in the rendered image.
- FIG. 1 depicts elements comprising a computer graphics system 10 constructed in accordance with the invention.
- the computer graphics system 10 provides for enhanced cooperation among shaders by facilitating generation of new computer graphic components, referred to herein as "phenomenon" (in the singular) or "phenomena” (in the plural), which are used to define features of a scene for use in rendering.
- a phenomenon is a packaged and encapsulated system comprising one or more shaders, which are organized and interconnected in the form of one or more directed acyclic graphs ("DAGs"), with each DAG including one or more shaders.
- the phenomena generated by the computer graphics system 10 are generated in such a manner as to ensure that the shader or shaders in each shader DAG can correctly cooperate during rendering, to facilitate the rendering of realistic or complex visual effects.
- the computer graphics system 10 generates the phenomena such that the shaders in all of the shader DAGs can correctly cooperate during the rendering, to facilitate the rendering of progressively realistic or complex visual effects.
- the computer graphics system 10 in one embodiment includes a computer including a processor module 11 and operator interface elements comprising operator input components such as a keyboard 12A and/or a mouse 12B (generally identified as operator input element(s) 12) and an operator output element such as a video display device 13.
- the illustrative computer system 10 is of the conventional stored-program computer architecture.
- the processor module 11 includes, for example, processor, memory and mass storage devices such as disk and/or tape storage elements (not separately shown) which perform processing and storage operations in connection with digital data provided thereto.
- the operator input element(s) 12 are provided to permit an operator to input information for processing.
- the video display device 13 is provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing.
- the processor module 11 generates information for display by the video display device 13 using a so-called “graphical user interface” ("GUI"), in which information for various applications programs is displayed using various "windows.”
- the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.
- the processor module 11 may include one or more network ports, generally identified by reference numeral 14, which are connected to communication links which connect the computer system 10 in a computer network.
- the network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network.
- certain computer systems in the network are designated as servers, which store data and programs (generally, "information") for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information.
- a client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage.
- a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network.
- the communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems.
- Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message.
- computer graphics system 10 provides for enhanced cooperation among shaders by facilitating generation of phenomena comprising packaged and encapsulated shader DAGs or cooperating shader DAGs, with each shader DAG comprising at least one shader, which define features of a three-dimensional scene.
- Phenomena can be used to define diverse types of features of a scene, including color and textural features of surfaces of objects in the scene, characteristics of volumes and geometries in the scene, features of light sources illuminating the scene, features of simulated cameras or other image recording devices which will be simulated during rendering, and numerous other features which are useful in rendering as will be apparent from the following description.
- the phenomena are constructed so as to ensure that the shaders in the DAG or cooperating DAGs can correctly cooperate during rendering of an image of the scene.
- FIG. 2 depicts a functional block diagram of the computer graphics system 10 used in one embodiment of the invention.
- the computer graphics system 10 includes two general portions, including a scene structure generation portion 20 and a scene image generation portion 21.
- the scene structure generation portion 20 is used by an artist, draftsman or the like (generally, an "operator") during a scene entity generation phase to generate a representation of various elements which will be used by the scene image generation portion 21 in rendering an image of the scene, which may include, for example, the objects in the scene and their surface characteristics, the structure and characteristics of the light source or sources illuminating the scene, and the structure and characteristics of a particular device, such as a camera, which will be simulated in generating the image when the image is rendered.
- the representation generated by the scene structure generation portion 20 is in the form of a mathematical representation, which is stored in the scene object database 22.
- the mathematical representation is evaluated by the image rendering portion 21 for display to the operator.
- the scene structure generation portion 20 and the scene image generation portion 21 may reside on and form part of the same computer, in which case the scene object database 22 may also reside on that same computer or alternatively on a server for which the computer 20 is a client.
- the portions 20 and 21 may reside on and form parts of different computers, in which case the scene object database 22 may reside on either computer or a server for both computers.
- the scene structure generation portion 20 is used by the operator to generate a mathematical representation defining the geometric structures of the objects in the scene, the locations and geometric characteristics of light sources illuminating the scene, and the locations and geometric and optical characteristics of the cameras to be simulated in generating the images that are to be rendered.
- the mathematical representation preferably defines the three spatial dimensions, and thus identifies the locations of the object in the scene and the features of the objects.
- the objects may be defined in terms of their one-, two- or three-dimensional features, including straight or curved lines embedded in a three-dimensional space, two- dimensional surfaces embedded in a three-dimensional space, one or more bounded and/or closed three-dimensional surfaces, or any combination thereof.
- the mathematical representations may also define a temporal dimension, which may be particularly useful in connection with computer animation, in which the objects and their respective features are considered to move as a function of time.
- the mathematical representation further defines the one or more light sources which illuminate the scene and a camera.
- the mathematical representation of a light source particularly defines the location and/or the direction of the light source relative to the scene and the structural characteristics of the light source, including whether the light source is a point source, a straight or curved line, a flat or curved surface or the like.
- the mathematical representation of the camera particularly defines the conventional camera parameters, including the lens or lenses, focal length, orientation of the image plane, and so forth.
- the scene structure generation portion 20 also facilitates generation of phenomena, which will be described in detail below, and association of the phenomena to respective elements of the scene.
- Phenomena generally define other information that is required for the completion of the definition of the scene which will be used in rendering. This information includes, but is not limited to, characteristics of the colors, textures, and so forth, of the surfaces of the geometrical entities defined by the scene structure generation portion 20.
- a phenomenon may include mathematical representations or other objects which, when evaluated during the rendering operation, will enable the computer generating the rendered image to display the respective surfaces in the desired manner.
- the scene structure generation portion 20, under control of the operator, effectively associates the phenomena with the mathematical representations for the respective elements (that is, objects, surfaces, volumes and the like) with which they are to be used, effectively "attaching" the phenomena to the respective elements.
- the scene image generation portion 21 is used by an operator during a rendering phase to generate an image of the scene on, for example, the video display unit 13 (FIG. 1).
- the scene structure generation portion 20 includes several elements, including an entity geometrical representation generator 23, a phenomenon creator 24, a phenomenon database 25, a phenomenon editor 26, a base shader node database 32, a phenomenon instance database 33 and a scene assembler 34, all of which operate under control of operator input information entered through an operator interface 27.
- the operator interface 27 may generally include the operator input devices 12 and the video display unit 13 of computer graphics system 10 as described above in connection with FIG. 1.
- the entity geometrical representation generator 23, under control of operator input from the operator interface 27, facilitates the generation of the mathematical representation of the objects in the scene and the light source(s) and camera as described above.
- the phenomenon creator 24 provides a mechanism whereby the operator, using the operator interface 27 and base shader nodes from the base shader node database 32, can generate phenomena which can be used in connection with the scene or otherwise (as will be described below). After a phenomenon is generated by the phenomenon creator 24, it (that is, the phenomenon) will be stored in the phenomenon database 25. After a phenomenon has been stored in the phenomenon database 25, an instance of the phenomenon can be created by the phenomenon editor 26. In that operation, the operator will use the phenomenon editor 26 to provide values for the phenomenon's various parameters (if any).
- the phenomenon editor 26 allows the operator, through the operator interface 27, to establish, adjust or modify the particular feature.
- the values for the parameters may be either fixed, or they may vary according to a function of a variable (illustratively, time).
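- A minimal sketch of that distinction, assuming a scalar parameter for simplicity (ParamValue and evaluate are hypothetical names): a parameter holds either a fixed constant or a function of the controlling variable, and instantiation fixes which of the two it is:

```cpp
#include <functional>
#include <variant>

// Hypothetical parameter value: either a fixed constant or a function of
// a controlling variable such as time. Instantiating a phenomenon fixes
// which of the two each parameter holds.
using ParamValue = std::variant<double, std::function<double(double)>>;

double evaluate(const ParamValue& v, double time) {
    if (const double* fixed = std::get_if<double>(&v))
        return *fixed;                                        // fixed value
    return std::get<std::function<double(double)>>(v)(time); // time-varying value
}
```

- Under this sketch, a fixed glossiness would be written as ParamValue{0.8}, while a time-varying turbulence could be ParamValue{[](double t) { return 0.5 + 0.1 * t; }}.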
- the operator, using the scene assembler 34, can attach phenomenon instances generated using the phenomenon editor 26 to elements of the scene as generated by the entity geometrical representation generator 23.
- the phenomenon editor 26 has been described as retrieving phenomena from the phenomenon database 25 which have been generated by the phenomenon creator 24 of the scene structure generation portion 20 of computer graphics system 10, it will be appreciated that one or more, and perhaps all, of the phenomena provided in the computer graphics system 10 may be predefined and created by other devices (not shown) and stored in the phenomenon database 25 for use by the phenomenon editor 26. In such a case, the operator, controlling the phenomenon editor through the operator interface 27, can select appropriate predefined phenomena for attachment to the scene.
- the scene image generation portion 21 includes several components including an image generator 30 and an operator interface 31. If the scene image generation portion 21 forms part of the same computer as the scene structure generation portion 20, the operator interface 31 may, but need not, comprise the same components as operator interface 27. On the other hand, if the scene image generation portion 21 forms part of a different computer from the computer of which the scene structure generation portion 20 forms a part, the operator interface 31 will generally comprise different components from operator interface 27, although the components of the two operator interfaces 31 and 27 may be similar.
- the image generator 30, under control of the operator interface 31, retrieves the representation of the scene to be rendered from the scene representation database 22 and generates a rendered image for display on the video display unit of the operator interface 31.
- a phenomenon provides information that, in addition to the mathematical representation generated by the entity geometrical representation generator 23, is used to complete the definition of the scene which will be used in rendering, including, but not limited to, characteristics of the colors, textures, and closed volumes, and so forth, of the surfaces of the geometrical entities defined by the scene structure generation portion 20.
- a phenomenon comprises one or more nodes interconnected in the form of a directed acyclic graph ("DAG") or a plurality of cooperating DAGs. One of the nodes is a primary root node which is used to attach the phenomenon to an entity in a scene, or, more specifically, to a mathematical representation of the entity.
- the shader nodes can comprise any of a plurality of conventional shaders, including conventional simple shaders, as well as texture shaders, material shaders, volume shaders, environmental shaders, shadow shaders, and displacement shaders, which can be used in connection with generating a representation to be rendered.
- a number of other types of shader nodes can be used in a phenomenon, including (i) Geometry shaders, which can be used to add geometric objects to the scene.
- Geometry shaders essentially comprise predefined static or procedural mathematical representations of entities in three-dimensional space, similar to representations that are generated by the entity geometrical representation generator 23 in connection with entities in the scene, except that they can be provided at pre-processing time to, for example, define respective regions in which other shaders used in the respective phenomenon are to be delimited.
- a geometry shader essentially has access to the scene construction elements of the entity geometrical representation generator 23 so that it can alter the scene representation as stored in the scene object database to, for example, modify or create new geometric elements of the scene in either a static or a procedural manner.
- a Phenomenon that consists entirely of a geometry shader DAG or of a set of cooperating geometry shader DAGs can be used to represent objects in a scene in a procedural manner. This is in contrast to typical modeling, which is accomplished in a modeling system by a human operator by performing a sequence of modeling operations to obtain the desired representation of an object in the computer.
- a geometry phenomenon represents an encapsulated and automated, parameterized abstract modeling operation.
- (ii) Photon shaders, which can be used to control the paths of photons in the scene and the characteristics of interaction of photons with surfaces of objects in the scene, such as absorption, reflection and the like. Photon shaders facilitate the physically correct simulation of global illumination and caustics in connection with rendering.
- photon shaders are used by the scene image generator 30 during a pre-processing operation.
- (iii) Photon volume shaders, which are similar to photon shaders, except that they operate in connection with a three-dimensional volume of space in the scene instead of on the surface of an object. This allows simulation of caustics and global illumination to be extended to volumes and accompanying enclosed participating media, such as scattering of photons by dust or fog particles in the air, by water vapor such as in clouds, or the like.
- (iv) Photon emitter shaders, which are also similar to photon shaders, except that they are related to light sources and hence to emission of photons.
- the simulated photons for which emission is simulated in connection with photon emitter shaders may then be processed in connection with the photon shaders, which can be used to simulate path and surface interaction characteristics of the simulated photons, and photon volume shaders, which can be used to simulate path and other characteristics in three-dimensional volumes, in particular along the respective paths.
- (v) Contour shaders, which are used in connection with generation of contour lines during rendering.
- contour shaders there are three sub-types of contour shaders, namely, contour store shaders, contour contrast shaders and contour generation shaders.
- a contour store shader is used to collect contour sampling information for, for example, a surface.
- a contour contrast shader is used to compare two sets of the sampling information which is collected by use of a contour store shader.
- a contour generation shader is used to generate contour dot information for storage in a buffer, which is then used by an output shader (described below) in generating contour lines.
- (vi) Output shaders, which are used to process information in buffers generated by the scene image generator 30 during rendering.
- An output shader can access pixel information generated during rendering to, in one embodiment, perform compositing operations, complex convolutions, and contour line drawing from contour dot information generated by contour generation shaders as described above.
- (vii) Three-dimensional volume shaders, which are used to control how light, other visible rays and the like pass through part or all of the empty three-dimensional space in a scene.
- a three-dimensional volume shader may be used for any of a number of types of volume effects, including, for example, fog, and procedural effects such as smoke, flames, fur, and particle clouds.
- (viii) Light shaders, which are used to control emission characteristics of light sources, including, for example, color, direction, and attenuation characteristics which can result from properties such as the shapes of respective light sources, texture projection, shadowing and other light properties.
- other types of shaders which may be useful in connection with the definition of a scene may also be used in a phenomenon.
- a phenomenon is defined by (i) a description of the phenomenon's externally-controllable parameters, (ii) one primary root node and, optionally, one or more optional root nodes, (iii) a description of the internal structure of the phenomenon, including the identification of the shaders that are to be used as nodes and how they are interconnected to form a DAG or a plurality of cooperating DAGs, and (iv) optionally, a description of dialog boxes and the like which may be defined by the phenomenon for use by the phenomenon editor 26 to allow the operator to provide values for parameters or properties that will be used in evaluation of the respective phenomenon.
- a phenomenon may include external declarations and link-executable code from libraries, as is standard in programming.
- a phenomenon may include a plurality of cooperating DAGs.
- information generated from processing of one or more nodes of a first DAG in the phenomenon may be used in processing in connection with one or more nodes of a second DAG in the phenomenon.
- the two DAGs are, nonetheless, processed independently, and may be processed at different stages in the rendering process.
- the information generated by a respective node in the first DAG which may be "cooperating" with a node in the second DAG (that is, which may be used by the node in the second DAG in its processing) may be transferred from the respective node in the first DAG to the node in the second DAG over any convenient communication channel, such as a buffer which may be allocated therefor.
- Providing all of the DAGs which may need to cooperate in this manner in a single phenomenon ensures that all of the conditions for cooperation will be satisfied, which may not be the case if the DAGs are provided unencapsulated or separated in distinct phenomena or other entities.
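- A minimal sketch of such a channel, assuming the simplest possible form of a named buffer shared between stages (CooperationBuffers and its members are hypothetical names, not the patent's API): a DAG evaluated in an earlier stage writes under an agreed name, and a DAG evaluated in a later stage reads it back.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical communication channel between cooperating DAGs inside one
// phenomenon: a stage that runs early (e.g. a photon or contour store
// DAG) writes into a named buffer, and a stage that runs later (e.g. a
// material or output DAG) reads it back.
struct CooperationBuffers {
    std::map<std::string, std::vector<float>> buffers;

    void write(const std::string& name, std::vector<float> data) {
        buffers[name] = std::move(data);
    }
    const std::vector<float>& read(const std::string& name) const {
        return buffers.at(name);  // throws if the producing stage never ran
    }
};
```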
- a phenomenon may include several DAGs, including a material shader DAG, an output shader DAG and instructions for generating a label frame buffer.
- the material shader DAG includes at least one material shader for generating a color value for a material and also stores label information about the objects which are encountered during processing of the material shader DAG in the label frame buffer which is established in connection with processing of the label frame buffer generation instructions.
- the output shader DAG includes at least one output shader which retrieves the label information from the label frame buffer to facilitate performing object-specific compositing operations.
- the phenomenon may also have instructions for controlling operating modes of the scene image generator 30 such that both DAGs can function and cooperate. For example, such instructions may control the minimum sample density required for the two DAGs to be evaluated.
- a material phenomenon may represent a material that is simulated by both a photon shader DAG, which includes at least one photon shader, and a material shader DAG, which includes at least one material shader.
- the photon shader DAG will be evaluated during caustics and global illumination pre-processing, and the material shader DAG will be evaluated later during rendering of an image.
- information representing simulated photons will be stored in such a way that it can be used during later processing of the material shader DAG to add lighting contributions from the caustic or global illumination pre-processing stage.
- the photon shader DAG stores the simulated photon information in a photon map, which is used by the photon shader DAG to communicate the simulated photon information to the material shader DAG.
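- The photon map can be pictured as follows; this is an illustrative sketch only (Photon and PhotonMap are invented names, and a linear scan stands in for the spatial index, such as a kd-tree, that a real photon map would use):

```cpp
#include <vector>

struct Photon { float pos[3]; float power[3]; };

// Hypothetical photon map: the photon shader DAG deposits photons during
// the caustics/global-illumination pre-processing pass; the material
// shader DAG later gathers photons near a shading point to add their
// lighting contribution.
struct PhotonMap {
    std::vector<Photon> photons;

    void deposit(const Photon& p) { photons.push_back(p); }

    // Gather photons within `radius` of point `x` (linear scan for brevity).
    std::vector<Photon> gather(const float x[3], float radius) const {
        std::vector<Photon> out;
        for (const auto& p : photons) {
            float dx = p.pos[0] - x[0], dy = p.pos[1] - x[1], dz = p.pos[2] - x[2];
            if (dx * dx + dy * dy + dz * dz <= radius * radius) out.push_back(p);
        }
        return out;
    }
};
```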
- a phenomenon may include a contour shader DAG, which includes at least one shader of the contour shader type, and an output shader DAG, which includes at least one output shader.
- the contour shader DAG is used to determine how to draw contour lines by storing "dots" of a selected color, transparency, width and other attributes.
- the output shader DAG is used to collect all dots created during rendering and, when the rendering is completed, join them into contour lines.
- the contour shader DAG includes a contour store shader, a contour contrast shader and a contour generation shader.
- the contour store shader is used to collect sampling information for later use by a contour contrast shader.
- the contour contrast shader is used to determine whether the sampling information collected by the contour store shader is such that a contour dot is to be placed in the image, and, if so, the contour generation shader actually places the contour dot.
- This illustrative phenomenon illustrates four-stage cooperation, including (1) a first stage, in which sampling information is collected (by the contour store shader); (2) a second stage, in which the decision is made as to whether a contour dot is to be placed (by the contour contrast shader); (3) a third stage, in which the contour dot is created (by the contour generation shader); and (4) a fourth stage, in which the created contour dots are joined into contour lines (by the output shader DAG).
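- The four stages can be lined up in code; the sketch below is hypothetical throughout (Sample, Dot and the stage stubs are invented, and the contrast test shown, an object-label change, is just one plausible criterion):

```cpp
#include <utility>
#include <vector>

// Hypothetical types and stage stubs for the four-stage contour cooperation.
struct Sample { float depth = 0; int objectLabel = 0; };    // stage 1: collected per sample
struct Dot    { float x = 0, y = 0, width = 0; };           // a placed contour dot

bool contrast(const Sample& a, const Sample& b) {           // stage 2 decision
    return a.objectLabel != b.objectLabel;                  // e.g. an object silhouette
}
Dot makeDot(const Sample&, const Sample&) { return Dot{}; } // stage 3 placement
void joinIntoLines(std::vector<Dot>&) { /* stage 4: assemble dots into lines */ }

std::vector<Dot> contourPipeline(const std::vector<std::pair<Sample, Sample>>& pairs) {
    std::vector<Dot> dots;
    for (const auto& [a, b] : pairs)        // stage 1: sampling info was collected
        if (contrast(a, b))                 // stage 2: contour contrast shader
            dots.push_back(makeDot(a, b));  // stage 3: contour generation shader
    joinIntoLines(dots);                    // stage 4: output shader DAG joins dots
    return dots;
}
```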
- a phenomenon may include a volume shader DAG and a geometry shader DAG.
- the volume shader DAG includes at least one volume shader that defines properties of a bounded volume, for example a fur shader that simulates fur within the bounded volume.
- the geometry shader DAG includes at least one geometry shader that is used to include an outer boundary surface as a new geometry into the scene before rendering begins, with appropriate material and volume shader DAGs attached to the outer boundary surface to define the calculations that are to be performed for the fur in connection with the original volume shader DAG.
- the cooperation is between the geometry shader DAG and the volume shader DAG, with the geometry shader DAG introducing a procedural geometry in which the geometry shader DAG supports the volume shader DAG.
- the volume shader DAG makes use of this geometry, but it would not be able to create the geometry itself since the geometry is generated using the geometry shader DAG during a pre-processing operation prior to rendering, whereas the volume shader DAG is used during rendering.
- the cooperation illustrated in connection with this fourth illustrative example differs from that illustrated in connection with the first through third illustrative examples since the shader or shaders comprising the geometry shader procedurally provide elements that are used by the volume shader DAG, and do not just store data, as is the case in connection with the cooperation in connection with the first through third illustrative examples.
- FIG. 3 depicts a phenomenon creator window 40, which the phenomenon creator 24 enables the operator interface 27 to display to the operator, to enable the operator to define a new phenomenon and modify the definition of an existing phenomenon.
- the phenomenon creator window 40 includes a plurality of frames, including a shelf frame 41, a supported graph node frame 42, a controls frame 43 and a phenomenon graph canvas frame 44.
- the shelf frame 41 can include one or more phenomenon icons, generally identified by reference numeral 45, each of which represents a phenomenon which has been at least partially defined for use in the scene structure generation portion 20.
- the supported graph node frame 42 includes one or more icons, generally identified by reference numeral 46, which represent entities, such as interfaces, the various types of shaders which can be used in a phenomenon, and the like, which the operator can select for use in a phenomenon.
- the icons depicted in the supported graph node frame 42 can be used by an operator to form the nodes of the directed acyclic graph defining a phenomenon to be created or modified.
- there are a number of types of nodes, including: (i) A primary root node, which forms the root of the directed acyclic graph, forms the connection to the scene, and typically provides a color value during rendering; and (ii) several types of optional root nodes, which may be used as anchor points in a phenomenon DAG to support the primary root node (item (i) above).
- Illustrative types of optional root nodes include: (a) A lens root node, which can be used to insert lens shaders or lens shader DAGs into a camera for use during rendering; (b) A volume root node, which can be used to insert global volume (or atmosphere) shaders or shader DAGs into a camera for use during rendering; (c) An environment root node, which can be used to insert global environment shaders or shader DAGs into a camera for use during rendering; (d) A geometry root node, which can be used to specify geometry shaders or shader DAGs that may be pre-processed during rendering to enable procedural supporting geometry or other elements of a scene to be added to the scene database; (e) A contour store root node, which can be used to insert a contour store shader into a scene options data structure; (f) An output root node, which can be used in connection with post-processing after a rendering phase; and (g) A contour contrast root node, which can be used to insert a contour contrast shader into the scene options data structure.
- a light node which is used in conjunction with a light source.
- a light node provides the light source with a light shader, color, intensity, origin and/or direction, and optionally, a photon emitter shader.
- a material node which is used in conjunction with a surface.
- a material node provides a surface with a color value, and has inputs for an opaque indication, indicating whether the surface is opaque, and for material, volume, environment, shadow, displacement, photon, photon volume, and contour shaders.
- a phenomenon node which is a phenomenon instance
- a constant node, which provides a constant value, which may be an input to any of the other nodes. The constant value may be of most of the data types available in the programming language used for the entities, such as shaders, represented by any of the other nodes, including scalar, vector and logical types.
- a dialog node which represents dialog boxes which may be displayed by the phenomenon editor 26 to the operator, and which may be used by the operator to provide input information to control the phenomenon before or during rendering.
- the dialog nodes may enable the phenomenon editor 26 to enable pushbuttons, sliders, wheels, and so forth, to be displayed to allow the operator to specify, for example, color and other values to be used in connection with the surface to which the phenomenon including the dialog node is connected.
- the shelf frame 41 and the supported graph node frame 42 both include left and right arrow icons, generally identified by reference numeral 47, which allow the icons shown in the respective frame to be shifted to the left or right (as shown in FIG. 3), to shift icons to be displayed in the phenomenon creator window 40 if there are more entities than could be displayed at one time.
- the controls frame 43 contains icons (not shown) which represent buttons which the operator can use to perform control operations, including, for example, deleting or duplicating nodes in the shelf frame 41 or supported graph node frame 42, beginning construction of a new phenomenon, starting an on-line help system, exiting the phenomenon creator 24, and so forth.
- the phenomenon graph canvas 44 provides an area in which a phenomenon can be created or modified by an operator. If the operator wishes to modify an existing phenomenon, he or she can, using a "drag and drop" methodology using a pointing device such as a mouse, select and drag the icon 45 from the shelf frame 41 representing the phenomenon to the phenomenon graph canvas 44. After the selected icon 45 associated with the phenomenon to be modified has been dragged to the phenomenon graph canvas 44, the operator can enable the icon 45 to be expanded to show one or more nodes, interconnected by arrows, representing the graph defining the phenomenon.
- a graph 50 representing an illustrative phenomenon is depicted in FIG. 3. As shown in FIG.
- the graph 50 includes a plurality of graph nodes, comprising circles and blocks, each of which is associated with an entity which can be used in a phenomenon, which nodes are interconnected by arrows to define the graph associated with the phenomenon.
- the operator can modify the graph defining the phenomenon. In that operation, the operator can, using a corresponding "drag and drop" methodology, select and drag icons 46 from the supported graph node frame 42 representing the entities to be added to the graph to the phenomenon graph canvas 44, thereby to establish a new node for the graph.
- the operator can interconnect it to a node in the existing graph by clicking on both nodes in an appropriate manner so as to enable an arrow to be displayed therebetween.
- Nodes in the graph can also be disconnected from other nodes by deleting arrows extending between the respective nodes, and deleted from the graph by appropriate actuation of a delete pushbutton in the controls frame 43.
- if the operator wishes to create a new phenomenon, he or she can, using the corresponding "drag and drop" methodology, select and drag icons 46 from the supported graph node frame 42 representing the entities to be added to the graph to the phenomenon graph canvas 44, thereby to establish a new node for the graph to be created.
- the operator can interconnect it to a node in the existing graph by clicking on both nodes in an appropriate manner so as to enable an arrow to be displayed therebetween.
- Nodes in the graph can also be disconnected from other nodes by deleting arrows extending between the respective nodes, and deleted from the graph by appropriate actuation of a delete pushbutton in the controls frame 43.
- the phenomenon creator 24 will examine the phenomenon graph to verify that it is consistent and can be processed during rendering.
- the phenomenon creator 24 will ensure that the interconnections between graph nodes do not form a cycle, thereby ensuring that the graph or graphs associated with the phenomenon form directed acyclic graphs, and that interconnections between graph nodes represent respective input and output data types which are consistent. It will be appreciated that, if the phenomenon creator 24 determines that the graph nodes do form a cycle, the phenomenon will essentially form an endless loop that generally cannot be properly processed. These operations will ensure that the phenomenon so created or modified can be processed by the scene image generation portion when an image of a scene to which the phenomenon is attached is being rendered.
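- The acyclicity part of that check can be done with an ordinary depth-first search over the node interconnections; the sketch below is a generic version of that check (the integer node ids, edge map and Mark bookkeeping are invented for illustration), not the patent's implementation:

```cpp
#include <map>
#include <vector>

// Hypothetical validation pass run when a phenomenon graph is saved:
// reject any cycle, since the graph must remain a DAG to be processable.
enum class Mark { Unvisited, InProgress, Done };

bool hasCycle(int node, const std::map<int, std::vector<int>>& edges,
              std::map<int, Mark>& marks) {
    Mark& m = marks[node];                    // defaults to Unvisited
    if (m == Mark::InProgress) return true;   // back edge found: a cycle
    if (m == Mark::Done) return false;        // already fully explored
    m = Mark::InProgress;
    if (auto it = edges.find(node); it != edges.end())
        for (int next : it->second)
            if (hasCycle(next, edges, marks)) return true;
    m = Mark::Done;
    return false;
}
```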
- FIG. 4 depicts an illustrative phenomenon created in connection with the phenomenon creator 24 which can be generated using the phenomenon creator window described above in connection with FIG. 3.
- the illustrative phenomenon depicted in FIG. 4, which is identified by reference numeral 60, is one which may be used for surface features of a wood material.
- the phenomenon 60 includes one root node, identified by reference numeral 61, which is used to attach the phenomenon 60 to an element of a scene.
- the dialog node 65 represents a dialog box that is displayed by the phenomenon editor 26 to allow the operator to provide input information for use with the phenomenon when the image is rendered.
- the material shader has one or more outputs, represented by "result," which are provided to the root node 61.
- the material shader, in turn, has several inputs, including a glossiness input, ambient and diffuse color inputs, a transparency input and a lights input.
- the material shader node 62 represented thereby is shown as receiving inputs therefor from the dialog node 65 (in the case of the glossiness input), from the texture shader node 63 (in the case of the ambient and diffuse color inputs), from a hard-wired constant (in the case of the transparency input) and from a lights list (in the case of the lights input).
- the hard-wired constant value, indicated as "0.0," provided to the transparency input indicates that the material is opaque.
- the "glossiness” input is connected to a "glossiness” output provided by the dialog node 65, and, when the material shader represented by node 62 is processed during rendering, it will obtain the glossiness input value therefor from the dialog box represented by the dialog node, as will be described below in connection with FIGS. 6 A and 6B.
- the ambient and diffuse inputs of the material shader represented by node 62 are provided by the output of the texture shader, as indicated by the connection of the "result" output of node 63 to the respective inputs of node 62.
- When the wood material phenomenon 60 is processed during the rendering operation and, in particular, when the material shader represented by node 62 is processed, it will enable the texture shader represented by node 63 to be processed to provide the ambient and diffuse color input values.
- The texture shader has three inputs, including ambient and diffuse color inputs, represented by the "color1" and "color2" inputs shown on node 63, and a "blend" input.
- the values for the ambient and diffuse color inputs are provided by the operator using the dialog box represented by the dialog node 65, as represented by the connections from the respective diffuse and ambient color outputs from the dialog node 65 to the texture shader node 63 in FIG. 4.
- The input value for the "blend" input of the texture shader represented by node 63 is provided by the coherent noise shader represented by node 64.
- the coherent noise shader has two inputs, including a "turbulence" input and a "cylindrical” input.
- The value for the turbulence input is provided by the operator using the dialog box represented by the dialog node 65, as represented by the connection from the turbulence output of the dialog node 65 to the coherent noise shader node 64.
- The input value for the cylindrical input, which is shown as a logical value "TRUE," is hard-wired into the phenomenon 60.
- FIG. 5 depicts a phenomenon editor window 70 which the phenomenon editor 26 enables to be displayed by the operator interface 27 for use by an operator in one embodiment of the invention to establish and adjust input values for phenomena which have been attached to a scene.
- The operator can use the phenomenon editor window to establish values for phenomena which are provided by dialog boxes associated with dialog nodes, such as dialog node 65 (FIG. 4), established for the respective phenomena during their creation or modification as described above in connection with FIG. 3.
- the phenomenon editor window 70 includes a plurality of frames, including a shelf frame 71 and a controls frame 72, and also includes a phenomenon dialog window 73 and a phenomenon preview window 74.
- the shelf frame 71 depicts icons 80 representing the various phenomena which are available for attachment to a scene.
- The shelf frame includes left and right arrow icons, generally identified by reference numeral 81, which allow the icons shown in the frame to be shifted to the left or right (as with the window depicted in FIG. 3), so that additional icons can be brought into view in the phenomenon editor window 70 if there are more icons than can be displayed at one time.
- The controls frame 72 contains icons (not shown) which represent buttons which the operator can use to perform control operations, including, for example, deleting or duplicating icons in the shelf frame 71, starting an on-line help system, exiting the phenomenon editor 26, and so forth.
- the operator can select a phenomenon whose parameter values are to be established by suitable manipulation of a pointing device such as a mouse in order to create an instance of a phenomenon. (An instance of a phenomenon corresponds to a phenomenon whose parameter values have been fixed.)
- the phenomenon editor 26 will enable the operator interface 27 to display the dialog box associated with its dialog node in the phenomenon dialog window.
- An illustrative dialog box, used in connection with one embodiment of the wood material phenomenon 60 described above in connection with FIG. 4, will be described below in connection with FIGS. 6A and 6B.
- the phenomenon editor 26 effectively processes the phenomenon and displays the resulting output in the phenomenon preview window 74.
- the operator can use the phenomenon editor window 70 to view the result of the values which he or she establishes using the inputs available through the dialog box displayed in the phenomenon dialog window.
- FIGS. 6A and 6B graphically depict details of a dialog node (in the case of FIG. 6A) and an illustrative associated dialog box (in the case of FIG. 6B), which are used in connection with the wood material phenomenon 60 depicted in FIG. 4.
- The dialog node, which is identified by reference numeral 65 in FIG. 4, is defined and created by the operator using the phenomenon creator 24 during the process of creating or modifying the particular phenomenon with which it is associated.
- The dialog node 65 includes a plurality of tiles, namely, an ambient color tile 90, a diffuse color tile 91, a turbulence tile 92 and a glossiness tile 93.
- the respective tiles 90 through 93 are associated with the respective ambient, diffuse, turbulence and glossiness output values provided by the dialog node 65 as described above in connection with FIG. 4.
- the ambient and diffuse color tiles are associated with color values, which can be specified using the conventional red/green/blue/alpha, or "RGBA," color/transparency specification, and, thus, each of the color tiles will actually be associated with multiple input values, one for each of the red, green and blue colors in the color representation and one for transparency (alpha).
- each of the turbulence and glossiness tiles 92 and 93 is associated with a scalar value.
- FIG. 6B depicts an illustrative dialog box 100 which is associated with the dialog node 65 (FIG. 6A), as displayed by the operator interface 27 under control of the phenomenon editor 26.
- The ambient and diffuse color tiles 90 and 91 of the dialog node 65 are displayed by the operator interface 27 as respective sets of sliders, generally identified by reference numerals 101 and 102, each of which is associated with one of the colors in the color representation to be used during processing of the associated phenomenon during rendering.
- the turbulence and glossiness tiles 92 and 93 of the dialog node 65 are each displayed by the operator interface as individual sliders 103 and 104.
- the sliders in the respective sets of sliders 101 and 102 may be manipulated by the operator, using a pointing device such as a mouse, in a conventional manner thereby to enable the phenomenon editor 26 to adjust the respective combinations of colors for the respective ambient and diffuse color values provided by the dialog node 65 to the shaders associated with the other nodes of the phenomenon 60 (FIG. 4).
- the sliders 103 and 104 associated with the turbulence and glossiness inputs may be manipulated by the operator thereby to enable the phenomenon editor 26 to adjust the respective turbulence and glossiness values provided by the dialog node 65 to the shaders associated with the other nodes of the wood material phenomenon 60.
- the scene image generator 30 operates in a series of phases, including a pre-processing phase, a rendering phase and a post-processing phase.
- The scene image generator 30 will examine the phenomena which are attached to a scene to determine whether it will need to perform pre-processing and/or post-processing operations in connection therewith (step 100). The scene image generator 30 then determines whether the operations in step 100 indicated that pre-processing operations are required in connection with at least one phenomenon attached to the scene (step 101) and, if so, will perform the pre-processing operations (step 102).
- Illustrative pre-processing operations include, for example, generation of geometry for the scene if a phenomenon attached to the scene includes a geometry shader, in which case the geometry defined by that shader is generated for the scene.
- Other illustrative pre-processing operations include, for example, shadow and photon mapping, multiple inheritance resolution, and the like.
- The scene image generator 30 can then perform further pre-processing operations which may be required in connection with the scene representation prior to rendering and which are not related to phenomena attached to the scene (step 103).
- the scene image generator 30 will perform the rendering phase, in which it performs rendering operations in connection with the pre-processed scene representation to generate a rendered image (step 104).
- the scene image generator 30 will identify the phenomena stored in the scene object database 22 which are to be attached to the various components of the scene, as generated by the entity geometric representation generator 23 and attach all primary and optional root nodes of the respective phenomena to the scene components appropriate to the type of the root node. Thereafter, the scene image generator 30 will render the image.
- the scene image generator 30 will generate information as necessary which may be used in post-processing operations during the post-processing phase.
- the scene image generator 30 will perform the post-processing phase.
- The scene image generator 30 will determine whether the operations performed in step 100 indicated that post-processing operations are required in connection with phenomena attached to the scene (step 105). If the scene image generator 30 makes a positive determination in step 105, it will perform the post-processing operations required in connection with the phenomena attached to the scene (step 106). In addition, the scene image generator 30 may also perform other post-processing operations which are not related to phenomena in step 106, for example, operations that manipulate pixel values for color correction or that apply filtering to provide various optical effects.
- The scene image generator 30 may perform such post-processing operations if, for example, a phenomenon attached to the scene includes an output shader that defines post-processing operations, such as depth of field or motion blur calculations; in one embodiment, these can be done entirely in an output shader operating on the velocity and depth information stored in connection with each pixel value of the rendered image.
- the invention provides a number of advantages.
- The invention provides a computer graphics system with arrangements for creating phenomena (the phenomenon creator 24) and manipulating them (the phenomenon editor 26).
- The phenomena so created are processed by the phenomenon creator 24 to ensure that they are consistent and can be processed during rendering. Since phenomena are created prior to being attached to a scene, they can be created by programmers or others who are expert in the development of computer programs, thereby relieving others, such as artists, draftsmen and the like, of the necessity of developing them. Phenomena also relieve the artist of the complexity of instrumenting the scene with many different and inter-related shaders, by separating that complexity into an independent task performed in advance by an expert user of the phenomenon creator. With phenomena, the instrumentation becomes largely automated.
- Once a phenomenon or phenomenon instance has been created, it is scene-independent and can be re-used in many scenes, thus avoiding repetitive work. It will be appreciated that a number of changes and modifications may be made to the invention. As noted above, since phenomena may be created separately from their use in connection with a scene, the phenomenon creator 24 used to create and modify phenomena, and the phenomenon editor 26 used to create phenomenon instances, may be provided in separate computer graphics systems. For example, a computer graphics system 10 which includes a phenomenon editor 26 need not include a phenomenon creator 24 if, for example, the phenomenon database 25 includes appropriate previously-created phenomena and the operator will not need to create or modify phenomena.
- the values of parameters of a phenomenon may be fixed, or they may vary based on a function of one or more variables.
- The phenomenon instance can be made time dependent, or "animated." Time dependency is normally discretized into intervals labeled by the frame numbers of a series of frames comprising an animation, but it may nevertheless take the form of any function over time valued in the phenomenon's parameters, each of which can be tagged with an absolute time value, so that, even if an image is rendered at successive frame numbers, the shaders are not bound to discrete intervals.
- the phenomenon editor is used to select time dependent values for one or more parameters of a phenomenon, creating a time dependent "phenomenon instance.”
- The selection of time dependent values for the parameters of a phenomenon is achieved, in one particular embodiment, by the graphically interactive attachment of what will be referred to herein as "phenomenon property control trees" to a phenomenon.
- A phenomenon property control tree, which may be in the form of a tree or a DAG, is attached to phenomenon parameters, effectively outside of the phenomenon, and is stored with the phenomenon in the phenomenon instance database.
- a phenomenon property control tree consists of one or more nodes, each of which is a shader in the sense of the functions that it provides, for example, motion curves, data look-up functions and the like.
- a phenomenon property control tree preferably can remain shallow, and will normally have only very few branching levels.
- a phenomenon property control tree can consist of only one shader, which defines a function to compute the value for the parameter associated with it at run time.
- A phenomenon property control tree can remain shallow because the phenomenon allows and encourages encapsulation of the complicated shader trees or DAGs, facilitating evaluation in an optimized manner during the rendering step by, for example, storing data for re-use. Allowing an operator to attach such phenomenon property control trees to control the phenomenon's parameters greatly increases the user's flexibility to achieve custom effects based on a predefined and packaged phenomenon. The number of distinct phenomenon instances that may be created this way is therefore greatly increased, while ease of use is not compromised, thanks to the encapsulation of all complexity in the phenomenon.
- a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program.
- Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner.
- the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
- The invention also provides: (1) shader methods and systems that are platform independent, and that can unite various shading tools and applications under a single language or system construct; (2) methods and systems that enable the efficient and simple re-use and re-purposing of shaders, such as may be useful in the convergence of video games and feature films, an increasingly common occurrence (e.g., Lara Croft - Tomb Raider); (3) methods and systems that facilitate the design and construction of shaders without the need for computer programming, as may be useful for artists; and (4) methods and systems that enable the graphical debugging of shaders, allowing shader creators to find and resolve defects in shaders.
- FIG. 8 shows a flowchart of an overall method 150 according to an aspect of the invention.
- the described method enables the generation of an image of a scene in a computer graphics system from a representation to which at least one instantiated phenomenon has been attached, the instantiated phenomenon comprising an encapsulated shader DAG comprising at least one shader node.
- a metanode environment is configured that is operable for the creation of metanodes, the metanodes comprising component shaders that can be combined in networks to build more complex shaders.
- a graphical user interface is configured that is in communication with the metanode environment and is operable to manage the metanode environment to enable a user to construct shader graphs and phenomena using the metanode environment.
- a software language is provided as an interface usable by a human operator and operable to manage the metanode environment, implement shaders and unify discrete shading applications.
- the software language is configurable as a superset of a plurality of selected shader languages for selected hardware platforms, and operable to enable a compiler function to generate, from a single, re-usable description of a phenomenon expressed in the software language, optimized software code for a selected hardware platform in a selected shader language.
- at least one GUI library is provided that is usable in connection with the metanode environment to generate a GUI operable to construct shader graphs and phenomena.
- an interactive, visual, real-time debugging environment is configured that is in communication with the GUI, and that is operable to (1) enable the user to detect and correct potential flaws in shaders, and (2) provide a viewing window in which a test scene with a shader, metanode, or phenomenon under test is constantly rendered.
- a facility is configured that is in communication with the compiler function, and that is operable to convert the optimized software code for the selected hardware platform and selected shader language to machine code for selected integrated circuit instantiations, using a native compiler function for the selected shader language.
- The mental mill™ technology provides an improved approach to the creation of shaders for visual effects.
- the mental mill solves many problems facing shader writers today and future-proofs shaders from the changes and evolutions of tomorrow's shader platforms.
- the mental mill further includes a library providing APIs to manage shader creation.
- This library can be integrated into third-party applications in a componentized fashion, allowing the application to use only the components of mental mill it requires.
- The foundation of mental mill shading is the mental mill shading language MetaSL™.
- MetaSL is a simple yet expressive language designed specifically for implementing shaders.
- The mental mill encourages the creation of simple and compact componentized shaders (referred to as Metanodes™) which can be combined in shader networks to build more complicated and visually interesting shaders.
- the goal of MetaSL is not to introduce yet another shading language but to leverage the power of existing languages through a single meta-language, MetaSL.
- Currently existing shader languages focus on relatively specific platforms or contexts, for example hardware shading for games or software shading for feature film visual effects. MetaSL unifies these shading applications into a single language.
- The mental mill allows the creation of shader blocks called "metanodes," written in MetaSL, which can be attached and combined in order to form sophisticated shader graphs and Phenomena™.
- Shader graphs provide intuitive graphical user interfaces for creating shaders that are accessible to users who lack the technical expertise to write shader code.
- the mental mill graphical user interface libraries harness the shader graph paradigm to provide the user a complete graphical user interface for building shader graphs and Phenomena.
- the present invention provides a "metanode environment,” i.e., an environment that is operable for the creation and manipulation of metanodes.
- the described metanode environment may be implemented as software, or as a combination of software and hardware.
- A standalone application is included as part of mental mill; however, since mental mill provides a cross-platform, componentized library, it is also designed to be integrated into third-party applications.
- the standalone mental mill application simply uses these libraries in the same way any other application would.
- the mental mill library can be broken down into the following pieces: (1) Phenomenon creator graphical user interface (GUI); (2) Phenomenon shader graph compiler; and (3) MetaSL shading language compiler.
- The mental mill Phenomenon creator GUI library provides a collection of GUI components that allow the creation of complex shaders and Phenomena by users with a wide range of technical expertise.
- The primary GUI component is the shader graph view. This view allows the user to construct Phenomena by creating shader nodes (Metanodes or other Phenomena) and attaching them together in a graph, as described above.
- the shader graph provides a clear visual representation of the shader program that is not found when looking at shader code. This makes shader creation accessible to those users without the technical expertise to write shader code.
- the GUI library also provides other user interface components, summarized here:
- Shader parameter editor: provides sliders, color pickers, and other controls to facilitate the editing of shader parameter values.
- Render preview window: provides the user interactive feedback on the progress of their shader.
- Phenomenon library explorer: allows the user to browse and maintain a library of pre-built Phenomena.
- Metanode library explorer: allows the user to browse and maintain an organized toolbox of Metanodes, the fundamental building blocks of Phenomena.
- Code editor and IDE: the code editor and Integrated Development Environment (IDE) work with the rest of the GUI to provide interactive visual feedback; the IDE provides a high-level interactive visual debugger for locating and correcting defects in shaders.
- the mental mill GUI library is both componentized and cross-platform.
- the library has been developed without dependencies on the user interface libraries of any particular operating system or platform.
- the mental mill GUI library is designed for integration into third-party applications. While the components of the GUI library have default appearances and behaviors, plug-in interfaces are provided to allow the look and feel of the Phenomenon creator GUI to be customized to match the look and feel of the host application.
- The MetaSL shading language unites the many shading languages available today and is extensible to support new languages and platforms as they appear in the future. This allows MetaSL to provide insulation from platform dependencies. MetaSL is a simple yet powerful language targeted at the needs of shader writers. It allows shaders to be written in a compact and highly readable syntax that is approachable by users who might not otherwise feel comfortable programming.
- MetaSL shaders can be used in a variety of different ways.
- a single shader can be used when rendering offline in software or real-time in hardware.
- the same shader can be used across different platforms, such as those used by the next generation of video game consoles.
- the MetaSL compiler that is part of the mental mill library is itself extendable.
- the front-end of the compiler is a plug-in so that parsers for other languages or syntaxes can replace the MetaSL front end.
- The back-end of the compiler is also a plug-in, so new target platforms can easily be supported in the future.
- This extensibility at both ends of the mental mill compiler library allows it to become the hub of shader generation. Shader writers typically face difficulties on several fronts; the following sections outline these issues and the rationale behind the creation of the mental mill technology, which is designed to provide a complete solution set. Shaders developed with mental mill are platform independent. This is a key feature of mental mill and ensures that the effort invested in developing shaders is not wasted as target platforms evolve. This platform independence is provided both for shaders written in MetaSL and for shader graphs of Metanodes.
- The mental mill libraries provide application programming interfaces (APIs) to generate shaders for a particular platform dynamically, on demand, from either a Phenomenon shader graph or a monolithic MetaSL shader.
- mental mill makes it possible to export a shader in the format required by a target platform to a static file. This allows the shader to be used without requiring the mental mill library.
- FIG. 9 shows a diagram of an overall system 200 according to an aspect of the invention.
- the system 200 includes a mental mill processing module 202 that contains a number of submodules and other components, described below.
- the mental mill processing module 202 receives inputs in the form of Phenomena 204 and MetaSL code 206.
- the mental mill processing module 202 then provides as an output source code in a selected shader language, including: Cg 208, HLSL 210, GLSL 212, Cell SPU 214, C++ 216, and the like.
- The mental mill processing module 202 is also adaptable to provide, as output, source code in future languages 218 that have not yet been developed.
- a component of platform independence is insulation from particular rendering algorithms.
- hardware rendering often employs a different rendering algorithm as compared to software rendering.
- Hardware rendering is very fast for rendering complex geometry, but may not directly support advanced lighting algorithms such as global illumination.
- MetaSL can be considered to be divided into three subsets or levels, with each level differing in both the amount of expressiveness and suitability for different rendering algorithms.
- FIG. 10 shows a diagram illustrating the levels of MetaSL 220 as subsets.
- the dotted ellipse region 224 shows C++ as a subset for reference.
- Level 1 (221) - This is the most general subset of MetaSL. Shaders written within this subset can easily be targeted to a wide variety of platforms. Many types of shaders will be able to be written entirely within this subset.
- Level 2 (222) - A superset of Level 1 (221), Level 2 adds features typically only available with software rendering algorithms, such as ray tracing and global illumination. Like Level 1, Level 2 is still a relatively simple language, and shaders written within Level 2 may still be able to be partially rendered on hardware platforms. This makes it possible to achieve a blending of rendering algorithms where part of the rendering takes place in hardware and part in software.
- Level 3 (223) - A superset of both Levels 1 (221) and 2 (222). In addition, Level 3 is a superset of the popular C++ language. While Level 3 shaders can only ever execute in software, Level 3 is the most expressive of the three levels since it includes all the features of C++. However, few shaders need the complexity of C++, and given that Level 3 has the least general set of possible targets, most shaders will likely be written using only Levels 1 and 2. While Level 1 appears to be the smallest subset of MetaSL, it is also the most general in the types of platforms it supports; MetaSL Level 3 is the largest superset, containing even all of C++, making it extremely powerful and expressive.
- FIG. 11 is a bar chart 230 illustrating the levels of MetaSL and their applicability to hardware and software rendering.
- Level 1 and 2 shaders (221, 222) have a high degree of compatibility, with the only difference being that Level 2 shaders (222) utilize advanced algorithms not capable of running on a GPU.
- the MetaSL compiler can use a Level 2 shader (222) as if it were a Level 1 shader (221) (and target hardware platforms) by removing functions not supported by Level 1 (221) and replacing them with no-ops.
- This feature, together with the ability of the MetaSL compiler to detect the level of a given shader, allows the MetaSL compiler to generate a hardware and a software version of a shader simultaneously (or to generate only a software shader when that is required).
- the hardware shader can be used for immediate feedback to the user through hardware rendering. A software rendering can then follow up with a more precise image.
- Another useful feature of mental mill is the ability to easily repurpose shaders.
- One key example of this comes from the convergence of video games and feature films. It is not uncommon to see video games developed with licenses to use content from successful films. Increasingly feature films are produced based on successful video games as well. It makes sense to use the same art assets for a video game and the movie it was based on, but in the past this has been a challenge for shaders since the film is rendered using an entirely different rendering algorithm than the video game.
- the mental mill overcomes this obstacle by allowing the same MetaSL shader to be used in both contexts.
- the shader graph model for constructing shaders also encourages the re-use of shaders. Shader graphs inherently encourage the construction of shaders in a componentized fashion.
- a single Metanode, implemented by a MetaSL shader can be used in different ways in many different shaders. In fact entire sub-trees of a graph can be packaged into a Phenomenon and re-used as a single node.
- the mental mill graphical user interface provides a method to construct shaders that doesn't necessarily involve programming. Therefore, an artist or someone who is not comfortable writing code will now have the ability to create shaders for themselves.
- the mental mill user interface also provides a development environment for programmers and technical directors.
- Programmers can create custom Metanodes written in MetaSL and artists can then use these nodes to create new shaders.
- Technical directors can create complex custom shader graphs which implement units of functionality and package those graphs into Phenomena.
- An important aspect of the creation of shaders is the ability to analyze flaws, determine their cause, and find solutions. In other words, the shader creator must be able to debug the shader. Finding and resolving defects in shaders is necessary regardless of whether the shader is created by attaching Metanodes to form a graph, by writing MetaSL code, or both.
- the mental mill provides functionality for users to debug their shaders using a high level, visual technique. This allows shader creators to visually analyze the states of their shader to quickly isolate the source of problems.
- a prototype application has been created as a proof of concept of this shader debugging system.
- Performance is a concern for all uses of shaders, such as offline or real-time interactive rendering.
- shaders are invoked in the most performance critical section of the renderer and therefore can have a significant impact on overall performance. Because of this it is crucial for shader creators to be able to analyze the performance of their shaders at a fine granularity to isolate the computationally expensive portions of their shaders.
- the mental mill provides such analysis, referred to as profiling, through an intuitive graphical representation. This allows the mental mill user to receive visual feedback indicating the relative performance of portions of their shaders. This profiling information is provided at both the node level for nodes that are part of a graph or Phenomenon, and at the statement level for the MetaSL code contained in a Metanode.
- the performance timing of a shader can be dependent on the particular input values driving that shader.
- a shader may contain a loop where the number of iterations through the loop is a function of a particular input parameter value.
- the mental mill graphical profiler allows shader performance to be analyzed in the context of the shader graph where the node resides, which makes the performance results relative to the particular input values driving the node in that context.
- the performance information at any particular granularity is normalized to the overall performance cost of a node, the entire shader, or the cost to render an entire scene with multiple shaders.
- the execution time of a MetaSL statement within a Metanode can be expressed as a percentage of the total execution time of that Metanode or the total execution time of the entire shader if the Metanode is a member of a graph.
- the graphical representation of performance results can be provided using multiple visualization techniques. For example, one technique is to present the normalized performance cost by mapping the percentage to a color gradient.
- FIG. 12 shows a screenshot 230 illustrating this aspect of the invention.
- a MetaSL code listing 232 appears at the center of the screen 230.
- a color bar 234 appears to the left of each statement 232 indicating relative performance.
- the first 10 percentage points are mapped to a blue gradient and the remaining 90 percentage points are mapped to a red gradient.
- A nonlinear mapping such as this focuses the user's attention on the "hotspots" in their MetaSL code.
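- As a sketch of such a mapping (profile_color is a hypothetical helper, not part of the mental mill API; the scalar type follows the C-like syntax of the MetaSL examples in this document):

    // Hypothetical sketch: map a normalized cost percentage (0-100) to the
    // nonlinear blue/red gradient described above.
    Color profile_color(float percent)
    {
        if (percent < 10.0)
            return Color(0.0, 0.0, percent / 10.0, 1.0);        // blue ramp over the first 10 points
        return Color((percent - 10.0) / 90.0, 0.0, 0.0, 1.0);   // red ramp over the remaining 90
    }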
- the user can access the specific numeric values used to select colors from the gradient. As the user sweeps their mouse over the color bars, a popup will display the execution time of the statement as a percentage of the total execution time.
- FIG. 13 shows a performance graph 240 illustrating another visualization technique.
- Graph 240 displays performance results with respect to a range of values of a particular input parameter.
- the performance cost of the illumination loop of a shader is graphed with respect to the number of lights in the scene.
- the jumps in performance cost in this example indicate points at which the shader must be decomposed into passes to accommodate the constraints of graphics hardware.
- FIG. 14 shows a table 250, in which the performance timings of each node of a Phenomenon are displayed with respect to the overall performance cost of the entire shader.
- the graphical profiling technique provided by mental mill is platform independent. This means that performance timings can be generated for any supported target platform. As new platforms emerge and new back-end plug-ins to the mental mill compiler are provided, these new platforms can be profiled in the same way. However any particular timing is measured with respect to some selected target platform. For example the same shader can be profiled when executed on hardware versus software or on different hardware platforms. Different platforms have individual characteristics and so the performance profile of a particular shader may look quite different when comparing platforms. The ability for a shader creator to analyze their shader on different platforms is critical in order to develop a shader that executes with reasonable performance on all target platforms.
- FIG. 15 shows a diagram of the mental mill libraries component 260.
- the mental mill libraries component 260 is divided into two major categories: the Graphical User Interface (GUI) library 270 and the Compiler library 280.
- the GUI library 270 contains the following components: phenomenon graph editor 271; shader parameter editor 272; render preview window 273; phenomenon library explorer 274; Metanode library explorer 275; and code editor and IDE 276.
- The compiler library 280 contains the following components: MetaSL shading language compiler 281; and Phenomenon shader graph compiler 282.
- FIG. 16 shows a more detailed diagram of the compiler library 280.
- the mental mill compiler library 280 provides the ability to compile a MetaSL shader into a shader targeted at a specific platform, or multiple platforms simultaneously.
- The compiler library 280 also provides the ability to compile the shader graphs which implement Phenomena into flat, monolithic shaders. By flattening shader graphs into single shaders, the overhead of shader-to-shader calls is reduced to nearly zero. This allows graphs built from small shader nodes to be used effectively without incurring significant overhead.
- FIG. 17 shows a diagram of a renderer 290 according to this aspect of the invention.
- the extensibility of the MetaSL compiler allows multiple target platforms and shading languages to be supported. New targets can be supported in the future as they emerge. This extensibility is accomplished through plug-ins to the back-end of the compiler.
- The MetaSL compiler handles much of the processing and provides the back-end plug-in with a high-level representation of the shader, which it can use to generate shader code.
- the MetaSL compiler currently targets high level languages, however the potential exists to target GPUs directly and generate machine code from the high level representation. This would allow particular hardware to take advantage of unique optimizations available only because the code generator is working from this high level representation directly and bypassing the native compiler.
- The mental mill GUI library provides an intuitive, easy-to-use interface for building sophisticated shader graphs and Phenomena.
- the library is implemented in a componentized and platform independent method to allow integration of some or all of the UI components into third-party applications.
- a standalone Phenomenon creator application is also provided which utilizes the same GUI components available for integration directly into other applications.
- the major GUI components provided by the library are as follows:
- Shader parameter editor: the parameter editor provides intuitive user interface controls to set the parameter values for shader node inputs.
- Render preview window: a preview window provides interactive feedback to the user as they build shader graphs and edit shader parameters.
- Phenomenon library explorer: allows the user to browse a library of pre-built Phenomena. The user can add their own shader graphs to the library and organize its contents.
- Metanode library explorer: the Metanode library provides a toolbox of Metanodes that the user can use to build shader graphs. New Metanodes can be created by writing MetaSL code and added to the library.
- Code editor and IDE: the code editor and Integrated Development Environment (IDE) allow the user to author new Metanodes by writing MetaSL code, with interactive feedback from the compiler and debugger.
- FIG. 18 shows a diagram of the graph editor user interface 300.
- Graph nodes are presented at various levels of detail to allow the user to zoom out to get a big picture of their entire graph, or zoom in to see all the details of any particular node.
- Portions of the shader graph can easily be organized into Phenomena that appear as a single node when closed. This allows the user to better deal with large complex graphs of many nodes by grouping subgraphs into single nodes. A Phenomenon can be opened allowing the user to edit its internal graph.
- Each node in the shader graph has a preview window to show the state of the shader at that point in the graph. This provides a visual debugging mechanism for the creation of shaders. A user can follow the dataflow of the graph and see the result so far at each node. At a glance, the user can see a visual representation of the construction of their shader.
- FIGS. 19-22 show a series of screenshots 310, 320, 330, 340, and 350, illustrating the mental mill Phenomenon graph editor and the integrated MetaSL graphical debugger.
- FIG. 24 shows a view of a shader parameter editor 360.
- the parameter editor 360 allows the user to set specific values for shader and Phenomenon parameters. When creating a new shader type, this allows the user to specify default parameter values for future instances of that type.
- Attachments can also be made from within the parameter view and users can follow attachment paths from one node to another within this view. This provides an alternate method for shader graph creation and editing that can be useful in some contexts.
- a sizeable render preview window allows the user to interactively visualize the result of their shaders.
- This preview window can provide real-time hardware accelerated previews of shaders as well as high quality software rendered results involving sophisticated rendering algorithms such as ray tracing and global illumination.
- the Phenomenon/Metanode library explorer view allows the user to browse and organize collections of Phenomena and Metanodes.
- FIG. 25A shows a thumbnail view 370, and FIG. 25B shows a list view 380.
- To create a new node in the current graph from one of these libraries the user simply drags a node and drops it in their graph.
- the libraries can be sorted and categorized for organization.
- the user can view the libraries in a list view or icon view to see a sample swatch illustrating the function of each node.
- the code editor provides the ability for the user to author new Metanodes by writing MetaSL code.
- the code editor is integrated into the rest of the mental mill GUI so as a user edits shader code, the rest of the user interface interactively updates to reflect their changes.
- FIG. 26 shows a code editor and IDE view 390.
- the code editor is also integrated with the mental mill compiler. As the user edits shader code, they will receive interactive feedback from the compiler. Errors or warnings from the compiler will be presented to the user in this view including an option to highlight the portion of the code responsible for the error or warning.
- the mental mill MetaSL debugger presents the user with a source code listing containing the MetaSL code for the shader node in question. The user can then step through the shader's instructions and inspect the values of variables as they change throughout the program's execution. However instead of just presenting the user with a single numeric value, the debugger displays multiple values simultaneously as colors mapped over the surface of an object.
- Representing a variable's values as an image rather than a single number has several advantages.
- the user can also use the visual debugging paradigm to quickly locate the input conditions that produce an undesirable result. A shader bug may only appear when certain input parameters take on specific values, and such a scenario may only occur on specific parts of the geometry's surface.
- the mental mill debugger allows the user to navigate in 3D space using the mouse to find and orient the view around the location on the surface that is symptomatic of the problem.
- The mental mill MetaSL debugger allows the user to jump to any statement in their shader code, in any order.
- One particularly useful aspect of this feature arises when a code statement modifies the value of a variable of interest.
- the shader writer can easily step backward and forward across this statement to toggle between the variable's value before and after the statement is executed. This makes it easier for the user to analyze the effect of any particular statement on a variable's value.
- Representing a variable's value as a color mapped over the surface of an object obviously works well when the variable is a color type. This method also works reasonably well for scalar and vector values (with three or fewer components), but the value must be mapped into the range 0-1 in order to produce a legitimate color.
- The mental mill UI will allow the user to specify a range for scalars and vectors that will be used to map those values to colors. Alternatively, mental mill can automatically compute the range for any given viewpoint by determining the minimum and maximum values of the variable over the surface as seen from that viewpoint.
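- A minimal sketch of such a mapping (debug_color is a hypothetical helper, not a mental mill API; lo and hi stand for the user-specified or automatically computed bounds):

    // Hypothetical sketch: clamp each component of a vector-valued variable
    // into the 0-1 range so that it can be displayed as a color.
    Color debug_color(Vector3 value, float lo, float hi)
    {
        float s = 1.0 / (hi - lo);
        return Color(saturate((value.x - lo) * s),
                     saturate((value.y - lo) * s),
                     saturate((value.z - lo) * s),
                     1.0);
    }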
- the user can utilize other visualization techniques provided by mental mill.
- One such technique for vector values allows the user to sweep the mouse over the surface of an object and the mental mill debugger will draw an arrow pointing in the direction specified by the variable at that location on the surface.
- the debugger will also display the numeric value for a variable at a pixel location, which can be selected by the mouse or specified by the user by providing the pixel coordinates.
- FIGS. 27A-C are a series of screen images 400, 410, 420, displayed in response to different mouse positions at the surface of the image.
- the mental mill shader debugger illustrates another benefit of the platform independence of mental mill.
- the debugger can operate in either hardware or software mode and works independently of any particular rendering algorithm or platform.
- the fact that the shader debugger is tightly integrated into mental mill's Phenomenon creation environment further reduces the create/test cycle and allows the shader creator to continue to work at a high level, insulated from platform dependencies.
- the mental mill GUI library is implemented in a platform independent manner.
- The shader graph editor component uses a graphics API to ensure smooth performance when editing complex shader graphs.
- a graphics abstraction layer prevents a dependency on any particular API. For example some applications may prefer the use of DirectX over OpenGL to simplify integration issues when their application also uses DirectX.
- FIG. 28 shows a diagram of a GUI library architecture 430 according to this aspect of the invention.
- FIG. 28 illustrates how these abstraction layers insulate the application and the mental mill user interface from platform dependencies.
- Mouse behavior: mouse behavior, such as the mapping of mouse buttons, is customizable.
- Toolbar items: each toolbar item can be omitted or included.
- Each view window is designed to operate on its own without dependencies on other windows. This allows a third party to integrate just the Phenomenon graph view into their application, for example.
- Each view window can be driven by the API so third parties can include any combination of the view windows, replacing some of the view windows with their own user interface.
- The mental mill shading language, MetaSL, is simple and intuitive, yet still expressive enough to represent the full spectrum of shaders required for the broad range of platforms supported by mental mill.
- MetaSL uses concepts found in other standard shading languages as well as programming languages in general; however, MetaSL is designed for efficient shader programming. Users familiar with other languages will be able to quickly learn MetaSL while users without programming technical expertise will likely be able to understand many parts of a MetaSL shader due to its readability.
- The MetaSL shader class declaration describes the shader's interface to the outside world. Shader declarations include the specification of input and output parameters as well as other member variables.
- the shader declaration also contains the declaration of the shader's entry point, a method called main.
- an optional event method allows the shader to respond to initialization and exit events.
- Shader classes can also include declarations of other member variables and methods.
- Other member variables can hold data used by the shading calculation and are initialized from the shader's event method.
- Other member methods can serve as helper methods called by the shader's main or event methods. The following sketch illustrates such a declaration; the shader name, parameter and body are illustrative only, and the exact section syntax is assumed from the description above:
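    // Illustrative sketch; the input:/output: section labels follow the
    // structure described above but are assumptions, not verified syntax.
    shader Simple_color
    {
        input:
            Color diffuse_color;    // a user-visible input parameter
        output:
            Color result;           // the shader's output

        void main()
        {
            result = diffuse_color; // trivial body, for illustration only
        }
    };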
- MetaSL provides a comprehensive range of built-in data types.
- NxM matrix - matrices of size NxM are supported where N and M can be 2, 3, or 4.
- a standard set of math functions and operators are provided to work with vectors and matrices. Arithmetic operators are supported to multiply, divide, add and subtract matrices and vectors. This allows for compact and readable expressions such as:
    Vector3 result = pt + v * mat;
- a concept called swizzling is also supported. This allows components of a vector to be read or written to while simultaneously re-ordering or duplicating components. For example:
- vect.yz: results in a 2D vector constructed from the y and z components of vect.
- vect.xxyy: results in a 4D vector with the x component of vect assigned to the first two components and the y component of vect assigned to the last two.
- Vector types can also be implicitly converted from one type to another as long as the conversion doesn't result in a loss of data.
- The Color type is provided primarily for code readability and is otherwise synonymous with Vector4.
- In addition to the built-in types provided by MetaSL, custom structure types can be defined. Structures can be used for both input and output parameters as well as other variables. Both structures and built-in types can be declared as arrays. Arrays can have either a fixed or dynamic size. Array elements are accessed with bracket ([]) syntax.
- The following sketch shows a custom structure type with a variable declared as a fixed-length array of that custom type (the type and member names are illustrative only):
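    // Illustrative sketch; type and member names are examples only.
    struct Sample
    {
        Color   color;
        Vector3 direction;
    };

    Sample samples[4];                     // fixed-length array of the custom type
    Color first_color = samples[0].color;  // elements accessed with bracket ([]) syntax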
- MetaSL supports the familiar programming constructs that control the flow of a shader's execution; specifically, these are: for; while; do/while; if/else; and switch/case.
- The task of iterating over scene lights and summing their illumination is abstracted in MetaSL by a light loop and iterator. An instance of a light iterator is declared, and a foreach statement iterates over each scene light. Inside the loop, the light iterator variable provides access to the resulting illumination from each light. For example:
    Color diffuse_light(0, 0, 0, 0);
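    Light_iterator light;    // the iterator and member names below are
                             // illustrative, not verified MetaSL built-ins
    foreach (light) {
        // accumulate each light's contribution, weighted by an N.L term
        diffuse_light += light.contribution * saturate(dot(normal, light.direction));
    }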
- the shader writer doesn't need to be concerned with which lights contribute to each part of the scene or how many times any given light needs to be sampled.
- the light loop automates that process.
- Within a shader's main method, a set of special state variables is implicitly declared and available for the shader code to reference. These variables hold values describing both the current state of the renderer as well as information about the intersection that led to the shader call. For example, normal refers to the interpolated normal at the point of intersection. State variables are described in greater detail below.
- A BRDF (bidirectional reflectance distribution function) shader approach is often more desirable for several reasons. It allows for efficient sampling and global illumination without the need to create a separate photon shader. In general, a single BRDF implementation can be used unchanged by different rendering algorithms. It facilitates the ability to perform certain lighting computations, such as tracing shadow rays, in a delayed manner, which allows for significant rendering optimizations. It also provides a unified description of analytical and acquired illumination models.
- MetaSL BRDFs are first class shader objects. They have the same event method as regular shaders. However, instead of a main method, several other methods must be supplied to implement BRDFs.
- FIG. 29 shows a table 440 listing the methods required to implement a BRDF shader.
- the in_dir and out_dir vectors in these methods are specified in terms of a local coordinate system.
- This coordinate system is defined by the surface normal as the z axis and the tangent vectors as the x and y axes.
- A BRDF is declared like a regular shader, except that the brdf keyword is used in place of the shader keyword. BRDFs also differ from regular shaders in that they have no output variables; the BRDF itself is its own output. The following is an example implementation of a Phong BRDF:
    Color eval_diffuse(Vector3 in_dir, Vector3 out_dir)
    {
        return in_dir.z * out_dir.z > 0.0 ? diffuse : Color(0, 0, 0, 0);
    }

    Color eval_glossy(Vector3 in_dir, Vector3 out_dir)
    {
        Vector3 r = Vector3(-in_dir.x, -in_dir.y, in_dir.z);
        return pow(saturate(dot(r, out_dir)), exponent) * glossy;
    }
- MetaSL supplies two functions which a surface shader can use to perform illumination computations: direct_lighting(), which loops over some or all lights and evaluates the given BRDF, and indirect_lighting(), which computes the lighting contribution from global illumination. These two functions compute the illumination as separate diffuse, glossy, and specular components and store the results in variables passed to them as out arguments.
- variables passed to the lighting functions as out parameters can be the actual output parameters of the root surface shader node.
- the calls to the lighting functions produce the final result of the shader.
- the renderer is free to defer their computation, which allows for significant optimizations.
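- As a sketch of how a surface shader's main method might invoke these functions (the argument lists are assumptions based on the description above; the actual signatures may differ):

    // Hypothetical sketch: gather direct and indirect illumination into
    // separate diffuse, glossy and specular components, then combine them.
    Color dir_diffuse, dir_glossy, dir_specular;
    direct_lighting(dir_diffuse, dir_glossy, dir_specular);     // loops over lights, evaluating the BRDF

    Color ind_diffuse, ind_glossy, ind_specular;
    indirect_lighting(ind_diffuse, ind_glossy, ind_specular);   // global illumination contribution

    result = dir_diffuse + dir_glossy + dir_specular
           + ind_diffuse + ind_glossy + ind_specular;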
- FIG. 30 shows a diagram of an example configuration 450, in which two BRDFs are mixed:
- the "Mix BRDF” node is a composite BRDF that is implemented by scaling its two BRDF inputs by “amount” and "1 - amount", respectively.
- the "amount” parameter is attached to the output of a texture which controls the blending between the two BRDFs.
- The Phong BRDF's specular reflection is attenuated by a Fresnel falloff function.
- the material Phenomenon collects together the surface shader, which itself may be represented by a shader graph, and the direct and indirect BRDF shaders.
- the BRDF shaders in the material Phenomenon are used to iterate over light samples to compute the result of the lighting functions. Since there are no dependencies on the result of the surface shader in this case, the lighting calculation can be deferred by the renderer to an optimal time.
- the BRDF shader type unifies the representation of BRDFs represented by an analytical model, such as Phong, with acquired BRDFs which are represented by data generated by a measuring device.
- the direct_lighting ( ) and indirect_lighting ( ) functions are not concerned with the implementation of the BRDFs they are given and thus operate equally well with acquired or analytical BRDFs.
- the raw data representing acquired BRDFs may be provided in many different forms and is usually sparse and unstructured. Typically the raw data is given to a standalone utility application where it is preprocessed. This application can organize the data into a regular grid, factor the data, and/or compress the data into a more practical size. By storing the data in a floating point texture, measured BRDFs can be used with hardware shading.
- FIG. 31 is a diagram illustrating a pipeline 460 for shading with acquired BRDFs.
- the standalone utility application processes raw BRDF data and stores structured data in an XML file, optionally with references to a binary file or a texture to hold the actual data.
- the XML file provides a description of the data format and associated model or factorization. This file and its associated data can then be loaded by a BRDF shader at render time and used to define the BRDF.
- the XML file can also be fed back into the utility application for further processing as required by the user. Storing the data in a floating point texture allows a data-based BRDF to operate in hardware. In this case the texture holding the data and any other parameter to describe the data model can be made explicit parameters of the BRDF node.
- BRDFs For software shading with measured BRDFs, there are two options to load the data. The first is to implement a native C++ function that reads the data from a file into an array. This native function can then be called by a Level 2 MetaSL BRDF shader. The other option is to implement the entire BRDF shader as a Level 3 MetaSL shader, which gives the shader complete access to all the features of C++. This shader can read the data file directly, but loses some of the flexibility of Level 2 shaders. As long as the data can be loaded into a Level 2 compatible representation such as an array, the first option of loading the data from a native C++ function is preferable. If the data must be represented by a structure requiring pointers (such as a kd-tree) then the part of the implementation which requires the use of pointers will need to be a Level 3 shader.
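- As an illustrative sketch of the first option (the file format, file name and function name are hypothetical):

    #include <cstdio>
    #include <vector>

    // Native C++ helper: load measured BRDF samples into a flat array that a
    // Level 2 MetaSL BRDF shader can then index.
    bool load_brdf_samples(const char* path, std::vector<float>& samples)
    {
        std::FILE* f = std::fopen(path, "rb");
        if (!f)
            return false;
        float value;
        while (std::fread(&value, sizeof(float), 1, f) == 1)
            samples.push_back(value);      // one float per stored sample component
        std::fclose(f);
        return true;
    }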
- a technique is a variation of a shader implementation. While some shaders may only require a single technique, there are situations where it is desirable to implement multiple techniques.
- the language provides a mechanism to declare multiple techniques within a shader.
- Techniques can be used when it is desirable to execute different code in hardware or software contexts, although often the same shader can be used for both hardware and software. Another use for techniques is to describe alternate versions of the same shader with differing quality levels.
- a technique is declared within the shader class. Each technique has its own version of the main and event methods, but shares parameters and other member variables or methods with other techniques.
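- a minimal sketch of such a declaration (the technique keyword and block syntax here are assumptions, as the exact grammar is not shown in this passage):
      shader my_shader {
          input:
              Color base_color;
          output:
              Color result;

          // Each technique supplies its own main method; input parameters
          // and member variables are shared by all techniques.
          technique high_quality {
              void main() {
                  result = base_color;   // stand-in for a full-quality implementation
              }
          }
          technique fast {
              void main() {
                  result = base_color;   // stand-in for a cheaper approximation
              }
          }
      };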
- the language includes a mechanism to allow material shaders to express their result as a series of components instead of a single color value. This allows the components to be stored to separate image buffers for later compositing. Individual passes can also render a subset of all components and combine those with the remaining components that have been previously rendered.
- a material shader factors its result into components by declaring a separate output for each component.
- the names of the output variables define the names of the layers in the current rendering.
      output:
          Color diffuse_lighting;
          Color specular_lighting;
          Color indirect_lighting;
      };
- This example shows a material shader that specifies three components for diffuse, specular, and indirect lighting.
- a mechanism in the scene definition file will allow the user to specify compositing rules for combining layers into image buffers.
- the user will specify how many image buffers are to be created and for each buffer they would specify an expression which determines what color to place in that buffer when a pixel is rendered.
- the expression can be a function of layer values, such as:
      Image2 = diffuse_lighting + specular_lighting
- the three layers from the shader result structure in the previous example are routed to two image buffers.
- MetaSL provides functionality to annotate shader parameters, techniques, and the shader itself with additional metadata.
- Shader annotations can describe parameter ranges, default values, and tooltip descriptions, among other things.
- Custom annotation types can be used to attach arbitrary data to shaders as well.
- MetaSL includes a comprehensive collection of built-in functions. These include math, geometric, and texture lookup functions, to name a few. In addition, functions that may only be supported by software rendering platforms are also included. Some examples are functions to cast reflection rays or compute the amount of global illumination at a point in space.
- the mental mill™ Phenomenon™ creation tool allows users to construct shaders interactively, without programming. Users work primarily in a shader graph view where Metanodes™ are attached to other shader nodes to build up complex effects. Metanodes are simple shaders that form the building blocks for constructing more complicated Phenomena™.
- a Phenomenon can be a shader, a shader tree, or a set of cooperating shader trees (DAGs), including geometry shaders, resulting in a single parameterized function with a domain of definition and a set of boundary conditions in 3D space, which include those boundary conditions which are created at run-time of the renderer, as well as those boundary conditions which are given by the geometric objects in the scene.
- a Phenomenon is a structure containing one or more shaders or shader DAGs and various miscellaneous "requirement" options that control rendering.
- a Phenomenon looks exactly like a shader with input parameters and outputs, but internally its function is not implemented with a programming language but as a set of shader DAGs that have special access to the Phenomenon interface parameters. Additional shaders or shader DAGs for auxiliary purposes can be enclosed as well.
- Phenomena are attached at a unique root node that serves as an attachment point to the scene. The internal structure is hidden from the outside user, but can be accessed with the mental mill Phenomenon creation tool.
- For users who wish to develop shaders by writing code, mental mill will also provide an integrated development environment (IDE) for creating Metanodes using mental images' shader language, MetaSL™. Users may develop complete monolithic shaders by writing code, or Metanodes which provide specific functionality with the intention that they will be components of Phenomenon shader graphs.
- the mental mill tool also provides an automatically generated graphical user interface (GUI) for Phenomena and Metanodes.
- This GUI allows the user to select values for parameters and interactively preview the result of their settings.
- Prior to being attached to a scene, parameter values must be specified to instantiate the Phenomenon.
- There are two primary types of Phenomena which a user edits: Phenomena whose parameter values have not been specified (referred to as free-valued Phenomena) and Phenomena whose parameters have been fixed, or partially fixed (referred to as fixed Phenomena).
- When a user creates a new Phenomenon by building a shader graph or writing MetaSL code (or a combination of both), they are creating a new type of Phenomenon with free parameter values. The user can then create Phenomena with fixed parameter values based on this new Phenomenon type. Typically many fixed-valued Phenomena will exist based on a particular Phenomenon. If the user changes a Phenomenon, all fixed Phenomena based on it will inherit that change. Changes to a fixed Phenomenon are isolated to that particular Phenomenon.
- the mental mill application UI comprises several different views, with each view containing different sets of controls.
- the view panels are separated by four movable splitter bars. These allow the relative sizes of the views to be adjusted by the user.
- FIG. 32 is a screenshot 470 illustrating the basic simplified layout.
- the primary view panels are labeled, but for simplicity the contents of those views aren't shown. These view panels include the following: toolbox 472; phenomenon graph view 474; code editor view 476; navigation controls 478; preview 480; and parameter view 482.
- the Phenomenon graph view 474 allows the user to create new Phenomena by connecting Metanodes or Phenomenon nodes together to form graphs. An output of a node can be connected to one or more inputs which allow the connected nodes to provide values for the input parameters they are connected to.
- the Phenomenon graph view area 474 can be virtually infinitely large to hold arbitrarily complex shader graphs. The user can navigate around this area using the mouse by holding down the middle mouse button to pan and the right mouse button to zoom (button assignments are remappable).
- the navigation control described in a following section provides more methods to control the Phenomenon view.
- the user can create nodes by dragging them from the toolbox 472, as described below, into the Phenomenon graph view 474. Once in the graph view 474, nodes can be positioned by the user. A layout command will also perform an automatic layout of the graph nodes.
- FIG. 33 shows a graph node 490.
- the graph node 490 (either a Phenomenon node or Metanode) comprises several elements:
- • Preview - The preview window portion of the node allows the user to see the result of the shader node rendered on a surface. A sphere is the default surface, but other geometry can be specified. All nodes can potentially have preview windows, even if they are internal nodes of the shader graph.
- the preview is generated by considering the individual node as a complete shader and rendering sample geometry using that shader. This allows the user to visualize the dataflow through the shader graph since they can see the shader result at each stage of the graph.
- the preview part of the node can also be closed to reduce the size of the node.
- • Outputs - Each node has at least one output, but some nodes may have more than one. The user clicks and drags on an output location to attach the output to another node's input. An output can be attached to more than one input.
- • Inputs - Each node has zero or more input parameters. An input can be attached to the output of another node to allow that shader to control the input parameter's value; otherwise the value is settable by the user. An input can be attached to only one output. When the user hovers the mouse over an input for a short period of time, a tooltip is displayed that provides a short description of the parameter. The text for the tooltip is provided by an attribute associated with the shader.
- Some input or output parameters may be structures of sub-parameters.
- FIG. 34 shows a graph node 500 including sub-parameters.
- Phenomenon nodes themselves contain shader graphs.
- the user can create multiple Phenomenon nodes, each representing a new shader.
- a command will let the user dive into a Phenomenon node which causes the Phenomenon graph view to be replaced with the graph present inside the Phenomenon.
- a Phenomenon can be opened directly in the graph in which it resides. This allows the user to see the nodes outside the Phenomenon, and possibly connected to it, as well as the contents of the Phenomenon itself.
- the user has access to the Phenomenon interface parameters as well as the Phenomenon output and auxiliary roots.
- the inputs to a Phenomenon appear in the upper left corner of the Phenomenon and behave like outputs when viewed from the inside of the Phenomenon.
- the outputs of a Phenomenon appear in the upper right corner and behave as inputs when viewed from inside.
- a shader graph inside a Phenomenon can also contain other Phenomenon nodes. The user can dive into these Phenomenon nodes in the same way, and repeat the process as long as Phenomena are nested.
- FIG. 35 shows a sample graph view 510 when inside a Phenomenon. Although the entire graph is shown in this view, it may be common to have a large enough graph such that the whole graph isn't visible at once, unless the user zooms far out. Notice that in this example all of the nodes except one have their preview window closed.
- a Phenomenon When a Phenomenon is opened inside a graph it can either be maximized, in which case it takes over the entire graph view, or it can be opened in-place. When opened in place, the user is able to see the graph outside the Phenomenon as well as the graph inside as shown in the graph view 520 in FIG. 36.
- FIG. 37 shows a graph view 530 illustrating such a case. If the user drags a node into the top level, they will create a Phenomenon with fixed values based on the Phenomenon type that they chose to drag.
- the top level fixed Phenomenon node has parameters which may be edited, but not attached to other nodes.
- the fixed Phenomenon refers back to the Phenomenon from which it was created and inherits any changes to that Phenomenon.
- a command is available that converts a fixed Phenomenon into a free-valued Phenomenon, which allows the user to modify the Phenomenon without affecting other instances.
- When the user drags a node from the toolbox into a Phenomenon, a fixed-valued Phenomenon or Metanode will be created inside the Phenomenon, depending on the type of the node created. Nodes inside Phenomena can be wired to other nodes or Phenomenon interface parameters. If the node the user dragged into a Phenomenon was itself a Phenomenon node, then a Phenomenon with fixed values is created. Its parameter values can be set, or attached to other nodes, but because it is a fixed Phenomenon that refers back to the original, the user cannot dive into the Phenomenon node and change it. Also, any changes to the original will affect the node. If the user wishes to change the Phenomenon, a command is available that converts the node into a new free-valued Phenomenon which the user can enter and modify.
- the user clicks on the output area of one node and drags to position the mouse cursor over the input of another node. When they release the mouse, a connection line is drawn which represents the shader connection. If the connection is not a valid one, the cursor will indicate this to the user when the mouse is placed over a potential input during the attachment process.
- a type checking system will ensure that shaders can only be attached to inputs that match their output type.
- an attachment can be made between two parameters of different types if an adapter shader is present to handle the conversion. For example a scalar value can be attached to a color input using an adapter shader.
- the adapter shader may convert the scalar to a gray color or perform some other conversion depending on settings selected by the user.
- Adapter shaders are inserted automatically when they are available: when the user attaches parameters that require an adapter, it is inserted as soon as the user completes the attachment.
- mental mill will ensure that the user doesn't inadvertently create cycles in their graphs when making attachments.
- FIG. 38 shows a view 540, illustrating the result of attaching a color output to a scalar input.
- the 'Color to Scalar' adapter shader node is inserted in-between to perform the conversion.
- the conversion type parameter of the adapter node would allow the user to select the method in which the color is converted to a scalar. Some options for this parameter might be:
- Both nodes and connection lines can be selected and deleted. When deleting a node, all connections to that node are also deleted. When deleting a connection, only the connection itself is deleted.
- the user can organize the graph by boxing up parts of the graph into Phenomenon nodes.
- a command is available that takes the currently selected subgraph and converts it to a Phenomenon node.
- the result is a new Phenomenon with interface parameters for each input of selected nodes that are attached to an unselected node.
- the Phenomenon will have an output for each selected node whose output is attached to an unselected node.
- the new Phenomenon will be attached in place of the old subgraph which is moved inside the Phenomenon.
- the result is no change in behavior of the shader graph, but the graph will appear simplified since several nodes will be replaced by a single node.
- the ability of a Phenomenon to encapsulate complex behavior in a single node is an important and powerful feature of mental mill.
- FIG. 39 shows a view 550, in which shaders 2, 3, and 4 are boxed up into a new Phenomenon node.
- the connections to outside nodes are maintained and the results produced by the graph aren't changed, but the graph has become slightly more organized. Since Phenomena can be nested, this type of grouping of sub-graphs into Phenomena can occur with arbitrary levels of depth.
- a preview window displays a sample rendering of the currently selected Phenomenon.
- the image is the same image shown in the preview window of the Phenomenon node, but can be sized larger to show more detail.
- the preview will always show the result of the topmost Phenomenon node. Once the user enters a Phenomenon, the preview will show the result of that Phenomenon regardless of which node is selected. This allows the user to work on the shader graph inside a Phenomenon while still previewing the final result.
- the navigation window provides controls to allow the user to navigate the Phenomenon graph view.
- Buttons will allow the user to zoom to fit the selected portion of the graph within the Phenomenon graph view or fit the entire graph to the view.
- a "bird's eye” control shows a small representation of the entire shader graph with a rectangle that indicates the portion of the graph shown in the Phenomenon graph view.
- the user can click and drag on this control to position the rectangle on the portion of the graph they wish to see.
- FIG. 40 is a view 560 showing the bird's eye view control viewing a sample shader graph.
- the dark gray rectangle 562 indicates the area visible in the Phenomenon graph view.
- FIGS. 41A-D show a series of views 570, 572, 574, and 576, illustrating the progression of node levels of detail.
- The user can dive into a Phenomenon node, which causes the graph view to be replaced with the graph of the Phenomenon they entered. This process can continue as long as Phenomena are nested in other Phenomena.
- the navigation window provides back and forward buttons to allow users to retrace their path as they navigate through nested Phenomenon.
- the toolbox window contains the shader nodes which make up the building blocks that shader graphs are built from.
- the user can click and drag nodes from the toolbox into the Phenomenon graph view to add nodes to the shader graph.
- Nodes appear in the toolbox as an icon and a name.
- the icon will be a sphere rendered using the shader, but in some cases other icons may be used.
- the list of nodes can also be viewed in a condensed list without icons to allow more nodes to fit in the view.
- Some nodes may be Phenomenon nodes, i.e., nodes defined by a shader graph, and other nodes may be Metanodes, i.e., nodes defined by MetaSL code. This distinction often is not important to the user creating shader graphs, since both types of nodes can be used interchangeably.
- Phenomenon nodes will be colored differently from Metanodes, or otherwise visually distinct, allowing the user to differentiate between the two.
- Phenomenon nodes can be edited graphically by editing their shader graphs while Metanodes can only be edited by changing their MetaSL source code.
- Nodes are sorted by category and the user can choose to view a single category or all categories by selecting from a drop down list of categories.
- In addition to Phenomena or Metanodes, the toolbox also contains "actions." Actions are fragments of a complete shader graph that the user can use when building new shader graphs. It is common for patterns of shader nodes and attachments to appear in different shaders. A user can select a portion of a shader graph and use it to create a new action. In the future, if they wish to create the same configuration of nodes, they can simply drag the action from the toolbox into the shader graph to create those nodes.
- FIGS. 42 A-B are views 580 and 582, illustrating the toolbox in the two different view modes.
- the FIG. 42A view 580 shows a thumbnail view mode and the FIG. 42B view 582 shows a list view mode.
- the toolbox is populated with nodes defined in designated shader description files.
- the user can select one or more shader description files to use as the shader library that is accessible through the toolbox. There are commands to add node types to this library and remove nodes.
- the mental mill tool will provide an initial library of Metanodes as well.
- FIG. 43 shows a partial screenshot 590 illustrating a sample of some controls in the parameter view.
- the parameter view displays controls that allow parameters of the selected node to be edited. These controls include sliders, color pickers, check boxes, drop down lists, text edit fields, and file pickers, to name a few.
- A button is present that allows the user to pick an attachment source from inside the parameter view. (It should be noted that, as currently implemented, shader attachments are not allowed when editing a top-level Phenomenon.) This button will cause a popup list to appear that allows the user to pick a new node or choose from other available nodes currently in the graph. A "none" option is provided to remove an attachment. When an attachment is made, the normal control for the parameter is replaced by a label indicating the name of the attached node.
- Some parameters are structures, in which case controls will be created for each element of the structure.
- the structures are displayed by the UI in "collapsed" form and can be opened with a + button (recursively for nested structures). Alternatively, the structures may always be displayed in expanded form.
- parameters will appear in the order in which they are declared in the Phenomenon or Metanode; however, attributes in the node can also control the order and grouping of parameters.
- Hard limits are ranges which the parameter is not allowed to exceed.
- Soft limits specify a range for the parameter that is generally useful, but the parameter is not strictly limited to that range.
- the extents of a slider control will be set to the soft limits of a parameter. A command in the parameter view will allow the user to change the extents of a slider past the soft limits as long as they do not exceed the hard limits.
- Controls in the parameter view will display tooltips when the user hovers the mouse over a control for a short period of time.
- the text displayed in the tooltip is a short description of the parameter that comes from an attribute associated with the shader.
- a button at the top of all controls will display a relatively short description of the function of the shader as a whole. This description is also taken from an attribute associated with the node.
- FIG. 44 shows a partial screenshot 600 illustrating a code editor view according to the present aspect of the invention.
- the code editor view allows users that wish to create shaders by writing code to do so using MetaSL. Users will be able to create monolithic shaders by writing code if need be, but more likely they will create new Metanodes that are intended to be used as a part of shader graphs.
- a command allows the user to create a new Metanode.
- the result is a Metanode at the top level of the graph (not inside a Phenomenon) that represents a new type of Metanode.
- the user can always create an instance of this new Metanode inside a Phenomenon if they wish to.
- When a top level Metanode (outside of a Phenomenon) is selected, the corresponding MetaSL code will appear in the code editor view for the user to edit. After making changes to the code, a command is available to compile the shader.
- the MetaSL compiler and a C++ compiler for the user's native platform are invoked by mental mill to compile the shader. Cross-compilation for other platforms is also possible. Any errors are routed back to mental mill which in turn displays them to the user. Clicking on errors will take the user to the corresponding line in the editor. If the shader compiles successfully then the Metanode will update to show the current set of input parameters and outputs. The preview window will also update to give visual feedback on the look of the shader.
- the parameter view will also display controls for selected top level Metanodes allowing the user to edit the default value for the node's input parameters. This is analogous to editing a free-valued Phenomenon's interface parameters.
- the main menu may be configured in a number of ways, including the following:
- File - The file menu contains the following items: 1. New Phenomenon - This command is used to create a new free-valued Phenomenon. Once created, other Phenomena with fixed parameter values can be created based on this Phenomenon.
- 2. New Metanode - Creates a new Metanode type. A top level Metanode is created; when it is selected, its code is editable in the code editor.
- 3. Open File - Opens a shader description file for editing. The contents of the file will appear in the Phenomenon graph view. This could include free or fixed Phenomena as well as Metanode types. Files that are designated to be part of the toolbox can also be opened and edited. Editing a Phenomenon's shader graph will affect all fixed-valued Phenomena based on the Phenomenon. Therefore opening a toolbox file is much different than dragging a Phenomenon or Metanode from the toolbox into the Phenomenon graph view: dragging a node from the toolbox creates a fixed-valued Phenomenon that can be modified without affecting the original, while opening a description file used by the toolbox allows the original Phenomenon to be modified.
- 4. Save - Saves the currently opened file. If the file has never been saved before, this command prompts the user to pick a file name for the new file.
- Undo - Undoes the last change. This could be a change to the shader graph, a change of the value of a parameter, or a change to the shader code made in the code editor view.
- the mental mill tool will have virtually unlimited levels of undo. The user will be able to set the maximum number of undo levels as a preference; however, this application is not memory intensive and therefore the number of undo levels can be left quite high.
- Paste - Pastes the contents of the clipboard into the shader graph or code view.
- a toolbar contains tool buttons which provide easy access to common commands.
- toolbar commands operate on the shader graph selection.
- Some toolbar commands replicate commands found in the main menu.
- the list of toolbar items may include the following commands: Open file; Save file; New shader graph (Phenomenon); New shader code-based; Undo; Redo; Copy; Paste; Close.
- An important aspect of the creation of shaders is the ability to analyze flaws, determine their cause, and find solutions. In other words, the shader creator must be able to debug their shader. Finding and resolving defects in shaders is necessary regardless of whether the shader is created by attaching Metanodes to form a graph, by writing MetaSL code, or both.
- the mental mill provides functionality for users to debug their shaders using a high level, visual technique. This allows shader creators to visually analyze the states of their shader to quickly isolate the source of problems.
- the present aspect of the invention provides structures for debugging Phenomena.
- the mental mill GUI allows users to construct Phenomena by attaching Metanodes, or other Phenomena, to form a graph.
- Each Metanode has a representation in the UI that includes a preview image describing the result produced by that node. Taken as a whole, this network of images provides an illustration of the process the shader uses to compute its result.
- FIG. 45 shows a partial screenshot 610, illustrating this aspect of the invention.
- a first Metanode 612 might compute the illumination over a surface while another Metanode 614 computes a textured pattern.
- a third node 616 combines the results of the first two to produce its result.
- a shader creator By visually traversing the Metanode network, a shader creator can inspect their shading algorithm and spot the location where a result is not what they expected.
- a node in the network might be a Phenomenon, in which case it contains one or more networks of its own.
- the mental mill allows the user to navigate into Phenomenon nodes and inspect their shader graphs visually using the same technique.
- in some cases, viewing the results of each node in a Phenomenon does not provide enough information for the user to analyze a problem with their shader. For example, all of the inputs to a particular Metanode may appear to have the correct value and yet the result of that Metanode might not appear as the user expects. Also, when authoring a new Metanode by writing MetaSL code, a user may wish to analyze variable values within the Metanode as it computes its result value.
- the mental mill extends the visual debugging paradigm into the MetaSL code behind each Metanode.
- the mental mill MetaSL debugger presents the user with a source code listing containing the MetaSL code for the shader node in question. The user can then step through the shader's instructions and inspect the values of variables as they change throughout the program's execution. However instead of just presenting the user with a single numeric value, the debugger displays multiple values simultaneously as colors mapped over the surface of an object.
- Representing a variable's values as an image rather than a single number has several advantages.
- the user can also use the visual debugging paradigm to quickly locate the input conditions that produce an undesirable result.
- a shader bug may only appear when certain input parameters take on specific values, and such a scenario may only occur on specific parts of the geometry's surface.
- the mental mill debugger allows the user to navigate in 3D space using the mouse to find and orient the view around the location on the surface that is symptomatic of the problem.
- FIG. 46 shows a partial screenshot 620 illustrating a variable list according to this aspect of the invention.
- as the user steps through the code, new variables may come into scope and appear in the list while others will go out of scope and be removed from the list.
- Each variable in the list has a button next to its name that allows the user to open the variable and see additional information about it such as its type and a small preview image displaying its value over a surface.
- when a variable's value changes as the user steps through the code, the preview image for that variable will update to reflect the modification.
- the user can select a variable from the list to display its value in a larger preview window.
- Loops (such as for, foreach, or while loops) and conditional statements (such as if and else) create an interesting circumstance within this debugging model. Because the shader program is operating on multiple data points simultaneously, the clause of an if/else statement may or may not be executed for each data point.
- the MetaSL debugger provides the user several options for viewing variable values inside a conditional statement. At issue is how to handle data points that do not execute the if or else clause containing the selected statement. These optional modes include the following: • Show final shader result — in this mode, data points that do not reach the selected statement are processed by the complete shader and the final result is produced in the output image.
- a loop may execute a different number of times for each data point.
- the user can arbitrarily jump to any statement in the shader program. If they select a statement inside a loop, they must also specify which iteration of the loop they wish to consider. Given that the loop is potentially executed a different number of times for each data point, some data points may have already exited the loop before the desired number of iterations is reached.
- the mental mill debugger allows the user to specify a loop count value giving the desired number of iterations through a loop.
- the loop counter can be set to any value greater than zero. The higher the value, the more data points will likely fail to reach the selected statement; in fact, given a large enough value, no data points will reach it. The same options that control the situation where the selected statement isn't reached for a conditional apply to loops as well.
- displaying a variable's value as a color mapped over the surface of an object obviously works well when the variable is a color type. This method also works reasonably well for scalar and vector values with three or fewer components, but the value must be mapped into the range 0-1 in order to produce a legitimate color.
- the mental mill UI will allow the user to specify a range for scalars and vectors that will be used to map those values to colors. Alternatively, mental mill can automatically compute the range for any given viewpoint by determining the minimum and maximum values of the variable over the surface as seen from that viewpoint.
- Mapping scalars and vectors to colors using user specified ranges can be effective; however, it still requires the user to deduce the value of the variable by looking at the surface colors.
- the mental mill UI provides other techniques for visualizing these data types.
- for vector types that represent direction (not position), one visualization technique is to draw the vector as an arrow positioned on the surface as the user drags the mouse over the surface. This visualization technique is illustrated in a series of partial screenshots 630, 632, and 634 shown in FIGS. 47A-C.
- numeric values of the variable are also displayed in a tooltip window as the user moves the mouse over the surface of the object.
- Matrix type variables pose another challenge to visualize. Matrices with 4 rows and columns can be viewed as a grid of numeric values formatted into standard matrix notation; this visualization technique is illustrated in the partial screenshot 640 shown in FIG. 48. Matrix type variables with three rows and columns can be considered to be a set of three direction vectors that make up the rows of the matrix. A common example of this is the tangent space matrix, which is comprised of the 'u' derivative, the 'v' derivative, and the normal. The three row vectors of the matrix can be drawn as arrows under the mouse pointer. In addition, the individual values of the matrix can be displayed in a standard matrix layout. This visualization technique is illustrated in the partial screenshot 650 shown in FIG. 49.
- Vector type values that don't represent direction can be viewed using a gauge style display.
- the same user specified range that maps these values to colors can be used to set the extent of a gauge, or set of gauges, that appear as the user selects positions on the surface. As the user moves the mouse over the surface, the gauges graphically display the values relative to the user specified range. This visualization technique is illustrated in the partial screenshot 660 shown in FIG. 50.
- the user can opt to view the final shader result on the object's surface instead of the variable value mapped to colors. This allows the user to locate portions of the surface that correspond to features in the final shader result while also monitoring the value of the selected variable.
- Another useful feature the mental mill debugger provides is to lock onto a particular pixel location instead of using the mouse pointer to interactively sweep over the surface.
- the user can choose a pixel location (either by selecting it with the mouse or providing the numeric pixel coordinates) and the value of the variable at that pixel location will be displayed as the user steps through statements in the code.
- the mental mill shader debugger illustrates another benefit of the platform independence of mental mill.
- a single MetaSL Metanode, or an entire Phenomenon can be created and debugged once and yet targeted to multiple platforms.
- the debugger can operate in either hardware or software mode and works independently of any particular rendering algorithm.
- the fact that the shader debugger is tightly integrated into mental mill's Phenomenon creation environment further reduces the create/test cycle and allows the shader creator to continue to work at a high level, insulated from platform dependencies.
- a prototype application has been created as a proof of concept of this shader debugging system.
- the mental mill application is built on a modular library containing all the functionality utilized by the application.
- An API allows third-party tools to integrate the mental mill technology into their applications.
- the mental mill GUI can be customized to match the look and feel of that application.
- the mental mill API will allow GUI customization:
- • Phenomenon graph appearance - Elements of the Phenomenon graph, such as Metanodes and connection lines, will be drawn by invoking a plug-in callback function.
- a default drawing function will be provided, however third parties can also provide their own to customize the appearance of the shader graph to better match their application.
- the callback function will also handle mouse point hit testing since it is possible the elements of a node could be arranged in different locations.
- • Keyboard shortcuts - All keyboard commands will be remappable.
- • Mouse behavior - Mouse behavior, such as the mapping of mouse buttons, will be customizable.
- • Toolbar items - Each toolbar item can be omitted or included.
- Each view window will be designed to operate on its own without dependencies on other windows. This will allow a third party to integrate just the Phenomenon graph view into their application, for example.
- Each view window can be driven by the API so third parties can include any combination of the view windows, replacing some of the view windows with their own user interface.
- the mental mill™ shading language (MetaSL™) provides a powerful interface for implementing custom shading effects and serves as an abstraction from any particular platform where shaders may be executed. Rendering can be executed on either the CPU or on a graphics processing unit (GPU).
- shaders themselves operate on either the CPU or GPU.
- a shader written in a language directly targeted at graphics hardware will likely have to be rewritten as hardware and APIs change.
- such a shader will not operate in a software renderer and will not support features, such as ray tracing, that are currently only available to software renderers.
- MetaSL solves this problem by remaining independent of any target platform.
- a shader can be written once in MetaSL and the MetaSL compiler will automatically translate it to any supported platform.
- shaders written in MetaSL will automatically take advantage of new capabilities without the need to be rewritten.
- This insulation from target platforms also allows a MetaSL shader to automatically operate in software or hardware rendering modes. The same shader can be used to render in software mode on one machine and in hardware mode on others. Another use for this capability is the automatic re-purposing of shaders for a different context. For example, a shader created for use in a visual effect for film could also be used for a video game based on that film.
- A MetaSL shader can be used regardless of whether rendering takes place in software or hardware, so the user isn't required to implement multiple shaders. In some cases, however, the user may wish to customize a shader for the GPU or CPU. The language provides a mechanism to do this while still implementing both techniques in MetaSL.
- Hardware shaders generated by the MetaSL compiler are restricted such that they can be used only for rendering with the next generation of mental ray® and the Reality Server® based on neuray™.
- the mental mill™ Phenomenon™ creation technology provides the capability of generating hardware shaders that can be used either for rendering with mental ray or externally by other applications, such as games.
- MetaSL has been designed to be easy to use, with a focus on programming constructs needed for common shading algorithms rather than the extensive and sometimes esoteric features found in generic programming languages.
- All components of a MetaSL shader are grouped together in a shader class denoted with the shader keyword.
- a single source file can have multiple shader definitions.
- a shader class can also inherit the definition of another shader by stating the name of the parent shader following the name of the child shader and separated by a colon. For example:
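      // a minimal sketch of the syntax just described; the shader names are illustrative
      shader glossy_phong : phong {
          // glossy_phong inherits phong's definition
      };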
- a shader can have zero or more input parameters which the shader uses to determine its result value.
- Parameters can use any of the built-in types, described below, or custom structure types, also described below.
- input parameters may store literal values or be attached to the result of another shader; however, a shader doesn't need to be concerned with this possibility.
- a shader can refer to an input parameter as it would any other variable. Note though that input parameters can only be read and not written to.
- a shader declares its input parameters in a section denoted by the input: label followed by a declaration of each parameter.
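- for example (a minimal sketch; the shader and parameter names are illustrative):
      shader mix_shader {
          input:
              Color color1;
              Color color2;
          output:
              Color result;
      };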
- This example declares a shader with two color input parameters.
- An input parameter can also be a fixed or variable sized array. Since the size of a dynamic array isn't known in advance, array input parameters include a built-in count parameter: the length of an array named my_array can be referred to as my_array.count. An input parameter of fixed length will have the length indicated as part of the declaration, while a dynamic array will not.
- Arrays of arrays are not supported as input parameter types.
- a shader must have at least one output parameter, but may have more than one.
- Output parameters store a shader's result. The purpose of a shader is to compute some function of its input parameters and store the result of that function in its output parameters.
- An output parameter can be of any type, including a user-defined structure type; however, it cannot contain a dynamic array.
- a shader declares its output parameters in a section denoted by the output: label followed by a declaration of each parameter.
      shader my_shader {
          output:
              Color ambient_light;
      };
- Shaders can declare other variables which are not input or output parameters, but instead store other values read by the shader.
- a shader can compute values and store them in member variables. Member variables are designed to hold values that are computed once before rendering begins but do not change thereafter. This avoids redundantly computing a value each time the shader is called.
- Member variables are declared in a section denoted by the member: keyword. For example:
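      // a minimal sketch; names are illustrative, and the noise table echoes
      // the instance init event example mentioned later in this passage
      shader noise_shader {
          input:
              Scalar scale;
          output:
              Color result;
          member:
              Scalar noise_table[256];   // computed once before rendering begins
      };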
- the primary task of a shader is to compute one or more result values. This is implemented in the shader class by a method named main. This method is called when the renderer needs the result of the shader or when the shader is attached to an input parameter of another shader and that shader needs the parameter's value.
- the main method can be implemented as an inline method which may be convenient for simple shaders. In cases where the method is large it may be desirable to implement the method separately from the shader definition. To do this the method name must be prefixed with the shader name and separated by two colons. For example:
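      // a minimal sketch; the return type of main is an assumption
      void my_shader::main() {
          // body defined outside the shader class, prefixed by the
          // shader name and two colons
      }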
- A shader can also declare helper methods, such as:
      Vector3 average_normals(Vector3 norm1, Vector3 norm2);
- The implementation of a helper method can either occur directly in the shader class definition or outside it.
- When implemented outside the class, the method name must be prefixed with the name of the shader followed by two colons.
- MetaSL also allows the definition of functions that are not directly associated with a particular shader class the way methods are. Functions are declared the same way as methods except the declaration appears outside of any shader class. Functions must have a declaration that appears before any reference to the function is made. The function body can be included as part of the declaration or a separate function definition can occur later in the source file, after the function declaration.
- Both methods and functions can be overloaded by defining another method or function with the same name but a different set of calling parameters.
- Overloaded versions of a function or method must have a different number of parameters or parameters that differ in type, or both. It is not sufficient for overloaded functions to only differ by return type.
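- a short illustration of these rules (the function names are hypothetical):
      Scalar average(Scalar a, Scalar b);
      Vector3 average(Vector3 a, Vector3 b);          // legal: parameter types differ
      Scalar average(Scalar a, Scalar b, Scalar c);   // legal: parameter count differs
      // Color average(Scalar a, Scalar b);           // illegal: differs only by return type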
- Calling parameter declarations can be qualified with one of the following qualifiers to allow a function or method to modify a calling parameter and allow that modification to be visible to the caller:
- the parameter value is not copied into the function being called and is undefined if read by the called function.
- the called function can however set the parameter's value and that result will be copied back to the variable passed by the caller.
- Another specially named method in the shader class is a method called event.
- FIG. 51 shows a table 670 setting forth a list of Event-type parameters according to this aspect of the invention.
- For example, an instance init event can be handled so that a noise table array is initialized to contain random values.
- MetaSL includes a comprehensive set of fundamental types. These built-in types cover most of the needs shaders will have, but can also be used to define custom structures, as described below. The following is a list of MetaSL intrinsic types:
- • Scalar - A floating point value of unspecified precision. This type maps to the highest possible precision of the target platform.
- • Vector2 - A vector with 2 scalar components
- • Matrix2x3 - A matrix with 2 rows and 3 columns
- • Matrix3x3 - A matrix with 3 rows and 3 columns
- • Matrix4x2 - A matrix with 4 rows and 2 columns
- • Matrix4x4 - A matrix with 4 rows and 4 columns
- • Color - A color with r, g, b, and a scalar components
- MetaSL provides the capability to define an enumeration as a convenient way to represent a set of named integer constants.
- the enum keyword is used followed by a comma separated list of identifiers enclosed in brackets. For example:
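      // a minimal sketch; the identifier names are illustrative
      enum { RED, GREEN, BLUE };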
- An enumeration can also be named in which case it defines a new type.
- the enumerators can be explicitly assigned values as well. For example:
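      // a minimal sketch of a named enumeration with explicitly assigned values;
      // the type name and values are illustrative
      enum Filter_mode {
          NEAREST = 0,
          BILINEAR = 1,
          BICUBIC = 4
      };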
- All the vector types have components that can be accessed in a similar manner.
- the components can be referred to by appending a period to the variable name followed by one of [x, y, z, w].
- Vectors of length 2 only have x and y components
- vectors of length 3 have x, y, and z
- vectors of length 4 have all 4 components.
- multiple components can be accessed at the same time, the result being another vector of the same or different length.
- the order of the components can be arbitrary and the same component can be used more than once. For example, given a vector V of length 3:
      V.xxyy    // returns a 4-component vector <x, x, y, y>
- a similar syntax can be used as a mask when writing to a vector. The difference is that a component on the left side of the assignment cannot be repeated.
      V.yz = Vector2(0.0, 0.0);
- here the y and z components are set to 0.0 while the x component is left unchanged.
- Vector components can also be accessed using array indices and the array index can be a variable.
- Vectors can be constructed directly from other vectors (or colors) providing the total number of elements is greater than or equal to the number of elements in the vector being constructed. The elements are taken from the constructor parameters in the order they are listed.
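- a short sketch of this rule (the values are illustrative):
      Vector4 v4(1.0, 2.0, 3.0, 4.0);
      Vector2 v2(v4);      // legal: elements taken in order, so v2 = <1.0, 2.0>
      // Vector4 w(v2);    // illegal: v2 supplies only 2 of the 4 required elements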
- the standard math operators (+, -, *, /) apply to all vectors and operate in a component-wise fashion.
- the standard math operators are overloaded to allow a mixture of scalars and vectors of different sizes in expressions; however, in any single expression all vectors must have the same size.
- When scalars are mixed with vectors, the scalar is promoted to a vector of the same size with each element set to the value of the scalar.
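- a plausible form of such an expression (a sketch consistent with the result stated below; the original listing is not shown here):
      Vector2 result = (Vector2(a, b) + Vector2(c, d)) * e;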
- here, the variables a, b, c, d, and e are all previously declared scalar values, and the result variable would have the value <(a+c)*e, (b+d)*e>.
- the standard Boolean logic operators can also be applied to individual Booleans or to vectors of Booleans. When applied to vectors of Booleans, they operate in a component-wise fashion and produce a vector result. These operators are listed in the table 690 set forth in FIG. 53.
- Bitwise operators are not supported. Comparison operators are supported; they operate on vectors in a component-wise fashion and produce a Boolean vector result.
- a list of supported comparison operators is set forth in the table 700 shown in FIG. 54.
- a ternary conditional operator can be applied to vector operands as well in a component-wise fashion.
- the conditional operand must be a single Boolean expression or a vector Boolean expression with the number of components equal to the number of components of the second and third operands.
      Vector2 result = v1 < v2 ?
          Vector2(3.0, 4.0) : Vector2(5.0, 6.0);
- if the comparison v1 < v2 yields <true, false>, the result variable would hold the value <3.0, 6.0>.
- Instances of the Color type are identical in structure to instances of Vector4 although their members are referred to by [r, g, b, a] instead of [x, y, z, w] to refer to the red, green, blue and alpha components, respectively.
- Matrices are defined with row and column sizes ranging from 2 to 4. All matrices are comprised of Scalar type elements. Matrix elements can also be referred to using array notation (row-major order) with the array index selecting a row from the matrix. The resulting row is either a Vector2, Vector3, or Vector4 depending on the size of the original matrix. Since the result of indexing a matrix is a vector and vector types also support the index operator, individual elements of a matrix can be accessed with syntax similar to a multidimensional array.
- matrices also have a constructor which accepts element values in row-major order.
- Matrices can also be constructed by specifying the row vectors, as in the following example:
      Vector3 row0(1.0, 0.0, 0.0);
      Vector3 row1(0.0, 1.0, 0.0);
      Vector3 row2(0.0, 0.0, 1.0);
      Vector3 row3(0.0, 0.0, 0.0);
      Matrix4x3 mat(row0, row1, row2, row3);
- the number of elements of the vectors passed to the matrix constructor must match the number of elements in a row of the matrix being constructed.
- the multiplication operator is supported to multiply two matrices or a matrix and a vector and will perform a linear algebra style multiplication between the two.
- the number of columns of the matrix on the left must equal the number of rows of the matrix on the right.
- the result of multiplying an NxT matrix with a TxM matrix is an NxM matrix.
- a vector can be multiplied on the right or left side provided the number of elements equals the number of rows when the vector is on the left side of the matrix and the number of elements equals the number of columns when the vector is on the right.
      Vector3 v3(0, 0, 0);
      Vector2 v2 = Vector2(v3.x, v3.y);
- or:
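      // a sketch of the alternative presumably intended here, using the
      // component access syntax described earlier in this passage:
      Vector2 v2 = v3.xy;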
- the .xyzw notation can be applied to variables of type Scalar to generate a vector. For example:
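      // a minimal sketch of this behavior; the variable names are illustrative
      Scalar s = 0.5;
      Vector3 v = s.xxx;   // yields the vector <0.5, 0.5, 0.5>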
- MetaSL supports arrays of any of the built-in types or user-defined structures; however, only fixed-length arrays can be declared in shader functions. There are two exceptions. As stated in the input parameter section, shader inputs can be declared as dynamically sized arrays. The other exception is parameters to functions or methods, which can also be arrays of unspecified size. In both cases, by the time the shader is invoked during rendering, the actual size of the array variable will be known.
- the shader code can refer to the size of an array as name.count, where name is the array variable name.
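- a minimal sketch of such a function (the function name, element type, and loop index type are assumptions):
      Scalar sum_elements(Scalar values[]) {
          Scalar sum = 0.0;
          // values.count gives the array length, known at shading time
          for (int i = 0; i < values.count; i++)
              sum += values[i];
          return sum;
      }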
- This simple example loops over an array and sums its elements. The code for this function was written without actual knowledge of the size of the array, but when shading the size will be known: either the array variable will come from an array shader parameter or it will be a fixed-size array declared in a calling function.
- Custom structure types can be defined to extend the set of types used by MetaSL shaders.
- the syntax of a structure type definition looks like the following example:
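      // a minimal sketch, assuming the C-style struct syntax referenced
      // elsewhere in this document; type and member names are illustrative
      struct Ray_info {
          Vector3 origin;
          Vector3 direction;
          Scalar length;
      };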
- Structure member variables can be of any built-in type or another user-defined structure type to produce a nested structure. Structure members can also be arrays.
- User-defined structures and enumerations are the only forms of user-defined types in MetaSL.
- Within a shader's main method, a set of special state variables are implicitly declared and available for the shader code to reference. These variables hold values describing both the current state of the renderer as well as information about the intersection that led to the shader call. For example, normal refers to the interpolated normal at the point of intersection. These variables are only available inside the shader's main method. If a shader wishes to access one of these state variables within a helper method, the variable must be explicitly passed to that method.
- This set of state variables can be viewed as an implicit input to all shaders.
- the state input's data type is a struct containing all the state variables available to shaders.
- a special state shader can be connected to this implicit input.
- the state shader has an input for each state member and outputs the state struct.
- a shader that computes illumination will likely refer to the surface normal at the point of intersection.
- a bump map shader could produce a modified normal which it computes from a combination of the state normal and a perturbation derived from a gray-scale image.
- a state shader can be attached to the illumination shader thus exposing the normal as an input. The output of the bump shader can then be attached to the state shader's normal input.
- the illumination shader will most likely contain a light loop that iterates over scene lights and indirectly causes light shaders to be evaluated.
- the state values passed to the light shaders will be the same state values provided to the surface shader. If the state was modified by a state shader, the modification will also affect the light shaders.
- This system of implicit state input parameters simplifies shader writing.
- a shader can easily refer to a state variable while at the same time maintaining the possibility of attaching another shader to modify that state variable. Since the state itself isn't actually modified, there is no danger of inadvertently affecting another shader.
- FIG. 55 shows a schematic 710 illustrating bump mapping according to the present aspect of the invention. At first this might seem slightly complex; however, the graph implementing bump mapping can be boxed up inside a Phenomenon node and viewed as if it were a single shader.
- the "Perturb normal" shader uses three samples of a gray-scale image to produce a perturbation amount.
- the texture coordinate used to sample the bump map texture is offset in both the U and V directions allowing the slope of the gray-scale image in the U and V directions to be computed.
- Two of these shaders are fed modified texture coordinates from the attached state shaders.
- the state shaders themselves are fed modified texture coordinates produced by "Offset coordinate" shaders.
- the whole schematic is contained in a Phenomenon so not all users have to be concerned with the details.
- the bump map Phenomenon has an input of type
- FIG. 56 shows a diagram 720 illustrating the bump map Phenomenon in use.
- the phong shader implicitly refers to the state's normal when it loops over scene lights. In this case the phong shader's state input is attached to a state shader and the modified normal produced by the bump shader is attached to the state shader's normal input.
- a set of state variables includes the following: position; normal; origin; direction; distance; texture_coord_n; and screen_position. This list can be supplemented to make it more comprehensive. Note that the preprocessor can be used to substitute common short name abbreviations, often single characters, for these longer names.
- state variable parameters are added to nodes.
- a set of special state variables are implicitly declared within a shader's main method, and are available for the shader code to reference. These variables hold values describing both the current state of the renderer as well as information about the intersection that led to the shader call.
- the normal variable refers to the interpolated normal at the point of intersection.
- these variables are only available inside the shader's main method. If a shader wishes to access one of these state variables within a helper method, the variable must be explicitly passed to that method. Alternatively, the state variable itself may be passed to another method, in which case all the state variables are then available to that method.
- This set of state variables can be viewed as implicit inputs to all shaders, which by default are attached to the state itself. However, one or more input parameters can be dynamically added to an instance of a shader that corresponds by name to a state variable. In that case, these inputs override the state value and allow a connection to the result of another shader without modifying the original shader source code. In addition to modifying a state variable with an overriding input parameter, a shader can also directly modify a state variable with an assignment statement in the MetaSL implementation.
- Exposing state variables as inputs allows one shader to refer to state variables while allowing another shader to drive the state values used by that shader. If no input parameter is present for a particular referenced state variable, that variable will continue to refer to the original state value.
- a shader that computes illumination typically refers to the surface normal at the point of intersection.
- a bump map shader may produce a modified normal which it computes from a combination of the state normal and a perturbation derived from a gray-scale image.
- a parameter called "normal" can be added to an instance of the illumination shader thus exposing the normal as an input, just for that particular instance. The output of the bump shader can then be attached to the shader' s normal input.
- the illumination shader contains a light loop that iterates over scene lights and indirectly causes light shaders to be evaluated.
- the state values passed to the light shaders will be the same state values provided to the surface shader. If a state variable was overridden by a parameter or modified within the shader, that modification will also affect the light shaders. It is not possible, however, to make modifications to a state variable that will affect shaders attached to input parameters because all input parameters are evaluated before a shader begins execution. This system of implicit state input parameters simplifies shader writing. A shader can easily refer to a state variable while at the same time maintaining the possibility of attaching another shader to modify that state variable.
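- As a minimal sketch of this mechanism (the shader below is illustrative rather than taken from the specification, and dot and max are assumed standard-library intrinsics):

    Shader simple_illum {
        output:
            Color result;

        void main() {
            // 'normal' and 'direction' are implicit state inputs. If an
            // instance of this shader is given an added input parameter
            // named "normal" (attached, say, to a bump shader's output),
            // that parameter overrides the state value here without any
            // change to this source code.
            float d = max(0.0, dot(normal, -direction));
            result = Color(d, d, d, 1.0);
        }
    }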
- FIG. 57 shows a schematic 730 of a bump map Phenomenon 732.
- the graph implementing bump mapping can be boxed up inside a Phenomenon node and viewed as if it was a single shader.
- the "Perturb normal” shader 734 uses three samples of a gray-scale image to produce a perturbation amount.
- the texture coordinate used to sample the bump map texture is offset in both the U and V directions allowing the slope of the gray-scale image in the U and V directions to be computed.
- An “amount” input scales the amount of the perturbation.
- the “Perturb normal” shader 734 adds this perturbation to the state's normal to produce a new modified normal.
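- A sketch of the computation inside "Perturb normal" follows (the tangent-vector names and the normalize intrinsic are assumptions; h0 stands for the bump-map sample at the shading point, hu and hv for the samples at the U- and V-offset coordinates):

    float du = hu - h0;   // slope of the gray-scale image in U
    float dv = hv - h0;   // slope in V
    Vector3 perturbation = du * u_tangent + dv * v_tangent;
    // Scale by the "amount" input and add to the state normal.
    normal = normalize(normal + amount * perturbation);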
- FIG. 58 shows a diagram of a bump map Phenomenon 740 in use.
- the phong shader 742 implicitly refers to the state's normal when it loops over scene lights. This illustration shows an instance of the phong shader which has an added "normal" input allowing the normal to be attached to the output of the bump map shader.
- FIGS. 59A-B show a table 750 listing the complete set of state variables.
- State vectors are always provided in "internal" space. Internal space is undefined and can vary across different platforms. If a shader can perform its calculations independently of the coordinate system, it can operate with the state vectors directly; otherwise, it will need to transform the state vectors into a known space.
- FIG. 60 shows a table 760 listing the transformation matrices.
- a shader node that refers to a light or volume shader state variable can only be used as a light or volume shader, or in a graph which is itself used as a light or volume shader.
- Light shaders can also call the state transformation functions and pass the value
- FIG. 61 shows a table 770 listing light shader state variables.
- FIG. 62 shows a table 780 listing volume shader state variables.
- the ray that is responsible for the current intersection state is described by the ray_type and ray_shader state variables and the is_ray_dispersal_group() and is_ray_history_group() functions. These variables and functions use the following strings to describe attributes of the ray:
- a ray has exactly one of the following types:
- a ray can be a member of at most one of the following groups: • "specular" - Specular transparency, reflection, or refraction
- a ray can have zero or more of the following history flags: • "lightmap" - Lightmap shader call
- Trace_options holds parameters used by the trace() and occlusion() functions described in the next section.
- a shader can declare an instance of this type once and pass it to multiple trace calls.
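- A sketch of that pattern (the configuration step is a placeholder for the methods listed in FIG. 63, and the trace() signature and return type shown here are assumptions):

    Trace_options opts;
    // ... configure opts once using its methods (see FIG. 63) ...
    Color reflection = trace(reflect_dir, opts);  // the same options object
    Color refraction = trace(refract_dir, opts);  // is reused across calls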
- FIG. 63 sets forth a table 790 listing the methods of the Trace_options class.
- FIGS. 64 and 65 set forth tables 800 and 810 listing the functions that are provided as part of the intersection state and depend on values accessible through the state variable. These functions, like state variables, can only be called within a shader's main method or any method to which the state variable is passed as a parameter.
- MetaSL supports the familiar programming constructs that control the flow of a shader's execution. Specifically these are:
- a Light_iterator class facilitates light iteration and an explicit light list shader input parameter is not required.
- the light iterator implicitly refers to scene lights through the state.
- An instance of this iterator is declared and specified as part of the foreach statement. The syntax looks like the following.
    Light_iterator light;
    foreach (light) {
        // Statements that refer to members of 'light'
    }
- the shader will likely declare one or more variables outside the loop to store the result of the lighting. Each trip through the loop the shader will add the result of the BRDF to these variables.
    Color diffuse_light(0, 0, 0, 0);

- This is a simple MetaSL example that loops over lights and sums the diffuse illumination.
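- Assembled from the fragments above, such a loop might read as follows (the light member names direction and contribution are assumptions, as are the dot and max intrinsics):

    Color diffuse_light(0, 0, 0, 0);
    Light_iterator light;
    foreach (light) {
        // Accumulate Lambertian diffuse from each scene light;
        // 'normal' is the implicitly declared state variable.
        float d = max(0.0, dot(normal, light.direction));
        diffuse_light += d * light.contribution;
    }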
- a powerful feature of MetaSL is its ability to describe shaders independent of a particular target platform. This includes the ability to run MetaSL shaders in software with software based renderers and in hardware when a GPU is available.
- Software rendering is typically more generalized and flexible, allowing a variety of rendering algorithms including ray tracing and global illumination. At the time of this writing, graphics hardware doesn't generally support these features. Furthermore, different graphics hardware has different capabilities and resource limitations.
- the MetaSL compiler will provide feedback to the shader writer indicating the requirements for any particular shader it compiles. This will let the user know if the shader they have written is capable of executing on a particular piece of graphics hardware. When possible, the compiler will specifically indicate which part of the shader caused it to be incompatible with graphics hardware. For example, if the shader called a ray tracing function the compiler may indicate that the presence of the ray tracing call forced the shader to be software compatible only. Alternatively the user may specify a switch that forces the compiler to produce a hardware shader. Calls to APIs that aren't supported by hardware will be removed from the shader automatically.
- MetaSL includes support for the following preprocessor directives: #define; #undef; #if; #ifdef; #ifndef; #else; #elif; #endif. These directives have the same meaning as their equivalents in the C programming language. Macros with arguments are also supported, such as:
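    // A hypothetical function-like macro, for illustration only:
    #define LERP(a, b, t) ((a) + (t) * ((b) - (a)))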
- the #include directive is also supported to add other MetaSL source files to the current file. This allows structure definitions and shader base classes to be shared across files.
- a technique is a variation of a shader implementation. While some shaders may only require a single technique, there are situations where it is desirable to implement multiple techniques.
- the language provides a mechanism to declare multiple techniques within a shader. Often a single shader implementation can map to both software and hardware, so the exact same shader can be used regardless of whether rendering takes place on the CPU or GPU. In some cases, though, such as when the software shader uses features not supported by current graphics hardware, a separate method for the shader needs to be implemented to allow the shader to also operate on the GPU. Different graphics processors have different capabilities and limitations as well, so a shader that works on one GPU might be too complicated to work on another. Techniques also allow multiple versions of a shader to support different classes of hardware.
- shaders will also want to implement various shading methods that are used in different contexts. For example a material shader might implement a shadow technique that provides the amount of transparency at a surface point used when tracing shadow rays. Different techniques can also be used to implement shaders that are faster but lower quality or slower and higher quality.
- the technique declaration appears somewhat like a nested class definition inside the shader class definition.
- the technique declaration provides a name that can be used to refer to the technique.
- the technique must at least define the main method which performs the primary functionality of the shader technique.
- the technique can implement an event method to handle init and exit events.
- the main and event methods are described in previous sections.
- the technique can contain other local helper methods used by the two primary technique methods. For example:

    Shader my_shader {
        input:
            Color c;
        output:
            ...

        technique software {
            void event(Event_type event);
            void main();
        }
        technique hardware {
            void event(Event_type event);
            void main();
        }
    }

    void my_shader::software::event(Event_type event) { }
    void my_shader::software::main() { ... }
- This example shows a shader that implements two separate techniques for hardware and software.
- the main and event methods of the techniques can be implemented inline in the class definition or separately as illustrated in this example.
- a separate rule file accessible by the renderer at render time will inform the renderer how to select different techniques of a shader.
- a rule describes the criteria for selecting techniques based on the values of a predefined set of tokens.
- the token values describe the context in which the shader is to be used. Possible token values are:
- Shadow - This token value is true when shading a surface point in order to determine the transparency while tracing a shadow ray.
- Energy - This token value is true when calling a light shader to determine the energy produced by the light to allow the renderer to sort rays by importance.
- Hardware vendor chipset - A string identifying the chipset of the current hardware, for example nv30 or r420.
- Rules need to specify the name of the technique and an expression based on token values which defines when the particular technique should be selected. Multiple rules can match any particular set of token values.
- the process by which the renderer selects a technique for a shader is the following: first, only rules for techniques present in the shader are considered. Each of these rules is tested in order, and the first matching rule selects the technique. If no rule matches, either an error or warning is produced or a default technique is used for the shader.
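- Purely as an illustration of such a rule file (the syntax, token spellings, and technique names here are assumptions following the discussion below), the six techniques might be covered by:

    technique "beauty"          when !hardware && quality == "high"
    technique "fast"            when !hardware && quality == "low"
    technique "standard"        when !hardware
    technique "fancy_hardware"  when hardware && shader_model >= 3.0
    technique "nvidia_hardware" when hardware && chipset == "nv30"
    technique "basic_hardware"  when hardware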
- the first three rules support software shaders that either have a single technique, called “standard,” to handle all shading quality levels or shaders that have two techniques, “beauty” and “fast,” to separately handle shading two different quality levels. Token values can also be available to shaders at runtime so the shader with a single standard technique could still perform optional calculations depending on the desired quality level.
- the second three rules are an example of different techniques to support different classes of hardware.
- the fancy hardware technique might take advantage of functionality only available within shader model 3.0 or better.
- the nvidia_hardware technique may use features specific to NVIDIA's nv30 chipset.
- basic_hardware could be a catchall technique for handling generic hardware.
- the language includes a mechanism to allow material shaders to express their result as a series of components instead of a single color value. This allows the components to be stored to separate image buffers for later compositing. Individual passes can also render a subset of all components and combine those with the remaining components that have been previously rendered.
- a material shader factors its result into components by declaring a separate output for each component.
- the names of the output variables define the names of layers in the current rendering.
- This example shows a material shader that specifies three components for diffuse, specular, and indirect lighting.
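- A sketch of such a factored declaration (only the outputs are shown; the layer names match the compositing example below, and the body is elided):

    Shader layered_material {
        output:
            Color diffuse_lighting;
            Color specular_lighting;
            Color indirect_lighting;

        void main() {
            // ... compute and assign each lighting component separately ...
        }
    }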
- a mechanism in the scene definition file will allow the user to specify compositing rules for combining layers into image buffers.
- the user will specify how many image buffers are to be created and for each buffer they would specify an expression which determines what color to place in that buffer when a pixel is rendered.
- the expression can be a function of layer values such as:
    Image2 = diffuse_lighting + specular_lighting
- the three layers from the shader result structure in the previous example are routed to two image buffers.
- the standard MetaSL library provides API functions to allow shaders to cast rays.
- Ray tracing can be computationally intensive; to optimize rendering times the renderer provides a mechanism to allow the delay of ray tracing so that multiple shader calls can be grouped together. This improves cache coherency and therefore overall shader performance.
- a shader has the option of calling a function to schedule a ray for ray tracing. This function returns immediately before the ray is actually traced allowing the shader and other shaders to continue processing.
- When the shader schedules a ray trace, it must also provide a factor to help control the compositing of the result from the ray trace with the shader's result.
- a weight factor that describes the significance of the ray to the final image to allow for importance driven sampling.
- the factor could, for example, be the result of a Fresnel falloff function combined with a user-specified reflection amount.
- Ray scheduling implicitly defines a layer.
- the expressions in the layer compositing rules can refer to the factors provided when scheduling a ray. For example:
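- For instance (the factor and layer names here are hypothetical), a rule combining a scheduled reflection ray's layer with the factor provided when the ray was scheduled might read:

    Image1 = diffuse_lighting + specular_lighting + reflection_factor * reflection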
- Shader parameters are often set by users in an application using a graphical user interface (GUI).
- In order for users to interact with shaders in a GUI, an application must know some additional information about the shader parameters and the shader itself.
- Informational attributes can be attached to shaders, parameters, or techniques by annotating the shader source code. Annotations are placed immediately after a shader, parameter, or technique declaration by enclosing a list of attributes in curly braces. An attribute instance is declared in a similar fashion to a class with optional parameters passed to its constructor. The syntax is:
    Color color1 {
        default_value(Color(0, 0, 0, 1));
        display_name("Color 1");
    }
    Color color2 {
        default_value(Color(1, 1, 1, 1));
        display_name("Color 2");
    }
- MetaSL includes a standard library of intrinsic functions.
- the following lists, which may be expanded without departing from the scope of the invention, do not include software-only methods, including lighting functions and ray-tracing functions.
- Texture map functions: texture lookup.
- the texture functions pose an interesting problem for unifying software and hardware shaders.
- Hardware texture functions usually come in several versions that allow projective texturing (the divide by w is built into the texture lookup), explicit filter width, and depth texture lookup with depth compare.
- Cg also has RECT versions of the texture lookup which use pixel coordinates of the texture instead of normalized coordinates.
- This functionality may be provided in both hardware and software. However, it may be desirable to provide a software-only texture lookup with elliptical filtering.
- FIG. 67 shows a diagram illustrating the architecture 830 of the MetaSL compiler according to a further aspect of the invention.
- the MetaSL compiler handles both conversion of MetaSL shaders to target formats and the compilation of shader graphs into single shaders.
- the architecture of the MetaSL compiler is extendable by plug-ins, which allows it to support future language targets as well as different input syntaxes.
- the compiler front end supports pluggable parser modules to support different input languages. While MetaSL is expected to be the primary input language, other languages can be supported through an extension. This will allow for example, an existing code base of shaders to be utilized if a parser is created for the language the shaders were written in.
- the compiler back end is also extensible by plug-in modules.
- the MetaSL compiler handles much of the processing and provides the back-end plug-in with a high-level representation of the shader, which it can use to generate shader code. Support is planned for several languages and platforms currently in use; however, in the future new platforms will almost certainly appear. A major benefit of the mill technology is to insulate shaders from these changes. As new platforms or languages become available, new back-end plug-in modules can be implemented to support these targets.
- the MetaSL compiler currently targets high level languages, however the potential exists to target GPUs directly and generate machine code from the high level representation. This would allow particular hardware to take advantage of unique optimizations available only because the code generator is working from this high level representation directly and bypassing the native compiler.
- the graph compiler processes shader graphs and compiles them into single shaders. These shaders avoid the overhead of shader attachments which makes it possible to build graphs from a greater number of simple nodes rather than a few complex nodes. This makes the internal structure of a shader more accessible to users that are not experienced programmers.
- FIG. 68 shows a diagram illustrating the architecture 840 of the MetaSL compiler according to an alternative aspect of the invention.
- the following example shows a phong shader implemented in MetaSL.
- the phong_specular function called in this example is a built-in function provided by MetaSL. State parameters such as surface normal and ray direction are implicitly passed to the function.
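- The full listing is not reproduced in this text; a minimal sketch of what such a phong shader might look like follows (the input names and the light member names are assumptions; phong_specular is the built-in noted above, with state such as the surface normal and ray direction passed to it implicitly):

    Shader phong {
        input:
            Color ambient;
            Color diffuse;
            Color specular;
            float shininess;
        output:
            Color result;

        void main() {
            Color lighting(0, 0, 0, 0);
            Light_iterator light;
            foreach (light) {
                // Diffuse term from the light direction and state normal,
                // plus the built-in Phong specular term.
                float d = max(0.0, dot(normal, light.direction));
                lighting += (diffuse * d
                          + specular * phong_specular(shininess))
                          * light.contribution;
            }
            result = ambient + lighting;
        }
    }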
- the following example shows a simple checker texture shader implemented in MetaSL.
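- Again the listing itself is not reproduced; a sketch under assumed names (texture_coord_0 following the texture_coord_n state variables listed earlier; the int cast and floor intrinsic are also assumptions) might read:

    Shader checker {
        input:
            Color color1;
            Color color2;
            float scale;
        output:
            Color result;

        void main() {
            // Scale the first set of texture coordinates and pick a color
            // based on the parity of the integer cell the point falls in.
            float u = texture_coord_0.x * scale;
            float v = texture_coord_0.y * scale;
            int cell = int(floor(u)) + int(floor(v));
            if (cell % 2 == 0)
                result = color1;
            else
                result = color2;
        }
    }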
- FIG. 69 shows a screenshot of a debugger UI 850 according to a further aspect of the invention.
- the shader debugger UI 850 comprises a code view panel 852 that displays the MetaSL code for the currently loaded shader, a variable list panel 854 that displays all variables in scope at the selected statement, and a 3D view window 856 that displays the values of the selected variable, or the result of the entire shader if no variable is selected. There is also provided an error display window 858.
- FIG. 70 shows a screenshot of the debugger UI 860 that appears when loading a shader. If there are compile errors, they are listed in error display window 868. Selecting an error in the list highlights the line of code 862 where the error occurred. A shader file is reloaded by pressing the F5 key.
- FIG. 71 shows a screenshot of the debugger UI 870 that appears once a shader is successfully loaded and compiles without errors. Debugging begins by selecting a statement 872 in the code view panel 874. Selected statements are shown by a light green highlight along the line of the selected statement. The variable window displays variables 876 that are in scope for the selected statement.
- a statement is selected by clicking on its line of code; clicking on a variable displays its value in the render window.
- the "normal" variable is selected (which is of type Vector3).
- the vector values are mapped to the respective colors. Lines that are statements have a white background. Lines that are not statements are gray.
- FIG. 72 shows a screenshot of the debugger screen 880, illustrating how conditional statements and loops are handled.
- Conditional statements and loops may not be executed for some data points, and therefore variables cannot be viewed for certain data points when the selected statement is in a conditional clause.
- In FIG. 72, when the selected statement 882 is in a conditional, only pixels 884 where the conditional value evaluated to true display the debug value. The rest of the pixels display the original result.
- FIG. 73 shows a screenshot of a debugger screen 890, illustrating what happens when the selected statement is in a loop. In that case, the values displayed represent the first pass through the loop. A loop counter may be added to allow the user to specify which pass through the loop they want to debug.
- the user can step through statements by using the left and right arrow keys to move forward and backward through the lines of code.
- the up and down arrow keys move through the variable list.
- FIG. 74 shows a screenshot of a debugger screen 900 showing how texture coordinates are handled.
- the user can select and view texture coordinates as shown in this example.
- the prototype provides four sets of texture coordinates, each tiled twice as many times as the previous set. U and V derivative vectors are also supplied.
- FIG. 75 shows a screenshot of a debugger screen 910, in which parallax mapping produces the illusion of depth by deforming texture coordinates.
- In the FIG. 76 screenshot 920, the offset of the texture coordinates can be clearly seen when looking at the texture coordinates in the debugger.
- FIGS. 77 and 78 are screenshots of debugger screens 930 and 940, illustrating other shader examples.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
- Debugging And Monitoring (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US69612005P | 2005-07-01 | 2005-07-01 | |
US70742405P | 2005-08-11 | 2005-08-11 | |
PCT/US2006/025827 WO2007005739A2 (fr) | 2005-07-01 | 2006-06-30 | Systeme et procedes d'ombrage pour infographie |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1907964A2 EP1907964A2 (fr) | 2008-04-09 |
EP1907964A4 true EP1907964A4 (fr) | 2009-08-12 |
Family
ID=37605099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06774417A Withdrawn EP1907964A4 (fr) | 2005-07-01 | 2006-06-30 | Systeme et procedes d'ombrage pour infographie |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1907964A4 (fr) |
JP (1) | JP2009500730A (fr) |
AU (1) | AU2006265815A1 (fr) |
CA (1) | CA2613541A1 (fr) |
WO (1) | WO2007005739A2 (fr) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2917199B1 (fr) * | 2007-06-05 | 2011-08-19 | Thales Sa | Generateur de code source pour une carte graphique |
US8310483B2 (en) * | 2007-11-20 | 2012-11-13 | Dreamworks Animation Llc | Tinting a surface to simulate a visual effect in a computer generated scene |
US8345045B2 (en) * | 2008-03-04 | 2013-01-01 | Microsoft Corporation | Shader-based extensions for a declarative presentation framework |
US8698818B2 (en) * | 2008-05-15 | 2014-04-15 | Microsoft Corporation | Software rasterization optimization |
US8866827B2 (en) | 2008-06-26 | 2014-10-21 | Microsoft Corporation | Bulk-synchronous graphics processing unit programming |
JP5123353B2 (ja) | 2010-05-06 | 2013-01-23 | 株式会社スクウェア・エニックス | リアルタイムシーンを照明し,発見するバーチャルフラッシュライト |
US20130063460A1 (en) * | 2011-09-08 | 2013-03-14 | Microsoft Corporation | Visual shader designer |
US9589382B2 (en) | 2013-03-15 | 2017-03-07 | Dreamworks Animation Llc | Render setup graph |
US9218785B2 (en) * | 2013-03-15 | 2015-12-22 | Dreamworks Animation Llc | Lighting correction filters |
US9514562B2 (en) | 2013-03-15 | 2016-12-06 | Dreamworks Animation Llc | Procedural partitioning of a scene |
US9659398B2 (en) | 2013-03-15 | 2017-05-23 | Dreamworks Animation Llc | Multiple visual representations of lighting effects in a computer animation scene |
US9811936B2 (en) | 2013-03-15 | 2017-11-07 | Dreamworks Animation L.L.C. | Level-based data sharing for digital content production |
DE102014214666A1 (de) * | 2014-07-25 | 2016-01-28 | Bayerische Motoren Werke Aktiengesellschaft | Hardwareunabhängiges Anzeigen von graphischen Effekten |
US10802698B1 (en) * | 2017-02-06 | 2020-10-13 | Lucid Software, Inc. | Diagrams for structured data |
US10740074B2 (en) * | 2018-11-30 | 2020-08-11 | Advanced Micro Devices, Inc. | Conditional construct splitting for latency hiding |
CN109727186B (zh) * | 2018-12-12 | 2023-03-21 | 中国航空工业集团公司西安航空计算技术研究所 | 一种基于SystemC面向GPU片元着色任务调度方法 |
CN111460570B (zh) * | 2020-05-06 | 2023-01-06 | 北方工业大学 | 一种基于bim技术的复杂结构节点辅助施工方法 |
CN113407090A (zh) * | 2021-05-31 | 2021-09-17 | 北京达佳互联信息技术有限公司 | 界面取色方法、装置、电子设备及存储介质 |
CN114359464A (zh) * | 2021-11-30 | 2022-04-15 | 成都鲁易科技有限公司 | 一种基于glsl es的图像渲染方法及装置 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6496190B1 (en) * | 1997-07-02 | 2002-12-17 | Mental Images Gmbh & Co Kg. | System and method for generating and using systems of cooperating and encapsulated shaders and shader DAGs for use in a computer graphics system |
US20050140672A1 (en) * | 2003-02-18 | 2005-06-30 | Jeremy Hubbell | Shader editor and compiler |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7034828B1 (en) * | 2000-08-23 | 2006-04-25 | Nintendo Co., Ltd. | Recirculating shade tree blender for a graphics system |
2006
- 2006-06-30 WO PCT/US2006/025827 patent/WO2007005739A2/fr active Application Filing
- 2006-06-30 EP EP06774417A patent/EP1907964A4/fr not_active Withdrawn
- 2006-06-30 AU AU2006265815A patent/AU2006265815A1/en not_active Abandoned
- 2006-06-30 CA CA002613541A patent/CA2613541A1/fr not_active Abandoned
- 2006-06-30 JP JP2008519658A patent/JP2009500730A/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6496190B1 (en) * | 1997-07-02 | 2002-12-17 | Mental Images Gmbh & Co Kg. | System and method for generating and using systems of cooperating and encapsulated shaders and shader DAGs for use in a computer graphics system |
US20050140672A1 (en) * | 2003-02-18 | 2005-06-30 | Jeremy Hubbell | Shader editor and compiler |
Non-Patent Citations (3)
Title |
---|
"Integrated Development Environment", WIKIPEDIA, 12 May 2005 (2005-05-12), XP002533833, Retrieved from the Internet <URL:http://web.archive.org/web/20050514210204/http://en.wikipedia.org/wiki/Integrated_development_environment> [retrieved on 20090624] * |
TATARCHUK, NATALYA: "RenderMonkey: an effective environment for shader prototyping and development", SIGGRAPH '04: ACM SIGGRAPH 2004 SKETCHES, 2004, ACM, 2 PENN PLAZA, SUITE 701 - NEW YORK USA, XP040052991 * |
TROLLTECH AS: "Qt. Cross-platform C++ GUI Application Framework. Technical Overview", INTERNET CITATION, 1 January 1999 (1999-01-01), pages 25pp, XP007908964, Retrieved from the Internet <URL:http://ftp.icm.edu.pl/packages/qt/pdf/qt-whitepaper-v10.pdf> [retrieved on 20090623] * |
Also Published As
Publication number | Publication date |
---|---|
JP2009500730A (ja) | 2009-01-08 |
WO2007005739A2 (fr) | 2007-01-11 |
AU2006265815A1 (en) | 2007-01-11 |
WO2007005739A3 (fr) | 2008-09-18 |
EP1907964A2 (fr) | 2008-04-09 |
CA2613541A1 (fr) | 2007-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7548238B2 (en) | Computer graphics shader systems and methods | |
EP1907964A2 (fr) | Systeme et procedes d'ombrage pour infographie | |
Peercy et al. | Interactive multi-pass programmable shading | |
US6496190B1 (en) | System and method for generating and using systems of cooperating and encapsulated shaders and shader DAGs for use in a computer graphics system | |
Wyman et al. | Introduction to directx raytracing | |
WO1999052080A1 (fr) | Graphe temporel de scene d'heritage pour la representation du contenu des supports | |
Najork et al. | Obliq-3D: A high-level, fast-turnaround 3D animation system | |
Silva et al. | Node-based shape grammar representation and editing | |
Ragan-Kelley | Practical interactive lighting design for RenderMan scenes | |
Bauchinger | Designing a modern rendering engine | |
Revie | Designing a Data-Driven Renderer | |
Luo | Interactive Ray Tracing Infrastructure | |
BABIČ | Shader graph module for Age | |
Atella | Rendering Hypercomplex Fractals | |
Angel et al. | An interactive introduction to WebGL | |
Granof | Submitted to the Faculty of the | |
Goliaš | Hybrid renderer | |
Seitz | Toward Unified Shader Programming | |
Vojtko | Design and Implementation of a Modular Shader System for Cross-Platform Game Development | |
Meyer-Spradow et al. | Interactive design and debugging of gpu-based volume visualizations | |
Corrie et al. | Data shader language and interface specification | |
Qin | An embedded shading language | |
May | Design and implementation of a shader infrastructure and abstraction layer | |
Samuels | Declarative Computer Graphics using Functional Reactive Programming | |
Dickson et al. | RENDERING LEAVES DYNAMICALLY IN REAL-TIME |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20080123 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK RS |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: HERKEN, ROLFC/O MENTAL IMAGES GMBH Inventor name: LEFRANCOIS, MARTIN-KARLC/O MENTAL IMAGES GMBH Inventor name: DRIEMEYER, THOMASC/O MENTAL IMAGES GMBH Inventor name: BERTEIG, ROLF |
|
DAX | Request for extension of the european patent (deleted) | ||
R17D | Deferred search report published (corrected) |
Effective date: 20080918 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06T 15/00 20060101AFI20080929BHEP |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20090714 |
|
17Q | First examination report despatched |
Effective date: 20090918 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20100330 |