WO2007005739A2 - Computer graphics shader systems and methods - Google Patents
- Publication number
- WO2007005739A2 (PCT/US2006/025827)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- shader
- phenomenon
- shaders
- metanode
- node
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/005—General purpose rendering architectures
Definitions
- MetaSL: the Mental Mill shading language
- MetaSL is designed as a simple yet expressive language specifically for implementing shaders. It is also designed to unify existing shading applications, which previously were focused on specific platforms and contexts (e.g., hardware shading for games, software shading for feature film visual effects), under a single language and management structure.
- shader graphs provide intuitive graphical user interfaces for creating shaders, which are accessible even to users lacking technical expertise to write shader software code.
- Another aspect of the invention relates to a library of APIs to manage shader creation.
- FIG. 1 depicts a computer graphics system that provides for enhanced cooperation among shaders by facilitating generation of packaged and encapsulated shader DAGs, each of which can include one or more shaders, which shader DAGs are generated in a manner so as to ensure that the shaders in the shader DAG can correctly cooperate during rendering, constructed in accordance with the invention.
- FIG. 2 is a functional block diagram of the computer graphics system depicted in FIG. 1.
- FIG. 5 depicts a graphical user interface for one embodiment of the phenomenon editor used in the computer graphics system whose functional block diagram is depicted in FIG. 2.
- FIG. 8 depicts a flowchart of an overall method according to an aspect of the invention.
- FIG. 9 depicts a software layer diagram illustrating the platform independence of the mental mill.
- FIG. 13 depicts a bar graph of performance results with respect to a range of values of a particular input parameter.
- FIGS. 25A-B depict, respectively, a thumbnail view and a list view of a Phenomenon/Metanode library explorer.
- FIG. 26 depicts a view of a code editor and IDE.
- FIGS. 27A-C depict a series of views of a debugger screen, in which numeric values for variables at pixel locations are displayed based upon mouse location.
- FIG. 32 depicts a screenshot of a basic layout of a GUI.
- FIGS. 41A-D depict a series of views illustrating the progression of node levels of detail.
- FIG. 43 depicts a parameter view for displaying controls that allow parameters of a selected node to be edited.
- FIG. 49 depicts a view illustrating a visualization technique for three direction vectors.
- FIG. 50 depicts a view illustrating a visualization technique for viewing vector type values using a gauge style display.
- FIG. 52 depicts a table illustrating the results of a vector construction method.
- FIG. 55 depicts a schematic of a bump map Phenomenon.
- FIG. 57 depicts a schematic of a bump map Phenomenon according to a further aspect of the invention.
- FIG. 58 depicts a diagram of a bump map Phenomenon in use.
- FIG. 60 depicts a table listing transformation matrices.
- FIG. 61 depicts a table listing light shader state variables.
- FIG. 62 depicts a table listing volume shader state variables.
- FIG. 63 depicts a table listing the methods of the Trace_options class.
- FIG. 67 depicts a diagram of the MetaSL compiler.
- FIG. 68 depicts a diagram of the MetaSL compiler according to an alternative aspect of the invention.
- FIG. 69 depicts a screenshot of a debugger screen according to a further aspect of the invention.
- FIG. 70 depicts a screenshot of a debugger screen if there are compile errors when loading a shader.
- FIG. 71 depicts a screenshot of a debugger screen once a shader has been successfully loaded and compiled without errors, at which point debugging can begin by selecting a statement.
- FIG. 74 depicts a screenshot of a debugger screen, in which texture coordinates are viewed.
- FIG. 75 depicts a screenshot of a debugger screen, in which parallax mapping produces the illusion of depth by deforming texture coordinates.
- FIG. 76 depicts a screenshot of a debugger screen, in which the offset of texture coordinates can be seen when looking at texture coordinates in the debugger.
- the present invention provides improvements to the computer graphics entity referred to as a "phenomenon", which was described in commonly owned U.S. Patent No. 6,496,190, incorporated herein by reference. Accordingly, we first discuss, in Section I below, the various aspects of the computer graphics "phenomenon" described in U.S. Patent No. 6,496,190, and then, in Section II, which is subdivided into four subsections, we discuss the present improvements to the phenomenon entity.
- Phenomena selected for use by an operator in connection with a scene may be predefined, or they may be constructed from base shader nodes by an operator using a phenomenon creator.
- the phenomenon creator ensures that phenomena are constructed so that the shaders in the DAG or cooperating DAGs can correctly cooperate during rendering of an image of the scene.
- Prior to being attached to a scene, a phenomenon is instantiated by providing values, or functions which are used to define the values, for each of the phenomenon's parameters, using a phenomenon editor.
- a scene image generator can generate an image of the scene.
- the scene image generator operates in a series of phases, including a pre-processing phase, a rendering phase and a post-processing phase.
- the scene image generator can perform pre-processing operations, such as shadow and photon mapping, multiple inheritance resolution, and the like.
- the scene image generator may perform pre-processing operations if, for example, a phenomenon attached to the scene includes a geometry shader to generate geometry defined thereby for the scene.
- the scene image generator renders the image.
- the scene image generator may perform post-processing operations if for example, a phenomenon attached to the scene includes a shader that defines postprocessing operations, such as depth of field or motion blur calculations which are dependent on velocity and depth information stored in connection with each pixel value in the rendered image.
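The three-phase operation described above (pre-processing, rendering, post-processing) can be sketched in outline as follows; all class, method, and field names here are illustrative assumptions rather than the actual interfaces of the scene image generator:

```python
# Minimal sketch of the three-phase scene image generator described above.
# Everything here (dict-based scene, "shaders" key, the specific operations)
# is an illustrative assumption, not the system's actual API.

class SceneImageGenerator:
    def __init__(self, scene, phenomena):
        self.scene = scene          # scene representation (here: a dict)
        self.phenomena = phenomena  # phenomena attached to the scene

    def generate(self):
        self.preprocess()
        image = self.render()
        return self.postprocess(image)

    def preprocess(self):
        # e.g. shadow/photon mapping, or geometry shaders adding
        # procedurally generated geometry to the scene database
        for p in self.phenomena:
            if "geometry" in p.get("shaders", ()):
                self.scene.setdefault("generated_geometry", []).append(p["name"])

    def render(self):
        # stand-in for actual rendering: produce a small pixel buffer
        # with per-pixel depth, as used by post-processing shaders
        return {"pixels": [0.5] * 4, "depth": [1.0] * 4}

    def postprocess(self, image):
        # e.g. depth-of-field or motion blur driven by an output shader;
        # here just a placeholder brightness adjustment
        for p in self.phenomena:
            if "output" in p.get("shaders", ()):
                image["pixels"] = [min(1.0, v * 1.1) for v in image["pixels"]]
        return image
```

The point of the sketch is only the phase ordering: geometry-producing phenomena act before rendering, output-shader phenomena act on buffers after it.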
- FIG. 1 depicts elements comprising a computer graphics system 10 constructed in accordance with the invention.
- the computer graphics system 10 provides for enhanced cooperation among shaders by facilitating generation of new computer graphic components, referred to herein as "phenomenon" (in the singular) or "phenomena” (in the plural), which are used to define features of a scene for use in rendering.
- the video display device 13 is provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing.
- the processor module 11 generates information for display by the video display device 13 using a so-called “graphical user interface” ("GUI"), in which information for various applications programs is displayed using various "windows.”
- the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.
- a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network.
- the communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems.
- Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message.
- computer graphics system 10 provides for enhanced cooperation among shaders by facilitating generation of phenomena comprising packaged and encapsulated shader DAGs or cooperating shader DAGs, with each shader DAG comprising at least one shader, which define features of a three-dimensional scene.
- Phenomena can be used to define diverse types of features of a scene, including color and textural features of surfaces of objects in the scene, characteristics of volumes and geometries in the scene, features of light sources illuminating the scene, features of simulated cameras or other image recording devices which will be simulated during rendering, and numerous other features which are useful in rendering as will be apparent from the following description.
- the phenomena are constructed so as to ensure that the shaders in the DAG or cooperating DAGs can correctly cooperate during rendering of an image of the scene.
- FIG. 2 depicts a functional block diagram of the computer graphics system 10 used in one embodiment of the invention.
- the computer graphics system 10 includes two general portions, including a scene structure generation portion 20 and a scene image generation portion 21.
- the scene structure generation portion 20 is used by an artist, draftsman or the like (generally, an "operator") during a scene entity generation phase to generate a representation of various elements which will be used by the scene image generation portion 21 in rendering an image of the scene, which may include, for example, the objects in the scene and their surface characteristics, the structure and characteristics of the light source or sources illuminating the scene, and the structure and characteristics of a particular device, such as a camera, which will be simulated in generating the image when the image is rendered.
- the representation generated by the scene structure generation portion 20 is in the form of a mathematical representation, which is stored in the scene object database 22.
- the mathematical representation is evaluated by the image rendering portion 21 for display to the operator.
- the scene structure generation portion 20 and the scene image generation portion 21 may reside on and form part of the same computer, in which case the scene object database 22 may also reside on that same computer or alternatively on a server for which the computer 20 is a client.
- the portions 20 and 21 may reside on and form parts of different computers, in which case the scene object database 22 may reside on either computer or a server for both computers.
- the scene structure generation portion 20 is used by the operator to generate a mathematical representation defining the geometric structures of the objects in the scene, the locations and geometric characteristics of light sources illuminating the scene, and the locations and geometric and optical characteristics of the cameras to be simulated in generating the images that are to be rendered.
- the mathematical representation preferably defines the three spatial dimensions, and thus identifies the locations of the object in the scene and the features of the objects.
- the objects may be defined in terms of their one-, two- or three-dimensional features, including straight or curved lines embedded in a three-dimensional space, two- dimensional surfaces embedded in a three-dimensional space, one or more bounded and/or closed three-dimensional surfaces, or any combination thereof.
- the mathematical representations may also define a temporal dimension, which may be particularly useful in connection with computer animation, in which the objects and their respective features are considered to move as a function of time.
- the mathematical representation further defines the one or more light sources which illuminate the scene and a camera.
- the mathematical representation of a light source particularly defines the location and/or the direction of the light source relative to the scene and the structural characteristics of the light source, including whether the light source is a point source, a straight or curved line, a flat or curved surface or the like.
- the mathematical representation of the camera particularly defines the conventional camera parameters, including the lens or lenses, focal length, orientation of the image plane, and so forth.
- the scene image generation portion 21 is used by an operator during a rendering phase to generate an image of the scene on, for example, the video display unit 13 (FIG. 1).
- the phenomenon creator 24 provides a mechanism whereby the operator, using the operator interface 27 and base shader nodes from the base shader node database 32, can generate phenomena which can be used in connection with the scene or otherwise (as will be described below). After a phenomenon is generated by the phenomenon creator 24, it (that is, the phenomenon) will be stored in the phenomenon database 25. After a phenomenon has been stored in the phenomenon database 25, an instance of the phenomenon can be created by the phenomenon editor 26. In that operation, the operator will use the phenomenon editor 26 to provide values for the phenomenon's various parameters (if any).
- the phenomenon editor 26 allows the operator, through the operator interface 27, to establish, adjust or modify the particular feature.
- the values for the parameters may be either fixed, or they may vary according to a function of a variable (illustratively, time).
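A minimal sketch of this fixed-or-functional parameter scheme, with illustrative names not taken from this document:

```python
# Sketch: a phenomenon parameter may hold either a fixed value or a
# function of a variable such as time. make_parameter() normalizes both
# cases into one evaluator; the name is an illustrative assumption.

def make_parameter(value):
    """Wrap a fixed value or a callable into a uniform evaluator of t."""
    if callable(value):
        return value
    return lambda t: value

glossiness = make_parameter(0.8)                # fixed value
turbulence = make_parameter(lambda t: 0.1 * t)  # varies with time
```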
- the operator, using the scene assembler 34, can attach phenomenon instances generated using the phenomenon editor 26 to elements of the scene as generated by the entity geometrical representation generator 23.
- the scene image generation portion 21 includes several components including an image generator 30 and an operator interface 31. If the scene image generation portion 21 forms part of the same computer as the scene structure generation portion 20, the operator interface 31 may, but need not, comprise the same components as operator interface 27. On the other hand, if the scene image generation portion 21 forms part of a different computer from the one of which the scene structure generation portion 20 forms a part, the operator interface 31 will generally comprise different components from operator interface 27, although the components of the two operator interfaces 31 and 27 may be similar.
- the image generator 30, under control of the operator interface 31, retrieves the representation of the scene to be rendered from the scene representation database 22 and generates a rendered image for display on the video display unit of the operator interface 31.
- a phenomenon provides information that, in addition to the mathematical representation generated by the entity geometrical representation generator 23, is used to complete the definition of the scene which will be used in rendering, including, but not limited to, characteristics of the colors, textures, and closed volumes, and so forth, of the surfaces of the geometrical entities defined by the scene structure generation portion 20.
- a phenomenon comprises one or more nodes interconnected in the form of a directed acyclic graph ("DAG") or a plurality of cooperating DAGs. One of the nodes is a primary root node which is used to attach the phenomenon to an entity in a scene, or, more specifically, to a mathematical representation of the entity.
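A minimal sketch of such a phenomenon structure, assuming a hypothetical node representation (the actual internal representation is not specified here):

```python
# Sketch of a phenomenon as a DAG of shader nodes reachable from a primary
# root node, per the description above. Class and field names are
# illustrative assumptions.

class ShaderNode:
    def __init__(self, name, inputs=None):
        self.name = name
        # input name -> upstream ShaderNode, or a constant value
        self.inputs = inputs or {}

class Phenomenon:
    def __init__(self, primary_root):
        # the primary root attaches the phenomenon to a scene entity
        self.primary_root = primary_root

    def nodes(self):
        """Walk the DAG from the primary root, visiting each node once."""
        seen, stack = [], [self.primary_root]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.append(node)
            stack.extend(v for v in node.inputs.values()
                         if isinstance(v, ShaderNode))
        return seen
```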
- Geometry shaders essentially comprise pre-defined static or procedural mathematical representations of entities in three-dimensional space, similar to representations that are generated by the entity geometrical representation generator 23 in connection with entities in the scene, except that they can be provided at pre-processing time to, for example, define respective regions in which other shaders used in the respective phenomenon are to be delimited.
- a geometry shader essentially has access to the scene construction elements of the entity geometrical representation generator 23 so that it can alter the scene representation as stored in the scene object database to, for example, modify or create new geometric elements of the scene in either a static or a procedural manner.
- a Phenomenon that consists entirely of a geometry shader DAG or of a set of cooperating geometry shader DAGs can be used to represent objects in a scene in a procedural manner. This is in contrast to typical modeling, which is accomplished in a modeling system by a human operator by performing a sequence of modeling operations to obtain the desired representation of an object in the computer.
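As an illustration of what a procedural representation means in practice, a hypothetical geometry Phenomenon might generate an object entirely from parameters; the fence example and all names below are invented for illustration only:

```python
# Sketch of purely procedural geometry: rather than a hand-modeled mesh,
# the object is produced by a parameterized function (an illustrative
# stand-in for a geometry shader DAG).

def fence_geometry(num_posts, spacing, height):
    """Emit vertical fence-post line segments along the x axis.

    Each segment is a (bottom, top) pair of (x, y, z) points. Changing the
    parameters regenerates the whole object, which is the essence of the
    procedural approach described above."""
    return [((i * spacing, 0.0, 0.0), (i * spacing, height, 0.0))
            for i in range(num_posts)]
```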
- a geometry phenomenon represents an encapsulated and automated, parameterized abstract modeling operation.
- Photon shaders, which can be used to control the paths of photons in the scene and the characteristics of interaction of photons with surfaces of objects in the scene, such as absorption, reflection and the like. Photon shaders facilitate the physically correct simulation of global illumination and caustics in connection with rendering.
- a contour contrast shader is used to compare two sets of the sampling information which is collected by use of a contour store shader.
- a contour generation shader is used to generate contour dot information for storage in a buffer, which is then used by an output shader (described below) in generating contour lines.
- Output shaders which are used to process information in buffers generated by the scene image generator 30 during rendering.
- An output shader can access pixel information generated during rendering to, in one embodiment, perform compositing operations, complex convolutions, and contour line drawing from contour dot information generated by contour generation shaders as described above. Three-dimensional volume shaders are used to control how light, other visible rays and the like pass through part or all of the empty three-dimensional space in a scene.
- a three-dimensional volume shader may be used for any of a number of types of volume effects, including, for example, fog, and procedural effects such as smoke, flames, fur, and particle clouds.
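As a concrete illustration of one such volume effect, standard exponential fog attenuates a surface color over the distance a ray travels through the volume; the formula below is a conventional one, not taken from this document:

```python
# Sketch of a simple fog volume effect: exponential attenuation of the
# surface color toward the fog color with ray distance. This is the
# textbook exp-fog model, used here purely as an illustration.
import math

def apply_fog(surface_color, fog_color, density, distance):
    t = math.exp(-density * distance)   # transmittance through the volume
    return tuple(t * s + (1.0 - t) * f
                 for s, f in zip(surface_color, fog_color))
```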
- a phenomenon may include several DAGs, including a material shader DAG, an output shader DAG and instructions for generating a label frame buffer.
- the material shader DAG includes at least one material shader for generating a color value for a material and also stores label information about the objects which are encountered during processing of the material shader DAG in the label frame buffer which is established in connection with processing of the label frame buffer generation instructions.
- the output shader DAG includes at least one output shader which retrieves the label information from the label frame buffer to facilitate performing object-specific compositing operations.
- the phenomenon may also have instructions for controlling operating modes of the scene image generator 30 such that both DAGs can function and cooperate. For example, such instructions may control the minimum sample density required for the two DAGs to be evaluated.
- a material phenomenon may represent a material that is simulated by both a photon shader DAG, which includes at least one photon shader, and a material shader DAG, which includes at least one material shader.
- the photon shader DAG will be evaluated during caustics and global illumination pre-processing, and the material shader DAG will be evaluated later during rendering of an image.
- information representing simulated photons will be stored in such a way that it can be used during later processing of the material shader DAG to add lighting contributions from the caustic or global illumination pre-processing stage.
- the photon shader DAG stores the simulated photon information in a photon map, which is used by the photon shader DAG to communicate the simulated photon information to the material shader DAG.
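The photon-map hand-off can be sketched as follows; the grid-based lookup below is an illustrative simplification of a real photon map, and all names are assumptions:

```python
# Sketch of the hand-off described above: the photon shader DAG deposits
# simulated photons during pre-processing, and the material shader DAG
# later queries them to add indirect-lighting contributions.

class PhotonMap:
    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = {}  # grid cell -> list of deposited photon energies

    def _key(self, position):
        return tuple(int(c // self.cell) for c in position)

    def store(self, position, energy):
        """Called by photon shaders during the pre-processing phase."""
        self.cells.setdefault(self._key(position), []).append(energy)

    def irradiance(self, position):
        """Called by material shaders during rendering: total energy of
        photons deposited near this position."""
        return sum(self.cells.get(self._key(position), []), 0.0)
```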
- a phenomenon may include a contour shader DAG, which includes at least one shader of the contour shader type, and an output shader DAG, which includes at least one output shader.
- the contour shader DAG is used to determine how to draw contour lines by storing "dots" of a selected color, transparency, width and other attributes.
- the output shader DAG is used to collect all cells created during rendering and, when the rendering is completed, join them into contour lines.
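The joining step can be sketched as a simple chaining of nearby dots into polylines; the greedy method below is an illustrative stand-in for the actual algorithm, which is not specified here:

```python
# Sketch of the final contour step described above: "dots" emitted during
# rendering are joined into contour polylines once rendering completes.

def join_dots(dots, max_gap=1.5):
    """Greedily chain 2D dots into polylines; a dot farther than max_gap
    from the end of the current line starts a new line."""
    lines, current = [], []
    for dot in dots:
        if current and ((dot[0] - current[-1][0]) ** 2 +
                        (dot[1] - current[-1][1]) ** 2) ** 0.5 > max_gap:
            lines.append(current)
            current = []
        current.append(dot)
    if current:
        lines.append(current)
    return lines
```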
- Illustrative types of optional root nodes include: (a) A lens root node, which can be used to insert lens shaders or lens shader DAGs into a camera for use during rendering; (b) A volume root node, which can be used to insert global volume (or atmosphere) shaders or shader DAGs into a camera for use during rendering; (c) An environment root node, which can be used to insert global environment shaders or shader DAGs into a camera for use during rendering; (d) A geometry root node, which can be used to specify geometry shaders or shader DAGs that may be pre-processed during rendering to enable procedural supporting geometry or other elements of a scene to be added to the scene database; (e) A contour store root node, which can be used to insert a contour store shader into a scene options data structure; (f) An output root node, which can be used in connection with post-processing after a rendering phase; and (g) A contour contrast root node, which can be used to insert a contour contrast shader into the scene options data structure.
- the phenomenon graph canvas 44 provides an area in which a phenomenon can be created or modified by an operator. If the operator wishes to modify an existing phenomenon, he or she can, using a "drag and drop" methodology using a pointing device such as a mouse, select and drag the icon 45 from the shelf frame 41 representing the phenomenon to the phenomenon graph canvas 44. After the selected icon 45 associated with the phenomenon to be modified has been dragged to the phenomenon graph canvas 44, the operator can enable the icon 45 to be expanded to show one or more nodes, interconnected by arrows, representing the graph defining the phenomenon.
- a graph 50 representing an illustrative phenomenon is depicted in FIG. 3. As shown in FIG.
- the operator can interconnect it to a node in the existing graph by clicking on both nodes in an appropriate manner so as to enable an arrow to be displayed therebetween.
- Nodes in the graph can also be disconnected from other nodes by deleting arrows extending between the respective nodes, and deleted from the graph by appropriate actuation of a delete pushbutton in the controls frame 43.
- the operator wishes to create a new phenomenon, he or she can, using the corresponding "drag and drop" methodology, select and drag icons 46 from the supported graph nodes frames 42 representing the entities to be added to the graph to the phenomenon graph canvas 44, thereby to establish a new node for the graph to be created.
- the phenomenon creator 24 will examine the phenomenon graph to verify that it is consistent and can be processed during rendering.
- the phenomenon creator 24 will ensure that the interconnections between graph nodes do not form a cycle, thereby ensuring that the graph or graphs associated with the phenomenon form directed acyclic graphs, and that interconnections between graph nodes represent respective input and output data types which are consistent. It will be appreciated that, if the phenomenon creator 24 determines that the graph nodes do form a cycle, the phenomenon will essentially form an endless loop that generally cannot be properly processed. These operations will ensure that the phenomenon so created or modified can be processed by the scene image generation portion when an image of a scene to which the phenomenon is attached is being rendered.
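The two consistency checks described above, acyclicity and input/output type agreement, can be sketched as follows; the graph and type representations are assumptions for illustration:

```python
# Sketch of the phenomenon-graph validation described above: connections
# must not form a cycle (the graph must remain a DAG), and each connection
# must join an output type to a matching input type.

def validate_graph(nodes, edges, types):
    """nodes: list of node names; edges: (src, dst) pairs meaning src
    feeds dst; types: (src, dst) -> (out_type, in_type). Returns a list
    of problem descriptions (empty if the graph is valid)."""
    problems = []
    # type-consistency check on every connection
    for edge, (out_t, in_t) in types.items():
        if out_t != in_t:
            problems.append(f"type mismatch on {edge}: {out_t} -> {in_t}")
    # cycle check via depth-first search with three node colors
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def visit(n):
        color[n] = GRAY            # on the current DFS path
        for m in adj[n]:
            if color[m] == GRAY:   # back edge: a cycle exists
                problems.append(f"cycle through {m}")
            elif color[m] == WHITE:
                visit(m)
        color[n] = BLACK           # fully explored

    for n in nodes:
        if color[n] == WHITE:
            visit(n)
    return problems
```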
- the dialog node 65 represents a dialog box that is displayed by the phenomenon editor 26 to allow the operator to provide input information for use with the phenomenon when the image is rendered.
- the material shader node 62 represented thereby is shown as receiving inputs therefor from the dialog node 65 (in the case of the glossiness input), from the texture shader node 63 (in the case of the ambient and diffuse color inputs), from a hard-wired constant (in the case of the transparency input) and from a lights list (in the case of the lights input).
- the hard-wired constant value, indicated as "0.0," provided to the transparency input indicates that the material is opaque.
- the "glossiness” input is connected to a "glossiness” output provided by the dialog node 65, and, when the material shader represented by node 62 is processed during rendering, it will obtain the glossiness input value therefor from the dialog box represented by the dialog node, as will be described below in connection with FIGS. 6 A and 6B.
- the ambient and diffuse inputs of the material shader represented by node 62 are provided by the output of the texture shader, as indicated by the connection of the "result" output of node 63 to the respective inputs of node 62.
- When the wood material phenomenon 60 is processed during the rendering operation, and, in particular, when the material shader represented by node 62 is processed, it will enable the texture shader represented by node 63 to be processed to provide the ambient and diffuse color input values.
- the texture shader has three inputs, including ambient and diffuse color inputs, represented by "color 1" and “color2" inputs shown on node 63, and a "blend” input.
- the values for the ambient and diffuse color inputs are provided by the operator using the dialog box represented by the dialog node 65, as represented by the connections from the respective diffuse and ambient color outputs from the dialog node 65 to the texture shader node 63 in FIG. 4.
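The texture shader's mixing of its two color inputs by the "blend" input can be illustrated with a conventional linear blend (the shader's exact formula is not given in this document):

```python
# Sketch: mix "color1" and "color2" by a blend factor in [0, 1], as a
# stand-in for the texture shader's result computation described above.

def blend_colors(color1, color2, blend):
    return tuple((1.0 - blend) * a + blend * b
                 for a, b in zip(color1, color2))
```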
- FIG. 5 depicts a phenomenon editor window 70 which the phenomenon editor 26 enables to be displayed by the operator interface 27 for use by an operator in one embodiment of the invention to establish and adjust input values for phenomena which have been attached to a scene.
- the operator can use the phenomenon editor window to establish values for phenomena which are provided by dialog boxes associated with dialog nodes, such as dialog node 65 (FIG. 4), established for the respective phenomena during the creation or modification as described above in connection with FIG. 3.
- the phenomenon editor window 70 includes a plurality of frames, including a shelf frame 71 and a controls frame 72, and also includes a phenomenon dialog window 73 and a phenomenon preview window 74.
- the controls frame 72 contains icons (not shown) which represent buttons which the operator can use to perform control operations, including, for example, deleting or duplicating icons in the shelf frame 71, starting an on-line help system, exiting the phenomenon editor 26, and so forth.
- the operator can select a phenomenon whose parameter values are to be established by suitable manipulation of a pointing device such as a mouse in order to create an instance of a phenomenon. (An instance of a phenomenon corresponds to a phenomenon whose parameter values have been fixed.)
- the phenomenon editor 26 will enable the operator interface 27 to display the dialog box associated with its dialog node in the phenomenon dialog window.
- An illustrative dialog box, used in connection with one embodiment of the wood material phenomenon 60 described above in connection with FIG. 4, will be described below in connection with FIGS. 6A and 6B.
- the phenomenon editor 26 effectively processes the phenomenon and displays the resulting output in the phenomenon preview window 74.
- the operator can use the phenomenon editor window 70 to view the result of the values which he or she establishes using the inputs available through the dialog box displayed in the phenomenon dialog window.
- FIGS. 6A and 6B graphically depict details of a dialog node (in the case of FIG. 6A) and an illustrative associated dialog box (in the case of FIG. 6B), which are used in connection with the wood material phenomenon 60 depicted in FIG. 4.
- the dialog node which is identified by reference numeral 65 in FIG. 4, is defined and created by the operator using the phenomenon creator 24 during the process of creating or modifying the particular phenomenon with which it is associated.
- the dialog node 65 includes a plurality of tiles, namely, an ambient color tile 90, a diffuse color tile 91, a turbulence tile 92 and a glossiness tile 93.
- the respective tiles 90 through 93 are associated with the respective ambient, diffuse, turbulence and glossiness output values provided by the dialog node 65 as described above in connection with FIG. 4.
- the ambient and diffuse color tiles are associated with color values, which can be specified using the conventional red/green/blue/alpha, or "RGBA," color/transparency specification, and, thus, each of the color tiles will actually be associated with multiple input values, one for each of the red, green and blue colors in the color representation and one for transparency (alpha).
- each of the turbulence and glossiness tiles 92 and 93 is associated with a scalar value.
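The mapping from a color tile's four slider values to a single RGBA parameter can be sketched as follows (the function name and clamping behavior are illustrative assumptions):

```python
# Sketch: each color tile carries four values (red, green, blue, alpha),
# so a set of dialog sliders maps to one RGBA parameter value. Values are
# clamped to [0, 1] here as an assumed convention.

def rgba_from_sliders(red, green, blue, alpha=1.0):
    clamp = lambda v: max(0.0, min(1.0, v))
    return (clamp(red), clamp(green), clamp(blue), clamp(alpha))
```

A scalar tile, such as turbulence or glossiness, would map a single slider to a single value in the same way.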
- FIG. 6B depicts an illustrative dialog box 100 which is associated with the dialog node 65 (FIG. 6A), as displayed by the operator interface 27 under control of the phenomenon editor 26.
- the ambient and diffuse color tiles 90 and 91 of the dialog node 65 are each displayed by the operator interface 27 as respective sets of sliders, generally identified by reference numerals 101 and 102, respectively, each of which is associated with one of the colors in the color representation to be used during processing of the associated phenomenon during rendering.
- the turbulence and glossiness tiles 92 and 93 of the dialog node 65 are each displayed by the operator interface as individual sliders 103 and 104.
- the sliders in the respective sets of sliders 101 and 102 may be manipulated by the operator, using a pointing device such as a mouse, in a conventional manner thereby to enable the phenomenon editor 26 to adjust the respective combinations of colors for the respective ambient and diffuse color values provided by the dialog node 65 to the shaders associated with the other nodes of the phenomenon 60 (FIG. 4).
- the sliders 103 and 104 associated with the turbulence and glossiness inputs may be manipulated by the operator thereby to enable the phenomenon editor 26 to adjust the respective turbulence and glossiness values provided by the dialog node 65 to the shaders associated with the other nodes of the wood material phenomenon 60.
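The dialog node's tile-and-slider arrangement described above can be sketched as follows. This is a hypothetical illustration, not the actual phenomenon editor API: the `DialogNode` class, parameter names, and `set_slider` method are assumptions made for the example; color tiles carry four channel values (RGBA) while turbulence and glossiness are single scalars.

```python
# Hypothetical sketch of a dialog node exposing the four tiles described
# above: two RGBA color tiles and two scalar tiles, with slider updates
# feeding the output values provided to the other nodes of the phenomenon.

class DialogNode:
    def __init__(self):
        # Each color tile is backed by four channel values (R, G, B, A).
        self.params = {
            "ambient":    [0.1, 0.1, 0.1, 1.0],
            "diffuse":    [0.6, 0.4, 0.2, 1.0],
            "turbulence": 0.5,   # scalar tile
            "glossiness": 0.8,   # scalar tile
        }

    def set_slider(self, tile, value, channel=None):
        """Mimic moving a slider: scalar tiles take a single value,
        color tiles take a per-channel value."""
        if channel is None:
            self.params[tile] = float(value)
        else:
            self.params[tile][channel] = float(value)

    def outputs(self):
        # The values provided to the shaders attached to the other nodes.
        return dict(self.params)
```

In this sketch, manipulating a slider simply updates one stored value, and the node's outputs are re-read by the attached shaders on the next render.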
- the scene image generator 30 operates in a series of phases, including a pre-processing phase, a rendering phase and a post-processing phase.
- the scene image generator 30 will perform the rendering phase, in which it performs rendering operations in connection with the pre-processed scene representation to generate a rendered image (step 104).
- the scene image generator 30 will identify the phenomena stored in the scene object database 22 which are to be attached to the various components of the scene, as generated by the entity geometric representation generator 23 and attach all primary and optional root nodes of the respective phenomena to the scene components appropriate to the type of the root node. Thereafter, the scene image generator 30 will render the image.
- the scene image generator 30 will generate information as necessary which may be used in post-processing operations during the post-processing phase.
- the scene image generator 30 will perform the post-processing phase.
- the scene image generator 30 will determine whether operations performed in step 100 indicated that post-processing operations are required in connection with phenomena attached to the scene (step 105). If the scene image generator 30 makes a positive determination in step 105, it will perform the post-processing operations required in connection with the phenomena attached to the scene (step 106). In addition, the scene image generator 30 may also perform other post-processing operations in step 106 that are not related to phenomena, such as manipulating pixel values for color correction, or filtering to provide various optical effects.
- the scene image generator 30 may perform post-processing operations if, for example, a phenomenon attached to the scene includes an output shader that defines post-processing operations, such as depth of field or motion blur calculations that can be, in one embodiment, entirely done in an output shader, for example, dependent on the velocity and depth information stored in connection with each pixel value, in connection with the rendered image.
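A post-processing output shader of the kind described above can be sketched in miniature. This is an illustrative toy, not the actual output-shader interface: it operates on a one-dimensional grayscale image, and the sampling scheme (averaging taps backward along each pixel's stored velocity) is an assumption chosen for simplicity.

```python
# Illustrative sketch of an output shader that post-processes a rendered
# image using the per-pixel velocity stored with each pixel value, smearing
# each pixel along its screen-space velocity to approximate motion blur.

def motion_blur_1d(colors, velocities, samples=4):
    """colors: list of grayscale pixel values; velocities: pixels/frame.
    Averages `samples` taps along each pixel's velocity vector."""
    n = len(colors)
    out = []
    for i in range(n):
        acc = 0.0
        for s in range(samples):
            # Step backwards along the velocity over the shutter interval.
            t = s / float(samples)
            j = int(round(i - velocities[i] * t))
            j = max(0, min(n - 1, j))   # clamp to the image bounds
            acc += colors[j]
        out.append(acc / samples)
    return out
```

A static pixel (zero velocity) is unchanged, while a moving pixel's color is spread across the positions it traversed during the shutter interval; an analogous scheme using the stored depth values would yield a depth-of-field filter.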
- the values of parameters of a phenomenon may be fixed, or they may vary based on a function of one or more variables.
- the phenomenon instance can be made time dependent, or "animated." Time is normally discretized into intervals labeled by the frame numbers of the series of frames comprising an animation, but the time dependency may take the form of any phenomenon-parameter-valued function over time, each of which can be tagged with an absolute time value, so that, even if an image is rendered at successive frame numbers, the shaders are not bound to discrete intervals.
- the phenomenon editor is used to select time dependent values for one or more parameters of a phenomenon, creating a time dependent "phenomenon instance.”
- the selection of time dependent values for the parameters of a phenomenon is achieved, in one particular embodiment, by the graphically interactive attachment of what will be referred to herein as "phenomenon property control trees" to a phenomenon.
- a phenomenon property control tree, which may be in the form of a tree or a DAG, is attached to phenomenon parameters, effectively outside of the phenomenon, and is stored with the phenomenon in the phenomenon instance database.
- a phenomenon property control tree consists of one or more nodes, each of which is a shader in the sense of the functions that it provides, for example, motion curves, data look-up functions and the like.
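A property control tree of this kind can be sketched as a composition of function-like nodes evaluated at an absolute time value. This is an illustrative sketch only: the node types shown (a piecewise-linear motion curve and a scaling node) and all names are assumptions, standing in for the motion curves and data look-up functions mentioned above.

```python
# Sketch of a phenomenon property control tree: leaf nodes are function-like
# shaders (here, a motion curve) and interior nodes combine their children;
# the tree is evaluated at an absolute time to drive a phenomenon parameter.

def motion_curve(keys):
    """Piecewise-linear motion curve over (time, value) keyframes."""
    def eval_at(t):
        if t <= keys[0][0]:
            return keys[0][1]
        if t >= keys[-1][0]:
            return keys[-1][1]
        for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                return v0 + w * (v1 - v0)
    return eval_at

def scale(node, factor):
    """Interior tree node: scales the value of its child node."""
    return lambda t: node(t) * factor

# A time-dependent "glossiness" parameter: a curve modulated by a scale node.
glossiness_ctl = scale(motion_curve([(0.0, 0.2), (10.0, 1.0)]), 0.5)
```

Because the tree is a function of continuous time rather than of frame numbers, it can be sampled at any absolute time value, consistent with the point above that shaders are not bound to discrete intervals.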
- a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program.
- Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner.
- the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
- Among the advantages provided by the invention are: (1) shader methods and systems that are platform independent, and that can unite various shading tools and applications under a single language or system construct; (2) methods and systems that enable the efficient and simple re-use and re-purposing of shaders, such as may be useful in the convergence of video games and feature films, an increasingly common occurrence (e.g., Lara Croft - Tomb Raider); (3) methods and systems that facilitate the design and construction of shaders without the need for computer programming, as may be useful for artists; and (4) methods and systems that enable the graphical debugging of shaders, allowing shader creators to find and resolve defects in shaders.
- Fig. 8 shows a flowchart of an overall method 150 according to an aspect of the invention.
- the described method enables the generation of an image of a scene in a computer graphics system from a representation to which at least one instantiated phenomenon has been attached, the instantiated phenomenon comprising an encapsulated shader DAG comprising at least one shader node.
- a metanode environment is configured that is operable for the creation of metanodes, the metanodes comprising component shaders that can be combined in networks to build more complex shaders.
- a graphical user interface is configured that is in communication with the metanode environment and is operable to manage the metanode environment to enable a user to construct shader graphs and phenomena using the metanode environment.
- a software language is provided as an interface usable by a human operator and operable to manage the metanode environment, implement shaders and unify discrete shading applications.
- the software language is configurable as a superset of a plurality of selected shader languages for selected hardware platforms, and operable to enable a compiler function to generate, from a single, re-usable description of a phenomenon expressed in the software language, optimized software code for a selected hardware platform in a selected shader language.
- at least one GUI library is provided that is usable in connection with the metanode environment to generate a GUI operable to construct shader graphs and phenomena.
- an interactive, visual, real-time debugging environment is configured that is in communication with the GUI, and that is operable to (1) enable the user to detect and correct potential flaws in shaders, and (2) provide a viewing window in which a test scene with a shader, metanode, or phenomenon under test is constantly rendered.
- a facility is configured that is in communication with the compiler function, and that is operable to convert the optimized software code for the selected hardware platform and selected shader language to machine code for selected integrated circuit instantiations, using a native compiler function for the selected shader language.
- the mental millTM technology provides an improved approach to the creation of shaders for visual effects.
- the mental mill solves many problems facing shader writers today and future-proofs shaders from the changes and evolutions of tomorrow's shader platforms.
- the mental mill further includes a library providing APIs to manage shader creation.
- This library can be integrated into third-party applications in a componentized fashion, allowing the application to use only the components of mental mill it requires.
- The foundation of mental mill shading is the mental mill shading language, MetaSLTM .
- MetaSL is a simple yet expressive language designed specifically for implementing shaders.
- the mental mill encourages the creation of simple and compact componentized shaders (referred to as MetanodesTM ) which can be combined in shader networks to build more complicated and visually interesting shaders.
- the goal of MetaSL is not to introduce yet another shading language but to leverage the power of existing languages through a single meta-language, MetaSL.
- MetaSL Currently existing shader languages focus on relatively specific platforms or contexts, for example hardware shading for games or software shading for feature film visual effects. MetaSL unifies these shading applications into a single language.
- the mental mill allows the creation of shader blocks called "metanodes," which are written in MetaSL to be attached and combined in order to form sophisticated shader graphs and PhenomenaTM .
- Shader graphs provide intuitive graphical user interfaces for creating shaders that are accessible to users who lack the technical expertise to write shader code.
- the mental mill graphical user interface libraries harness the shader graph paradigm to provide the user a complete graphical user interface for building shader graphs and Phenomena.
- the present invention provides a "metanode environment,” i.e., an environment that is operable for the creation and manipulation of metanodes.
- the described metanode environment may be implemented as software, or as a combination of software and hardware.
- a standalone application is included as part of mental mill; however, since mental mill provides a cross-platform, componentized library, it is also designed to be integrated into third-party applications.
- the standalone mental mill application simply uses these libraries in the same way any other application would.
- the mental mill library can be broken down into the following pieces: (1) Phenomenon creator graphical user interface (GUI); (2) Phenomenon shader graph compiler; and (3) MetaSL shading language compiler.
- the mental mill Phenomenon creator GUI library provides a collection of GUI components that allow the creation of complex shaders and Phenomena by users with a wide range of technical expertise.
- the primary GUI component is the shader graph view. This view allows the user to construct Phenomena by creating shader nodes (Metanodes or other Phenomena) and attaching them together in a graph, as described.
- the shader graph provides a clear visual representation of the shader program that is not found when looking at shader code. This makes shader creation accessible to those users without the technical expertise to write shader code.
- the GUI library also provides other user interface components, summarized here:
- Shader parameter editor: provides sliders, color pickers, and other controls to facilitate the editing of shader parameter values.
- Render preview window: provides the user interactive feedback on the progress of their shader.
- Integrated development environment (IDE): a GUI providing interactive visual feedback; the IDE provides a high-level interactive visual debugger for locating and correcting defects in shaders.
- the mental mill GUI library is both componentized and cross-platform.
- the library has been developed without dependencies on the user interface libraries of any particular operating system or platform.
- the mental mill GUI library is designed for integration into third-party applications. While the components of the GUI library have default appearances and behaviors, plug-in interfaces are provided to allow the look and feel of the Phenomenon creator GUI to be customized to match the look and feel of the host application.
- MetaSL shaders can be used in a variety of different ways.
- a single shader can be used when rendering offline in software or real-time in hardware.
- the same shader can be used across different platforms, such as those used by the next generation of video game consoles.
- the MetaSL compiler that is part of the mental mill library is itself extendable.
- the front-end of the compiler is a plug-in so that parsers for other languages or syntaxes can replace the MetaSL front end.
- the back-end of the compiler is also a plug-in, so new target platforms can easily be supported in the future.
- This extensibility at both ends of the mental mill compiler library allows it to become the hub of shader generation. Shader writers typically face difficulties on several fronts. The following sections outline these issues and the rationale behind the creation of the mental mill technology, which is designed to provide a complete solution set. Shaders developed with mental mill are platform independent. This is a key feature of mental mill and ensures that the effort invested in developing shaders is not wasted as target platforms evolve. This platform independence is provided both for shaders written in MetaSL and for shader graphs of Metanodes.
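The front-end/back-end plug-in structure described above can be sketched conceptually. This is not the actual mental mill compiler API: the `Compiler` class, the toy front-end, and the toy back-ends are assumptions made to illustrate how one shared intermediate representation lets interchangeable parsers and code generators be plugged in.

```python
# Conceptual sketch of a pluggable compiler: a front-end parses source into
# a shared intermediate representation (IR), and interchangeable back-ends
# generate code for different target platforms from that same IR.

class Compiler:
    def __init__(self, front_end):
        self.front_end = front_end        # pluggable parser
        self.back_ends = {}               # target name -> code generator

    def register_back_end(self, target, generator):
        self.back_ends[target] = generator

    def compile(self, source, target):
        ir = self.front_end(source)       # high-level representation
        return self.back_ends[target](ir)

# Toy front-end and two toy back-ends sharing one IR (a list of statements).
parse = lambda src: [line.strip() for line in src.splitlines() if line.strip()]
to_glsl = lambda ir: "// GLSL\n" + "\n".join(ir)
to_hlsl = lambda ir: "// HLSL\n" + "\n".join(ir)
```

Registering a new back-end supports a new target without touching the front-end, which is the sense in which such a design becomes a "hub" of shader generation.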
- the mental mill libraries provide application programming interfaces (APIs) to generate shaders for a particular platform dynamically, on demand, from either a Phenomenon shader graph or a monolithic MetaSL shader.
- mental mill makes it possible to export a shader in the format required by a target platform to a static file. This allows the shader to be used without requiring the mental mill library.
- FIG. 9 shows a diagram of an overall system 200 according to an aspect of the invention.
- the system 200 includes a mental mill processing module 202 that contains a number of submodules and other components, described below.
- the mental mill processing module 202 receives inputs in the form of Phenomena 204 and MetaSL code 206.
- the mental mill processing module 202 then provides as an output source code in a selected shader language, including: Cg 208, HLSL 210, GLSL 212, Cell SPU 214, C++ 216, and the like.
- the mental mill 202 is adaptable to provide as an output source code in future languages 218 that have not yet been developed.
- a component of platform independence is insulation from particular rendering algorithms.
- hardware rendering often employs a different rendering algorithm as compared to software rendering.
- Hardware rendering is very fast for rendering complex geometry, but may not directly support advanced lighting algorithms such as global illumination.
- MetaSL can be considered to be divided into three subsets or levels, with each level differing in both the amount of expressiveness and suitability for different rendering algorithms.
- FIG. 10 shows a diagram illustrating the levels of MetaSL 220 as subsets.
- the dotted ellipse region 224 shows C++ as a subset for reference.
- Level 2 (222) is a superset of Level 1 (221) that adds features typically only available with software rendering algorithms, such as ray tracing and global illumination. Like Level 1 (221), Level 2 (222) is still a relatively simplified language, and shaders written within Level 2 (222) may still be partially rendered on hardware platforms. This makes it possible to achieve a blending of rendering algorithms, where part of the rendering takes place on hardware and part in software.
- Level 3 (223) is a superset of both Levels 1 (221) and 2 (222). In addition, Level 3 (223) is a superset of the popular C++ language. While Level 3 (223) shaders can only ever execute in software, Level 3 (223) is the most expressive of the three levels since it includes all the features of C++. However, few shaders need the complexity of C++, and given that Level 3 (223) has the least general set of possible targets, most shaders will likely be written using only Levels 1 (221) and 2 (222). While Level 1 (221) is the smallest subset of MetaSL, it is also the most general in the types of platforms it supports. MetaSL Level 3 (223) is the largest superset, containing even all of C++, making it extremely powerful and expressive.
- Level 1 and 2 shaders (221, 222) have a high degree of compatibility, with the only difference being that Level 2 shaders (222) utilize advanced algorithms not capable of running on a GPU.
- the MetaSL compiler can use a Level 2 shader (222) as if it were a Level 1 shader (221) (and target hardware platforms) by removing functions not supported by Level 1 (221) and replacing them with no-ops.
- This feature, together with the ability of the MetaSL compiler to detect the level of a given shader, allows the MetaSL compiler to simultaneously generate hardware and software versions of a shader (or to generate only a software version when that is required).
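The level detection and demotion step described above can be sketched as follows. This is an illustration under stated assumptions: the shader is modeled as a flat list of call names, and the set of Level-2-only calls (`trace_reflection`, `global_illumination`) is invented for the example, standing in for the ray tracing and global illumination features mentioned earlier.

```python
# Sketch of level detection and Level 2 -> Level 1 demotion: calls that only
# software rendering supports are replaced with no-ops so the remaining
# shader can still be compiled for a hardware (GPU) target.

LEVEL2_ONLY = {"trace_reflection", "global_illumination"}

def detect_level(calls):
    """A shader using any Level-2-only call is at least Level 2."""
    return 2 if any(c in LEVEL2_ONLY for c in calls) else 1

def demote_to_level1(calls):
    """Replace unsupported calls with no-ops, keeping the rest intact."""
    return ["noop" if c in LEVEL2_ONLY else c for c in calls]
```

Demoting and compiling the result for hardware yields the fast, approximate version used for immediate feedback, while the undemoted Level 2 shader produces the precise software rendering that follows.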
- the hardware shader can be used for immediate feedback to the user through hardware rendering. A software rendering can then follow up with a more precise image.
- Another useful feature of mental mill is the ability to easily repurpose shaders.
- One key example of this comes from the convergence of video games and feature films. It is not uncommon to see video games developed with licenses to use content from successful films. Increasingly feature films are produced based on successful video games as well. It makes sense to use the same art assets for a video game and the movie it was based on, but in the past this has been a challenge for shaders since the film is rendered using an entirely different rendering algorithm than the video game.
- the mental mill overcomes this obstacle by allowing the same MetaSL shader to be used in both contexts.
- the shader graph model for constructing shaders also encourages the re-use of shaders. Shader graphs inherently encourage the construction of shaders in a componentized fashion.
- a single Metanode, implemented by a MetaSL shader can be used in different ways in many different shaders. In fact entire sub-trees of a graph can be packaged into a Phenomenon and re-used as a single node.
- the mental mill graphical user interface provides a method to construct shaders that doesn't necessarily involve programming. Therefore, an artist or someone who is not comfortable writing code will now have the ability to create shaders for themselves.
- An important aspect of the creation of shaders is the ability to analyze flaws, determine their cause, and find solutions. In other words, the shader creator must be able to debug their shader. Finding and resolving defects in shaders is necessary regardless of whether the shader is created by attaching Metanodes to form a graph, by writing MetaSL code, or both.
- the mental mill provides functionality for users to debug their shaders using a high level, visual technique. This allows shader creators to visually analyze the states of their shader to quickly isolate the source of problems.
- a prototype application has been created as a proof of concept of this shader debugging system.
- shaders are used in a variety of rendering contexts, such as offline or real-time interactive rendering.
- shaders are invoked in the most performance critical section of the renderer and therefore can have a significant impact on overall performance. Because of this it is crucial for shader creators to be able to analyze the performance of their shaders at a fine granularity to isolate the computationally expensive portions of their shaders.
- the mental mill provides such analysis, referred to as profiling, through an intuitive graphical representation. This allows the mental mill user to receive visual feedback indicating the relative performance of portions of their shaders. This profiling information is provided at both the node level for nodes that are part of a graph or Phenomenon, and at the statement level for the MetaSL code contained in a Metanode.
- the performance timing of a shader can be dependent on the particular input values driving that shader.
- a shader may contain a loop where the number of iterations through the loop is a function of a particular input parameter value.
- the mental mill graphical profiler allows shader performance to be analyzed in the context of the shader graph where the node resides, which makes the performance results relative to the particular input values driving the node in that context.
- the performance information at any particular granularity is normalized to the overall performance cost of a node, the entire shader, or the cost to render an entire scene with multiple shaders.
- the execution time of a MetaSL statement within a Metanode can be expressed as a percentage of the total execution time of that Metanode or the total execution time of the entire shader if the Metanode is a member of a graph.
- the graphical representation of performance results can be provided using multiple visualization techniques. For example, one technique is to present the normalized performance cost by mapping the percentage to a color gradient.
- FIG. 12 shows a screenshot 230 illustrating this aspect of the invention.
- a MetaSL code listing 232 appears at the center of the screen 230.
- a color bar 234 appears to the left of each statement 232 indicating relative performance.
- the first 10 percentage points are mapped to a blue gradient and the remaining 90 percentage points are mapped to a red gradient.
- a nonlinear mapping such as this focuses the user's attention on the "hotspots" in their MetaSL code.
- the user can access the specific numeric values used to select colors from the gradient. As the user sweeps their mouse over the color bars, a popup will display the execution time of the statement as a percentage of the total execution time.
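The nonlinear color mapping described above can be sketched as a small function. The band split (first 10 percentage points blue, remaining 90 red) follows the text, but the exact gradient endpoints are assumptions made for the example.

```python
# Sketch of the nonlinear cost-to-color mapping: the first 10 percentage
# points span a blue gradient and the remaining 90 span a red gradient, so
# small differences among cheap statements stay visible while expensive
# "hotspot" statements stand out in red.

def cost_to_color(pct):
    """Map a normalized cost percentage (0-100) to an (r, g, b) triple."""
    if pct <= 10.0:
        w = pct / 10.0                     # position within the blue band
        return (0.0, 0.0, 0.2 + 0.8 * w)   # dark blue -> bright blue
    w = (pct - 10.0) / 90.0                # position within the red band
    return (0.2 + 0.8 * w, 0.0, 0.0)       # dark red -> bright red
```

Because the blue band is ten times steeper than the red band, a jump from 1% to 5% is as visually salient as a jump from 20% to 60%.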
- FIG. 13 shows a performance graph 240 illustrating another visualization technique.
- Graph 240 displays performance results with respect to a range of values of a particular input parameter.
- the performance cost of the illumination loop of a shader is graphed with respect to the number of lights in the scene.
- the jumps in performance cost in this example indicate points at which the shader must be decomposed into passes to accommodate the constraints of graphics hardware.
- FIG. 14 shows a table 250, in which the performance timings of each node of a Phenomenon are displayed with respect to the overall performance cost of the entire shader.
- the graphical profiling technique provided by mental mill is platform independent. This means that performance timings can be generated for any supported target platform. As new platforms emerge and new back-end plug-ins to the mental mill compiler are provided, these new platforms can be profiled in the same way. However any particular timing is measured with respect to some selected target platform. For example the same shader can be profiled when executed on hardware versus software or on different hardware platforms. Different platforms have individual characteristics and so the performance profile of a particular shader may look quite different when comparing platforms. The ability for a shader creator to analyze their shader on different platforms is critical in order to develop a shader that executes with reasonable performance on all target platforms.
- FIG. 15 shows a diagram of the mental mill libraries component 260.
- the mental mill libraries component 260 is divided into two major categories: the Graphical User Interface (GUI) library 270 and the Compiler library 280.
- the GUI library 270 contains the following components: phenomenon graph editor 271; shader parameter editor 272; render preview window 273; phenomenon library explorer 274; Metanode library explorer 275; and code editor and IDE 276.
- the compiler library 280 contains the following components: MetaSL language compiler 281; and Phenomenon shader graph compiler 282.
- FIG. 16 shows a more detailed diagram of the compiler library 280.
- the mental mill compiler library 280 provides the ability to compile a MetaSL shader into a shader targeted at a specific platform, or multiple platforms simultaneously.
- the compiler library 280 also provides the ability to compile the shader graphs which implement Phenomenon into flat monolithic shaders. By flattening shader graphs into single shaders, the overhead of shader to shader calls is reduced to nearly zero. This allows graphs built from small shader nodes to be used effectively without incurring a significant overhead.
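The flattening step described above can be sketched as a dependency-ordered inlining pass. The graph encoding here (a node name mapped to an expression template and its input nodes) is invented for illustration; the point is that each node-to-node connection becomes a local variable in one monolithic body, so no shader-to-shader calls remain.

```python
# Sketch of shader graph flattening: nodes are emitted in dependency order
# into a single flat body, with each connection becoming a local variable
# instead of a shader-to-shader call.

def flatten(graph, root):
    """graph: {node: (expr_template, [input_nodes])}. Returns source lines
    for a single flat shader in which every node is inlined once."""
    lines, emitted = [], set()

    def emit(node):
        if node in emitted:
            return                      # shared subgraphs are emitted once
        template, inputs = graph[node]
        for dep in inputs:
            emit(dep)                   # dependencies come first
        args = [f"v_{dep}" for dep in inputs]
        lines.append(f"v_{node} = {template.format(*args)};")
        emitted.add(node)

    emit(root)
    return lines

# A toy three-node graph: noise feeds a color mix, which feeds illumination.
wood = {
    "noise": ("turbulence(uv)", []),
    "color": ("mix(dark, light, {0})", ["noise"]),
    "illum": ("phong({0})", ["color"]),
}
```

Since a node shared by several consumers is emitted exactly once, the flattened shader also avoids recomputing common subgraphs.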
- FIG. 17 shows a diagram of a renderer 290 according to this aspect of the invention.
- the extensibility of the MetaSL compiler allows multiple target platforms and shading languages to be supported. New targets can be supported in the future as they emerge. This extensibility is accomplished through plug-ins to the back-end of the compiler.
- the MetaSL compiler handles much of the processing and provides the back- end plug-in with a high level representation of the shader, which it can use to generate shader code.
- the MetaSL compiler currently targets high level languages, however the potential exists to target GPUs directly and generate machine code from the high level representation. This would allow particular hardware to take advantage of unique optimizations available only because the code generator is working from this high level representation directly and bypassing the native compiler.
- the mental mill GUI library provides an intuitive, easy-to-use interface for building sophisticated shader graphs and Phenomena.
- the library is implemented in a componentized and platform-independent manner to allow integration of some or all of the UI components into third-party applications.
- a standalone Phenomenon creator application is also provided which utilizes the same GUI components available for integration directly into other applications.
- the major GUI components provided by the library are as follows:
- FIG. 29 shows a table 440 listing the methods required to implement a BRDF shader.
- FIG. 30 shows a diagram of an example configuration 450, in which two BRDFs are mixed.
- the material Phenomenon collects together the surface shader, which itself may be represented by a shader graph, and the direct and indirect BRDF shaders.
- the BRDF shaders in the material Phenomenon are used to iterate over light samples to compute the result of the lighting functions. Since there are no dependencies on the result of the surface shader in this case, the lighting calculation can be deferred by the renderer to an optimal time.
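The mixed-BRDF configuration of FIG. 30 and the lighting loop described above can be sketched minimally. This is not the MetaSL BRDF interface: the lobe functions, the single `n_dot_l` argument, and the `(n_dot_l, intensity)` light-sample shape are simplifying assumptions for the example.

```python
# Minimal sketch of FIG. 30's configuration: two BRDF lobes blended by a
# weight, evaluated inside the material Phenomenon's loop over light samples.

def diffuse_brdf(n_dot_l):
    return max(0.0, n_dot_l)

def glossy_brdf(n_dot_l, exponent=8):
    return max(0.0, n_dot_l) ** exponent

def mix_brdfs(brdf_a, brdf_b, weight):
    """Blend two BRDFs: weight = 0 gives pure a, weight = 1 gives pure b."""
    return lambda n_dot_l: ((1 - weight) * brdf_a(n_dot_l)
                            + weight * brdf_b(n_dot_l))

def shade(brdf, light_samples):
    """The lighting loop: sum the BRDF over (n_dot_l, intensity) samples."""
    return sum(brdf(nl) * intensity for nl, intensity in light_samples)
```

Because `shade` depends only on the BRDF and the light samples, and not on the surface shader's result, the renderer is free to defer this loop to an optimal time, as noted above.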
- the raw data representing acquired BRDFs may be provided in many different forms and is usually sparse and unstructured. Typically the raw data is given to a standalone utility application where it is preprocessed. This application can organize the data into a regular grid, factor the data, and/or compress the data into a more practical size. By storing the data in a floating point texture, measured BRDFs can be used with hardware shading.
- For software shading with measured BRDFs, there are two options to load the data. The first is to implement a native C++ function that reads the data from a file into an array. This native function can then be called by a Level 2 MetaSL BRDF shader. The other option is to implement the entire BRDF shader as a Level 3 MetaSL shader, which gives the shader complete access to all the features of C++. This shader can read the data file directly, but loses some of the flexibility of Level 2 shaders. As long as the data can be loaded into a Level 2 compatible representation such as an array, the first option of loading the data from a native C++ function is preferable. If the data must be represented by a structure requiring pointers (such as a kd-tree) then the part of the implementation which requires the use of pointers will need to be a Level 3 shader.
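The preprocessing step for measured BRDF data described above can be sketched for the one-dimensional case. The grid layout and averaging scheme are assumptions chosen for illustration: sparse, unstructured samples are binned onto a regular angular grid, the kind of array representation a Level 2 shader (or a floating point texture) can index directly.

```python
# Sketch of measured-BRDF preprocessing: sparse, unstructured (angle, value)
# measurements are binned onto a regular 1D angular grid, averaging samples
# that land in the same cell.

def grid_brdf(samples, cells, max_angle=90.0):
    """samples: list of (angle_degrees, value). Returns `cells` averages."""
    sums = [0.0] * cells
    counts = [0] * cells
    for angle, value in samples:
        i = min(cells - 1, int(angle / max_angle * cells))
        sums[i] += value
        counts[i] += 1
    # Empty cells fall back to 0.0; a real tool might interpolate instead.
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

A standalone utility of the kind described above would additionally factor or compress such grids before they are stored as textures.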
- Techniques can be used when it is desirable to execute different code in hardware or software contexts, although often the same shader can be used for both hardware and software. Another use for techniques is to describe alternate versions of the same shader with differing quality levels.
- the language includes a mechanism to allow material shaders to express their result as a series of components instead of a single color value. This allows the components to be stored to separate image buffers for later compositing. Individual passes can also render a subset of all components and combine those with the remaining components that have been previously rendered.
- a mechanism in the scene definition file will allow the user to specify compositing rules for combining layers into image buffers.
- the user will specify how many image buffers are to be created and for each buffer they would specify an expression which determines what color to place in that buffer when a pixel is rendered.
- the expression can be a function of layer values such as:
- Image2 = diffuse_lighting + specular_lighting
- the three layers from the shader result structure in the previous example are routed to two image buffers.
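The routing rule quoted above can be sketched by treating the shader result as a dictionary of named layers and each image buffer as an expression over them. The layer names follow the example; the evaluation scheme (Python's `eval` with builtins disabled) is an assumption standing in for the scene definition file's expression mechanism.

```python
# Sketch of compositing rules: each image buffer is defined by an expression
# over the named layers of the shader result structure, and evaluating the
# rules routes layer values into buffers.

def composite(layers, buffer_exprs):
    """layers: {layer_name: value}; buffer_exprs: {buffer: expression}.
    Evaluates each buffer's expression over the layer values."""
    return {name: eval(expr, {"__builtins__": {}}, dict(layers))
            for name, expr in buffer_exprs.items()}

layers = {"diffuse_lighting": 0.4, "specular_lighting": 0.25, "ambient": 0.1}
rules = {
    "Image1": "ambient",
    "Image2": "diffuse_lighting + specular_lighting",
}
```

Rendering a pass that produces only a subset of the layers would simply evaluate the same rules against that subset combined with previously rendered layer values.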
- Shader annotations can describe parameter ranges, default values, and tooltip descriptions, among other things.
- Custom annotation types can be used to attach arbitrary data to shaders as well.
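The annotation mechanism described above can be sketched as parameter metadata. The field names (`default`, `range`, `tooltip`) and the catch-all for custom annotation types are assumptions for illustration; a GUI would consume such records to build sliders and tooltips automatically.

```python
# Sketch of shader parameter annotations: ranges, defaults, and tooltips
# that a GUI can use to build controls, plus a catch-all for arbitrary
# custom annotation types attached to the shader.

def annotate(default, lo=None, hi=None, tooltip="", **custom):
    return {"default": default, "range": (lo, hi),
            "tooltip": tooltip, "custom": custom}

# Annotated parameters for a hypothetical wood shader.
wood_params = {
    "glossiness": annotate(0.8, 0.0, 1.0, tooltip="Highlight sharpness"),
    "turbulence": annotate(0.5, 0.0, 4.0, ui_group="Noise"),  # custom type
}
```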
- MetaSL includes a comprehensive collection of built-in functions. These include math, geometric, and texture lookup functions, to name a few. In addition, functions that may only be supported by software rendering platforms are also included. Some examples are functions to cast reflection rays or to compute the amount of global illumination at a point in space.
- the mental millTM PhenomenonTM creation tool allows users to construct shaders interactively, without programming. Users work primarily in a shader graph view where MetanodesTM are attached to other shader nodes to build up complex effects. Metanodes are simple shaders that form the building blocks for constructing more complicated PhenomenaTM .
- a Phenomenon can be a shader, a shader tree, or a set of cooperating shader trees (DAGs), including geometry shaders, resulting in a single parameterized function with a domain of definition and a set of boundary conditions in 3D space, which include those boundary conditions which are created at run-time of the renderer, as well as those boundary conditions which are given by the geometric objects in the scene.
- a Phenomenon is a structure containing one or more shaders or shader DAGs and various miscellaneous "requirement" options that control rendering.
- a Phenomenon looks exactly like a shader with input parameters and outputs, but internally its function is not implemented with a programming language but as a set of shader DAGs that have special access to the Phenomenon interface parameters. Additional shaders or shader DAGs for auxiliary purposes can be enclosed as well.
- Phenomena are attached at a unique root node that serves as an attachment point to the scene. The internal structure is hidden from the outside user, but can be accessed with the mental mill Phenomenon creation tool.
- the mental mill tool also provides an automatically generated graphical user interface (GUI) for Phenomena and Metanodes.
- This GUI allows the user to select values for parameters and interactively preview the result of their settings.
- Prior to being attached to a scene, parameter values must be specified to instantiate the Phenomenon.
- There are two primary types of Phenomena which a user edits: Phenomena whose parameter values have not been specified (referred to as free-valued Phenomena) and Phenomena whose parameters have been fixed, or partially fixed (referred to as fixed Phenomena).
- When a user creates a new Phenomenon by building a shader graph or writing MetaSL code (or a combination of both), they are creating a new type of Phenomenon with free parameter values. The user can then create Phenomena with fixed parameter values based on this new Phenomenon type. Typically many fixed-value Phenomena will exist based on a particular Phenomenon. If the user changes a Phenomenon, all fixed Phenomena based on it will inherit that change. Changes to a fixed Phenomenon are isolated to that particular Phenomenon.
- the mental mill application UI comprises several different views, with each view containing different sets of controls.
- the view panels are separated by four movable splitter bars. These allow the relative sizes of the views to be adjusted by the user.
- FIG. 32 is a screenshot 470 illustrating the basic simplified layout.
- the primary view panels are labeled, but for simplicity the contents of those views aren't shown. These view panels include the following: toolbox 472; phenomenon graph view 474; code editor view 476; navigation controls 478; preview 480; and parameter view 482.
- the Phenomenon graph view 474 allows the user to create new Phenomena by connecting Metanodes or Phenomenon nodes together to form graphs. An output of a node can be connected to one or more inputs, allowing the connected nodes to provide values for those input parameters.
- the Phenomenon graph view area 474 can be virtually infinitely large to hold arbitrarily complex shader graphs. The user can navigate around this area using the mouse by holding down the middle mouse button to pan and the right mouse button to zoom (button assignments are remappable).
- the navigation control described in a following section provides more methods to control the Phenomenon view.
- the user can create nodes by dragging them from the toolbox 472, as described below, into the Phenomenon graph view 474. Once in the graph view 474, nodes can be positioned by the user. A layout command will also perform an automatic layout of the graph nodes.
- FIG. 33 shows a graph node 490.
- the graph node 490 (either a Phenomenon node or Metanode) comprises several elements: • Preview - The preview window portion of the node allows the user to see the result of the shader node rendered on a surface. A sphere is the default surface, but other geometry can be specified. All nodes can potentially have preview windows, even if they are internal nodes of the shader graph.
- the preview is generated by considering the individual node as a complete shader and rendering sample geometry using that shader. This allows the user to visualize the dataflow through the shader graph since they can see the shader result at each stage of the graph.
- the preview part of the node can also be closed to reduce the size of the node.
- • Outputs - Each node has at least one output, and some nodes may have more than one. The user clicks and drags on an output location to attach the output to another node's input. An output can be attached to more than one input.
- • Inputs - Each node has zero or more input parameters. An input can be attached to the output of another node to allow that shader to control the input parameter's value; otherwise the value is settable by the user. An input can be attached to only one output. When the user hovers the mouse over an input for a short period of time, a tooltip is displayed that provides a short description of the parameter. The text for the tooltip is provided by an attribute associated with the shader.
- Some input or output parameters may be structures of sub-parameters.
- a shader graph inside a Phenomenon can also contain other Phenomenon nodes. The user can dive into these Phenomenon nodes in the same way, and repeat the process as long as Phenomena are nested.
- FIG. 35 shows a sample graph view 510 when inside a Phenomenon. Although the entire graph is shown in this view, it may be common to have a large enough graph such that the whole graph isn't visible at once, unless the user zooms far out. Notice that in this example all of the nodes except one have their preview window closed.
- When a Phenomenon is opened inside a graph it can either be maximized, in which case it takes over the entire graph view, or it can be opened in place. When opened in place, the user is able to see the graph outside the Phenomenon as well as the graph inside, as shown in the graph view 520 in FIG. 36.
- when a node is dragged from the toolbox into an open Phenomenon, a fixed-valued Phenomenon or Metanode will be created inside the Phenomenon, depending on the type of the node created. Nodes inside Phenomena can be wired to other nodes or Phenomenon interface parameters. If the node the user dragged into a Phenomenon was itself a Phenomenon node, then a Phenomenon with fixed values is created. Its parameter values can be set, or attached to other nodes, but because it is a fixed Phenomenon that refers back to the original, the user cannot dive into the Phenomenon node and change it. Also, any changes to the original will affect the node. If the user wishes to change the Phenomenon, a command is available that converts the node into a new free-valued Phenomenon which the user can enter and modify.
- to make an attachment, the user clicks on the output area of one node and drags the mouse cursor over the input of another node. When the mouse button is released, a connection line is drawn which represents the shader connection. If the connection is not a valid one, the cursor will indicate this to the user when the mouse is placed over a potential input during the attachment process.
- a type checking system will ensure that shaders can only be attached to inputs that match their output type.
- an attachment can be made between two parameters of different types if an adapter shader is present to handle the conversion. For example, a scalar value can be attached to a color input using an adapter shader.
- the adapter shader may convert the scalar to a gray color or perform some other conversion depending on settings selected by the user.
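As an illustration of what such an adapter might look like, here is a minimal MetaSL-style sketch; the shader name, the exact declaration syntax, and the gray-conversion choice are assumptions for illustration, not taken from the actual mental mill library:

```
// Hypothetical scalar-to-gray-color adapter Metanode (illustrative sketch).
shader Scalar_to_color {
    input:
        Scalar value;       // the scalar output attached upstream
    output:
        Color result;

    void main() {
        // Gray conversion: replicate the scalar into each color channel.
        result = Color(value, value, value, 1.0);
    }
};
```

A real adapter would additionally expose the conversion-type parameter described below, letting the user choose among conversion methods.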
- Adapter shaders are inserted automatically when they are available: when the user attaches parameters that require an adapter, the adapter is inserted as soon as the attachment is completed.
- mental mill will ensure that the user doesn't inadvertently create cycles in their graphs when making attachments.
- FIG. 38 shows a view 540, illustrating the result of attaching a color output to a scalar input.
- the 'Color to Scalar' adapter shader node is inserted in-between to perform the conversion.
- the conversion type parameter of the adapter node allows the user to select the method by which the color is converted to a scalar.
- Both nodes and connection lines can be selected and deleted. When deleting a node, all connections to that node are also deleted. When deleting a connection, only the connection itself is deleted.
- the user can organize the graph by boxing up parts of the graph into Phenomenon nodes.
- a command is available that takes the currently selected subgraph and converts it to a Phenomenon node.
- the result is a new Phenomenon with interface parameters for each input of selected nodes that are attached to an unselected node.
- the Phenomenon will have an output for each selected node whose output is attached to an unselected node.
- the new Phenomenon will be attached in place of the old subgraph which is moved inside the Phenomenon.
- the result is no change in behavior of the shader graph, but the graph will appear simplified since several nodes will be replaced by a single node.
- the ability of a Phenomenon to encapsulate complex behavior in a single node is an important and powerful feature of mental mill.
- FIG. 39 shows a view 550, in which shaders 2, 3, and 4 are boxed up into a new Phenomenon node.
- the connections to outside nodes are maintained and the results produced by the graph aren't changed, but the graph has become slightly more organized. Since Phenomena can be nested, this type of grouping of sub-graphs into Phenomena can occur with arbitrary levels of depth.
- a preview window displays a sample rendering of the currently selected Phenomenon.
- the image is the same image shown in the preview window of the Phenomenon node, but can be sized larger to show more detail.
- Buttons will allow the user to zoom to fit the selected portion of the graph within the Phenomenon graph view or fit the entire graph to the view.
- a "bird's eye” control shows a small representation of the entire shader graph with a rectangle that indicates the portion of the graph shown in the Phenomenon graph view.
- the user can click and drag on this control to position the rectangle on the portion of the graph they wish to see.
- FIG. 40 is a view 560 showing the bird's eye view control viewing a sample shader graph.
- the dark gray rectangle 562 indicates the area visible in the Phenomenon graph view.
- the user can dive into a Phenomenon node, which causes the graph view to be replaced with the graph of the Phenomenon they entered. This process can continue as long as Phenomena are nested in other Phenomena.
- the navigation window provides back and forward buttons to allow users to retrace their path as they navigate through nested Phenomena.
- the toolbox window contains the shader nodes which make up the building blocks that shader graphs are built from.
- the user can click and drag nodes from the toolbox into the Phenomenon graph view to add nodes to the shader graph.
- Nodes are sorted by category and the user can choose to view a single category or all categories by selecting from a drop-down list of categories.
- In addition to Phenomena or Metanodes, the toolbox also contains "actions." Actions are fragments of a complete shader graph that the user can use when building new shader graphs. It is common for patterns of shader nodes and attachments to appear in different shaders. A user can select a portion of a shader graph and use it to create a new action. Later, if they wish to create the same configuration of nodes, they can simply drag the action from the toolbox into the shader graph to create those nodes.
- the user can select one or more shader description files to use as the shader library that is accessible through the toolbox. There are commands to add node types to this library and remove nodes.
- the mental mill tool will provide an initial library of Metanodes as well.
- FIG. 43 shows a partial screenshot 590 illustrating a sample of some controls in the parameter view.
- the parameter view displays controls that allow parameters of the selected node to be edited. These controls include sliders, color pickers, check boxes, drop down lists, text edit fields, and file pickers, to name a few.
- a button is present that allows the user to pick an attachment source from inside the parameter view. It should be noted that, as currently implemented, shader attachments are not allowed when editing a top-level Phenomenon. This button causes a popup list to appear that allows the user to pick a new node or choose from other available nodes currently in the graph. A "none" option is provided to remove an attachment. When an attachment is made, the normal control for the parameter is replaced by a label indicating the name of the attached node.
- parameters will appear in the order in which they are declared in the Phenomenon or Metanode; however, attributes in the node can also control the order and grouping of parameters.
- Hard limits are ranges that the parameter is not allowed to exceed.
- Soft limits specify a range for the parameter that is generally useful, but the parameter is not strictly limited to that range.
- the extents of a slider control will be set to the soft limits of a parameter. A command in the parameter view will allow the user to change the extents of a slider past the soft limits as long as they do not exceed the hard limits.
- Controls in the parameter view will display tooltips when the user hovers the mouse over a control for a short period of time.
- the text displayed in the tooltip is a short description of the parameter that comes from an attribute associated with the shader.
- a button at the top of all controls will display a relatively short description of the function of the shader as a whole. This description is also taken from an attribute associated with the node.
- FIG. 44 shows a partial screenshot 600 illustrating a code editor view according to the present aspect of the invention.
- the code editor view allows users that wish to create shaders by writing code to do so using MetaSL. Users will be able to create monolithic shaders by writing code if need be, but more likely they will create new Metanodes that are intended to be used as a part of shader graphs.
- a command allows the user to create a new Metanode.
- the result is a Metanode at the top level of the graph (not inside a Phenomenon) that represents a new type of Metanode.
- the user can always create an instance of this new Metanode inside a Phenomenon if they wish to.
- When a top-level Metanode (outside of a Phenomenon) is selected, the corresponding MetaSL code will appear in the code editor view for the user to edit. After making changes to the code, a command is available to compile the shader.
- the MetaSL compiler and a C++ compiler for the user's native platform are invoked by mental mill to compile the shader. Cross-compilation for other platforms is also possible. Any errors are routed back to mental mill which in turn displays them to the user. Clicking on errors will take the user to the corresponding line in the editor. If the shader compiles successfully then the Metanode will update to show the current set of input parameters and outputs. The preview window will also update to give visual feedback on the look of the shader.
- the parameter view will also display controls for selected top level Metanodes allowing the user to edit the default value for the node's input parameters. This is analogous to editing a free-valued Phenomenon's interface parameters.
- the main menu may be configured in a number of ways, including the following:
- File - The file menu contains the following items: 1. New Phenomenon - This command is used to create a new free-valued Phenomenon. Once created, other Phenomena with fixed parameter values can be created based on this Phenomenon.
- 2. New Metanode - Creates a new Metanode type. A top-level Metanode is created; when selected, its code is editable in the code editor.
- 3. Open File - Opens a shader description file for editing. The contents of the file will appear in the Phenomenon graph view. This could include free or fixed Phenomena as well as Metanode types. Files that are designated to be part of the toolbox can also be opened and edited. Editing a Phenomenon's shader graph will affect all fixed-valued Phenomena based on the Phenomenon. Therefore opening a toolbox file is very different from dragging a Phenomenon or Metanode from the toolbox into the Phenomenon graph view: dragging a node from the toolbox creates a fixed-valued Phenomenon that can be modified without affecting the original, while opening a description file used by the toolbox allows the original Phenomenon to be modified. 4. Save - Saves the currently opened file. If the file has never been saved before, this command prompts the user to pick a file name for the new file.
- 5. Undo - Undoes the last change. This could be a change to the shader graph, a change of the value of a parameter, or a change to the shader code made in the code editor view.
- the mental mill tool will have virtually unlimited levels of undo. The user will be able to set the maximum number of undo levels as a preference; however, this application is not memory intensive and therefore the number of undo levels can be left quite high.
- Paste - Pastes the contents of the clipboard into the shader graph or code view.
- a toolbar contains tool buttons which provide easy access to common commands.
- toolbar commands operate on the shader graph selection.
- Some toolbar commands replicate commands found in the main menu.
- the list of toolbar items may include the following commands: Open file; Save file; New shader graph (Phenomenon); New shader code-based; Undo; Redo; Copy; Paste; Close.
- An important aspect of the creation of shaders is the ability to analyze flaws, determine their cause, and find solutions. In other words, the shader creator must be able to debug their shader. Finding and resolving defects in shaders is necessary regardless of whether the shader is created by attaching Metanodes to form a graph, by writing MetaSL code, or both.
- the mental mill provides functionality for users to debug their shaders using a high level, visual technique. This allows shader creators to visually analyze the states of their shader to quickly isolate the source of problems.
- the present aspect of the invention provides structures for debugging Phenomena.
- the mental mill GUI allows users to construct Phenomena by attaching Metanodes, or other Phenomena, to form a graph.
- Each Metanode has a representation in the UI that includes a preview image describing the result produced by that node. Taken as a whole, this network of images provides an illustration of the process the shader uses to compute its result.
- FIG. 45 shows a partial screenshot 610, illustrating this aspect of the invention.
- a first Metanode 612 might compute the illumination over a surface while another Metanode 614 computes a textured pattern.
- a third node 616 combines the results of the first two to produce its result.
- a shader creator By visually traversing the Metanode network, a shader creator can inspect their shading algorithm and spot the location where a result is not what they expected.
- a node in the network might be a Phenomenon, in which case it contains one or more networks of its own.
- the mental mill allows the user to navigate into Phenomenon nodes and inspect their shader graphs visually using the same technique.
- viewing the results of each node in a Phenomenon does not always provide enough information for the user to analyze a problem with their shader. For example, all of the inputs to a particular Metanode may appear to have the correct value and yet the result of that Metanode might not be what the user expects. Also, when authoring a new Metanode by writing MetaSL code, a user may wish to analyze variable values within the Metanode as it computes its result value.
- the mental mill extends the visual debugging paradigm into the MetaSL code behind each Metanode.
- the mental mill MetaSL debugger presents the user with a source code listing containing the MetaSL code for the shader node in question. The user can then step through the shader's instructions and inspect the values of variables as they change throughout the program's execution. However, instead of just presenting the user with a single numeric value, the debugger displays multiple values simultaneously as colors mapped over the surface of an object.
- Representing a variable's values as an image rather than a single number has several advantages.
- the user can also use the visual debugging paradigm to quickly locate the input conditions that produce an undesirable result.
- a shader bug may only appear when certain input parameters take on specific values, and such a scenario may only occur on specific parts of the geometry's surface.
- the mental mill debugger allows the user to navigate in 3D space using the mouse to find and orient the view around the location on the surface that is symptomatic of the problem.
- FIG. 46 shows a partial screenshot 620 illustrating a variable list according to this aspect of the invention.
- as the user steps through the shader code, new variables may come into scope and appear in the list while others will go out of scope and be removed from the list.
- Each variable in the list has a button next to its name that allows the user to open the variable and see additional information about it such as its type and a small preview image displaying its value over a surface.
- when a variable is modified, the preview image for that variable will update to reflect the modification.
- the user can select a variable from the list to display its value in a larger preview window.
- Loops (such as for, foreach, or while loops) and conditional statements (such as if and else) create an interesting circumstance within this debugging model. Because the shader program is operating on multiple data points simultaneously, the clause of an if/else statement may or may not be executed for each data point.
- the MetaSL debugger provides the user several options for viewing variable values inside a conditional statement. At issue is how to handle data points that do not execute the if or else clause containing the selected statement. These optional modes include the following: • Show final shader result — in this mode, data points that do not reach the selected statement are processed by the complete shader and the final result is produced in the output image.
- Vector3 average_normals(Vector3 norm1, Vector3 norm2);
- Matrix2x3 - A matrix with 2 rows and 3 columns.
- MetaSL provides the capability to define an enumeration as a convenient way to represent a set of named integer constants.
- the enum keyword is used followed by a comma separated list of identifiers enclosed in brackets. For example:
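The example itself appears to have been lost in extraction; a minimal sketch matching the description (identifier names are illustrative) might be:

```
// An anonymous enumeration: the enum keyword followed by a
// comma-separated list of named integer constants.
enum { RED, GREEN, BLUE };
```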
- An enumeration can also be named in which case it defines a new type.
- the enumerators can be explicitly assigned values as well. For example:
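Again the example was elided; a sketch consistent with the description (the type name, enumerator names, and values are illustrative) might be:

```
// A named enumeration defines a new type, and enumerators
// can be assigned explicit values.
enum Filter_mode {
    NEAREST   = 0,
    BILINEAR  = 1,
    TRILINEAR = 4
};
Filter_mode mode = BILINEAR;
```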
- v.xxyy - Returns a 4-component vector <x, x, y, y>.
- Vector components can also be accessed using array indices and the array index can be a variable.
- the standard math operators (+, -, *, /) apply to all vectors and operate in a component-wise fashion.
- the standard math operators are overloaded to allow a mixture of scalars and vectors of different sizes in expressions; however, in any single expression all vectors must have the same size.
- When Scalars are mixed with vectors, the scalar is promoted to a vector of the same size with each element set to the value of the scalar.
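A short MetaSL-style sketch pulling together the swizzle, indexing, operator, and promotion rules just described (constructor syntax and literals are assumed to be C-like):

```
Vector3 v = Vector3(1.0, 2.0, 3.0);

// Swizzle access builds new vectors from components.
Vector4 v4 = v.xxyy;        // <1, 1, 2, 2>

// Components can also be accessed by index; the index may be a variable.
Scalar z = v[2];            // same as v.z

// Math operators are component-wise; all vectors in one expression
// must share a size.
Vector3 b = Vector3(0.5, 0.5, 0.5);
Vector3 sum = v + b;        // <1.5, 2.5, 3.5>

// A mixed-in scalar is promoted to a vector of matching size.
Vector3 scaled = v * 2.0;   // 2.0 becomes <2, 2, 2>, giving <2, 4, 6>
```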
- Matrices are defined with row and column sizes ranging from 2 to 4. All matrices consist of Scalar-type elements. Matrix elements can also be referred to using array notation (row-major order), with the array index selecting a row from the matrix. The resulting row is either a Vector2, Vector3, or Vector4, depending on the size of the original matrix. Since the result of indexing a matrix is a vector and vector types also support the index operator, individual elements of a matrix can be accessed with syntax similar to a multidimensional array.
- the multiplication operator is supported to multiply two matrices or a matrix and a vector and will perform a linear algebra style multiplication between the two.
- the number of columns of the matrix on the left must equal the number of rows of the matrix on the right.
- the result of multiplying an NxT matrix with a TxM matrix is an NxM matrix.
- a vector can be multiplied on the right or left side provided the number of elements equals the number of rows when the vector is on the left side of the matrix and the number of elements equals the number of columns when the vector is on the right.
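These rules can be summarized in a sketch (declaration syntax is assumed to parallel the vector types):

```
Matrix3x3 m;                      // 3 rows x 3 columns of Scalars

// Indexing a matrix selects a row vector; double indexing
// reaches an individual element.
Vector3 row0 = m[0];
Scalar  e    = m[0][2];

// Linear-algebra multiplication: the left operand's column count
// must equal the right operand's row count, so a 2x3 matrix times
// a 3x4 matrix yields a 2x4 matrix.
Matrix2x3 a;
Matrix3x4 b;
Matrix2x4 c = a * b;

// A vector multiplies on the left when its element count matches
// the matrix's row count, and on the right when it matches the
// column count.
Vector3 v;
Vector3 left  = v * m;
Vector3 right = m * v;
```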
- a smaller vector can be constructed from a larger one, for example: Vector3 v3(0,0,0); Vector2 v2 = Vector2(v3.x, v3.y); or, equivalently, using swizzle notation: Vector2 v2 = v3.xy;
- the .xyzw notation can be applied to variables of type Scalar to generate a vector. For example:
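The elided example presumably resembled the following sketch:

```
Scalar s = 0.5;

// Swizzle notation on a Scalar replicates it into a vector.
Vector3 gray = s.xxx;       // <0.5, 0.5, 0.5>
```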
- MetaSL supports arrays of any of the built-in types or user-defined structures; however, only fixed-length arrays can be declared in shader functions. There are two exceptions: as stated in the input parameter section, shader inputs can be declared as dynamically sized arrays, and parameters to functions or methods can also be arrays of unspecified size. In both these cases, by the time the shader is invoked during rendering the actual size of the array variable will be known.
- the shader code can refer to the size of an array as name.count, where name is the array variable name.
- This simple example loops over an array and sums the components. The code for this function was written without actual knowledge of the size of the array, but when shading the size will be known: either the array variable will come from an array shader parameter or from a fixed-size array declared in a calling function.
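The example code itself did not survive extraction; a sketch consistent with the description (the function name and the use of an int loop counter are assumptions) might be:

```
// Sums the elements of an array whose size is not known when the
// function is written; values.count is resolved at shading time.
Scalar sum_elements(Scalar values[]) {
    Scalar total = 0.0;
    for (int i = 0; i < values.count; i++)
        total += values[i];
    return total;
}
```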
- Custom structure types can be defined to extend the set of types used by MetaSL shaders.
- the syntax of a structure type definition looks like the following example:
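The example was elided; a sketch matching the description (the type and member names are illustrative) might be:

```
// A custom structure type combining built-in types; members may
// also be arrays or other user-defined structures.
struct Surface_sample {
    Color   albedo;
    Vector3 normal;
    Scalar  weights[4];
};
```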
- Structure member variables can be of any built-in type or another user-defined structure type to produce a nested structure. Structure members can also be arrays.
- This set of state variables can be viewed as an implicit input to all shaders.
- the state input's data type is a struct containing all the state variables available to shaders.
- a special state shader can be connected to this implicit input.
- the state shader has an input for each state member and outputs the state struct.
- a shader that computes illumination will likely refer to the surface normal at the point of intersection.
- a bump map shader could produce a modified normal which it computes from a combination of the state normal and a perturbation derived from a gray-scale image.
- a state shader can be attached to the illumination shader thus exposing the normal as an input. The output of the bump shader can then be attached to the state shader's normal input.
- the illumination shader will most likely contain a light loop that iterates over scene lights and indirectly causes light shaders to be evaluated.
- the state values passed to the light shaders will be the same state values provided to the surface shader. If the state was modified by a state shader, the modification will also affect the light shaders.
- This system of implicit state input parameters simplifies shader writing.
- a shader can easily refer to a state variable while at the same time maintaining the possibility of attaching another shader to modify that state variable. Since the state itself isn't actually modified, there is no danger of inadvertently affecting another shader.
- FIG. 55 shows a schematic 710, illustrating bump mapping according to the present aspect of the invention. At first this might seem slightly complex; however, the graph implementing bump mapping can be boxed up inside a Phenomenon node and viewed as if it were a single shader.
- Two of these shaders are fed modified texture coordinates from the attached state shaders.
- the state shaders themselves are fed modified texture coordinates produced by "Offset coordinate" shaders.
- the whole schematic is contained in a Phenomenon so not all users have to be concerned with the details.
- the bump map Phenomenon has an input of type
- FIG. 56 shows a diagram 720 illustrating the bump map Phenomenon in use.
- the phong shader implicitly refers to the state's normal when it loops over scene lights. In this case the phong shader's state input is attached to a state shader and the modified normal produced by the bump shader is attached to the state shader's normal input.
- a set of state variables includes the following: position; normal; origin; direction; distance; texture_coord_n; and screen_position. This list can be supplemented to make it more comprehensive. Note that the preprocessor can be used to substitute common short name abbreviations, often single characters, for these longer names.
- state variable parameters are added to nodes.
- a set of special state variables are implicitly declared within a shader's main method, and are available for the shader code to reference. These variables hold values describing both the current state of the renderer as well as information about the intersection that led to the shader call.
- the normal variable refers to the interpolated normal at the point of intersection.
- these variables are only available inside the shader's main method. If a shader wishes to access one of these state variables within a helper method, the variable must be explicitly passed to that method. Alternatively, the state variable itself may be passed to another method, in which case all the state variables are then available to that method.
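As a sketch of this rule (the shader declaration style, the helper signature, and the dot/max built-ins are assumptions for illustration, not taken verbatim from the specification):

```
// A helper method does not see state variables implicitly; the
// needed values must be passed in explicitly.
Color lambert(Vector3 n, Vector3 light_dir, Color albedo) {
    return albedo * max(dot(n, light_dir), 0.0);
}

shader Simple_surface {
    input:
        Color   albedo;
        Vector3 light_dir;
    output:
        Color result;

    void main() {
        // 'normal' is an implicitly declared state variable,
        // available only here in main(); it is forwarded to the
        // helper as an ordinary parameter.
        result = lambert(normal, light_dir, albedo);
    }
};
```

Alternatively, the state variable itself could be passed to the helper, making all state variables available there.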
- This set of state variables can be viewed as implicit inputs to all shaders, which by default are attached to the state itself. However, one or more input parameters can be dynamically added to an instance of a shader that corresponds by name to a state variable. In that case, these inputs override the state value and allow a connection to the result of another shader without modifying the original shader source code. In addition to modifying a state variable with an overriding input parameter, a shader can also directly modify a state variable with an assignment statement in the MetaSL implementation.
- Exposing state variables as inputs allows one shader to refer to state variables while allowing another shader to drive the state values used by that shader. If no input parameter is present for a particular referenced state variable, that variable will continue to refer to the original state value.
- a shader that computes illumination typically refers to the surface normal at the point of intersection.
- a bump map shader may produce a modified normal which it computes from a combination of the state normal and a perturbation derived from a gray-scale image.
- a parameter called "normal" can be added to an instance of the illumination shader, thus exposing the normal as an input just for that particular instance. The output of the bump shader can then be attached to the shader's normal input.
- the illumination shader contains a light loop that iterates over scene lights and indirectly causes light shaders to be evaluated.
- the state values passed to the light shaders will be the same state values provided to the surface shader. If a state variable was overridden by a parameter or modified within the shader, that modification will also affect the light shaders. It is not possible, however, to make modifications to a state variable that will affect shaders attached to input parameters because all input parameters are evaluated before a shader begins execution. This system of implicit state input parameters simplifies shader writing. A shader can easily refer to a state variable while at the same time maintaining the possibility of attaching another shader to modify that state variable.
- FIG. 57 shows a schematic 730 of a bump map Phenomenon 732.
- the graph implementing bump mapping can be boxed up inside a Phenomenon node and viewed as if it were a single shader.
- the "Perturb normal” shader 734 uses three samples of a gray-scale image to produce a perturbation amount.
- the texture coordinate used to sample the bump map texture is offset in both the U and V directions allowing the slope of the gray-scale image in the U and V directions to be computed.
- An “amount” input scales the amount of the perturbation.
- the “Perturb normal” shader 734 adds this perturbation to the state's normal to produce a new modified normal.
- FIG. 58 shows a diagram of a bump map Phenomenon 740 in use.
- the phong shader 742 implicitly refers to the state's normal when it loops over scene lights. This illustration shows an instance of the phong shader which has an added "normal" input allowing the normal to be attached to the output of the bump map shader.
- FIGS. 59A-B show a table 750 listing the complete set of state variables.
- State vectors are always provided in "internal" space. Internal space is undefined and can vary across different platforms. If a shader can perform calculations independently of the coordinate system then it can operate with the state vectors directly, otherwise it will need to transform state vectors into a known space.
- FIG. 60 shows a table 760 listing the transformation matrices.
- a shader node that refers to a light or volume shader state variable can only be used as a light or volume shader, or in a graph which is itself used as a light or volume shader.
- Light shaders can also call the state transformation functions and pass the value
- FIG. 61 shows a table 770 listing light shader state variables.
- FIG. 62 shows a table 780 listing volume shader state variables.
- the ray that is responsible for the current intersection state is described by the ray_type and ray_shader state variables and the is_ray_dispersal_group() and is_ray_history_group() functions. These variables and functions use the following strings to describe attributes of the ray:
- a ray has exactly one of the following types:
- a ray can be a member of at most one of the following groups:
• "specular" - Specular transparency, reflection, or refraction
- a ray can have zero or more of the following history flags:
• "lightmap" - Lightmap shader call
- Trace_options holds parameters used by the trace() and occlusion() functions described in the next section.
- a shader can declare an instance of this type once and pass it to multiple trace calls.
- FIG. 63 sets forth a table 790 listing the methods of the Trace_options class.
- FIGS. 64 and 65 set forth tables 800 and 810 listing the functions that are provided as part of the intersection state and depend on values accessible through the state variable. These functions, like state variables, can only be called within a shader's main method or any method to which the state variable is passed as a parameter.
- MetaSL supports the familiar programming constructs that control the flow of a shader's execution. Specifically these are:
- a Light_iterator class facilitates light iteration, so an explicit light-list shader input parameter is not required.
- the light iterator implicitly refers to scene lights through the state.
- An instance of this iterator is declared and specified as part of the foreach statement. The syntax looks like the following.
```
Light_iterator light;
foreach (light) {
    // Statements that refer to members of 'light'
}
```
- the shader will likely declare one or more variables outside the loop to store the result of the lighting. Each trip through the loop the shader will add the result of the BRDF to these variables.
- Color diffuse_light(0, 0, 0, 0);
- This is a simple example in MetaSL that loops over lights and sums the diffuse illumination.
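- a minimal MetaSL light loop along these lines might look like the following sketch. The light member names direction and color, and the surface inputs normal, diffuse_color, and result, are assumptions for illustration rather than names confirmed by the specification.

```
// Accumulator declared outside the loop to store the lighting result.
Color diffuse_light(0, 0, 0, 0);

// The iterator implicitly refers to scene lights through the state.
Light_iterator light;
foreach (light) {
    // Each trip through the loop adds this light's Lambertian
    // (diffuse) BRDF contribution to the accumulator.
    float n_dot_l = max(0.0, dot(normal, light.direction));
    diffuse_light += light.color * n_dot_l;
}

result = diffuse_color * diffuse_light;
```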
- a powerful feature of MetaSL is its ability to describe shaders independent of a particular target platform. This includes the ability to run MetaSL shaders in software with software based renderers and in hardware when a GPU is available.
- Software rendering is typically more generalized and flexible, allowing a variety of rendering algorithms including ray tracing and global illumination. At the time of this writing, graphics hardware does not generally support these features. Furthermore, different graphics hardware has different capabilities and resource limitations.
- the MetaSL compiler will provide feedback to the shader writer indicating the requirements for any particular shader it compiles. This lets the user know whether the shader they have written is capable of executing on a particular piece of graphics hardware. When possible, the compiler will specifically indicate which part of the shader caused it to be incompatible with graphics hardware. For example, if the shader calls a ray tracing function, the compiler may indicate that the presence of the ray tracing call forced the shader to be software-compatible only. Alternatively, the user may specify a switch that forces the compiler to produce a hardware shader; calls to APIs that aren't supported by hardware will then be removed from the shader automatically.
- MetaSL includes support for the following preprocessor directives: #define, #undef, #if, #ifdef, #ifndef, #else, #elif, and #endif. These directives have the same meaning as their equivalents in the C programming language. Macros with arguments are also supported.
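- a function-like macro of the kind the C preprocessor accepts could be used in MetaSL source in the usual way. This particular macro is a hypothetical illustration, not part of MetaSL's standard library:

```
// Clamp an expression to the [0, 1] range, C-preprocessor style.
#define CLAMP01(x) min(max((x), 0.0), 1.0)

// Used like an ordinary expression in shader code:
// Color c = CLAMP01(intensity) * base_color;
```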
- the #include directive is also supported to add other MetaSL source files to the current file. This allows structure definitions and shader base classes to be shared across files.
- a technique is a variation of a shader implementation. While some shaders may only require a single technique, there are situations where it is desirable to implement multiple techniques.
- the language provides a mechanism to declare multiple techniques within a shader. Often a single shader implementation can map to both software and hardware, so the exact same shader can be used regardless of whether rendering takes place on the CPU or GPU. In some cases, though, such as when the software shader uses features not supported by current graphics hardware, a separate version of the shader needs to be implemented to allow it to also operate on the GPU. Different graphics processors also have different capabilities and limitations, so a shader that works on one particular GPU might be too complicated to work on another. Techniques therefore also allow multiple versions of a shader to support different classes of hardware.
- the technique declaration appears somewhat like a nested class definition inside the shader class definition.
- the technique declaration provides a name that can be used to refer to the technique.
- the technique must at least define the main method which performs the primary functionality of the shader technique.
- the technique can implement an event method to handle init and exit events.
- the main and event methods are described in previous sections.
- the technique can contain other local helper methods used by the two primary technique methods.

```
Shader my_shader {
    input:
        Color c;
    output:
        Color result;

    technique software {
        Event_type event;
        void main();
    }

    technique hardware {
        Event_type event;
        void main();
    }
}
```
- This example shows a shader that implements two separate techniques for hardware and software.
- the main and event methods of the techniques can be implemented inline in the class definition or separately as illustrated in this example.
- a separate rule file, accessible by the renderer at render time, will inform the renderer how to select different techniques of a shader.
- a rule describes the criteria for selecting techniques based on the values of a predefined set of tokens.
- the token values describe the context in which the shader is to be used. Possible token values are:
- Shadow - This token value is true when shading a surface point in order to determine the transparency while tracing a shadow ray.
- Energy - This token value is true when calling a light shader to determine the energy produced by the light to allow the renderer to sort rays by importance.
- Hardware vendor chipset - A string identifying the chipset of the current hardware, for example nv30 or r420.
- Rules need to specify the name of the technique and an expression based on token values which defines when the particular technique should be selected. Multiple rules can match any particular set of token values.
- the process by which the renderer selects a technique for a shader is the following: first, only rules for techniques present in the shader are considered. Out of these rules, each one is tested in order, and the first matching rule selects the technique. If no rule matches, then either an error/warning is produced or a default technique is used for the shader.
- the first three rules support software shaders that either have a single technique, called “standard,” to handle all shading quality levels or shaders that have two techniques, “beauty” and “fast,” to separately handle shading two different quality levels. Token values can also be available to shaders at runtime so the shader with a single standard technique could still perform optional calculations depending on the desired quality level.
- the second three rules are an example of different techniques to support different classes of hardware.
- the fancy hardware technique might take advantage of functionality only available within shader model 3.0 or better.
- the nvidia_hardware technique may use features specific to NVIDIA's nv30 chipset.
- basic_hardware could be a catchall technique for handling generic hardware.
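- the specification does not give the concrete grammar of the rule file, but the six rules described above might be sketched along the following lines. The syntax and the token names hardware, quality, shader_model, and chipset are assumptions for illustration:

```
# Software techniques: select by shading quality level.
# A shader with only the "standard" technique matches the third rule
# for all quality levels.
technique "fast"            when !hardware && quality == "low"
technique "beauty"          when !hardware && quality == "high"
technique "standard"        when !hardware

# Hardware techniques: most specific rules first.
technique "fancy_hardware"  when hardware && shader_model >= 3.0
technique "nvidia_hardware" when hardware && chipset == "nv30"
technique "basic_hardware"  when hardware
```

Because rules are tested in order and the first match wins, the catchall basic_hardware rule is listed last.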
- the language includes a mechanism to allow material shaders to express their result as a series of components instead of a single color value. This allows the components to be stored to separate image buffers for later compositing. Individual passes can also render a subset of all components and combine those with the remaining components that have been previously rendered.
- a material shader factors its result into components by declaring a separate output for each component.
- the names of the output variables define the names of layers in the current rendering.
```
Color color1 {
    default_value(Color(0, 0, 0, 1));
    display_name("Color 1");
}
Color color2 {
    default_value(Color(1, 1, 1, 1));
    display_name("Color 2");
}
```
- MetaSL includes a standard library of intrinsic functions.
- the following lists, which may be expanded without departing from the scope of the invention, do not include software-only methods such as lighting functions and ray-tracing functions.
- Texture map functions: texture lookup.
- the texture functions pose an interesting problem for unifying software and hardware shaders.
- Hardware texture functions usually come in several versions that allow projective texturing (the divide by w is built into the texture lookup), explicit filter width, and depth texture lookup with depth compare.
- Cg also has RECT versions of the texture lookup which use pixel coordinates of the texture instead of normalized coordinates.
- functionality may be provided in both hardware and software. However, it may be desirable to provide a software-only texture lookup with elliptical filtering.
- FIG. 69 shows a screenshot of a debugger UI 850 according to a further aspect of the invention.
- the shader debugger UI 850 comprises a code view panel 852 that displays the MetaSL code for the currently loaded shader, a variable list panel 854 that displays all variables in scope at the selected statement, and a 3D view window 856 that displays the values of the selected variable, or the result of the entire shader if no variable is selected. There is also provided an error display window 858.
- FIG. 70 shows a screenshot of the debugger UI 860 that appears when loading a shader. If there are compile errors, they are listed in error display window 868. Selecting an error in the list highlights the line of code 862 where the error occurred. A shader file is reloaded by pressing the F5 key.
- FIG. 71 shows a screenshot of the debugger UI 870 that appears once a shader is successfully loaded and compiles without errors. Debugging begins by selecting a statement 872 in the code view panel 874. Selected statements are shown by a light green highlight along the line of the selected statement. The variable window displays variables 876 that are in scope for the selected statement.
- a statement is selected by clicking on its line of code; clicking on a variable displays its value in the render window.
- the "normal" variable is selected (which is of type Vector3).
- the vector values are mapped to the respective colors. Lines that are statements have a white background. Lines that are not statements are gray.
- FIG. 72 shows a screenshot of the debugger screen 880, illustrating how conditional statements and loops are handled.
- Conditional statements and loops may not be executed for some data points, and therefore variables cannot be viewed for those data points when the selected statement is in a conditional clause.
- in FIG. 72, when the selected statement 882 is in a conditional, only pixels 884 where the conditional evaluated to true display the debug value. The rest of the pixels display the original result.
- FIG. 73 shows a screenshot of a debugger screen 890, illustrating what happens when the selected statement is in a loop. In that case, the values displayed represent the first pass through the loop. A loop counter may be added to allow the user to specify which pass through the loop they want to debug.
- the user can step through statements by using the left and right arrow keys to move forward and backward through the lines of code.
- the up and down arrow keys move through the variable list.
- FIG. 74 shows a screenshot of a debugger screen 900 showing how texture coordinates are handled.
- the user can select and view texture coordinates as shown in this example.
- the prototype provides four sets of texture coordinates, each tiled twice as many times as the previous set. U and V derivative vectors are also supplied.
- FIG. 75 shows a screenshot of a debugger screen 910, in which parallax mapping produces the illusion of depth by deforming texture coordinates.
- in the FIG. 76 screenshot 920, the offset of the texture coordinates can be clearly seen when looking at the texture coordinates in the debugger.
- FIGS. 77 and 78 are screenshots of debugger screens 930 and 940, illustrating other shader examples.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
- Debugging And Monitoring (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06774417A EP1907964A4 (en) | 2005-07-01 | 2006-06-30 | Computer graphics shader systems and methods |
JP2008519658A JP2009500730A (en) | 2005-07-01 | 2006-06-30 | Computer graphic shader system and method |
CA002613541A CA2613541A1 (en) | 2005-07-01 | 2006-06-30 | Computer graphics shader systems and methods |
AU2006265815A AU2006265815A1 (en) | 2005-07-01 | 2006-06-30 | Computer graphics shader systems and methods |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US69612005P | 2005-07-01 | 2005-07-01 | |
US60/696,120 | 2005-07-01 | ||
US70742405P | 2005-08-11 | 2005-08-11 | |
US60/707,424 | 2005-08-11 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2007005739A2 true WO2007005739A2 (en) | 2007-01-11 |
WO2007005739A3 WO2007005739A3 (en) | 2008-09-18 |
Family
ID=37605099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2006/025827 WO2007005739A2 (en) | 2005-07-01 | 2006-06-30 | Computer graphics shader systems and methods |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1907964A4 (en) |
JP (1) | JP2009500730A (en) |
AU (1) | AU2006265815A1 (en) |
CA (1) | CA2613541A1 (en) |
WO (1) | WO2007005739A2 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008148818A1 (en) | 2007-06-05 | 2008-12-11 | Thales | Source code generator for a graphics card |
EP2063395A3 (en) * | 2007-11-20 | 2010-06-02 | DreamWorks Animation LLC | Tinting a surface to simulate a visual effect in a computer generated scene |
JP2011517803A (en) * | 2008-03-04 | 2011-06-16 | マイクロソフト コーポレーション | Shader-based extension for declarative presentation framework |
CN104050658A (en) * | 2013-03-15 | 2014-09-17 | 梦工厂动画公司 | Lighting Correction Filters |
US8866827B2 (en) | 2008-06-26 | 2014-10-21 | Microsoft Corporation | Bulk-synchronous graphics processing unit programming |
US8941654B2 (en) | 2010-05-06 | 2015-01-27 | Kabushiki Kaisha Square Enix | Virtual flashlight for real-time scene illumination and discovery |
EP2754068A4 (en) * | 2011-09-08 | 2015-12-23 | Microsoft Technology Licensing Llc | Visual shader designer |
WO2016012393A1 (en) * | 2014-07-25 | 2016-01-28 | Bayerische Motoren Werke Aktiengesellschaft | Hardware-independent display of graphic effects |
US9514562B2 (en) | 2013-03-15 | 2016-12-06 | Dreamworks Animation Llc | Procedural partitioning of a scene |
US9589382B2 (en) | 2013-03-15 | 2017-03-07 | Dreamworks Animation Llc | Render setup graph |
EP2277106A4 (en) * | 2008-05-15 | 2017-04-26 | Microsoft Technology Licensing, LLC | Software rasterization optimization |
US9659398B2 (en) | 2013-03-15 | 2017-05-23 | Dreamworks Animation Llc | Multiple visual representations of lighting effects in a computer animation scene |
US9811936B2 (en) | 2013-03-15 | 2017-11-07 | Dreamworks Animation L.L.C. | Level-based data sharing for digital content production |
CN109727186A (en) * | 2018-12-12 | 2019-05-07 | 中国航空工业集团公司西安航空计算技术研究所 | One kind is based on SystemC towards GPU piece member colouring task dispatching method |
CN111460570A (en) * | 2020-05-06 | 2020-07-28 | 北方工业大学 | Complex structure node auxiliary construction method based on BIM technology |
US10740074B2 (en) * | 2018-11-30 | 2020-08-11 | Advanced Micro Devices, Inc. | Conditional construct splitting for latency hiding |
CN113407090A (en) * | 2021-05-31 | 2021-09-17 | 北京达佳互联信息技术有限公司 | Interface color sampling method and device, electronic equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10802698B1 (en) * | 2017-02-06 | 2020-10-13 | Lucid Software, Inc. | Diagrams for structured data |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6496190B1 (en) * | 1997-07-02 | 2002-12-17 | Mental Images Gmbh & Co Kg. | System and method for generating and using systems of cooperating and encapsulated shaders and shader DAGs for use in a computer graphics system |
US7034828B1 (en) * | 2000-08-23 | 2006-04-25 | Nintendo Co., Ltd. | Recirculating shade tree blender for a graphics system |
US20050140672A1 (en) * | 2003-02-18 | 2005-06-30 | Jeremy Hubbell | Shader editor and compiler |
-
2006
- 2006-06-30 JP JP2008519658A patent/JP2009500730A/en active Pending
- 2006-06-30 WO PCT/US2006/025827 patent/WO2007005739A2/en active Application Filing
- 2006-06-30 AU AU2006265815A patent/AU2006265815A1/en not_active Abandoned
- 2006-06-30 CA CA002613541A patent/CA2613541A1/en not_active Abandoned
- 2006-06-30 EP EP06774417A patent/EP1907964A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See references of EP1907964A4 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2917199A1 (en) * | 2007-06-05 | 2008-12-12 | Thales Sa | SOURCE CODE GENERATOR FOR A GRAPHIC CARD |
US20110032258A1 (en) * | 2007-06-05 | 2011-02-10 | Thales | Source code generator for a graphics card |
WO2008148818A1 (en) | 2007-06-05 | 2008-12-11 | Thales | Source code generator for a graphics card |
EP2063395A3 (en) * | 2007-11-20 | 2010-06-02 | DreamWorks Animation LLC | Tinting a surface to simulate a visual effect in a computer generated scene |
US8310483B2 (en) | 2007-11-20 | 2012-11-13 | Dreamworks Animation Llc | Tinting a surface to simulate a visual effect in a computer generated scene |
JP2011517803A (en) * | 2008-03-04 | 2011-06-16 | マイクロソフト コーポレーション | Shader-based extension for declarative presentation framework |
EP2277106A4 (en) * | 2008-05-15 | 2017-04-26 | Microsoft Technology Licensing, LLC | Software rasterization optimization |
US8866827B2 (en) | 2008-06-26 | 2014-10-21 | Microsoft Corporation | Bulk-synchronous graphics processing unit programming |
US8941654B2 (en) | 2010-05-06 | 2015-01-27 | Kabushiki Kaisha Square Enix | Virtual flashlight for real-time scene illumination and discovery |
EP2754068A4 (en) * | 2011-09-08 | 2015-12-23 | Microsoft Technology Licensing Llc | Visual shader designer |
US9659398B2 (en) | 2013-03-15 | 2017-05-23 | Dreamworks Animation Llc | Multiple visual representations of lighting effects in a computer animation scene |
EP2779110A3 (en) * | 2013-03-15 | 2016-04-20 | DreamWorks Animation LLC | Lighting correction filters |
US9514562B2 (en) | 2013-03-15 | 2016-12-06 | Dreamworks Animation Llc | Procedural partitioning of a scene |
US9589382B2 (en) | 2013-03-15 | 2017-03-07 | Dreamworks Animation Llc | Render setup graph |
CN104050658A (en) * | 2013-03-15 | 2014-09-17 | 梦工厂动画公司 | Lighting Correction Filters |
US9811936B2 (en) | 2013-03-15 | 2017-11-07 | Dreamworks Animation L.L.C. | Level-based data sharing for digital content production |
US10096146B2 (en) | 2013-03-15 | 2018-10-09 | Dreamworks Animation L.L.C. | Multiple visual representations of lighting effects in a computer animation scene |
WO2016012393A1 (en) * | 2014-07-25 | 2016-01-28 | Bayerische Motoren Werke Aktiengesellschaft | Hardware-independent display of graphic effects |
US10740074B2 (en) * | 2018-11-30 | 2020-08-11 | Advanced Micro Devices, Inc. | Conditional construct splitting for latency hiding |
CN109727186A (en) * | 2018-12-12 | 2019-05-07 | 中国航空工业集团公司西安航空计算技术研究所 | One kind is based on SystemC towards GPU piece member colouring task dispatching method |
CN109727186B (en) * | 2018-12-12 | 2023-03-21 | 中国航空工业集团公司西安航空计算技术研究所 | SystemC-based GPU (graphics processing Unit) fragment coloring task scheduling method |
CN111460570A (en) * | 2020-05-06 | 2020-07-28 | 北方工业大学 | Complex structure node auxiliary construction method based on BIM technology |
CN113407090A (en) * | 2021-05-31 | 2021-09-17 | 北京达佳互联信息技术有限公司 | Interface color sampling method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2009500730A (en) | 2009-01-08 |
CA2613541A1 (en) | 2007-01-11 |
WO2007005739A3 (en) | 2008-09-18 |
EP1907964A2 (en) | 2008-04-09 |
EP1907964A4 (en) | 2009-08-12 |
AU2006265815A1 (en) | 2007-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7548238B2 (en) | Computer graphics shader systems and methods | |
WO2007005739A2 (en) | Computer graphics shader systems and methods | |
Peercy et al. | Interactive multi-pass programmable shading | |
US6496190B1 (en) | System and method for generating and using systems of cooperating and encapsulated shaders and shader DAGs for use in a computer graphics system | |
Wyman et al. | Introduction to directx raytracing | |
WO1999052080A1 (en) | A time inheritance scene graph for representation of media content | |
Najork et al. | Obliq-3D: A high-level, fast-turnaround 3D animation system | |
Silva et al. | Node-based shape grammar representation and editing | |
Ragan-Kelley | Practical interactive lighting design for RenderMan scenes | |
Dokken et al. | An introduction to general-purpose computing on programmable graphics hardware | |
Bauchinger | Designing a modern rendering engine | |
Revie | Designing a Data-Driven Renderer | |
Luo | Interactive Ray Tracing Infrastructure | |
BABIČ | Shader graph module for Age | |
Atella | Rendering Hypercomplex Fractals | |
Angel et al. | An interactive introduction to WebGL | |
Granof | Submitted to the Faculty of the | |
Goliaš | Hybrid renderer | |
Seitz | Toward Unified Shader Programming | |
Vojtko | Design and Implementation of a Modular Shader System for Cross-Platform Game Development | |
Meyer-Spradow et al. | Interactive design and debugging of gpu-based volume visualizations | |
Corrie et al. | Data shader language and interface specification | |
Qin | An embedded shading language | |
Samuels | Declarative Computer Graphics using Functional Reactive Programming | |
Dickson et al. | RENDERING LEAVES DYNAMICALLY IN REAL-TIME |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2613541 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2008519658 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006774417 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006265815 Country of ref document: AU |
|
ENP | Entry into the national phase |
Ref document number: 2006265815 Country of ref document: AU Date of ref document: 20060630 Kind code of ref document: A |